What about Swift?

We are seeing a resurgence in programming language design. Watch out, C and Java: a change may be in the wind.

Swift seems to have overcome some of the inadequacies of C-like languages. It is an elegant language, and its much-simplified syntax enhances readability. Why is it more readable?

  1. Semicolons be gone! – they aren’t needed unless there are multiple statements on the same line.
  2. Forced use of block statements – C-like languages allow either a single statement or a block enclosed in { } after a control structure. Swift does not: apart from the cases inside a switch statement, every control structure requires braces, and brace-less single statements are not allowed. Vamoose, dangling else!
  3. Swift does not allow the assignment operator (=) in an if condition where a comparison (==) was intended, unlike C and Objective-C (see the C sketch after this list).
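
For contrast, here is a rough sketch (in C, not Swift) of the two classic pitfalls that points 2 and 3 eliminate: the dangling else, and an accidental assignment inside a condition. Both compile in C; neither can be written in Swift.

#include <stdio.h>

int main(void)
{
    int x = 0;

    /* Dangling else: the else binds to the nearest if, not the outer one,
       which is rarely what the indentation suggests. */
    if (x > 0)
        if (x > 10)
            printf("big\n");
    else
        printf("looks like it belongs to the outer if, but it doesn't\n");

    /* Accidental assignment: this compiles in C, sets x to 5, and the
       condition is always true. Swift rejects = in a condition outright. */
    if (x = 5)
        printf("oops, x is now %d\n", x);

    return 0;
}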

Swift also does neat things with for loops, by way of the for-in loop and ranges:

for i in 1...5 {
    print(i)
}

Here is an example that calculates the 50th Fibonacci number:

// Compute the 50th Fibonacci number iteratively
var fibN0 = 1   // most recent Fibonacci number, F(n)
var fibN1 = 1   // the one before it, F(n - 1)
for _ in 3...50 {
    let sum = fibN0 + fibN1
    fibN1 = fibN0
    fibN0 = sum
}
print(fibN0)    // 12586269025

The weird thing about Swift may be its variables – not what you would normally expect. In Swift you create variables (var) or constants (let), and their types are inferred (though a type can also be given explicitly). Swift also allows tuples, so a group of values can be treated as a single unit.

And yippee! The goto statement has been banished.

The downside? Console input seems to be *lacking*. I get that some of these languages aren’t designed around STDIN, but the language should still provide an easy way of reading from it – I mean, it does provide print for output.

C++ as an introductory programming language?

People who don’t like C often cite a move to C++ as being a better option. In reality it’s more like jumping out of the frying pan into, well, a larger frying pan. I’m not suggesting C++ is a bad language, because it’s not. What it is, though, is object-oriented. When teaching the concepts of introductory programming, there is no real place for OO. OO is an approach to software – it’s not the be-all and end-all. OO has been touted as a natural way to express concepts – to “think like an object”. But I don’t think it’s that natural. When I bake, I don’t think about the ingredients and implements I’m using as objects, because whilst in an abstract sense they are, in reality they are ingredients and tools. I don’t look at the wooden spoon and want to encapsulate functionality within it.

Now I have taught C as an introductory language for over a decade, and there are definite issues with it from a pedagogical viewpoint:

  • C’s compact and flexible syntax leads to clever, but often unmaintainable, code (at least when written by novices).
  • C’s I/O is marginal in terms of both use and clarity, a significant drawback.
  • Strings and arrays are not easy to work with.
  • Parameters are restricted to pass-by-value, so in C pointers must be introduced prematurely (see the sketch after this list).
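
To see what the last point means in practice, here is a minimal sketch of the classic swap example (the helper names are mine, purely for illustration). The pass-by-value version silently does nothing useful, which is why pointers end up being introduced almost immediately in an introductory course.

#include <stdio.h>

/* Pass-by-value: the function receives copies, so the caller's
   variables are left untouched. */
void swap_by_value(int a, int b)
{
    int tmp = a;
    a = b;
    b = tmp;
}

/* C's only way to emulate pass-by-reference: hand over addresses,
   which means teaching pointers very early. */
void swap_by_pointer(int *a, int *b)
{
    int tmp = *a;
    *a = *b;
    *b = tmp;
}

int main(void)
{
    int x = 1, y = 2;
    swap_by_value(x, y);
    printf("%d %d\n", x, y);    /* still 1 2 */
    swap_by_pointer(&x, &y);
    printf("%d %d\n", x, y);    /* now 2 1 */
    return 0;
}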

Now C++ does have advantages over C, addressing some of these issues. It offers better I/O than C. It offers pass-by-reference as well as pass-by-value, pointers, exceptions and classes. The biggest problem with C++ is its complexity: the sheer volume of its syntax and the multiple uses of keywords easily lead to confusion. Sure, you could teach a subset of C++, maybe without the OO, but then you are effectively teaching C. As imperative languages touting the basic tenets of programming, the two languages are EXTREMELY similar. Consider the two pieces of code below, which calculate Fibonacci numbers: one in C, the other in C++. The I/O statements are a little different, but otherwise, in a procedural sense, they are the same.

[Figure: Fibonacci in C versus C++]
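
The figure itself isn’t reproduced here, but a minimal sketch of the C version gives the flavour. Assuming the figure followed the usual textbook form, the C++ side would differ only in its I/O calls (noted in the comments below), with the declarations and the loop unchanged.

#include <stdio.h>

int main(void)
{
    int n;
    printf("Which Fibonacci number? ");   /* C++: std::cout << "..."; */
    scanf("%d", &n);                      /* C++: std::cin >> n;     */

    long long fibN0 = 1, fibN1 = 1;       /* F(2) and F(1) */
    for (int i = 3; i <= n; i++)
    {
        long long sum = fibN0 + fibN1;
        fibN1 = fibN0;
        fibN0 = sum;
    }
    printf("%lld\n", fibN0);              /* C++: std::cout << fibN0 << "\n"; */
    return 0;
}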

Pedagogically I would steer away from both languages, and consider going backwards to Pascal or Fortran, or forwards to Swift, or maybe Julia. The rationale for an introductory programming course is to teach (i) problem solving by means of programming, and (ii) the fundamental concepts of programming languages. Coverage dedicated to OO concepts takes time away from the fundamental topics of programming (typing, control structures, modularity), AND from some of the more maligned ideas (style, usability and testing), which are way more important than OO.

 

Organic software design

In 1975 Ted Herman (of Tinimen Corporation) wrote a one-page paper on organic program design [1]. At the time, there were many advocates of the notion that the activities of making software should be separated, i.e. design and coding. Herman’s “polemic foray” into programming methodology questioned this separation. As Henry F. Ledgard aptly put it (in Programming Proverbs, 1975), “Think first, program later”. Forty years later this “separation” is still one of the most touted approaches to making software.

However, anyone who has actually constructed software will realize that this approach is something of a fallacy. It reduces the art of coding to a “thoughtless process” (in the words of Herman), and there is no guarantee that the design won’t contain flaws. It is not dissimilar to an architect designing a building with little regard for the builders who must actually construct the edifice. Building software is an organic process, whereby the design is not fixed before coding: in practice, real software is often created through coding interspersed with fragments of design. It is not much different to woodworking – I will have a rough design of what I want to build, but that design may change along the way, maybe due to interaction with the material, or maybe because of roadblocks encountered en route.

Designing a program, for me, is more of an experimental process than one steeped in a rigid notion of design-code-test-repeat. In many ways organic design is more agile: not inflexible, and not design-heavy. This matters most when re-engineering a piece of code – an overall approach to the refactoring is important, yet how the code will react is never known until the refactoring begins.

[1] Herman, T., “Organic program design: A programming method which mixes design and coding is espoused”, Proc. ACM Ann. Conf., p.356 (1975).

The most evil programming language

After a recent talk on legacy software, a high-school student asked me what I thought the most evil programming language is. I have programmed in many languages over the years – is there one that stands out as being evil? Or at least slightly wicked? Some people would cite Cobol as being evil, but that is only because it is so different from the norm experienced today. I would actually choose C. C almost has a Dr Jekyll and Mr Hyde quality about it. It excels at system programming, and it is fast. But it will turn on you in a heartbeat and bleed memory. C will do just about anything you want it to, though sometimes it lacks elegance. That, and it’s a hard language for novices to learn to program in.

Here are Niklaus Wirth’s views on C¹:

“The widespread use of C effectively, if unintentionally, sabotaged the programming community’s attempt to raise the level of software engineering. This was true because C offers abstractions which it does not in fact support: arrays that remain without index checking, data types without a consistency check, pointers that are merely addresses where addition and subtraction are applicable. One might have classified C as being somewhere on a scale between misleading and (possibly) dangerous.”

“The trouble was that C’s rules could easily be broken, exactly what many programmers valued. C made it possible for programmers to manage access to all of a computer’s idiosyncrasies, even to those items that a high-level language would properly hide. C provided freedom, whereas high-level languages were considered straitjackets, enforcing unwanted discipline.”

It is true: C lets you do things that other programming languages never would, which may contribute to a false sense of security. Create a simple array of characters in C:

char word[20];

Yet use scanf to read in a string, and you risk storing more than just 19 characters in word. See, from a pedagogical perspective C is already confusing: in other languages, specifying 20 characters means 20 characters, and whatever terminates the “string” is handled transparently. Not so in C – here it is 19 characters + 1 invisible end-of-string character. Worse, with scanf I could read in 30 characters and likely “store” them quite happily. That shouldn’t be allowed to happen.
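
Here is a minimal sketch of both forms: the unsafe call (left commented out) places no bound on what scanf writes into word, which is undefined behaviour, while the field-width version reads at most 19 characters.

#include <stdio.h>

int main(void)
{
    char word[20];

    /* Unsafe: nothing stops scanf writing 30 characters (plus the
       terminating '\0') into a 20-character array. */
    /* scanf("%s", word); */

    /* Safer: read at most 19 characters, leaving room for the
       invisible end-of-string character. */
    scanf("%19s", word);

    printf("%s\n", word);
    return 0;
}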

¹Wirth, N., “A brief history of software engineering”, IEEE Annals of the History of Computing, pp.32-39 (2008)

 

 

Internet of things? It’s probably foolhardy.

The internet of things. That’s where a physical object that is somehow “smart” communicates with other objects. For example, a fridge that reminds its owner, via their smartphone, to pick up milk as they pass a supermarket, or a self-governing HVAC system that tracks the weather via a weather station and modifies climate control accordingly. It may all seem like a neat idea – I mean, let’s put a chip in everything. We could make Lego kits that build themselves, or I could put an embedded system in my apricot tree to notify me when one of those thieving squirrels is raiding it. But the reality is that we don’t need this level of pervasiveness in our lives. Yeah, sure, it would be neat if my fridge warned me when something was nearing its expiry date, but for this to work I would have to scan things as I put them in, or have the fridge keep track of them by adding yet more complex technology to it. It also means that fruit and vegetables would have to carry a bar code. Will the fridge have some sort of olfactory sensors to tell when food is going off? Look, at the end of the day, a fridge keeps food cold. It doesn’t need to do anything else. More intelligence means more complexity, and a greater likelihood of problems occurring.

Do we need this many devices connected to the Internet? Do I need my washing machine connected? My toaster? Do I need to be providing the manufacturer with a whole slew of data on my clothes-washing habits, or on what bread I toast? The simple answer is NO, because it’s all about “Big Data” – companies collecting data about your consumer habits. If you believe it’s designed to actually help you, you’re kidding yourself. The more things that get connected, the bigger the security risk. Anything has the potential to be hacked.

[Cartoon: geek & poke, http://geek-and-poke.com/]

 

I like programming, and computers do some cool things for us. But they don’t need to run our lives the way they do. We spend too much time in front of the machine, and not enough time actually thinking. There was a time when I thought about putting in a smart thermostat, but when Nest had that issue with the “hand-waving” algorithm in their smoke detectors and simply switched the algorithm off remotely, I thought otherwise. What if they could remotely switch off my furnace? It is all too big-brotherish, so I’ll stick with simplicity for the moment. My refrigerator isn’t going to be any less efficient because it is “dumb”. In fact, knowing how badly tested some of this software is, I’m okay not having it in my house.

Every once in a while we should realize that there is a world around us, go outside, and experience it. Not automate it.

P.S. As food for thought, the average modern refrigerator has a lifespan of 10-18 years, due largely to lack of quality and the ubiquitous “built-in obsolescence”. There is at least one fridge in the US that is 87 years old and still running. So modern refrigerators are more efficient, but I need to buy more of them over time because they don’t last as long. Go figure.