Why physical usability really matters

We teach computer science, but usually relegate the human-computer interaction and usability aspects of it to upper-year classes, where its relevance may be “too little, too late”. Software is a means to an end, whether it makes a washing machine function, controls a self-driving car, or runs a social media app. Software should possess seven qualities: user experience, availability, performance, scalability, adaptability, security, and economy. But if you get the user experience wrong, none of the rest matter. Unfortunately, software exhibiting bad user experiences abounds, from terrible websites, to lacklustre apps, to awful appliance interfaces. Part of the problem is that software designers rely on advice from people who *aren’t the end-users* – sometimes they run focus groups, but seemingly not often enough.

A case in point is Metrolinx Presto readers on various transit systems. I don’t know who chooses the device interfaces, but I imagine it is the transit systems themselves. On some systems, once the card is tapped, the reader confirms with a sound and briefly shows the card’s remaining balance on a two-line screen. This is useful, because otherwise the user is forced to log in to the Presto website to find the balance. The TTC, on the other hand, provides a huge screen area, and the only feedback it gives is success or failure – no information on the current balance. There are supposedly 10,000+ readers out there, and you would imagine consistency would be a key factor, making all the readers function in the same way from a user-experience point of view. Consider the TTC Presto reader found on buses.

The area associated with tapping the Presto card represents roughly 12% of the reader’s front surface area. The feedback screen, on the other hand, takes up 30% of the real estate. Many people using the reader for the first time will wrongly try to tap the screen, because it is the first thing the eye is drawn to – not the small area below it. There are two problems: the actual human-machine interface is very small, and the large screen basically mimics the visual instructions already on the tap area. A better interface would have concentrated more on the interaction area, and less on a huge feedback area (which serves no other real purpose). The other Presto card readers do the job *way* better from the perspective of a person interacting with a reader: feedback is provided by a two-line LED display, and it’s almost impossible to get the card-tapping wrong.

Considering the market, you would also think Presto would have a mobile app, rather than forcing users onto a mobile website. It just makes sense.


Software – a step back in time

It has been 60 years since we crawled out of the primordial muck of assembly coding and started to program with the aid of Fortran. Fortran was our friend in the early days of coding, as was Cobol, and many other languages that were to evolve. But times were different: programs were small, and usually geared towards performing a task faster than the human mind could. Ironically, our friends Fortran and Cobol are still with us, often to such an extent that it is now impossible to extricate ourselves from them. The problem is that we are still doing things the same way we were 30 years ago, and software has ballooned to gargantuan proportions. It’s not only size, it’s also complexity. Writing a program 1,000 lines long to perform some mathematical calculation is vastly different from designing a piece of software 50 million lines long to run a car. The more complex the software, the more chances it conceals hidden bugs, and the greater the probability that no one really understands how it works. It’s not the languages that are the problem, it is the methodology we use to design software.

Learning to program is not fundamentally difficult. Note that I emphasize the word *program*, which encompasses taking a problem, solving it, designing an algorithm, *coding* the algorithm in a language, and testing the program to make sure it works. This is of course true only if the problem can be solved. Can I write a program for cars to detect STOP signs using a built-in camera? Probably. Can it be tested? Probably. Can you create autonomous vehicles? Yes. Can you guarantee they will operate effectively in all environments? Unlikely. What happens to a self-driving car in a torrential downpour? Or a snowstorm? Autonomous trains and subways work well because they run on tracks in a controlled space. Aerospace software works well because avionics may be taken more seriously. Cars? Well, who knows. A case in point: a Boeing 787 has 6.5 million lines of code behind its avionics and online support systems; the software in a modern vehicle? 100 million LOC. Something not quite right there…

Fundamentally I don’t think we really know how to build systems 100 million LOC in size. We definitely don’t know how to test them properly. We don’t teach anyone to build these huge systems. We teach methods of software development like waterfall and agile, which have benefits and limitations, but maybe weren’t designed to cope with these complex pieces of software. What we need is something more immersive, some hybrid model of software development that takes into account that software is a means to an end, relies heavily on usability, and may be complex.

Until we figure out how to create useful, reliable software, we should probably put the likes of AI back on the shelf. Better not to let the genie out of the bottle until we know how to handle it properly.

I highly recommend reading this article: “A small group of programmers wants to change how we code—before catastrophe strikes”.

Coding? Why kids don’t need it.

There has been a big hoopla in the past few years about learning to code early. The number of articles on the subject is immense; some people even think two-year-olds should start to code. I think it is all likely getting a bit out of hand. Yes, coding is somewhat important, but two-year-olds should be doing other things – like playing, for instance. People seem to forget that kids should be kids, not coders. Some places can’t even get a math curriculum right. For years Ontario has used an inquiry-based learning system for math in elementary schools. This goes by a number of different names: discovery learning, or constructivism, which focuses on things like open-ended problem solving. The problem is it doesn’t really work – kids can’t even remember the times tables. So if you can’t get the basics of math right, then you likely won’t get coding right either.

Why do we code? To solve problems, right? Coding, however, is just a tool, like a hammer. We use a hammer to build things, but not until we have some sort of idea *what* we are building. To understand how to build something, we first have to understand the context in which it will be built: the environment, the materials, the design. You can’t teach coding in isolation. That’s almost like giving someone a hammer, a bucket of nails, and a skid of lumber and saying “build something”. The benefits of teaching kids to code are supposedly numerous, from increased exploration and creativity, to mastery of new skills and new ways of thinking. While it may be fun to build small robots and write small programs to have them do things, it is far from the reality of real computer science. A vast amount of the code written every year is still Cobol, maintaining the legacy code base that underpins the world’s financial system (and other systems like it). A world far removed from learning to code in Alice.

We have been here before: in the 1980s there was an abundance of literature exploring the realm of teaching children to program (using visual languages such as Logo). It didn’t draw masses of students into programming then, and adding coding classes to the elementary-school curriculum now won’t do it either. Some kids will enjoy coding; many likely won’t. Steve Jobs apparently once said, “Everyone should learn how to program a computer, because it teaches you how to think.” But it doesn’t. You can’t write programs if you don’t know how to solve the problems the programs are meant to deal with. Nearly anyone can learn to write basic programs in languages like Python and Julia, but if you don’t have a purpose, then what’s the point? It’s nice to have coding as a skill, but not at the expense of learning about the world around you. Far too many people are disconnected from the world around them, and sitting in front of a machine learning to code may not be the best approach to broadening the minds of our youth. We could simply start by having kids in school do more problem-solving tasks from the wider world: build things with Lego, make paper airplanes, find ways of building a bridge across a creek, cook. The tasks are practical, and foster a means of problem solving. With a bag of problem-solving skills, they will be better able to deal with lots of things in life.

Learning to code is not some form of magic, and sometimes kids should just be allowed to be kids.


When things aren’t quite what they seem

Some algorithms I don’t get. I just read about an algorithm, “Treepedia”, which measures the canopy cover in cities using Google imagery – or rather, it measures the “Green View Index” (GVI). As their website describes, they don’t count the number of individual trees, but have created a “scaleable and universally applicable method by analyzing the amount of green perceived while walking down the street”. This gives Toronto a Green View Index of 19.5%. Nice. But look a little closer and they also state: “The visualization maps street-level perception only, so your favorite parks aren’t included!” So it focuses on street trees, which is interesting, but doesn’t represent the true picture. The green cover in an urban environment cannot be estimated from a simple series of street-view photographs.
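To see why this kind of metric differs so much from true canopy cover, it helps to see what it actually computes. The following is a back-of-the-envelope sketch, *not* Treepedia’s actual pipeline (which segments Google Street View panoramas with far more sophistication): a GVI-style score ultimately boils down to the fraction of street-level pixels that look like vegetation. The `is_greenish` heuristic and the toy image below are entirely my own illustration.

```python
def is_greenish(pixel):
    """Crude vegetation test: the green channel dominates red and blue.
    A hypothetical stand-in for a real image-segmentation step."""
    r, g, b = pixel
    return g > r and g > b and g > 60

def green_view_index(image):
    """Fraction of 'green' pixels in an image, where the image is a
    list of rows of (r, g, b) tuples."""
    pixels = [p for row in image for p in row]
    green = sum(1 for p in pixels if is_greenish(p))
    return green / len(pixels)

# Tiny synthetic 2x2 "street view": two leafy pixels, one sky, one pavement.
image = [
    [(40, 120, 30), (135, 206, 235)],
    [(50, 140, 45), (128, 128, 128)],
]
print(green_view_index(image))  # 0.5
```

The point of the sketch: every pixel comes from eye-level imagery, so a ravine or a back-yard maple that is invisible from the street contributes exactly nothing to the score – which is why the metric can diverge so far from measured canopy cover.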

Now that’s a problem, because the algorithm clearly does *not* calculate the amount of land covered by trees, and Toronto has a bunch of ravines and numerous parks filled with trees. Official estimates put the actual tree canopy cover at somewhere between 26.6% and 28%. Vancouver is the opposite: Treepedia gives it a rating of 25.9%, whereas official documents put the figure closer to 18% (2013) – made up of canopy on private property (62%), parks (27%), and streets (11%).

So why didn’t they include parks and forests? Because it’s not a trivial thing to do: green space in aerial images of urban areas also includes grassed areas, gardens, shrubs, etc. The algorithm itself is not entirely to blame, though – maybe the logic behind it is. Look, representing canopy cover is important, but that’s not what this algorithm does. It calculates some measure of “greenness” at street level. It doesn’t take into account the 70 ft silver maple in my back yard. It might be a useful tool for filling in the street canopy with new plantings, but a metric “by which to evaluate and compare canopy cover”, in the words of the website, it is clearly not. The problem is compounded when the media run articles and misinterpret the data. Even cities publish the findings, as shown on the City of Toronto website, proclaiming “Toronto was named one of the greenest cities in the world”. They fail to mention that only 25 cities are shown, and mislead readers with statements like “Each city has a percentage score that represents the percentage of canopy coverage”, which is clearly not the case. Besides, you can hardly calculate the GVI of only 25 cities and call it a “world ranking”.

P.S. For those interested, there is an abundance of literature relating to actual canopy cover estimation using airborne LiDAR, aerial imagery, and satellite imagery.