Why some testing is impossible

Lufthansa Flight 2904 was an Airbus A320-200 that overran the runway at Okęcie International Airport on September 14, 1993. It was a flight from Frankfurt, Germany to Warsaw, Poland. Incorrect weather information caused the aircraft’s right gear to touch down 770 m beyond the runway threshold. The left gear touched down 9 seconds later, 1,525 m from the threshold. Only when the left gear touched the runway did the ground spoilers and engine thrust reversers deploy.

The accident was partially attributable to the design of the software on board the aircraft. The landing system was designed to ensure that the thrust reversers and the spoilers are activated only in a landing situation; all of the following conditions have to be true for the software to deploy these systems:

1. there must be weight of over 12 tons on each main landing gear strut
2. the wheels of the plane must be turning faster than 72 knots (133 km/h)
3. the thrust levers must be in the idle (or reverse thrust) position
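
The three conditions above amount to a simple conjunction: every one must hold before braking systems deploy. Here is a minimal sketch of that logic in Python (the function and parameter names are hypothetical, and the units are simplified; this is an illustration of the decision structure, not the actual Airbus implementation):

```python
def may_deploy_braking(left_strut_tonnes: float,
                       right_strut_tonnes: float,
                       wheel_speed_knots: float,
                       levers_idle_or_reverse: bool) -> bool:
    """Return True only when every landing condition is satisfied."""
    # Condition 1: over 12 tons of weight on each main gear strut
    weight_on_wheels = left_strut_tonnes > 12 and right_strut_tonnes > 12
    # Condition 2: wheels turning faster than 72 knots
    wheels_spun_up = wheel_speed_knots > 72
    # Condition 3: thrust levers at idle or reverse
    return weight_on_wheels and wheels_spun_up and levers_idle_or_reverse

# Roughly the Warsaw situation: only one strut loaded, wheels
# hydroplaning below the threshold speed -- no braking is allowed.
print(may_deploy_braking(11.0, 30.0, 50.0, True))  # False
```

Note that the function behaves exactly as designed; the flaw in Warsaw was not in the logic but in the assumption that a landing aircraft would always satisfy all three conditions promptly.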

In the case of the Warsaw accident, neither of the first two conditions was fulfilled, so the braking system was not activated. The first condition was not fulfilled because the plane landed inclined, in order to counteract anticipated windshear, so the 12 tons of pressure needed to trigger the sensor was not attained. The second condition was not met because the plane was hydroplaning on the wet runway. When the second wheel did make contact, at 1,525 m, the ground spoilers and engine thrust reversers activated – but the plane was already 125 m beyond the halfway point of the 2,800 m runway.

This illustrates that algorithms are sometimes designed without taking every eventuality into account. If a situation had never been encountered before, it could not be incorporated into the design, nor could test cases be created for it. The software performed as designed.

NB: That’s not to say that testers couldn’t have thought of a series of worst-case scenarios. Interviewing pilots and exploring real-world landing scenarios would have yielded cases with which to test how the system handles them.


The art of crowdsource-testing

Sometimes you buy a new app for your iPhone or Android device (or any device for that matter), run it for a while, then it crashes. I can think of one book library app I have on my iPhone which just crashes occasionally for no apparent reason. Crashing software is nothing new, of course. What is new on many mobile devices is the lack of notification – on computers you might get a nice message like “the application X has unexpectedly quit”. On mobile devices the app often just crashes. You run the program again, and it may be fine… or not. Let’s face it, not every app is well tested. Many rely on a process such as user-based crowdsource-testing. People buy the app, it crashes, they report when the crash occurred and what they were doing. When enough people report similar symptoms, the developers attempt to find the bug and fix it. The bug gets fixed, the app gets updated, and everyone is happy.

Why do the bugs occur in the first place? Because code is complex, and the more it has to do, the more complex it becomes. The more complex it becomes, the harder it is to test effectively. Some apps just crash because they are memory hogs. Others because they read in a spurious piece of input that they can’t deal with and baulk – in other words, there is little or no defensive programming going on. User-based crowdsource testing is an effective way for small companies to have their software tested in diverse, realistic situations to find the bugs that developers just can’t find.
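
As a toy illustration of the defensive-programming point above, consider an app reading a numeric field from user-supplied data (the scenario and function name here are hypothetical). The defensive version validates the input and degrades gracefully instead of crashing on a spurious value:

```python
from typing import Optional

def parse_page_count(raw: str) -> Optional[int]:
    """Parse a page count from untrusted input.

    Defensive version: a malformed or nonsensical value yields None
    rather than an unhandled exception that would crash the app.
    """
    try:
        value = int(raw.strip())
    except (ValueError, AttributeError):
        return None  # not a number at all
    return value if value > 0 else None  # reject impossible counts

print(parse_page_count("352"))   # 352
print(parse_page_count("n/a"))   # None
```

The naive version – `int(raw)` with no guard – is exactly the kind of code that works in every test the developer thought of and crashes on the first spurious record a real user feeds it.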

Let’s face it, software isn’t perfect. The closest thing to perfection may be the software that runs critical systems such as driverless trains – but that software is also less complex than some apps.

Is programming a craft?

There have been musings over the years as to whether programming is an art or a science. Is code a form of art? Art involves creative skill and imagination – does one need to be creative to write code? Some aspects of designing programs are inherently creative, such as interface design. However, programming involves solving problems, which is not art. Is it a science, then? Likely not. In the traditional sense of the word, science relates to the phenomena of the material universe – life science or physical science. The Science Council in the UK defines science as “the pursuit of knowledge and understanding of the natural and social world following a systematic methodology based on evidence”. That hardly describes programming. There may be elements of scientific thought in the problem-solving and algorithm-design aspects of programming, but programming is not a pure science.

It may, however, be a craft, which lies somewhere between an art, which relies on some form of talent, and a science, which relies on knowledge.

A craft is an activity involving skill in making things by hand. Programming as a craft makes sense because it is a combination of skill, experience, and the use of tools. The programming artisan selects tools appropriate to the task at hand and builds programs with them. Programmers are more like woodworkers than they are biologists or mathematicians. The only difference is that a woodworker produces a physical object, while the programmer produces a virtual entity. Like many artisans, programmers also use tools to make new tools.

There may also be some primitive form of engineering involved, but engineering involves the application of scientific and mathematical principles to practical ends. Not so with programming, because there is often no single prescribed way of solving a problem. We craft a solution based on our experience, the tools we know, and the information we can find.