Why some testing is impossible

Lufthansa Flight 2904 was an Airbus A320-200 that overran the runway at Okęcie International Airport on September 14, 1993. It was a flight from Frankfurt, Germany to Warsaw, Poland. Because of incorrect weather information, the aircraft’s right landing gear touched down 770 m from the runway threshold. The left gear touched down 9 seconds later, 1,525 m from the threshold. Only when the left gear touched the runway did the ground spoilers and engine thrust reversers deploy.

The accident was partially attributable to the design of the software on board the aircraft. The landing system was designed to ensure that the thrust-reverse system and the spoilers are activated only in a landing situation; all of the following conditions had to be true for the software to deploy these systems:

1. there must be weight of over 12 tons on each main landing gear strut
2. the wheels of the plane must be turning faster than 72 knots (133 km/h)
3. the thrust levers must be in the idle (or reverse thrust) position

In the case of the Warsaw accident, neither of the first two conditions was fulfilled, so the braking system was not activated. The first condition was not met because the plane landed inclined, in order to counteract anticipated windshear, so the 12 tons of weight needed to trigger the sensor was not attained on each strut. The second condition was not met because the plane was hydroplaning on the wet runway. When the second wheel did make contact, at 1,525 m, the ground spoilers and engine thrust reversers activated – however the plane was already 125 m beyond the halfway point of the 2,800 m runway.
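The interlock described above can be sketched as a simple boolean check. This is a minimal illustration using the threshold values quoted in the post – it is not the actual Airbus flight software, and the function and variable names are my own:

```python
# Minimal sketch of the braking-interlock logic described in the post.
# Thresholds are the values quoted above; names are illustrative only,
# not the actual Airbus A320 flight software.

WEIGHT_THRESHOLD_TONS = 12     # required on each main landing gear strut
WHEEL_SPEED_THRESHOLD_KT = 72  # wheels must spin faster than this

def braking_allowed(left_strut_tons, right_strut_tons,
                    wheel_speed_kt, levers_idle_or_reverse):
    """True only when all three landing conditions hold simultaneously."""
    weight_ok = (left_strut_tons > WEIGHT_THRESHOLD_TONS and
                 right_strut_tons > WEIGHT_THRESHOLD_TONS)
    speed_ok = wheel_speed_kt > WHEEL_SPEED_THRESHOLD_KT
    return weight_ok and speed_ok and levers_idle_or_reverse

# Warsaw-style scenario: right gear down first, left strut nearly
# unloaded, wheels hydroplaning below 72 kt -> no spoilers, no reversers.
print(braking_allowed(1, 14, 50, True))   # → False
```

Plugging in Warsaw-style inputs – little weight on the left strut and hydroplaning wheels – the check returns False, so the spoilers and reversers stay retracted even though the aircraft is on the ground.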

This illustrates that algorithms are sometimes designed without taking every eventuality into account. If a situation has never been encountered before, it cannot be incorporated into the design, nor can test cases be created for it. The software performed as designed.

NB: That’s not to say that testers couldn’t have thought of a series of worst-case scenarios. Interview some pilots, explore some real-world landing scenarios, and use these to test how the system handles them.


3 thoughts on “Why some testing is impossible”

  1. webdevjourney says:

    Sorry to disagree, but it is not that the ‘testing [was] impossible’ in this situation; it just wasn’t a valid test based on the specification. If the spec had stated something that catered for this situation – say something like:
    ‘The thrusters should reverse when there is > 6 tons on one landing gear’ –
    then this would have been tested. I think you should rename this post to say “The dangers of poor specifications”.

  2. spqr says:

    Thanks for the comment. However, you presume that every test case is covered in the specification, which is extremely unlikely. Many testing strategies, such as black-box testing, work on the principle that test cases are built around the specifications and requirements. However, there are other testing strategies that aren’t bound by specifications, which can be poorly written or, in some cases, contain incorrect assumptions. There may be a difference between how a system is described on paper and how it is intended to behave, which can be expressed in the notions of verification and validation. Verification determines whether the software has been built properly according to the specifications. Validation evaluates software against user needs or simulated real-world conditions. To properly test software, one often has to think outside the box, using techniques such as exploratory testing or experience-based testing. Case in point – the iPhone 5s fingerprint scanner/verification software. Based on the requirements alone, the system probably works quite well. But I wonder if anyone thought to test the system with fake fingerprints, or to register a cat’s paw, or the heel of a palm.

  3. webdevjourney says:

    I’m sorry to disagree yet again, but the IEEE definition of validation is:
    ‘Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610]’
    Note the phrase ‘specified requirements’.
    If it’s not in the spec, we won’t test it. HOWEVER, any test analyst should have raised the issue on release of the spec that it does not cater for the situation where a plane lands on one gear only.
    At that point the test analyst should participate in the debate with the requirements owners and the budget holder as to how, or if, this situation should be catered for in the spec. If they agree that this situation is “worth” specifying in the requirements, then the spec will be revised and re-issued, the defect will be closed as fixed, and the scenario tested. If the requirements owners and budget holder decide, for example, that while it is a possible situation the cost-benefit analysis of adding it does not weigh up – or that adding this functionality means missing the deadline for delivery – then this scenario will not be added to the spec, the defect will be closed as ‘rejected’, and the tester will not test the situation, BECAUSE if it is not in the spec, then on finding a defect the tester will have nothing to cite against it [N.B. a defect should always cite the specific requirement that it falls foul of].

    The position that you seem to be taking is that a tester’s job is to test everything and anything they can think of at the end of the development cycle, once the software has been coded. This is not the case. Efficient testing is done throughout the lifecycle: issues like missing scenarios are picked up on review of the spec by the test analysts, and the debate as to whether they are included in the spec happens in the early stages; otherwise the cost and time to add these things at the end of the development lifecycle would be horrendous.

    On a different but similar thread:
    So – if we take your case of the Apple iPhone 5s fake-fingerprint or cat’s-paw scenarios – what monetary value do you think Apple would place on ensuring their software did not work with a fake fingerprint or a cat’s paw? What increase in market share would they achieve by including these restrictions in their requirements and coding for them? Would the benefits outweigh the costs? Or would the fact that, regardless of what it recognises, the same UNIQUE part must be touched against the sensor to unlock it – would that not be sufficient? You be the budget holder of that development and you decide! And if you were a tester in my team and started raising defects saying that it allowed you to register your cat’s paw as the fingerprint – when neither cats nor animals were mentioned in the spec – I’d be putting you on leave until you recovered.

    Please note that my statement above is not meant to be disrespectful to the person who made the cat’s-paw video – sure, it’s fun to discover that a cat’s paw can be the registered fingerprint – I might try it with my own cat.

    Finally, with all due respect, I don’t know if you work in testing, but often people see testing as something it’s not. It is about analysis, influencing, and assessing risk – the physical execution of tests is the end result, and it accounts for about 20% of the effort. The physical set of tests should be as small and tight as possible while ensuring full coverage, and each one should be tied back explicitly to a sentence, paragraph, or requirement number in the spec or design. Spurious negative testing should be minimal, since all of these possibilities should have been discussed on release of the specs/designs and dealt with then. Yes, testers have to be very creative and think out of the box – but they also have to understand the bigger picture: business drivers, costs, and benefits. Good testers improve the specs, which improves the code, which results in fewer defects.
