Is code optimization evil?

In Donald Knuth’s 1974 paper Structured Programming with go to Statements¹, he makes the following statement:

“There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.”

This was the early 1970s, a time when computers still had very little memory, so optimization was key to making the most of the available resources. Is code optimization still evil? Likely not as much as it once was, but in the age of mobile and embedded systems, which require efficient code, it may be as important as ever. For example, inefficient code can lead to premature battery drain on a mobile device. In code from inexperienced programmers, inefficiency often manifests itself in simple ways, such as redundancy due to the recomputation of common expressions, or loops containing loop-independent expressions. For example, take the quadratic formula, which might be transformed into the following statements in C:

r1 = (-b + sqrt(b * b - 4.0 * a * c)) / (2.0 * a);
r2 = (-b - sqrt(b * b - 4.0 * a * c)) / (2.0 * a);

The problem with this code is that each expression repeats work: the square root of the discriminant, √(b² − 4ac), and the subexpression 2a are both computed twice. This code would be more efficient if written as:

denom = 2.0 * a;
sdisc = sqrt(b * b - 4.0 * a * c);
r1 = (-b + sdisc) / denom;
r2 = (-b - sdisc) / denom;

As another example, consider the following piece of code:

x = 3.4;
for (i=0; i<100; i=i+1)
    y = y + a[i] * (x*x + 3.0*x + 2.0);

Here the expression x² + 3x + 2 is independent of the loop variable i, but is still recalculated on every one of the 100 iterations. This would be more efficient if replaced by:

x = 3.4;
z = (x*x + 3.0*x + 2.0);
for (i=0; i<100; i=i+1)
    y = y + a[i] * z;
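Wrapped up as a small function, the hoisted version looks like this (a sketch; the function name and parameters are illustrative):

```c
/* Sum the elements a[0..n-1], each scaled by the polynomial
 * x*x + 3x + 2. The polynomial does not depend on the loop
 * index, so it is hoisted out and computed once. */
double scaled_sum(const double *a, int n, double x)
{
    double z = x * x + 3.0 * x + 2.0;  /* loop-invariant */
    double y = 0.0;
    for (int i = 0; i < n; i = i + 1)
        y = y + a[i] * z;
    return y;
}
```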

It is sometimes the small things that make all the difference. Perhaps not so much in compiled languages such as C, where an optimizing compiler can often perform common-subexpression elimination and loop-invariant code motion automatically, but certainly in interpreted languages such as Python, which are more susceptible to slow code.

¹ ACM Computing Surveys, 6(4), p. 268 (1974)
