In a previous post we looked at the fallacy that floating-point numbers are real numbers. So how does this affect calculations? The representational error introduced by storing certain decimal fractions leads to answers that are not quite what they should be. Consider the following Python loop, which sums the value 0.1 one thousand times.

sum = 0.0
for i in range(1000):
    sum = sum + 0.1

The answer should be 100, but is actually **99.9999999999986**. Write similar code using a **float** in C:

#include <stdio.h>

int main(void) {
    float sum = 0.0;
    int i;
    for (i = 1; i <= 1000; i = i + 1)
        sum = sum + 0.1;
    printf("%f\n", sum);
    return 0;
}

The answer is again a not-quite-what-you-expected **99.999046**. Change the **float** to a **double** and the problem seems to disappear. It has not actually gone away, though: a double carries roughly 15-16 significant decimal digits against a float's 7, so the remaining error is simply too small to show up in the six decimal places that `printf("%f", ...)` prints.
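When the accumulated error actually matters, Python's standard library offers tools that sidestep the repeated rounding. A minimal sketch using `math.fsum` (which tracks exact partial sums and rounds only once) and `decimal.Decimal` (which represents 0.1 exactly):

```python
import math
from decimal import Decimal

# Naive repeated addition: each += rounds, and the errors accumulate
s = 0.0
for _ in range(1000):
    s += 0.1
print(s == 100.0)   # False

# math.fsum keeps exact partial sums and performs a single final rounding
print(math.fsum([0.1] * 1000) == 100.0)   # True

# Decimal stores the literal "0.1" exactly, so the sum is exact
total = sum([Decimal("0.1")] * 1000)
print(total == Decimal("100"))   # True
```

`fsum` still returns an ordinary double, so it only removes the accumulation error; `Decimal` removes the representational error as well, at the cost of slower arithmetic.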
