## Question:

The task is as follows:

Read a number a and print the sum of the squares of all numbers from 1 to a in steps of 0.1: (1^2 + 1.1^2 + 1.2^2 + … + a^2)

The solution algorithm is as follows:

```
result := 0;
while a >= 1 do begin
  result := result + a * a;
  a := a - 0.1;
end;
```

At the end of the program, the result should be:

```
For a = 1.1: result = 2.21
For a = 3:   result = 91.7
```

In either case, at the end of the program the variable `a` must equal `0.9`.

The problem is this: for `a = 1.1` everything is correct, but starting from `1.2` the result is 1 less than it should be (for `a = 3`, `result = 90.7`), and at the end of execution `a = 0.9(9)`.

If we modify the algorithm so that `a` is multiplied by 10 right after input, and in the loop we square the expression `a / 10` instead of `a` itself (the second line of the loop then becomes `a := a - 1`), everything starts to work correctly.

What could be the problem?

## Answer:

The problem is that floating-point numbers are stored in binary. The decimal value 0.1 cannot be represented by a finite binary fraction: it equals 0.0001100110011…_2, or 0.0(0011)_2 in the standard notation for repeating fractions.

Double-precision floating-point numbers carry 53 significant binary digits; the infinite tail of the fraction is rounded off. As a result, instead of the exact value 0.1 the variable holds the number 0.1000000000000000055511151231257827021181583404541015625. If you print it with modest precision you will see 0.1, but that is only output rounding: printed with 17 significant digits it is 0.10000000000000001, and that excess takes part in every calculation. So if you subtract 0.1 from 1.2 twice, you get a number slightly less than 1: the condition `a >= 1` fails one iteration early, and the term 1^2 = 1 is missing from the sum.
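This is easy to observe directly; a minimal sketch in Python (illustration only, since the question's code is Pascal-like):

```python
# The double nearest to 0.1, printed with full precision:
print(f"{0.1:.55f}")
# 0.1000000000000000055511151231257827021181583404541015625

# Two subtractions of 0.1 already drop the value below 1,
# so the loop's a >= 1 test fails one iteration too early:
a = 1.2 - 0.1 - 0.1
print(a, a >= 1)  # 0.9999999999999999 False
```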

There is no simple universal solution to such problems. The simplest approach in your case is to emulate fixed-point arithmetic, which is essentially what you did.

You can simplify the calculation by summing the squares of integers and dividing the result by 100:

a^2 + (a + 0.1)^2 + (a + 0.2)^2 + … = ((10a)^2 + (10a + 1)^2 + (10a + 2)^2 + …) / 100
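That integer-counting idea can be sketched as follows; Python is used for illustration and `sum_of_squares_fixed` is a hypothetical helper name, not from the original post:

```python
def sum_of_squares_fixed(a):
    """Count down in integer tenths so every loop step is exact."""
    n = round(a * 10)        # a in tenths: 3 -> 30, 1.1 -> 11
    total = 0                # accumulates (10a)^2 + (10a - 1)^2 + ... + 10^2
    while n >= 10:           # n >= 10 corresponds to a >= 1
        total += n * n       # exact integer arithmetic, no drift
        n -= 1               # corresponds to a := a - 0.1
    return total / 100       # a single float division at the very end

print(sum_of_squares_fixed(3))    # 91.7
print(sum_of_squares_fixed(1.1))  # 2.21
```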

You can use libraries that implement rational numbers, decimal numbers (like the `decimal` type in C#), or arbitrary-precision arithmetic.
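As an illustration in Python, the standard `decimal` module plays the same role as C#'s `decimal` type: 0.1 is stored exactly in base 10, so the original countdown loop works unchanged (`sum_of_squares_decimal` is a hypothetical helper name):

```python
from decimal import Decimal

def sum_of_squares_decimal(a):
    """The original countdown loop, but in exact base-10 arithmetic."""
    a = Decimal(a)                 # e.g. "1.1" -> exactly 1.1
    result = Decimal(0)
    while a >= 1:
        result += a * a            # no binary rounding of 0.1 anywhere
        a -= Decimal("0.1")        # exact: 1.2 - 0.1 - 0.1 is exactly 1.0
    return result, a               # a ends at exactly 0.9, as required

total, a = sum_of_squares_decimal("3")
print(total, a)  # 91.70 0.9
```

Note that constructing `Decimal` from the string `"0.1"` (not the float `0.1`) is what keeps the arithmetic exact.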