Question:
I write the number 0xBCA8F85D into a float myVar. If I convert this number to decimal on a calculator I get 3165190237, but printf("%f", myVar); prints 3165190144. Please explain why this happens.
Answer:
The IEEE-754 standard describes a 32-bit floating-point number as containing 1 sign bit, 8 exponent bits, and 23 mantissa bits. In particular, this means that the precision of a stored number is "only" about 7 significant decimal digits, while the representable range reaches about 10^38.
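To see this concretely, here is a minimal C sketch of the situation from the question (the memcpy is just one portable way to inspect the stored bit pattern; the output format is my own choice):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float myVar = 0xBCA8F85D;  /* 3165190237 cannot be stored exactly */
    printf("%f\n", myVar);     /* prints 3165190144.000000 */

    /* Inspect the 1 + 8 + 23 bit layout of what was actually stored
       (the exponent field is biased by 127). */
    uint32_t bits;
    memcpy(&bits, &myVar, sizeof bits);
    printf("sign=%u exponent=%u mantissa=0x%06X\n",
           (unsigned)(bits >> 31),
           (unsigned)((bits >> 23) & 0xFF),
           (unsigned)(bits & 0x7FFFFF));
    return 0;
}
```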
When storing a number, the mantissa field gives you 23 bits for the significand (a value from 1.0 to 2.0), and the exponent field gives you 8 bits for the power of two. The exponent is chosen so that the mantissa lands in that range, which keeps its precision as high as possible. Let's take a simple example with a 2-bit mantissa: the mantissa runs from 1 to 2 in steps of 0.25 (1.0, 1.25, 1.5, 1.75), and the exponent is a small power of two (1/8, 1/4, 1/2, 1, 2, 4, ...). How can we write 0.19 into such a number? We can't; the best we can do is the closest representable value, 1.5 × 1/8 = 0.1875. Its nearest neighbors are 1.25 × 1/8 = 0.15625 and 1.75 × 1/8 = 0.21875. Whatever we enter will be "rounded" to the nearest representable number in our toy "float5" (see the sketch below).
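Here is a sketch of that toy format; the exponent range -4..2 is an arbitrary assumption, just wide enough to bracket 0.19:

```c
#include <stdio.h>
#include <math.h>

/* Toy "float5": 2 mantissa bits (1.00, 1.25, 1.50, 1.75) times a
   power of two. The exponent range is an arbitrary assumption. */
int main(void) {
    const double target = 0.19;
    double best = 0.0;
    for (int e = -4; e <= 2; e++) {
        for (int m = 0; m < 4; m++) {
            double v = (1.0 + 0.25 * m) * pow(2.0, e);
            if (fabs(v - target) < fabs(best - target))
                best = v;  /* keep the closest representable value */
        }
    }
    printf("0.19 is stored as %g\n", best);  /* prints 0.1875 */
    return 0;
}
```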
Accordingly, in your case you can only rely on roughly the first 7 digits, 3165190***; the last three come out however the rounding lands. Near 3·10^9 the exponent is 31, so adjacent floats are 2^(31-23) = 256 apart, and 3165190237 rounds to the nearest multiple of 256, which is 3165190144.
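You can check that spacing directly with nextafterf (a quick sketch):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    float f = 3165190237u;                      /* stored as 3165190144 */
    printf("%.0f\n", f);                        /* 3165190144 */
    printf("%.0f\n", nextafterf(f, INFINITY));  /* 3165190400, 256 above */
    printf("%.0f\n", nextafterf(f, 0.0f));      /* 3165189888, 256 below */
    return 0;
}
```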
Why do we need such a type if it cannot even hold an int exactly? Because that is precisely its strength. Usually, when working with numbers, small values don't matter against a background of large ones: if you work with billions, you don't care about single units; if you work with millimeters, nanometers are irrelevant. A float keeps the relative error roughly constant, about 7 significant digits, across its entire range, instead of spending bits on absolute precision you don't need.
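A quick illustration of that trade-off (any value around a billion behaves the same way):

```c
#include <stdio.h>

int main(void) {
    float billion = 1000000000.0f;  /* exactly representable in a float */
    float sum = billion + 1.0f;     /* the added 1 is below the 64-unit
                                       spacing of floats at this magnitude */
    printf("%d\n", sum == billion); /* prints 1: the single unit was lost */
    return 0;
}
```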