Limit the digits of a repeating decimal in Python

Question:

Today I started using Python and I was wondering … if I write print(10/3), the answer I get is limited to

3.3333333333333335

because obviously you couldn't write an infinite number … But is there a way to change the number of digits that get written? For example, instead of writing 16 decimal places, write 20, or 3, or 45 (in short, that I can control the number of digits).

Answer:

The problem is not so much how many decimal places you want to display the number with (which you can control with format strings, like "{:.10f}".format(10/3) for example), as how precisely it is actually stored.
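
For instance, a quick sketch of that format mini-language; only the displayed digits change, the stored value does not:

>>> print("{:.3f}".format(10/3))
3.333
>>> print("{:.10f}".format(10/3))
3.3333333333
>>> print(f"{10/3:.3f}")    # f-strings accept the same format specification
3.333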

By default, "float" type numbers are stored in a format called IEEE-754, which uses 32 or 64 bits (depending on whether the precision is single or double) to approximate a real number. Python uses double precision. Naturally, with 64 bits a maximum of 2^64 different numbers can be represented, which falls infinitely short of the uncountable infinity of the real numbers. That is, there are infinitely many numbers that cannot be represented in this format.
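
If you are curious, you can inspect those 64 bits from Python itself; a small sketch using the standard sys and struct modules:

>>> import sys, struct
>>> sys.float_info.mant_dig          # bits of significand in a Python float (IEEE-754 double)
53
>>> sys.float_info.dig               # decimal digits that are always preserved
15
>>> struct.pack(">d", 10/3).hex()    # the 8 bytes (64 bits) in which 10/3 is actually stored
'400aaaaaaaaaaaab'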

When you work with very small numbers (of the type 0.0000000 … 01) you have very high precision, since numbers whose integer part is zero are represented in a special way in this format, being able to reach the order of 2^(-1074), that is, correct up to roughly the 324th decimal place. But as the numbers get larger, the precision drops, since the number of bits available to represent the digits is constant, regardless of where the decimal point falls.
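
A small sketch of that lower limit (the 2^(-1074) mentioned above is the smallest positive "subnormal" double):

>>> 2.0 ** -1074                 # smallest positive subnormal double
5e-324
>>> 2.0 ** -1074 / 2             # half of it is no longer representable and rounds to zero
0.0
>>> import sys
>>> sys.float_info.min           # smallest positive *normal* double, 2**(-1022)
2.2250738585072014e-308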

Thus, for numbers of the type 1.00000 … 01 (whose integer part is no longer zero), only about 16 decimal places are stored correctly, and as the integer part grows, the number of exact decimal places in the fractional part keeps shrinking.
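
You can watch that loss of precision directly with math.ulp (available since Python 3.9), which gives the gap between a float and the next representable one:

>>> import math
>>> math.ulp(1.0)                # gap between 1.0 and the next double
2.220446049250313e-16
>>> math.ulp(1000.0)             # the gap grows with the magnitude
1.1368683772161603e-13
>>> math.ulp(1e16)               # from here on not even the integer part is exact
2.0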

The result of 10/3 is stored correctly only up to the 15th decimal place; from there on it is already wrong, as you can see if you try to show it with 20 decimal places:

>>> print("{:.20f}".format(10/3))
3.33333333333333348136

In many scientific fields, the precision provided by the Python float type (equivalent to C's double) is sufficient. It is also common in science to want a lot of precision when working with small numbers while tolerating a larger absolute error with large numbers, so that the relative error stays roughly constant.
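
A rough sketch of that idea of constant relative error, comparing each float result against the exact fraction:

>>> from fractions import Fraction
>>> for x in (10, 10**8, 10**16):
...     exact = Fraction(x, 3)           # exact rational value of x/3
...     stored = Fraction(x / 3)         # the double actually obtained, converted exactly
...     print(x, "{:.1e}".format(float(abs(stored - exact) / exact)))
...
10 4.4e-17
100000000 3.7e-17
10000000000000000 5.0e-17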

However, if you want absolute control over precision, you should stop using the float type and switch to Decimal. With this type you have arbitrary precision, set by the programmer. Unfortunately it is more cumbersome to operate with. See, for example, how to calculate 10/3 with a precision of 30 correct significant digits:

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 30
>>> r = Decimal(10)/Decimal(3)
>>> print(r)
3.33333333333333333333333333333

Although with format you can reduce the number of decimal places displayed, if you try to increase it beyond what is actually stored (29 decimal places in this case, since one of the 30 significant digits is the integer part), you will expose the error:

>>> print("{:.10f}".format(r))
3.3333333333
>>> print("{:.35f}".format(r))
3.33333333333333333333333333333000000
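
If you need more correct digits, the way to go is to raise the context precision and recompute; a minimal sketch (just formatting with more places, as above, adds no information):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 50
>>> r = Decimal(10) / Decimal(3)         # recompute: raising prec does not re-evaluate old results
>>> print("{:.35f}".format(r))
3.33333333333333333333333333333333333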

Finally, note that you also have the Fraction type, which does not try to perform the division but stores the numerator and denominator separately. This avoids rounding errors while you keep working with fractions (although they will appear again when you convert the result to float):

>>> from fractions import Fraction
>>> f = Fraction(10,3)
>>> print(f)
10/3
>>> print(float(f))
3.3333333333333335
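
To illustrate the first part of that (exactness while you stay within fractions), a tiny sketch with the classic 0.1 + 0.2 case:

>>> from fractions import Fraction
>>> 1/10 + 2/10 == 3/10                                   # float arithmetic: rounding creeps in
False
>>> Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)  # exact as long as everything is a Fraction
True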