What data type (double, float or decimal) should I use to represent currency in .NET with C#?

Question:

Although I have a sense of what would be best to use, I ask this question for didactic purposes, as I see many examples of people using double in C#. However, I've had problems with double in currency calculations, and I've read in several places that the type loses precision in some cases.

So in what situations would it be best to use each type?

Answer:

Decimal is the ideal type for monetary calculations. It has a huge range (-79,228,162,514,264,337,593,543,950,335 to 79,228,162,514,264,337,593,543,950,335) and, because it stores values in base 10 with 28-29 significant digits, it represents decimal fractions such as 0.01 exactly, giving it one of the smallest rounding errors of the three types.
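As a minimal sketch (the class and Main wrapper are just scaffolding for the example), summing amounts with decimal preserves the exact base-10 result:

```csharp
using System;

class DecimalExample
{
    static void Main()
    {
        // The 'm' suffix marks these as decimal literals.
        decimal price = 0.1m;
        decimal tax = 0.2m;

        // decimal stores base-10 digits, so the sum is exactly 0.3.
        Console.WriteLine(price + tax == 0.3m); // True
        Console.WriteLine(price + tax);         // 0.3
    }
}
```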

Double is best suited for general scientific calculations, where some rounding error is unavoidable but tolerable. Despite having a larger range of values, its binary (base-2) mantissa and exponent cannot represent most decimal fractions exactly, which produces the well-known rounding problems.
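A quick sketch of the classic symptom: 0.1 and 0.2 have no exact binary representation, so their sum as doubles is not exactly 0.3 (the printed value may vary slightly by runtime, but is typically as shown):

```csharp
using System;

class DoubleExample
{
    static void Main()
    {
        // Neither 0.1 nor 0.2 is exactly representable in base 2,
        // so each literal is already slightly off before the addition.
        double a = 0.1;
        double b = 0.2;

        Console.WriteLine(a + b == 0.3);          // False
        Console.WriteLine((a + b).ToString("R")); // 0.30000000000000004
    }
}
```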

Float is a Double with fewer bytes of representation (32 bits instead of 64), and therefore has a smaller range of values, less precision (roughly 7 significant decimal digits versus 15-17), and rounding issues similar to Double.
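To illustrate, a sketch of how float's ~7 significant digits can drop the cents from a large monetary amount (the exact formatting of the output depends on the runtime, but the loss of the fractional part is the point):

```csharp
using System;

class FloatExample
{
    static void Main()
    {
        // float keeps only ~7 significant decimal digits, so a large
        // balance is rounded to the nearest representable float and
        // the cents are lost.
        float balance = 123456789.12f;
        Console.WriteLine(balance); // prints something like 1.2345679E+08

        // double's ~15-17 digits still hold this value;
        // decimal would hold it exactly.
        double asDouble = 123456789.12;
        Console.WriteLine(asDouble); // 123456789.12
    }
}
```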
