Often in the code of programs, numbers are written as a digit with a
.0 at the end. For instance:
double s = 1.0 / 6.0;
var d: Single; begin d := 16.0 + 1.0; end;
var t = 90.0;
Examples in other languages are also possible, but for a start I propose limiting ourselves to these three.
What is the point of adding .0 in these cases?
In what cases can .0 be omitted, and in what cases does omitting it change the program's behavior?
Generally speaking, adding .0 to a number changes its type, turning it from an integer into a floating-point number. This affects how the number is stored in memory, which operations can be performed on it, and how those operations behave.
In JS, adding .0 is completely meaningless, since all numbers there are floating-point to begin with.
In Delphi, there is also no need to manually convert integers to fractional ones: the compiler does this on its own where needed. Dividing two integers with / always returns a fractional number, which cannot be assigned to an integer variable by mistake. For the special cases where you need integer division, there is a dedicated div operator.
But in C and some other languages, the same division operator / is used for both integer and ordinary division, and it behaves differently depending on the type of the operands:
- if both operands are integers, the division is integer division and the result is an integer: double i = 5 / 2; will give 2.0 (the fractional part of the result is discarded, producing the integer 2, which is then converted to the target type double);
- if at least one of the operands is fractional, the result will be fractional: double i = 5 / 2.0; will give 2.5.
In C, instead of writing .0 after a number, you can explicitly state the floating-point type (double), i.e. perform a cast: double i = 5 / (double) 2; This construction is used when the operand is not a literal but a variable of an integer type:
int k = 2; double i = 5 / (double) k; // --> i = 2.5