Question:
When I use a small number for a long, like long long1 = 9797;, the value is accepted even without the L suffix. However, when I use a larger number (such as the type's minimum or maximum value, for example), the value is only accepted as a long if it has the L suffix; otherwise the compiler says it is out of range.
I first used a smaller value, which was accepted without problems and without a suffix, and then larger values, which were only accepted with L:
public class Dúvida_sobre_long {
    public static void main(String[] args) {
        long long1 = 9797;
        long long2 = 922337203685477807; // Compilation error on this line.
        long long3 = 922337203685477807L;
        System.out.println(long1);
        System.out.println(long2);
        System.out.println(long3);
    }
}
The compilation error is this:
The literal 922337203685477807 of type int is out of range.
I understand that without the L suffix the value is interpreted as an int, but why does this happen with larger values, since I declared the variable as long?
Answer:
Why? Because the language's creators decided it that way. There's no better explanation 🙂 It's in the specification.
In fact, what you're declaring there is an int literal that the compiler implicitly converts to long. It reserves 8 bytes, the size of a long, and stores an integer value that would only need 4 bytes; the rest is filled with zeros, so the result is the same. There's no execution cost; it's just something the compiler has to deal with when building the code.
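A minimal sketch of that rule, reusing the values from the question (the class and variable names here are made up for illustration): a literal that fits in an int is widened to long automatically, while a literal above the int range is rejected unless it carries the L suffix.
public class LongLiteralExample {
    public static void main(String[] args) {
        // 9797 fits in an int, so the int literal is widened to long automatically.
        long small = 9797;

        // 922337203685477807 does not fit in an int, and without the suffix it is
        // still parsed as an int literal, so this line would not compile:
        // long big = 922337203685477807;   // "The literal ... of type int is out of range"

        // With the L suffix the literal itself is a long, so it is accepted.
        long big = 922337203685477807L;

        System.out.println(small);
        System.out.println(big);
    }
}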
Could they have required the L suffix even for "low" values? Yes, they could, but they didn't; they thought it wasn't necessary. To me it's inconsistent, but that's how it is and you should follow these rules. If you want to be more consistent, put the L wherever you can, even when you don't need it.
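A sketch of that style, with made-up variable names, where the suffix is written even when it isn't required:
public class SuffixStyle {
    public static void main(String[] args) {
        long zero = 0L;                   // suffix not required here, used only for consistency
        long small = 9797L;               // the question's small value, now suffixed
        long max = 9223372036854775807L;  // Long.MAX_VALUE: here the suffix is mandatory
        System.out.println(zero + " " + small + " " + max);
    }
}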
Could they have stopped requiring the suffix on large values, at least in the declaration or in cases that are unambiguous, and inferred that it really is a long? They could. They probably didn't think the requirement was that bad. Maybe they thought it would add code to the compiler, and therefore some extra work at compilation time to handle this, which they didn't consider worth it.
I think that's a negligible cost compared to what the compiler already does, and it results in inconsistent behavior: sometimes the compiler needs to understand what lies ahead, sometimes it refuses to do so. But Java started out wanting low cost, without inferring anything. Today it already infers a few things.
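A couple of illustrative examples of inference that later Java versions do perform (the class name is made up; the literal still needs the suffix for var to infer long):
import java.util.ArrayList;
import java.util.List;

public class InferenceExamples {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(); // diamond operator: type argument inferred (Java 7+)
        var total = 9797L;                      // 'var': local variable type inferred as long (Java 10+)
        System.out.println(names.size() + " " + total);
    }
}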
I disagree with Victor Stafusa's conclusion, even though his answer is correct and very good. If they wanted to simplify the compiler, they would require the suffix in every long declaration. Today the compiler makes exceptions. But exceptions are not the end of the world. Local inference (from a code-analysis point of view) does not create great difficulties for the compiler; what is complicated is inferring from something that is far away. And something like inference is done in this case anyway: after all, the compiler has to decide that there is no ambiguity and accept the literal without the suffix. The compiler would be simpler if the suffix were always required, even for a 0L. Everything in his answer shows that they haven't simplified it that much. Either you really simplify, or you accept inference to solve everything; as it is, they have one foot in each canoe.
Part of what happens is also because Java was based on languages that handled it this way.