Question:
I'm not facing any actual problem; I'm just curious, so here is my question.
The actual types behind int_fast16_t / int_fast32_t differ between Windows and Linux. Which is actually faster on x64: 32-bit or 64-bit integer operations?
(I don't understand what Note 5 on the Wikipedia page is using as its basis for comparison.)
             | Linux(64)      | Windows(64)    | FreeBSD(64)
-------------+----------------+----------------+-------------
int_fast8_t  | signed char(8) | signed char(8) | int(32)
int_fast16_t | long(64)       | int(32)        | int(32)
int_fast32_t | long(64)       | int(32)        | int(32)
int_fast64_t | long(64)       | long long(64)  | long(64)
The following program checks what the int_fastN_t types actually are:
#include <climits>
#include <cstdint>
#include <type_traits>
#include <iostream>
#include <sstream>

using namespace std;

// Report which fundamental type d_type is an alias of, with its width in bits.
#define PRINT_SAME_TYPE(d_type) \
    do { \
        ostringstream oss; \
        oss << #d_type << " = "; \
        if ( is_same<d_type, signed char>::value ) { \
            oss << "signed char(" << CHAR_BIT << ")"; \
        } \
        else if ( is_same<d_type, short>::value ) { \
            oss << "short(" << sizeof(short) * CHAR_BIT << ")"; \
        } \
        else if ( is_same<d_type, int>::value ) { \
            oss << "int(" << sizeof(int) * CHAR_BIT << ")"; \
        } \
        else if ( is_same<d_type, long>::value ) { \
            oss << "long(" << sizeof(long) * CHAR_BIT << ")"; \
        } \
        else if ( is_same<d_type, long long>::value ) { \
            oss << "long long(" << sizeof(long long) * CHAR_BIT << ")"; \
        } \
        else { \
            oss << "unknown"; \
        } \
        cout << oss.str() << '\n'; \
    } while ( false )

int main()
{
    // Sizes of the fundamental types, in bits.
    cout << "sizeof(char) = " << CHAR_BIT << '\n';
    cout << "sizeof(short) = " << sizeof(short) * CHAR_BIT << '\n';
    cout << "sizeof(int) = " << sizeof(int) * CHAR_BIT << '\n';
    cout << "sizeof(long) = " << sizeof(long) * CHAR_BIT << '\n';
    cout << "sizeof(long long) = " << sizeof(long long) * CHAR_BIT << '\n';
    cout << "sizeof(void *) = " << sizeof(void *) * CHAR_BIT << '\n';

    PRINT_SAME_TYPE(int_fast8_t);
    PRINT_SAME_TYPE(int_fast16_t);
    PRINT_SAME_TYPE(int_fast32_t);
    PRINT_SAME_TYPE(int_fast64_t);

    PRINT_SAME_TYPE(int_least8_t);
    PRINT_SAME_TYPE(int_least16_t);
    PRINT_SAME_TYPE(int_least32_t);
    PRINT_SAME_TYPE(int_least64_t);
}
Execution result on Linux (64):
sizeof(char) = 8
sizeof(short) = 16
sizeof(int) = 32
sizeof(long) = 64
sizeof(long long) = 64
sizeof(void *) = 64
int_fast8_t = signed char(8)
int_fast16_t = long(64)
int_fast32_t = long(64)
int_fast64_t = long(64)
int_least8_t = signed char(8)
int_least16_t = short(16)
int_least32_t = int(32)
int_least64_t = long(64)
Execution result on Windows (64):
sizeof(char) = 8
sizeof(short) = 16
sizeof(int) = 32
sizeof(long) = 32
sizeof(long long) = 64
sizeof(void *) = 64
int_fast8_t = signed char(8)
int_fast16_t = int(32)
int_fast32_t = int(32)
int_fast64_t = long long(64)
int_least8_t = signed char(8)
int_least16_t = short(16)
int_least32_t = int(32)
int_least64_t = long long(64)
Code that needs integers of at least 16 bits has traditionally just used int, but in the era of 64-bit machines int no longer seems to be "the natural word size of the machine", so I've been wondering what to use instead of int from now on.
When I checked FreeBSD, which uses the LP64 data model just like Linux, the results differed from both Linux and Windows, so the choice doesn't seem to be determined by the data model alone.
Execution result on FreeBSD (64):
sizeof(char) = 8
sizeof(short) = 16
sizeof(int) = 32
sizeof(long) = 64
sizeof(long long) = 64
sizeof(void *) = 64
int_fast8_t = int(32)
int_fast16_t = int(32)
int_fast32_t = int(32)
int_fast64_t = long(64)
int_least8_t = signed char(8)
int_least16_t = short(16)
int_least32_t = int(32)
int_least64_t = long(64)
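Incidentally, the mapping is not just academic: the fast types change the in-memory footprint. Here is a minimal sketch (the array length of 1,000,000 is just an arbitrary example I picked), using the mappings printed above:
#include <cstdint>
#include <iostream>

int main()
{
    // With the mappings above, int_fast32_t is 64 bits on Linux but 32 bits
    // on Windows, so this array occupies 8,000,000 bytes on one platform
    // and 4,000,000 bytes on the other.
    static int_fast32_t fast_table[1000000];
    std::cout << "sizeof(fast_table) = " << sizeof(fast_table) << " bytes\n";

    // int32_t / int_least32_t keep the in-memory size the same everywhere.
    static int32_t fixed_table[1000000];
    std::cout << "sizeof(fixed_table) = " << sizeof(fixed_table) << " bytes\n";
}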
Answer:
Intel's Intel® 64 and IA-32 Architectures Optimization Reference Manual offers some explanation in Chapter 9, "64-Bit Mode Coding Guidelines":
For most instructions, the default operand size is 32 bits.
Assembly / Compiler Coding Rule 65 (impact H, generality M). In 64-bit mode, use 32-bit instructions to reduce code size unless 64-bit instructions are required to access 64-bit data or additional registers.
As you can see, 32-bit operands are effectively the default even in 64-bit mode. To operate on 64-bit operands, an instruction needs an extra one-byte REX.W prefix, which makes the code larger and reduces instruction-cache efficiency.
For the in-depth story, read that manual, and that chapter in particular.
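As a rough illustration of the code-size difference (the exact instructions depend on the compiler and options; this is the kind of output you would typically see from GCC or Clang at -O2 on x86-64, e.g. on Compiler Explorer):
#include <cstdint>
#include <iostream>

// 32-bit addition: e.g. "add edi, esi" encodes as 01 F7 -- two bytes, no prefix.
int32_t add32(int32_t a, int32_t b) { return a + b; }

// The same addition on 64-bit operands needs a REX.W prefix:
// "add rdi, rsi" encodes as 48 01 F7 -- one byte longer.
int64_t add64(int64_t a, int64_t b) { return a + b; }

int main()
{
    std::cout << add32(1, 2) << ' ' << add64(1, 2) << '\n';
}
You can confirm the encodings yourself with "g++ -O2 -c" followed by "objdump -d".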
Why, then, do Linux toolchains (both Clang and GCC) define int_fast16_t / int_fast32_t as long (64 bits)?
Unfortunately, the compiler has no choice here; it simply follows the sizes that the platform has decided.
I'm not sure where it was originally decided, but LSB 3.2, set by the Linux Standard Base, defines int_fast16_t = int_fast32_t = 64 bits for the AMD64 architecture.
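Concretely, on a Linux system the typedefs come from glibc's <stdint.h>, which switches on __WORDSIZE. Roughly like the following simplified excerpt (reconstructed from memory of the header, so the exact layout may differ between glibc versions):
/* Fast types: simplified excerpt of the logic in glibc's <stdint.h>. */
typedef signed char       int_fast8_t;
#if __WORDSIZE == 64            /* LP64: x86-64 Linux */
typedef long int          int_fast16_t;
typedef long int          int_fast32_t;
typedef long int          int_fast64_t;
#else                           /* 32-bit targets */
typedef int               int_fast16_t;
typedef int               int_fast32_t;
typedef long long int     int_fast64_t;
#endif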