linux – Interprocess shared memory in multi-core CPU

Question:

I have a question about interprocess shared memory in a multi-core CPU environment.

First, a multi-core CPU is structured roughly like this:

    +----------+ +----------+
    |  Core 1  | |  Core 2  |
    +----------+ +----------+
    |  Cache   | |  Cache   |
    +----------+-+----------+
    |         Memory        |
    +-----------------------+

That is, each core has its own cache.

Now consider shared memory between two applications (App1 and App2).
Suppose App1 runs on core 1 and App2 runs on core 2.

If App1 naively writes to the shared memory,
App2 may not be able to read what was written.

What App1 wrote may still be sitting in core 1's cache,
and even once it reaches memory, core 2's cache may not have been refilled from memory.

There seem to be two possible ways to deal with this:
1. Bypass the cache: write to memory without going through the cache, and read from memory without going through the cache.
2. Go through the cache: write via the cache, flush from cache to memory in chunks,
and refill the cache from memory before reading.

Measure 1 is slow but requires no extra thought.
Measure 2 is fast but requires explicit cache operations.

Question:

What is the actual situation on Windows and Linux?

Measure 1 on Windows

Regarding the flProtect argument of CreateFileMapping():

A. Memory is accessed directly, bypassing the cache, only when SEC_NOCACHE is specified.
B. Memory is accessed directly, bypassing the cache, even when SEC_NOCACHE is not specified.
C. Something else.

Which is it?

Measure 2 on Windows

If SEC_NOCACHE is not specified in the flProtect argument of CreateFileMapping(), does access go through the cache?
And is FlushViewOfFile() what you use to write from the cache to memory
and to refill the cache from memory?

Measure 1 on Linux

Is access to an address returned by mmap() (e.g. on a descriptor from shm_open()) always written to memory without going through the cache, and read from memory without going through the cache?

Measure 2 on Linux

Does access to an address returned by mmap() (e.g. on a descriptor from shm_open()) always go through the cache,
with msync() used to write from the cache to memory and to refill the cache from memory?

Answer:

CPU caches are kept consistent by cache coherency hardware, so you do not have to worry about this.
For example, with an invalidation-based protocol: when App1 writes to core 1's cache, core 1's cache notifies core 2's cache that the value at that address has been updated. In response, core 2 marks its cached copy of that address as invalid. When App2 then tries to read that address on core 2, the access is treated as a cache miss and the latest value is read from memory again.


Judging from the question and comments, you seem to regard shared memory as something special, but that is a fundamental misunderstanding.
Multithreading is the norm in modern applications. Threads are units of execution that share memory within a process, so a multithreaded program is always operating on shared memory. Both processors and operating systems are therefore designed with multithreading and shared memory in mind.
