Locking mechanism for shared memory consistency

I am developing a mechanism for exchanging data between two or more processes using shared memory on Linux. The problem is that some degree of concurrency control is required to maintain the integrity of the shared memory itself, and since any of my processes may be killed or crash at any moment, common locking mechanisms don't work: they can leave the memory in a "locked" state when the holder dies, so every other process then waits forever for a lock that will never be released.

So, after doing some research, I found that System V semaphores have a flag named SEM_UNDO, which can restore the locked state when a program fails, but this is not guaranteed to work. Another option is to monitor the PIDs of all processes that may use the shared memory and take some recovery action when something abnormal happens, but I am not sure this is the right way to solve my problem.

Any ideas?

Edit: To clarify, our application needs an IPC mechanism with minimal latency, so I am open to any mechanism that can satisfy that requirement.


I would like to know what source you are using for the claim that SEM_UNDO is not guaranteed to work; I have not heard that before. I do seem to remember reading an article claiming that Linux's SysV IPC was generally broken, but that was a long time ago. I wonder if your information is just an artifact of the past.

Another thing to consider (if I remember correctly) is that a SysV semaphore can tell you the PID of the last process that performed an operation on it. If things hang, you should be able to query the semaphore to see whether the process holding the lock still exists. Since any process (not just the one holding the lock) can fiddle with the semaphore, you can recover control this way.

Finally, I would suggest message queues. They may not fit your speed requirements, but they are usually not much slower than shared memory. Essentially, everything you would otherwise have to do manually on top of shared memory, the operating system does for you. With synchronization, atomicity, and ease of use handled by a free, thoroughly tested mechanism, you get nearly the same speed.

