I am studying Silberschatz, Galvin and Gagne's Operating System Concepts.
On page 229, the book explains this about Peterson's Solution:
Because of the way modern computer architectures perform basic machine
language instructions, such as load and store, there are no guarantees
that Peterson’s solution will work correctly on such architectures.
I read this on Wikipedia and found what seems to be the closest thing to an explanation:
Most modern CPUs reorder memory accesses to improve execution efficiency. Such processors invariably give some way to force ordering in a stream of memory accesses, typically through a memory barrier instruction. Implementation of Peterson's and related algorithms on processors which reorder memory accesses generally requires use of such operations to work correctly to keep sequential operations from happening in an incorrect order. Note that reordering of memory accesses can happen even on processors that don't reorder instructions.
I can't understand what this means, nor the answer to it.
So why is Peterson’s solution not guaranteed to work on modern architectures?
On a system with a single CPU, Peterson's algorithm is guaranteed to work, because a CPU always observes its own memory operations in program order.
On a system with multiple CPUs, the algorithm may not work, because the order of the memory operations performed on one CPU may be perceived differently by another CPU.
This can lead to early entry into the critical section (while it is still being used by another thread) and early exit from the critical section (when the work on the shared resources has not yet been completed).
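To make this concrete, here is a minimal sketch of Peterson's entry and exit code for two threads, written with plain variables the way textbooks usually present it (the names enter_region, leave_region, flag and turn are only illustrative, not taken from the book):

```c
#include <stdbool.h>

/* Shared state for two threads, numbered 0 and 1. */
volatile bool flag[2] = { false, false };  /* flag[i]: thread i wants to enter */
volatile int  turn    = 0;                 /* which thread should yield        */

void enter_region(int i)                   /* i is 0 or 1 */
{
    int other = 1 - i;
    flag[i] = true;        /* (1) store: announce interest               */
    turn    = other;       /* (2) store: give the other thread priority  */
    while (flag[other] && turn == other)
        ;                  /* (3) load: wait while the other thread is in */
}

void leave_region(int i)
{
    flag[i] = false;       /* (4) store: leave the critical section      */
}
```

Note that volatile only stops the compiler from caching these variables in registers; it does nothing about the hardware. On x86, for example, the store in (1) can still sit in the store buffer while the load in (3) completes first, so both threads can read flag[other] as false and enter the critical section at the same time, which is exactly the early entry described above.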
Therefore, you need to make sure that the order of events inside a CPU looks the same from the outside. Memory barriers can ensure this serialization.
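For comparison, here is the same sketch written with C11 <stdatomic.h>, using the default sequentially consistent ordering so that the compiler emits the required memory barriers (again only an illustration of the idea, not code from the book):

```c
#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];   /* zero-initialized: neither thread is interested */
atomic_int  turn;

void enter_region(int i)
{
    int other = 1 - i;
    atomic_store(&flag[i], true);   /* seq_cst store */
    atomic_store(&turn, other);     /* seq_cst store */
    /* These seq_cst loads cannot be reordered before the stores above,
       so the two threads can no longer both see each other's flag as false. */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                           /* busy-wait */
}

void leave_region(int i)
{
    atomic_store(&flag[i], false);  /* release the critical section */
}
```

On x86 the compiler typically implements the sequentially consistent stores with a locked instruction or an mfence, which is the memory barrier instruction the Wikipedia passage is talking about.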