Regarding concurrency vs. parallelism, here is something that happened not long ago. A group of colleagues went to FamilyMart to buy lunch and brought it back. The office has only one microwave for heating food, while FamilyMart has two. In other words, everyone going out to buy lunch together is a concurrent program; at FamilyMart the lunches can be heated in parallel, but back at the office, with only one microwave (a single core), they can only be heated one by one, in sequence.
Serial execution
Expand a range into an array and find the index of a specific number:
```ruby
# sequential.rb
range = 0..100_000_000
number = 99_999_999

puts range.to_a.index(number)
```

```
❯ time ruby sequential.rb
99999999
ruby sequential.rb  4.04s user 0.56s system 98% cpu 4.667 total
```
This code takes about 5s to run, using ~99% of a single CPU core.
Parallel execution
By splitting the range and using the `fork` method from the Ruby standard library, we can run the code above in multiple processes.
On a multi-core CPU, because the range is split in half, processing is also roughly twice as fast (the actual numbers will vary a bit). `Process.wait` waits for a child process to complete; `Process.waitall` waits for all of them.
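A minimal sketch of this approach (with much smaller numbers than above, for brevity; `Process.waitall` is used here to wait for every child):

```ruby
# parallel.rb — split the range in half and search each half in its own process
number = 999
halves = [0...500, 500..1_000]

halves.each do |range|
  fork do
    # each child searches its half independently; memory is not shared
    index = range.to_a.index(number)
    puts "#{range} => #{index}" if index
  end
end

Process.waitall  # block until every child process has exited
```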
Because of the GIL (Global Interpreter Lock), the only way for Ruby MRI to make use of multiple CPU cores is multiple processes.
Unicorn forks the Rails master process to spawn multiple workers that handle HTTP requests.
The biggest advantage of this multi-process approach is that it makes full use of multi-core CPUs, and since memory is not shared between processes, debugging is easier. The downside of not sharing memory is higher memory consumption. However, starting from Ruby 2.0, `fork` can take advantage of the operating system's COW (Copy On Write) feature, so forked processes share memory data and only copy it when it changes.
Threads
Starting from Ruby 1.9, threads are implemented as native threads scheduled by the operating system. But because of the GIL, only one thread in a Ruby process can hold the GIL and run at any given moment. The GIL exists for thread safety: to prevent race conditions, implementing a GIL is easier than making every data structure thread-safe.
Even so, the GIL cannot completely prevent race conditions =. =
Race Condition
```ruby
# threads.rb
@executed = false

def ensure_executed
  unless @executed
    puts "executing!"
    @executed = true
  end
end

threads = 10.times.map { Thread.new { ensure_executed } }
# the main thread waits for the child threads to finish before continuing,
# e.g. before a `p 'done'` placed after this line
threads.each(&:join)
```

```
❯ ruby threads.rb
executing!
executing!
executing!
executing!
```
Because all threads share the one `@executed` variable, and the read (`unless @executed`) and write (`@executed = true`) operations are not atomic, by the time one thread has read the value of `@executed`, another thread may already have rewritten it.
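One way to make the check-and-set atomic is to guard it with a `Mutex` — a sketch of fixing the example above:

```ruby
# threads_mutex.rb — same example, but the read-then-write is now atomic
@executed = false
@mutex = Mutex.new

def ensure_executed
  # only one thread at a time can be inside this critical section
  @mutex.synchronize do
    unless @executed
      puts "executing!"   # now prints exactly once
      @executed = true
    end
  end
end

threads = 10.times.map { Thread.new { ensure_executed } }
threads.each(&:join)
```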
GIL and Blocking I/O
If the GIL doesn't allow multiple threads to run at the same time, what is the point of multiple threads? There is in fact a clever part: when a thread hits blocking I/O — an HTTP request, a database query, disk reads and writes, even `sleep` — it releases the GIL to the other threads.
Ten threads each executing a `sleep` do not need 10s in total: a sleeping thread hands its execution right over to the others. Compared with processes, threads are lighter — you can even run thousands of them — which makes them very useful for handling blocking I/O. However, you have to watch out for race conditions, and if you use a mutex to avoid them, you then have to watch out for deadlocks. In addition, switching between threads has its own cost, so with too many threads, time is wasted on the switching itself.
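A quick sketch of this: ten threads that each sleep for one second finish in roughly one second overall, because a blocked thread releases the GIL.

```ruby
# ten 1-second sleeps running concurrently take ~1s, not 10s
require "benchmark"

elapsed = Benchmark.realtime do
  threads = 10.times.map { Thread.new { sleep 1 } }
  threads.each(&:join)
end
puts format("total: %.2fs", elapsed)
```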
Puma allows multiple threads in each process, each process having its own thread pool. Most of the time you will not run into the race condition described above, because each HTTP request is processed in a different thread.
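As an illustration, a `config/puma.rb` might combine both models — the numbers here are hypothetical, not a recommendation:

```ruby
# config/puma.rb — illustrative values only
workers 2        # forked worker processes, to use multiple cores
threads 1, 16    # min/max size of each worker's thread pool
```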
EventMachine
EventMachine (EM) is a gem that provides event-driven I/O based on the Reactor design pattern. One advantage of EventMachine is that when handling a large number of I/O operations, there is no need to manage threads manually: EM can process multiple HTTP requests in a single thread.
`EM.system` simulates an I/O operation and takes a block as its callback: while waiting for the callback to fire, EM can keep responding to other operations (events). However, in a complex system you have to handle both success and failure callbacks, and further events and callbacks may be nested inside those callbacks, so it is easy to fall into Callback Hell.
Fiber
I use it so rarely that I keep learning the syntax and then forgetting it, learning it and forgetting it =. =
Fiber is a lightweight execution unit introduced in Ruby 1.9. Like a thread, it can suspend and resume running code, but the biggest difference is that threads are scheduled by the operating system, while fibers are scheduled by the programmer.
```ruby
# fiber.rb
fiber = Fiber.new do
  # 3. Fiber.yield hands execution back, returning 1 (to the first resume)
  Fiber.yield 1
  # 5. when the block finishes, 2 is returned
  2
end

# 1. after the fiber is created, the code inside is not executed yet
# 2. it only runs once resume is called
puts fiber.resume
# 4. continue from where the fiber yielded
puts fiber.resume
# 6. the fiber is finished ("dead"); calling resume again raises an error
puts fiber.resume
```

```
❯ ruby fiber.rb
1
2
dead fiber called (FiberError)
```
Fiber combined with EventMachine can avoid Callback Hell.
But if blocking I/O is performed inside a Fiber, the whole thread is blocked by that fiber — again because of the GIL — so by itself Fiber does not really solve the concurrency problem. And I expect I will forget Fiber's syntax again anyway =. =
Celluloid
Celluloid borrows from Erlang and brings a similar Actor model to Ruby. Every class that includes `Celluloid` becomes an Actor with its own execution thread.
```ruby
class Sheen
  include Celluloid

  def initialize(name)
    @name = name
  end

  def set_status(status)
    @status = status
  end

  def report
    "#{@name} is #{@status}"
  end
end
```

```
irb(main):009:0> charlie = Sheen.new "Charlie Sheen"
=> #
irb(main):010:0> charlie.set_status "winning!"
=> "winning!"
irb(main):011:0> charlie.report
=> "Charlie Sheen is winning!"
irb(main):012:0> charlie.async.set_status "asynchronously winning!"
=> #
irb(main):013:0> charlie.report
=> "Charlie Sheen is asynchronously winning!"
```
A `Celluloid::Proxy::Async` object intercepts the method call and appends it to the actor's call queue, so the program can continue (asynchronously) without waiting for a response. Each actor has its own call queue and executes the queued calls one by one, in order.
The biggest difference from Erlang's Actor model is that Erlang variables are immutable, while Ruby has no such restriction, so a message (object) may be modified while it is being passed around — unless you `freeze` it?
```ruby
EventMachine.run do
  page = EM::HttpRequest.new('https://google.ca/').get
  page.errback { puts "Google is down" }
  page.callback {
    url = 'https://google.ca/search?q=universe.com'
    about = EM::HttpRequest.new(url).get
    about.errback  { ... }
    about.callback { ... }
  }
end
```
```ruby
EventMachine.run do
  Fiber.new {
    page = http_get('http://www.google.com/')
    if page.response_header.status == 200
      about = http_get('https://google.ca/search?q=universe.com')
      # ...
    else
      puts "Google is down"
    end
  }.resume
end

def http_get(url)
  current_fiber = Fiber.current
  http = EM::HttpRequest.new(url).get
  http.callback { current_fiber.resume(http) }
  http.errback  { current_fiber.resume(http) }
  Fiber.yield
end
```