Everything starts from the state changes of a task (a process or a thread)
Computers evolved from running a single program to running multiple programs, and from batch systems to time-sharing systems, all in order to use the machine more efficiently. Consider how this works on a single core: each task is assigned a time slice, and when the slice expires, a timer interrupt notifies the system so that the CPU can switch to another task. Because each time slice is very short, it feels as if all the tasks are running at the same time, even though they actually execute one after another in turn. We call this phenomenon concurrency.

If, during its time slice, a task issues a system call that blocks, all the information about the current task is saved and the task is placed in a waiting queue; its state is now blocked. The scheduler then picks another task and gives it the CPU. Meanwhile, the system call issued by the blocked task moves its data via DMA, and when the transfer completes, an I/O interrupt notifies the operating system. The operating system can then resume the task from the point where it was suspended.
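The round-robin time slicing described above can be sketched as a toy simulation. This is only an illustration: the task names and slice counts below are made up, and a real scheduler is driven by timer interrupts rather than a loop.

```javascript
// Toy round-robin scheduler: each task runs for one time slice, is
// preempted, and goes to the back of the queue until it has used all
// the slices it needs. Task names and slice counts are illustrative.
function roundRobin(tasks) {
  const timeline = [];
  const queue = tasks.map(t => ({ ...t }));
  while (queue.length > 0) {
    const task = queue.shift();               // task at the head gets the CPU
    timeline.push(task.name);                 // ...for one time slice
    task.remaining -= 1;
    if (task.remaining > 0) queue.push(task); // preempted: back of the queue
  }
  return timeline;
}

const timeline = roundRobin([
  { name: 'A', remaining: 2 },
  { name: 'B', remaining: 1 },
  { name: 'C', remaining: 2 },
]);
console.log(timeline.join(' ')); // "A B C A C" — tasks interleave on one core
```

Because the slices interleave, A, B, and C all appear to make progress "at the same time" even though only one runs at any instant.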
The following is the task status flow chart:
Concurrency and parallelism
Concurrency
Tasks are allocated the CPU at different points in time; that is, no two tasks execute at the same instant.
Parallelism
Tasks can be assigned to different CPUs, so at the same point in time the tasks assigned to those CPUs execute simultaneously. Parallelism can only be achieved on a multi-core processor.
A diagram illustrating concurrency and parallelism:
I/O-bound and CPU-bound tasks
I/O-bound tasks (I/O-bound)
CPU execution time is short and I/O time is long. (These are usually scheduled at a higher priority than CPU-bound tasks.)
CPU-bound tasks (CPU-bound)
CPU execution time is long and I/O time is short.
Blocking I/O
At the operating system level
When a task makes a blocking I/O call, the system does not return a result immediately. Instead, it saves the task's state, takes back the CPU so that other tasks can run, and moves the task into the waiting queue; its state is now blocked. When the controller handling the I/O finishes moving the data, it raises an interrupt to notify the system, and the system then returns the result to the task.
While it is blocked, the task does nothing but wait. Only after receiving the result from the system can it continue executing.
Non-blocking I/O
At the operating system level
When a task makes a non-blocking I/O call, the system returns a result or an error immediately, so the task is never blocked: it keeps running on, and occupying, the CPU. This is a polled non-blocking call:
The task repeatedly polls the system for the result, and the system keeps returning the current status until the data is ready.
Differences between blocking and non-blocking I/O
- Crucially, a task using blocking I/O cannot overlap its own work with the I/O (here I mean within the task's own program, not concurrency achieved through the operating system's multi-processing or multi-threading). With non-blocking I/O, the program itself can overlap other work with the I/O, which is one reason non-blocking I/O is more efficient.
- A non-blocking I/O call returns a result or an error immediately, while a blocking I/O call does not return until the I/O completes.
- A task making a non-blocking call keeps the CPU and is never blocked, while a task making a blocking call gives up the CPU and enters the blocked state.
What is synchronization?
For example, suppose a script has two functions, A and B, and A calls B. A must wait until B finishes executing and returns a result before A can continue; during that time, A can do nothing but wait. The defining feature of synchronous execution is waiting: things happen in order, one after the other.
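A minimal sketch of this synchronous wait (the functions and values are made up):

```javascript
// B does some work and returns its result; A cannot continue until then.
function B() {
  return 21 * 2; // stand-in for real work
}

function A() {
  const result = B();            // A waits here until B returns
  return 'B returned ' + result; // runs only after B has finished
}

console.log(A()); // "B returned 42"
```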
What is asynchronous?
When an asynchronous call is issued, it returns immediately (with a result or an error), and control goes straight back to the application, which continues executing. If the application issues more than one asynchronous call, none of them needs to wait for another to finish before the next one starts: they proceed concurrently, and the call issued first is not necessarily the first to return its result. This is the defining feature of asynchrony.
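A sketch of two asynchronous calls issued back to back (the names and delays are illustrative):

```javascript
// Two asynchronous calls issued back to back: the "slow" one is issued
// first but completes last. Names and delays are illustrative.
const order = [];

function asyncTask(name, delayMs) {
  return new Promise(resolve => {
    setTimeout(() => {
      order.push(name);  // record completion order
      resolve(name);
    }, delayMs);
  });
}

const slow = asyncTask('slow', 50); // issued first, finishes second
const fast = asyncTask('fast', 10); // issued second, finishes first

const done = Promise.all([slow, fast]).then(() => {
  console.log(order); // [ 'fast', 'slow' ] — completion order ≠ call order
  return order;
});
```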
Let’s take a look at the result of executing this code:
You can see from the output above that the two asynchronous calls we issued return in a different order from the one in which they were made. This is the feature of asynchrony: the calls proceed concurrently without waiting for each other.