Reference:
Analysis of the Go runtime scheduler paper
Goroutines communicate through a construct known as channels,
which are essentially synchronized message queues.
The Go runtime manages
- scheduling,
- garbage collection,
- and the runtime environment for goroutines, among other things.
Multiple threads are often necessary to ensure that goroutines
are not unnecessarily blocked.
When one goroutine makes a blocking call, the thread running it must block.
(Not necessarily: some blocking happens in userspace, so the thread does not have to block.
e.g.:
- network input
- sleeping
- channel operations
- blocking on primitives in the sync package.
)
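As a quick illustration (a standalone sketch, not from the paper): the program below parks ten thousand goroutines in userspace sleeps, and the runtime does not need ten thousand OS threads to do it.

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    func main() {
        // Spawn many goroutines that block in userspace (sleeping).
        // The runtime parks the G's; they do not pin one OS thread each.
        for i := 0; i < 10000; i++ {
            go func() { time.Sleep(time.Minute) }()
        }
        time.Sleep(100 * time.Millisecond)
        fmt.Println("goroutines:", runtime.NumGoroutine())
        // GOMAXPROCS caps how many M's run Go code at once.
        fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
    }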
The maximum number of M's running Go code simultaneously is capped by GOMAXPROCS.
--
3 struct types are used to handle the scheduler/goroutines:
- THE G STRUCT (aka. goroutine)
A G struct represents a single goroutine; it is the representation of the user's Go code. It contains the fields necessary to keep track of its stack and current status. It also contains references to the code that it is responsible for running.
- THE M STRUCT (aka. thread)
The M struct is the Go runtime’s representation of an OS thread; besides user code, an M may also run runtime work such as scheduling and garbage collection. It has pointers to fields such as the global queue of G’s, the G that it is currently running, its own cache, and a handle to the scheduler.
- THE SCHED STRUCT (not a logical processor, just the scheduler; this is the old design with the global Sched lock)
The Sched struct is a single, global struct [9] that keeps track of the different queues of G’s and M’s and some other information the scheduler needs in order to run, such as the global Sched lock.
There are two queues containing G structs: one is the runnable queue where M’s can find work, and the other is a free list of G’s. There is only one queue pertaining to M’s that the scheduler maintains; the M’s in this queue are idle and waiting for work. In order to modify these queues, the global Sched lock must be held.
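A rough sketch of these three structs in Go, with made-up field names for readability (these are not the runtime's actual definitions):

    package sched

    import "sync"

    type G struct {
        stack  []byte // keeps track of its stack
        status uint32 // current status (runnable, running, waiting, ...)
        fn     func() // the code it is responsible for running
    }

    type M struct {
        curG  *G     // the G it is currently running
        cache []byte // its own cache (stand-in for the real per-M cache)
        sched *Sched // handle to the scheduler
    }

    type Sched struct {
        lock     sync.Mutex // the global Sched lock
        runnable []*G       // runnable queue where M's find work
        gfree    []*G       // free list of G's
        midle    []*M       // idle M's waiting for work
    }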
If a goroutine blocks on a channel operation, the runtime simply marks
the G as waiting and takes it off the run queue, so its M can pick up
another G. When the channel operation can proceed again, the waiting G
is made runnable and put back into a run queue for scheduling.
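A minimal runnable example of this: the receive below parks only the G, while its thread stays free for other work.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        ch := make(chan int)

        go func() {
            v := <-ch // this G is marked waiting; its M moves on
            fmt.Println("received", v)
        }()

        // Other work keeps running on the same threads meanwhile.
        time.Sleep(10 * time.Millisecond)
        ch <- 42 // the waiting G becomes runnable and is rescheduled
        time.Sleep(10 * time.Millisecond)
    }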
One problem is the scheduler's excessive reliance on the global Sched lock.
In order to modify the queues of M’s and G’s, or any other global Sched field for that matter, this single lock must be held.
This creates some problems when dealing with larger systems, particularly “high throughput servers and parallel computational programs”,
so the scheduler does not scale well.
[issue] Sched lock:
The fix: introduce a logical processor, P.
There are exactly GOMAXPROCS P’s, and a P would be another required resource for an M in order for that M to execute Go code.
Whenever a new G is created, it is placed at the back of the queue of the P on which it was created, thus ensuring that the new G will eventually run.
When a P does not have any G’s in its queue, it will randomly pick a victim P and steal half of the G’s from the back of the victim’s queue.
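A sketch of that steal-half policy; the types and the stealHalf method are hypothetical, and the real runtime's stealing code differs in detail:

    package sched

    type G struct{ id int }

    type P struct {
        runq []*G // this P's local run queue
    }

    // stealHalf moves half of the victim's queued G's, taken from the
    // back of its queue as the paper describes, onto p's queue.
    func (p *P) stealHalf(victim *P) {
        n := len(victim.runq) / 2
        if n == 0 {
            return // nothing worth stealing
        }
        cut := len(victim.runq) - n
        p.runq = append(p.runq, victim.runq[cut:]...)
        victim.runq = victim.runq[:cut]
    }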
[issue] M’s continuously blocking and unblocking:
Three main events that can cause an M to be temporarily incapable of running Go code:
- when a new G is spawned,
- an M enters a syscall,
- an M transitions from idle to busy.
Before it blocks, the M must first ensure that there is at least one spinning M,
unless all P’s are busy.
Why?
So that the run queue of the blocking M's P can be handed off to the spinning M, and runnable G's are not left stranded.
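A hedged sketch of that hand-off rule; every name below is hypothetical, and the real runtime does this inside the scheduler rather than with methods like these:

    package sched

    type G struct{ id int }

    type P struct{ runq []*G }

    type M struct {
        p        *P
        spinning bool
    }

    var spinningMs int // how many M's are currently spinning for work

    // aboutToBlock is what an M conceptually does before it blocks:
    // make sure some M is spinning, then release its P so the P's
    // run queue can be picked up instead of being stranded.
    func (m *M) aboutToBlock(idle []*M) {
        if spinningMs == 0 && len(idle) > 0 {
            idle[0].spinning = true // wake an idle M to spin for work
            spinningMs++
        }
        m.p = nil // hand off the P; a spinning M can take it over
    }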
[issue] Creating a goroutine needs a lot of memory:
The fix: do not allocate the G and stack for a new goroutine unless they are really required.
We require just six words for the creation of a goroutine that runs to completion without making function calls or allocating memory.
[issue] Locality to a CPU core:
P’s are an abstraction created by the run-time that the OS knows nothing about, whereas M’s represent kernel threads.
Most modern kernels will provide for affinity between threads and physical processors. Hence, better G to M locality will give us better cache performance.
Scheduling:
Spread a job's tasks out across the P's (the tasks are not dependent on each other).
If they do not all fit, put the remainder into the next row (level) of P's,
like k8s or any other batch-scheduling technique.
If Go's P structs were used instead of processors, this idea could work to schedule multiple goroutines that use the same channels simultaneously.
This has the potential to reduce the time spent blocking M's and G's;
however, this may require significant changes to the channel infrastructure.
Contention Aware Scheduling:
Goroutines using the same channel should be scheduled onto the same P.
And when stealing tasks from another P, a P should steal a whole group of
G's that are grouped together because they use the same channel.
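A sketch of what that group stealing might look like; the chID field and the stealGroup method are hypothetical constructs for this proposal, not anything the runtime implements:

    package sched

    type G struct {
        id   int
        chID int // the channel this G communicates on
    }

    type P struct{ runq []*G }

    // stealGroup moves every G tied to one channel from the victim to
    // p in one shot, keeping the communicating group on the same P.
    func (p *P) stealGroup(victim *P, chID int) {
        kept := victim.runq[:0]
        for _, g := range victim.runq {
            if g.chID == chID {
                p.runq = append(p.runq, g)
            } else {
                kept = append(kept, g)
            }
        }
        victim.runq = kept
    }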
Type system:
“A type system that was used for classification by behavior rather than implementation, or naming.”