Getting to Go: The Journey of Go's Garbage Collector
Go GC: Latency Problem Solved
Like C++, Go is a value-oriented language.
Why? Because values interoperate easily with C/C++ function interfaces.
Beware: although Go is a garbage-collected language, any reference to a type's data member can prolong the lifetime of the whole type instance.
Go calls these 'interior pointers'.
Such pointers keep the entire value (i.e. the type's instance, in C++ jargon) live, and they are fairly common.
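A minimal Go sketch of this (the record type, blob field, and fieldOf function are made up for illustration): holding an interior pointer to one field keeps the entire value reachable.

package main

import "fmt"

type record struct {
	id   int
	blob [1 << 20]byte // large payload, lives as long as any interior pointer
}

// fieldOf returns an interior pointer into a freshly allocated record.
func fieldOf() *int {
	r := &record{id: 42}
	// Returning &r.id keeps the whole record (including blob)
	// reachable for as long as the caller holds the pointer.
	return &r.id
}

func main() {
	p := fieldOf()
	fmt.Println(*p) // the entire 1 MiB record is still live here
}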
Go's ELF binary contains the entire Go runtime, i.e. there is no JIT recompilation.
Pro
Reproducibility of program execution is much easier, which makes moving forward with compiler improvements much faster.
Con
There is no chance to do feedback-directed optimizations as you would with a JITed system.
Knobs to control the GC
- GCPercent: adjusts how much CPU you want to use versus how much memory you want to use.
- MaxHeap: sets what the maximum heap size should be. Temporary spikes in memory usage should be handled by increasing CPU costs, not by aborting.
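GCPercent is exposed as the GOGC environment variable and can also be changed at run time via runtime/debug.SetGCPercent; a quick sketch of using that knob (the value 50 is just an example):

package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// GOGC=100 (the default) lets the heap grow to roughly twice the
	// live data before the next collection; lowering the percentage
	// trades CPU for memory, raising it trades memory for CPU.
	previous := debug.SetGCPercent(50)
	fmt.Println("previous GCPercent:", previous)
}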
Why is latency so important?
- Latency is cumulative.
Reference: The Tail at Scale - Jeff Dean
Fight the tyranny of the 9s (99.99%) with redundancy.
But redundancy wasn't going to scale; redundancy costs a lot.
Abbreviations
- Service level objective (SLO)
- Stop-the-world (STW)
Tri-color concurrent algorithm
Size-segregated spans were introduced, and they have some other advantages (a small allocation sketch follows the list below):
Reference: [golang] Golang's memory management - Eben Freeman [note]
- Low fragmentation
- Internal structures
- Speed
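A toy sketch of size-segregated allocation (the class table below is invented; the real runtime defines dozens of classes): requests are rounded up to a class, and each span only holds objects of one class, which keeps fragmentation low and makes finding a free slot cheap.

package main

import "fmt"

// Invented size-class table for illustration only.
var sizeClasses = []int{8, 16, 32, 48, 64, 96, 128}

// classFor rounds a request up to the smallest class that fits it.
func classFor(size int) int {
	for _, c := range sizeClasses {
		if size <= c {
			return c
		}
	}
	return size // larger objects get a dedicated span
}

func main() {
	for _, n := range []int{1, 24, 100, 4096} {
		fmt.Printf("alloc %4d bytes -> size class %d\n", n, classFor(n))
	}
}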
Object's metadata
We needed to have some information about the objects since we didn't have headers. Mark bits are kept on the side and used for marking as well as allocation.
Each word has 2 bits associated with it to tell you if it was a scalar or a pointer inside that word.
It also encoded whether there were more pointers in the object so we could stop scanning objects sooner than later.
We also had an extra bit encoding that we could use as an extra mark bit or to do other debugging things.
This design is valuable for getting this stuff running and finding bugs.
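A toy encoding of that side metadata (a sketch of the idea only, not the runtime's actual heap-bitmap layout): two bits per word, one saying whether the word is a pointer, one saying whether any pointers remain later in the object so scanning can stop early.

package main

import "fmt"

const (
	isPointer    = 1 << 0 // this word holds a pointer
	morePointers = 1 << 1 // pointers remain further in the object
)

// scanWords walks an object's per-word metadata, "marking" pointer
// words and stopping as soon as no more pointers can follow.
func scanWords(meta []byte) {
	for i, m := range meta {
		if m&isPointer != 0 {
			fmt.Printf("word %d: pointer -> enqueue for marking\n", i)
		}
		if m&morePointers == 0 {
			fmt.Printf("word %d: no more pointers, stop scanning early\n", i)
			return
		}
	}
}

func main() {
	// Object layout: pointer, scalar, pointer, then only scalars.
	scanWords([]byte{
		isPointer | morePointers,
		morePointers,
		isPointer, // no morePointers bit: the scanner stops here
		0, 0,
	})
}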
Write barriers
The write barrier is 'on' only during the GC.
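A sketch of why that matters for mutator cost (a user-level illustration of a Dijkstra-style insertion barrier; the real barrier is a compiler-emitted hybrid barrier inside the runtime): every pointer store goes through a check that is just a cheap branch while the GC is off.

package main

type object struct {
	next *object
}

var barrierEnabled bool // flipped on at the start of a GC cycle

// shade would grey val: put it on the mark queue so a pointer
// installed during concurrent marking cannot be missed.
func shade(val *object) {
	_ = val // queueing omitted in this sketch
}

// writePointer stands in for the store the compiler would guard.
func writePointer(slot **object, val *object) {
	if barrierEnabled {
		shade(val)
	}
	*slot = val
}

func main() {
	var head *object
	writePointer(&head, &object{}) // only a branch when the GC is off
	_ = head
}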
GC Pacer
When to best start a GC cycle? At a high level, the Pacer stops the Goroutine, which is doing a lot of the allocation, and puts it to work doing marking.
If the system is in a steady state and not in a phase change, marking will end just about the time memory runs out.
The amount of work is proportional to the Goroutine's allocation. This speeds up the garbage collector while slowing down the mutator.
When all of this is done the Pacer takes what it has learnt from this GC cycle as well as previous ones and projects when to start the next GC.
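A rough sketch of the basic relationship the Pacer works from (see the Go 1.5 pacing design doc below): the heap goal for a cycle comes from the live heap and GCPercent, and marking is paced to finish just before the goal is reached.

package main

import "fmt"

// heapGoal sketches the pacing target: finish marking before the heap
// grows past liveHeap * (1 + GCPercent/100). The real Pacer also feeds
// back the error from previous cycles to decide when to start marking.
func heapGoal(liveHeap uint64, gcPercent uint64) uint64 {
	return liveHeap + liveHeap*gcPercent/100
}

func main() {
	live := uint64(512 << 20) // 512 MiB live after the last cycle
	fmt.Printf("goal with GOGC=100: %d MiB\n", heapGoal(live, 100)>>20)
	fmt.Printf("goal with GOGC=50:  %d MiB\n", heapGoal(live, 50)>>20)
}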
Reference:
[Design Doc] Go 1.5 concurrent garbage collector pacing
[Proposal] Proposal: Separate soft and hard heap size goal
Whereas the experiments with other types of GC failed, escape analysis and value-orientation are what succeed.
Card marking without a write barrier
Maintain a hash of mature pointers in each card. If pointers are written into a card, the hash will change and the card will be considered marked. This would trade the cost of the write barrier off for the cost of hashing. Today's modern architectures have AES (Advanced Encryption Standard) instructions.
One of those instructions can do encryption-grade hashing and with encryption-grade hashing we don't have to worry about collisions if we also follow standard encryption policies. So hashing is not going to cost us much but we have to load up what we are going to hash.
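A sketch of the hashed-card idea (a research direction from the talk, not something the Go runtime does; hash/maphash stands in here for AES-based hashing): hash a card's words at GC time and treat the card as dirty if the hash has changed.

package main

import (
	"fmt"
	"hash/maphash"
	"unsafe"
)

// card is a fixed-size window of heap words plus the hash recorded
// at the previous GC; a changed hash means the card must be rescanned.
type card struct {
	words    [64]uintptr
	lastHash uint64
}

var seed = maphash.MakeSeed()

func hashCard(c *card) uint64 {
	b := unsafe.Slice((*byte)(unsafe.Pointer(&c.words[0])),
		len(c.words)*int(unsafe.Sizeof(c.words[0])))
	return maphash.Bytes(seed, b)
}

func dirty(c *card) bool { return hashCard(c) != c.lastHash }

func main() {
	var c card
	c.lastHash = hashCard(&c)
	c.words[3] = 0xdeadbeef // a mutator writes a pointer into the card
	fmt.Println("card dirty:", dirty(&c)) // true: the card must be rescanned
}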
Reference:
https://mattwarren.org/2016/02/04/learning-how-garbage-collectors-work-part-1
http://factor-language.blogspot.com/2008/05/garbage-collection-throughput.html
https://blogs.msdn.microsoft.com/abhinaba/2009/03/02/back-to-basics-generational-garbage-collection/
https://stackoverflow.com/a/19155441/3850881