Apr 2, 2020

[Go] use sync.Pool with sense

https://github.com/golang/go/issues/23199

tl;dr
It is the caller's responsibility to put only normal-sized instances back into
a sync.Pool.

Intermingling small and large instances in the same sync.Pool will
eventually leave the pool holding only large instances.
(Read Reference 1)

Thus, either create a set of sync.Pools that bucketize items by size (akin to slab memory allocation), or only put instances below a certain size back into the pool.

The same idea applies to goroutine recycling:
the Go runtime will not recycle a goroutine stack that has grown beyond 2 KiB.


Reference:
https://github.com/golang/go/issues/23199
http://vsdmars.blogspot.com/2019/01/split-stack-reading-notes-and-references.html

Reason:
the buffer held inside the sync.Pool stays LARGE and never shrinks, even when it only serves small objects.
code (completed into a runnable program by adding the package clause and imports):

package main

import (
	"bytes"
	"fmt"
	"runtime"
	"sync"
	"time"
)

func main() {
	pool := sync.Pool{New: func() interface{} { return new(bytes.Buffer) }}

	processRequest := func(size int) {
		b := pool.Get().(*bytes.Buffer)
		time.Sleep(500 * time.Millisecond) // Simulate processing time
		b.Grow(size)
		pool.Put(b)
		time.Sleep(1 * time.Millisecond) // Simulate idle time
	}

	// Simulate a set of initial large writes.
	for i := 0; i < 10; i++ {
		go func() {
			processRequest(1 << 28) // 256 MiB
		}()
	}

	time.Sleep(time.Second) // Let the initial set finish.

	// Simulate an unending series of small writes.
	for i := 0; i < 10; i++ {
		go func() {
			for {
				processRequest(1 << 10) // 1 KiB
			}
		}()
	}

	// Continually run a GC and track the allocated bytes.
	var stats runtime.MemStats
	for i := 0; ; i++ {
		runtime.ReadMemStats(&stats)
		fmt.Printf("Cycle %d: %dB\n", i, stats.Alloc)
		time.Sleep(time.Second)
		runtime.GC()
	}
}
