When a Memory Pool Actually Helps in Go Logging

Source: DEV Community
When you build a high-throughput log pipeline in Go, the garbage collector quickly becomes one of your biggest bottlenecks. Every log line means new allocations: buffers, temporary structs, parsed JSON trees, and so on. At some point, you start wondering: is it time to use a memory pool? In this post I'll walk through a simple pattern using sync.Pool and explain when it is (and is not) a good idea for log pre-processing.

The basic pattern

For log processing, the most common thing to pool is a reusable byte buffer or struct used per log line.

```go
var logBufferPool = sync.Pool{
	New: func() any {
		buf := make([]byte, 0, 64*1024) // 64KB buffer
		return buf
	},
}

func handleLog(raw []byte) {
	// 1. Take a buffer from the pool
	buf := logBufferPool.Get().([]byte)

	// 2. Use it for parsing / masking / rewriting
	buf = buf[:0]             // reset length, keep capacity
	buf = append(buf, raw...) // do your processing on buf

	// 3. Return it to the pool
	buf = buf[:0]
	logBufferPool.Put(buf)
}
```

The key detail is buf = buf[:0]: it resets the slice's length to zero while keeping its capacity, so each reuse appends into memory that was already allocated instead of forcing a fresh allocation per log line.
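One refinement worth knowing: putting a plain []byte into a sync.Pool boxes the slice header in an interface value, which itself allocates on every Put (this is what staticcheck's SA6002 check warns about). Storing a pointer to the slice avoids that. Here is a minimal, self-contained sketch of that variant; the "processed: " prefix stands in for whatever masking or rewriting your pipeline actually does, and the function name is illustrative, not from any particular library:

```go
package main

import (
	"fmt"
	"sync"
)

// Pool *[]byte rather than []byte so that Get/Put move a single
// pointer instead of boxing a slice header on every Put.
var bufPool = sync.Pool{
	New: func() any {
		b := make([]byte, 0, 64*1024) // 64KB initial capacity
		return &b
	},
}

// handleLog takes a pooled buffer, processes one log line into it,
// copies the result out, and returns the buffer to the pool.
func handleLog(raw []byte) string {
	bp := bufPool.Get().(*[]byte)
	buf := (*bp)[:0] // reset length, keep capacity

	// Stand-in processing step: prefix the line.
	buf = append(buf, "processed: "...)
	buf = append(buf, raw...)

	// Copy the result out before the buffer goes back to the pool;
	// the pooled memory must not escape to callers.
	out := string(buf)

	*bp = buf[:0]
	bufPool.Put(bp)
	return out
}

func main() {
	fmt.Println(handleLog([]byte("user=alice action=login")))
}
```

Copying the result out (string(buf)) is essential: once the buffer is back in the pool, another goroutine may overwrite it at any time.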