JVM GC Tuning Guide: G1, ZGC, Shenandoah Explained with Real Trade-offs
G1 is the default JVM collector (Java 9+), designed to balance throughput and latency using region-based heap partitioning and concurrent marking.
ZGC targets sub-10ms pause times using fully concurrent phases (marking, relocation, and reference processing) at the cost of higher CPU and native memory overhead.
Shenandoah achieves similar low-pause goals using Brooks pointers and concurrent evacuation, with better memory efficiency than ZGC but added implementation complexity and narrower platform support.
Key trade-off: All three collectors trade throughput for latency guarantees in different ways.
Biggest mistake: Choosing a collector without matching it to your workload profile; this leads to either unnecessary latency (wrong for low-latency systems) or wasted CPU (over-engineering).
Important: GC tuning is not portable; flags and behavior differ significantly across G1, ZGC, and Shenandoah.
Production Debug Guide: follow this path when GC is suspected as the root cause of latency or availability issues.

- Application unresponsive, suspected full GC: `jcmd <pid> GC.heap_info`, then `jstat -gcutil <pid> 1000 10`
- High CPU with low application throughput: `top -H -p <pid> | grep -E 'VM Thread|GC Thread'`, then `jcmd <pid> VM.flags | grep -i conc`
- Latency spikes at regular intervals: `jstat -gcutil <pid> 500 20`, then `grep 'Pause' gc.log | tail -20`
- OOM kill by container orchestrator (k8s): `kubectl describe pod <pod> | grep -A5 'OOMKilled'`, then `jcmd <pid> VM.native_memory summary`
- Allocation failure in logs, to-space exhausted: `grep 'to-space exhausted' gc.log | wc -l`, then `grep 'humongous' gc.log | tail -20`
Garbage collection tuning is the single most impactful JVM performance lever after algorithm design. Yet most teams default to G1 without understanding whether their workload demands ZGC or Shenandoah. The wrong collector choice manifests as either unexplained latency spikes or wasted CPU capacity, both invisible until they compound under production load.
This guide covers G1, ZGC, and Shenandoah from a production operator's perspective. Each section includes failure scenarios, tuning knobs, and trade-off analysis grounded in real production incidents. No toy examples: every configuration reflects what actually breaks in the field.
The core misconception: GC tuning is about eliminating pauses. It is not. GC tuning is about aligning pause behavior with your application's latency budget and throughput requirements. A 200ms pause is catastrophic for a trading engine and irrelevant for a batch ETL job. Context determines correctness.
G1GC: The Workhorse Collector
G1 (Garbage-First) has been the default JVM collector since Java 9. It divides the heap into equal-sized regions (1MB to 32MB) and prioritizes collecting regions with the most garbage, hence 'garbage-first'. G1 maintains a remembered set per region tracking incoming references, enabling independent region collection without scanning the entire heap.
G1 operates in young-only and mixed collection cycles. Young GC collects survivor and eden regions. When the heap occupancy exceeds the Initiating Heap Occupancy Percent (IHOP), G1 triggers a concurrent marking cycle. After marking completes, subsequent mixed GCs collect both young and old regions identified as mostly garbage.
The critical production insight: G1's pause time is primarily driven by the number of regions it must collect in a single pause, not heap size. A 64GB heap with aggressive evacuation can pause longer than a 4GB heap with conservative settings. This is the opposite of what most engineers assume.
```java
package io.thecodeforge.gc;

import java.util.concurrent.ConcurrentHashMap;
import java.util.Map;

/**
 * Demonstrates allocation patterns that stress G1 differently.
 *
 * Key insight: G1 humongous objects (>50% region size) bypass normal
 * allocation and can trigger to-space exhausted failures.
 */
public class G1TuningExample {

    // Cache with large value objects: a common source of humongous allocations
    private final Map<String, byte[]> payloadCache = new ConcurrentHashMap<>();

    /**
     * BAD: Allocates objects that may exceed the humongous threshold.
     * With the default 1MB region size, objects over 512KB are humongous.
     * With 32MB regions, the threshold is 16MB: much safer for large payloads.
     *
     * Tuning: -XX:G1HeapRegionSize=32M
     *         -XX:G1ReservePercent=15
     *         -XX:InitiatingHeapOccupancyPercent=35
     */
    public void cacheLargePayload(String key, int sizeBytes) {
        byte[] payload = new byte[sizeBytes];
        // Simulate deserialization fill
        for (int i = 0; i < Math.min(sizeBytes, 1024); i++) {
            payload[i] = (byte) (i & 0xFF);
        }
        payloadCache.put(key, payload);
    }

    /**
     * BETTER: Chunk large payloads to stay below the humongous threshold.
     * Each chunk is independently collectible as a regular object.
     */
    public void cacheChunkedPayload(String key, byte[] fullPayload) {
        int chunkSize = 256 * 1024; // 256KB chunks: well below humongous threshold
        int numChunks = (fullPayload.length + chunkSize - 1) / chunkSize;
        for (int i = 0; i < numChunks; i++) {
            int offset = i * chunkSize;
            int length = Math.min(chunkSize, fullPayload.length - offset);
            byte[] chunk = new byte[length];
            System.arraycopy(fullPayload, offset, chunk, 0, length);
            payloadCache.put(key + ":chunk:" + i, chunk);
        }
    }

    /**
     * Production G1 flags for a 16GB heap with a mixed allocation profile:
     *
     * -XX:+UseG1GC
     * -Xms16g -Xmx16g
     * -XX:G1HeapRegionSize=16m
     * -XX:MaxGCPauseMillis=200
     * -XX:G1ReservePercent=15
     * -XX:InitiatingHeapOccupancyPercent=35
     * -XX:G1MixedGCCountTarget=8
     * -XX:G1MixedGCLiveThresholdPercent=85
     * -Xlog:gc*,gc+humongous=debug:file=/var/log/gc.log:time,uptime,level,tags
     */
}
```
- Pause time scales with live data in collected regions, not total heap size
- Humongous objects break this model: they span multiple regions and cannot be partially evacuated
- Remembered sets consume 5-10% of heap as off-heap overhead; budget for this when setting -Xmx
- To-space exhausted means G1 literally ran out of regions to evacuate into; this forces a full GC fallback
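The humongous-threshold arithmetic behind these rules is simple enough to check directly. A minimal sketch (`HumongousThreshold` is a hypothetical helper for illustration, not a JDK API; it assumes G1's rule that an object at or above half the region size is humongous):

```java
public class HumongousThreshold {

    /** G1 treats an object as humongous when it reaches 50% of the region size. */
    public static long thresholdBytes(long regionSizeBytes) {
        return regionSizeBytes / 2;
    }

    public static boolean isHumongous(long objectSizeBytes, long regionSizeBytes) {
        return objectSizeBytes >= thresholdBytes(regionSizeBytes);
    }

    public static void main(String[] args) {
        long oneMB = 1024L * 1024;
        // With 1MB regions, a 600KB byte[] is humongous; with 32MB regions it is not.
        System.out.println(isHumongous(600 * 1024, oneMB));      // true
        System.out.println(isHumongous(600 * 1024, 32 * oneMB)); // false
    }
}
```

This is why raising `-XX:G1HeapRegionSize` is the first lever when GC logs show humongous allocations: the same payload sizes fall back under the threshold.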
ZGC: The Sub-10ms Pause Collector
ZGC (Z Garbage Collector) was introduced as an experimental feature in JDK 11 and became production-ready in JDK 15. Its defining characteristic: pause times stay below 10ms regardless of heap size, tested up to 16TB heaps. ZGC achieves this through concurrent everything: marking, relocation, and reference processing all happen while application threads run.
ZGC uses load barriers (not write barriers) with colored pointers. Every object reference in ZGC carries metadata bits (marked0, marked1, remap, finalize) embedded in the pointer itself. The load barrier intercepts every object access to check if the reference needs remapping. This is the fundamental trade-off: ZGC replaces long GC pauses with per-access overhead on every object load.
As of JDK 21, ZGC supports generational mode (-XX:+ZGenerational) which dramatically improves throughput by focusing collection on young objects. Non-generational ZGC collects the entire heap every cycle, which limits throughput on allocation-heavy workloads.
```java
package io.thecodeforge.gc;

import java.util.concurrent.atomic.AtomicLong;

/**
 * ZGC-specific considerations for production workloads.
 *
 * ZGC trades per-access overhead for near-zero pause times.
 * The load barrier adds ~4-8% overhead on pointer-heavy workloads.
 */
public class ZGCTuningExample {

    private final AtomicLong allocationCounter = new AtomicLong(0);

    /**
     * ZGC is sensitive to allocation rate, not allocation size.
     * A workload allocating many small objects stresses ZGC more than
     * fewer large objects because load barriers fire more frequently.
     *
     * Monitor: jstat -gcutil <pid> 1000
     * Watch: ZGC cycle count and allocation rate.
     */
    public void highFrequencyAllocation() {
        // 1000 allocations per call; each triggers load barrier overhead
        // on subsequent reads. ZGC handles this well with generational mode.
        for (int i = 0; i < 1000; i++) {
            Object temp = new Object();
            allocationCounter.incrementAndGet();
            // temp is immediately eligible for collection
        }
    }

    /**
     * Production ZGC flags for a 32GB heap, latency-sensitive service:
     *
     * -XX:+UseZGC
     * -XX:+ZGenerational          // JDK 21+: critical for throughput
     * -Xms32g -Xmx32g             // Always set Xms=Xmx for ZGC
     * -XX:SoftMaxHeapSize=28g     // ZGC-specific: target heap occupancy
     * -XX:ZCollectionInterval=5   // Suggest a GC cycle every 5 seconds
     * -XX:ConcGCThreads=4         // Concurrent GC threads (default: auto)
     * -XX:ParallelGCThreads=8     // Parallel GC threads
     * -Xlog:gc*:file=/var/log/zgc.log:time,uptime,level,tags
     *
     * CRITICAL: ZGC uses ~20% native memory overhead beyond -Xmx.
     * Container memory limit must be heap * 1.25 minimum.
     */

    /**
     * ZGC's SoftMaxHeapSize is unique: it tells ZGC to try to stay below
     * this threshold, but the heap can exceed it under allocation pressure.
     *
     * Use case: Set heap to 32GB, SoftMaxHeapSize to 28GB.
     * ZGC will trigger cycles aggressively to stay under 28GB and
     * only allocates into the remaining 4GB under extreme pressure.
     * This prevents container OOM kills while keeping a safety margin.
     */
    public void demonstrateSoftMaxHeapConcept() {
        // With SoftMaxHeapSize=28g and Xmx=32g:
        // - ZGC targets 28GB occupancy
        // - If allocation pressure pushes past 28GB, ZGC cycles more aggressively
        // - Only if allocation rate exceeds reclamation rate does it use 28-32GB
        // - If it hits 32GB, allocation stalls (not OOM, but backpressure)
        //
        // This is fundamentally different from G1's IHOP, which just triggers
        // a marking cycle. SoftMaxHeapSize is a continuous pressure signal.
    }
}
```
- Pause times are truly independent of heap size and live data size, tested to 16TB
- The trade-off is per-access CPU overhead, not pause time: you pay on every object load, not during GC
- ZGC stores its metadata bits in the upper bits of full 64-bit pointers, which is why it cannot use compressed oops
- Generational ZGC (JDK 21+) reduces this overhead dramatically by focusing on the young generation
- The lack of compressed object pointers (UseCompressedOops) increases memory usage by ~15% on heaps under 32GB
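The container-sizing rule implied by ZGC's native overhead can be sketched as simple arithmetic. `ZgcContainerSizing` is a hypothetical helper; the 1.25 factor is the rule of thumb from the flag comments above, not a guarantee:

```java
public class ZgcContainerSizing {

    /** Container limit needed for a given -Xmx, budgeting ~25% ZGC native overhead. */
    public static long containerLimitBytes(long xmxBytes) {
        return (long) Math.ceil(xmxBytes * 1.25);
    }

    /** Working backwards: the largest safe -Xmx for a fixed container limit. */
    public static long xmxForContainerBytes(long containerLimitBytes) {
        return (long) Math.floor(containerLimitBytes / 1.25);
    }

    public static void main(String[] args) {
        long gib = 1024L * 1024 * 1024;
        // A 32GiB heap needs a 40GiB container limit under this rule.
        System.out.println(containerLimitBytes(32 * gib) / gib); // 40
        // An 8GiB container limit supports roughly a 6.4GiB heap.
        System.out.println(xmxForContainerBytes(8 * gib));
    }
}
```

Running the backwards calculation before setting Kubernetes limits is what prevents the OOM-kill pattern described in the debug guide at the top.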
Shenandoah: Red Hat's Low-Pause Contender
Shenandoah is Red Hat's concurrent compacting collector, introduced as an experimental feature in JDK 12 and production-ready since JDK 15 (with backports to JDK 8 and 11 via the Shenandoah project). It achieves low pause times through concurrent evacuation (moving live objects while application threads run) using Brooks pointers, an indirection layer on every object.
Shenandoah's architecture differs from ZGC in a critical way: it uses Brooks pointers (every object has a forwarding pointer field) instead of colored pointers. This means Shenandoah does not require specific pointer bit layouts and works with compressed oops, reducing memory overhead compared to ZGC on heaps under 32GB.
Shenandoah operates in three concurrent phases: concurrent mark (identify live objects), concurrent evacuate (move live objects out of garbage-heavy regions), and concurrent update-refs (fix pointers to moved objects). The initial mark and final mark phases are short stop-the-world pauses, typically under 10ms.
```java
package io.thecodeforge.gc;

import java.util.ArrayList;
import java.util.List;

/**
 * Shenandoah-specific production considerations.
 *
 * Shenandoah uses Brooks pointers: every object has an extra forwarding
 * pointer field. This adds 8 bytes per object on 64-bit systems.
 * For applications with millions of small objects, this overhead is measurable.
 */
public class ShenandoahTuningExample {

    /**
     * Brooks pointer overhead calculation:
     *
     * Object with 2 fields (16 bytes header + 16 bytes data = 32 bytes)
     * + 8 bytes Brooks pointer = 40 bytes per object
     * Overhead: 25% increase per object
     *
     * For 10 million small objects: ~80MB additional memory
     * For 100 million small objects: ~800MB additional memory
     *
     * Compare to ZGC: no per-object overhead, but ~15% heap overhead
     * from multi-mapping and compressed oops unavailability.
     */
    public long estimateBrooksOverhead(int objectCount) {
        return (long) objectCount * 8; // 8 bytes per Brooks pointer
    }

    /**
     * Production Shenandoah flags for a 16GB heap:
     *
     * -XX:+UseShenandoahGC
     * -Xms16g -Xmx16g
     * -XX:ShenandoahGCHeuristics=adaptive   // or 'compact', 'static'
     * -XX:ShenandoahAllocationThreshold=10  // trigger cycle after 10% allocation
     * -XX:+UseCompressedOops                // works with Shenandoah (unlike ZGC)
     * -XX:+UseCompressedClassPointers
     * -Xlog:gc*:file=/var/log/shenandoah.log:time,uptime,level,tags
     *
     * Heuristic modes:
     * - adaptive: (default) adjusts cycle frequency based on allocation rate
     * - compact: more aggressive collection, lower heap usage, slightly higher CPU
     * - static: fixed cycle interval, predictable behavior for benchmarking
     */

    /**
     * Shenandoah pacing is a unique feature that backpressures allocation
     * threads when the collector falls behind.
     *
     * Unlike ZGC, which stalls allocation entirely, Shenandoah slows down
     * allocating threads proportionally. This creates smoother latency
     * degradation under load rather than sharp spikes.
     *
     * Monitor: -Xlog:gc+stats to see pacing delays.
     * If pacing delays exceed 1ms consistently, the allocation rate is too high
     * for the current heap size and ConcGCThreads setting.
     */
    public void demonstratePacingBehavior() {
        List<byte[]> allocations = new ArrayList<>();
        // Under heavy allocation, Shenandoah will pace this loop
        // by adding small delays to each allocation.
        // The delay is proportional to how far behind the collector is.
        //
        // This is different from ZGC's allocation stall, which is a hard stop.
        // Shenandoah's pacing creates gradual degradation.
        for (int i = 0; i < 100_000; i++) {
            allocations.add(new byte[1024]); // 1KB each
        }
    }
}
```
- No load barrier overhead: Shenandoah uses store barriers instead, which fire less frequently
- Works with compressed oops, saving ~15% memory compared to ZGC on heaps under 32GB
- Per-object overhead of 8 bytes: significant for workloads with many small objects
- Concurrent evacuation means compaction happens while the application runs, so less fragmentation than G1
- Pacing mechanism creates graceful degradation instead of hard allocation stalls
Comparing the Three Collectors: Real Trade-offs
The collector choice is not about which is 'best'; it is about matching the collector's trade-off profile to your workload's requirements. Every collector sacrifices something: G1 sacrifices pause-time predictability for throughput. ZGC sacrifices throughput and memory for near-zero pauses. Shenandoah sacrifices per-object memory for balanced pause-throughput behavior.
The following comparison reflects production reality, not benchmark lab conditions. Real workloads have allocation spikes, mixed object lifetimes, and container constraints that change the calculus entirely.
```java
package io.thecodeforge.gc;

/**
 * Decision framework for collector selection based on production constraints.
 *
 * No collector is universally superior. This guide maps workload
 * characteristics to the appropriate collector.
 */
public class CollectorSelectionGuide {

    /*
     * SELECTION MATRIX:
     *
     * Workload Profile           | Recommended Collector | Reason
     * ---------------------------|-----------------------|------------------
     * General web service        | G1                    | Good balance, mature ecosystem
     * Sub-10ms latency SLA       | ZGC (generational)    | Hard pause guarantee
     * Sub-10ms + <32GB heap      | Shenandoah            | Compressed oops, pacing
     * Batch processing           | G1 or Parallel        | Throughput over latency
     * Large heap (>64GB)         | ZGC                   | Pauses independent of heap size
     * Small heap (<4GB)          | G1                    | ZGC/Shenandoah overhead unjustified
     * Container-constrained      | G1 or Shenandoah      | Lower native memory overhead
     * High allocation rate       | ZGC (generational)    | Generational mode handles young gen
     * Mixed object lifetimes     | G1                    | Region-based collection handles this well
     * Many small objects         | ZGC                   | No per-object overhead
     * Many large objects         | G1 (large regions)    | Humongous object handling
     */

    /*
     * NATIVE MEMORY OVERHEAD COMPARISON (approximate, production values):
     *
     * G1:
     * - Remembered sets: 5-10% of heap
     * - Card table: ~0.2% of heap
     * - Total native overhead: ~10-15% of heap
     *
     * ZGC:
     * - Multi-mapping: ~15-20% of heap (virtual address space)
     * - Page table overhead: variable
     * - No compressed oops: +15% heap usage for <32GB heaps
     * - Total native overhead: ~20-25% of heap
     *
     * Shenandoah:
     * - Brooks pointers: 8 bytes per object
     * - Remembered sets: ~5% of heap
     * - Compressed oops: supported (saves ~15% vs ZGC)
     * - Total native overhead: ~10-15% of heap + per-object cost
     */

    /*
     * CONTAINER MEMORY FORMULA:
     *
     * G1:         container_limit = Xmx * 1.15
     * ZGC:        container_limit = Xmx * 1.25
     * Shenandoah: container_limit = Xmx * 1.15 + (object_count * 8)
     *
     * If the container limit is fixed, work backwards:
     * G1:         Xmx = container_limit / 1.15
     * ZGC:        Xmx = container_limit / 1.25
     * Shenandoah: Xmx = (container_limit - object_count * 8) / 1.15
     */
}
```
- G1: Maximizes throughput and memory efficiency. Sacrifices pause-time predictability below ~50ms.
- ZGC: Maximizes pause-time guarantee and memory compaction. Sacrifices throughput (10-15%) and memory (compressed oops unavailable).
- Shenandoah: Maximizes pause-time guarantee and throughput balance. Sacrifices per-object memory (8 bytes Brooks pointer).
- No tuning can break this triangle; you are choosing which axis to sacrifice, not eliminating trade-offs.
Production Tuning Patterns That Actually Work
Most GC tuning guides present flags in isolation. Production tuning requires understanding how flags interact and which signals indicate which adjustments. These patterns are derived from incidents across payment processing, real-time bidding, and high-frequency trading systems.
```java
package io.thecodeforge.gc;

/**
 * Production tuning patterns organized by problem type.
 * Each pattern addresses a specific failure mode.
 */
public class ProductionTuningPatterns {

    /*
     * PATTERN 1: Allocation Rate Spike Handler
     *
     * Problem: Bursts of allocation cause GC to fall behind.
     * Symptom: Increasing pause times during traffic spikes.
     *
     * G1 Fix:
     * -XX:InitiatingHeapOccupancyPercent=30  // start marking earlier
     * -XX:G1ReservePercent=15                // more evacuation buffer
     * -XX:G1RSetUpdatingPauseTimePercent=5   // less RSet work in pause
     *
     * ZGC Fix:
     * -XX:SoftMaxHeapSize=<70% of Xmx>       // trigger cycles earlier
     * -XX:ConcGCThreads=<cores/4>            // more concurrent threads
     * -XX:+ZGenerational                     // focus on young objects
     *
     * Shenandoah Fix:
     * -XX:ShenandoahAllocationThreshold=5    // cycle after 5% allocation
     * -XX:ConcGCThreads=<cores/4>            // more concurrent threads
     * -XX:ShenandoahGCHeuristics=compact     // aggressive reclamation
     */

    /*
     * PATTERN 2: Long-Lived Cache Optimization
     *
     * Problem: Large caches create a big live data set that GC must scan
     * but never reclaim. This wastes GC cycles and increases pause times.
     *
     * Solution: Use off-heap caching (Caffeine with weakValues, or
     * Chronicle Map) to move cached data outside GC's jurisdiction.
     *
     * If on-heap caching is required:
     * G1:   -XX:G1MixedGCLiveThresholdPercent=90  // skip regions with >90% live
     * ZGC:  Already handles this well with concurrent marking
     * Shen: -XX:ShenandoahGCHeuristics=adaptive   // skip mostly-live regions
     */

    /*
     * PATTERN 3: Container-Aware Sizing
     *
     * Problem: JVM heap + native memory exceeds container limit.
     * Symptom: OOM kill with no heap exhaustion in metrics.
     *
     * Rule of thumb for container memory limits:
     * - Set Xmx = container_limit * 0.80 for G1
     * - Set Xmx = container_limit * 0.70 for ZGC
     * - Set Xmx = container_limit * 0.80 for Shenandoah
     *
     * Remaining memory covers:
     * - Thread stacks (1MB per thread, ~500 threads = 500MB)
     * - Metaspace (class metadata, usually 100-300MB)
     * - Direct byte buffers (monitor with MBean)
     * - GC internal structures (remembered sets, card tables)
     * - JNI native memory
     */

    /*
     * PATTERN 4: Warm-Up Tuning for Low-Latency Services
     *
     * Problem: First requests after deployment have high latency due to
     * JIT compilation, class loading, and initial GC cycles.
     *
     * Solution:
     * 1. Use -XX:+AlwaysPreTouch to pre-zero heap pages at startup
     * 2. Implement warm-up traffic routing (load balancer weight ramp)
     * 3. Run synthetic allocation load for 60s before accepting traffic
     * 4. For ZGC: first 2-3 cycles are slower as JIT optimizes load barriers
     *
     * io.thecodeforge.gc.WarmUpManager can handle synthetic warm-up.
     */
}
```
- < 500 MB/sec: Any collector handles this comfortably with default settings
- 500 MB/sec - 2 GB/sec: G1 works with tuning. ZGC generational mode handles well.
- 2-5 GB/sec: Requires aggressive tuning or allocation reduction. ZGC generational is best.
- > 5 GB/sec: Consider object pooling, arena allocation, or off-heap strategies. GC alone cannot keep up.
- Measure with: jstat -gc <pid> 1000; calculate (bytes allocated between samples) / interval
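The jstat-based measurement reduces to simple arithmetic. This sketch (hypothetical `AllocationRateGauge` class; the classification bands mirror the thresholds listed above) shows the calculation:

```java
public class AllocationRateGauge {

    /** Allocation rate in MB/sec from the byte delta between two jstat samples. */
    public static double mbPerSecond(long bytesAllocatedDelta, long intervalMillis) {
        return (bytesAllocatedDelta / (1024.0 * 1024.0)) / (intervalMillis / 1000.0);
    }

    /** Map a measured rate onto the bands described above. */
    public static String classify(double mbPerSec) {
        if (mbPerSec < 500) return "any collector, default settings";
        if (mbPerSec < 2048) return "G1 with tuning, or generational ZGC";
        if (mbPerSec <= 5120) return "aggressive tuning or allocation reduction";
        return "pooling, arena allocation, or off-heap";
    }

    public static void main(String[] args) {
        // 1.5GB allocated over a 1-second jstat interval = 1536 MB/sec
        double rate = mbPerSecond(1536L * 1024 * 1024, 1000);
        System.out.println(rate + " MB/s -> " + classify(rate));
    }
}
```

Note that the bytes-allocated delta must include memory already reclaimed between samples (eden capacity times the young-GC count delta), or bursty workloads will be undercounted.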
Monitoring and Observability for GC Health
GC tuning without observability is blind optimization. Every production JVM must emit GC metrics that allow correlation with application latency and throughput. The minimum viable GC observability setup includes pause time histograms, allocation rate tracking, and GC cycle phase breakdowns.
```java
package io.thecodeforge.gc;

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;
import java.util.List;

/**
 * Production GC metrics exporter for Prometheus/Micrometer integration.
 *
 * These metrics enable correlation of GC behavior with application
 * latency and throughput in your observability stack.
 */
public class GCMetricsExporter {

    /*
     * Essential GC metrics every production service must emit:
     *
     * 1. jvm_gc_pause_seconds{collector, action}
     *    - Histogram of GC pause durations
     *    - Alert on p99 > SLA threshold
     *
     * 2. jvm_gc_allocation_rate_mbps
     *    - Calculated from heap usage delta between GC cycles
     *    - Leading indicator of GC pressure
     *
     * 3. jvm_gc_live_data_size_bytes
     *    - Size of live objects after major collection
     *    - Growing trend = memory leak
     *
     * 4. jvm_gc_memory_promoted_bytes_total
     *    - Bytes promoted from young to old generation
     *    - High rate = short-lived objects escaping young gen
     *
     * 5. jvm_memory_used_bytes{area, pool}
     *    - Per-memory-pool usage
     *    - Alert on old gen > 80% sustained
     */
    public void exportGCMetrics() {
        List<GarbageCollectorMXBean> gcBeans =
                ManagementFactory.getGarbageCollectorMXBeans();
        for (GarbageCollectorMXBean gcBean : gcBeans) {
            String collectorName = gcBean.getName();
            long collectionCount = gcBean.getCollectionCount();
            long collectionTimeMs = gcBean.getCollectionTime();
            // Export as:
            // jvm_gc_collection_count_total{collector="<name>"} <count>
            // jvm_gc_collection_time_seconds_total{collector="<name>"} <time_sec>
            System.out.printf("Collector: %s, Count: %d, Time: %dms%n",
                    collectorName, collectionCount, collectionTimeMs);
        }

        // Memory pool monitoring for heap pressure detection
        List<MemoryPoolMXBean> memoryBeans = ManagementFactory.getMemoryPoolMXBeans();
        for (MemoryPoolMXBean pool : memoryBeans) {
            MemoryUsage usage = pool.getUsage();
            if (usage == null || usage.getMax() <= 0) {
                continue; // some pools report no defined maximum (-1)
            }
            long usedMB = usage.getUsed() / (1024 * 1024);
            long maxMB = usage.getMax() / (1024 * 1024);
            double utilization = (double) usage.getUsed() / usage.getMax() * 100;
            // Alert if old gen utilization > 80% for an extended period
            System.out.printf("Pool: %s, Used: %dMB, Max: %dMB, Util: %.1f%%%n",
                    pool.getName(), usedMB, maxMB, utilization);
        }
    }

    /*
     * GC log analysis commands for production triage:
     *
     * 1. Pause time distribution:
     *    grep 'Pause' gc.log | awk '{print $NF}' | sort -n | awk '
     *    {a[NR]=$1} END {print "p50:",a[int(NR*0.5)],"p99:",a[int(NR*0.99)],"max:",a[NR]}'
     *
     * 2. GC frequency over time:
     *    grep '\[gc.*\]' gc.log | awk '{print $1}' | cut -d'T' -f1 | uniq -c
     *
     * 3. Humongous allocation rate (G1 specific):
     *    grep 'humongous' gc.log | wc -l
     *    grep 'humongous' gc.log | awk '{sum+=$NF} END {print sum/NR, "bytes avg"}'
     *
     * 4. ZGC cycle time distribution:
     *    grep 'Garbage Collection.*GC\(' gc.log | grep -oP '\d+\.\d+ms' | sort -n
     *
     * 5. Shenandoah pacing delays:
     *    grep 'Pacing' gc.log | awk '{print $NF}' | sort -n | tail -20
     */
}
```
- Allocation rate trend: A steadily increasing allocation rate (week over week) means you will hit GC capacity limits. Fix before it becomes an incident.
- Live data size trend: A growing live data set after full GC means a memory leak. GC cannot reclaim it; the application is retaining references.
- Pause time p99 trend: If p99 pause time is growing over days, the heap is fragmenting or the live data set is growing. Investigate before it violates SLA.
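The live-data-size trend can also be sampled in-process through the standard `MemoryPoolMXBean.getCollectionUsage()` API, which reports each pool's occupancy as of its most recent collection. A minimal sketch (pool names differ per collector, so it sums all heap pools rather than naming one):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import java.lang.management.MemoryUsage;

public class LiveDataSampler {

    /** Approximate live data: bytes used across heap pools as of the last GC. */
    public static long liveBytesAfterLastGc() {
        long total = 0;
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() != MemoryType.HEAP) continue;
            MemoryUsage afterGc = pool.getCollectionUsage(); // null if unsupported
            if (afterGc != null) total += afterGc.getUsed();
        }
        return total;
    }

    public static void main(String[] args) {
        System.gc(); // request a collection so the post-GC snapshot is fresh
        // Export this value periodically; an upward trend under stable traffic
        // is the memory-leak signal described above.
        System.out.println("live bytes after last GC: " + liveBytesAfterLastGc());
    }
}
```

Exporting this as a gauge alongside pause-time histograms makes the "growing live data set" alert a one-line recording rule.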
| Characteristic | G1GC | ZGC | Shenandoah |
|---|---|---|---|
| JDK availability | JDK 7+ (default JDK 9+) | JDK 11+ (prod JDK 15+) | JDK 8+ (backports), prod JDK 15+ |
| Typical pause time | 50-200ms (tunable to ~50ms) | < 10ms (independent of heap) | < 10ms (independent of heap) |
| Throughput overhead | Baseline (lowest) | 10-15% vs G1 | 5-10% vs G1 |
| Native memory overhead | ~10-15% of heap | ~20-25% of heap | ~10-15% of heap + 8 bytes/object |
| Compressed oops | Supported | Not supported | Supported |
| Generational collection | Yes (built-in) | Yes (JDK 21+ with -XX:+ZGenerational) | No (full-heap concurrent) |
| Max tested heap | Terabytes | 16TB | Terabytes |
| Humongous objects | Problematic, requires tuning | No concept; handles large objects well | No concept; handles large objects well |
| Container friendliness | Good: predictable overhead | Poor: high native memory | Good: supports compressed oops |
| Allocation stall behavior | Full GC fallback (catastrophic) | Hard stall (backpressure) | Soft pacing (gradual degradation) |
| Tuning complexity | Moderate: many flags | Low: fewer flags, self-tuning | Low-moderate: heuristic modes |
| Community/ecosystem maturity | Very mature: default collector | Mature: growing adoption | Moderate: Red Hat backed |
| Best use case | General purpose, cost-sensitive | Ultra-low latency, large heaps | Low latency, memory-efficient, moderate heaps |
🎯 Key Takeaways
- G1 is the right default for most workloads; do not adopt ZGC or Shenandoah unless your latency SLA explicitly demands sub-50ms pauses.
- The most effective GC tuning is reducing allocation rate at the application level. No collector flag compensates for excessive allocation.
- ZGC's generational mode (JDK 21+) is transformative; always enable it. Non-generational ZGC is a throughput disaster on allocation-heavy workloads.
- Container memory must account for native overhead: 15% for G1/Shenandoah, 25% for ZGC. OOM kills from native memory exhaustion are the #1 containerized JVM incident.
- GC observability is non-negotiable. Track allocation rate, pause time p99, and live data size trend. These three metrics predict 90% of GC incidents.
- Humongous objects are G1's hidden failure mode. Monitor them proactively. Increase region size or chunk objects at the application level.
- Shenandoah's pacing creates smoother degradation than ZGC's allocation stalls. Choose Shenandoah for workloads where gradual degradation is preferred over hard backpressure.
- Never tune GC flags without detailed GC logging enabled. Default logging is insufficient for production diagnosis.
❌ Common Mistakes to Avoid
- ❌ Setting -Xmx without accounting for native memory overhead: JVM heap is not total JVM memory. GC internal structures (remembered sets, card tables, ZGC multi-mapping), thread stacks, metaspace, and direct byte buffers all consume off-heap memory. Setting the container memory limit equal to -Xmx guarantees OOM kills. Budget 15-25% extra depending on the collector. ✅ Fix: Use container_limit = Xmx * 1.15 (G1/Shenandoah) or Xmx * 1.25 (ZGC). Monitor with -XX:NativeMemoryTracking=detail.
- ❌ Choosing ZGC because 'lower pauses are always better': ZGC's 10-15% throughput overhead and 25% native memory overhead are real costs. If your latency SLA is 200ms, G1 meets it comfortably, and the throughput and memory savings translate to fewer pods and lower infrastructure cost. ✅ Fix: Only adopt ZGC when your measured p99 latency with tuned G1 exceeds your SLA. Profile first, then decide.
- ❌ Tuning GC flags without enabling detailed GC logging: Default GC logging is insufficient for production tuning. Without -Xlog:gc*,gc+phases=debug, you cannot see pause time breakdowns, humongous allocation rates, or evacuation failures. You are flying blind. ✅ Fix: Always enable -Xlog:gc*,gc+phases=debug:file=gc.log:time,uptime,level,tags. Rotate logs. Ship them to your observability platform.
- ❌ Using the same GC flags across all services: Each service has a different allocation profile, heap size, and latency requirement. Flags tuned for a low-allocation REST API will fail catastrophically on a high-throughput stream processor. ✅ Fix: Profile each service independently. Start with defaults; adjust based on measured allocation rate, pause times, and memory utilization.
- ❌ Ignoring humongous allocations in G1: Humongous objects (>50% of G1 region size) bypass normal region allocation and can trigger to-space exhausted failures. This is the #1 cause of unexpected full GC in G1-tuned services. ✅ Fix: Monitor with -Xlog:gc+humongous=debug. Increase -XX:G1HeapRegionSize so fewer objects cross the humongous threshold. Chunk large objects at the application level.
- ❌ Not setting Xms equal to Xmx for ZGC and Shenandoah: These collectors perform best with a fixed heap size. Dynamic heap resizing adds unnecessary complexity and can cause unpredictable behavior during resize events. G1 tolerates Xms != Xmx better, but fixed sizing is still recommended. ✅ Fix: Always set -Xms equal to -Xmx for production workloads on ZGC and Shenandoah.
- ❌ Measuring GC health by pause time alone: A collector with 5ms pauses that runs 1000 times per minute spends more time in GC than one with 50ms pauses that runs 10 times per minute. GC overhead = pause_time * frequency. Always measure both. ✅ Fix: Track GC overhead percentage (total GC time / total elapsed time). Alert if it exceeds 5% for latency-sensitive services or 10% for throughput-oriented services.
- ❌ Running non-generational ZGC in production on JDK 21+: Non-generational ZGC collects the entire heap every cycle, a throughput disaster on allocation-heavy workloads. Generational ZGC (JDK 21+) focuses on young objects and is dramatically more efficient. ✅ Fix: Always enable -XX:+ZGenerational on JDK 21+ production deployments. There is almost no reason to use non-generational ZGC on JDK 21+.
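The GC-overhead rule from the pause-time mistake above can be computed in-process with the standard `GarbageCollectorMXBean` API. A sketch (the 5%/10% alert thresholds are the ones suggested above, not JVM constants):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcOverheadCheck {

    /** GC overhead: percentage of wall-clock time spent in collections. */
    public static double overheadPercent(long totalGcMillis, long elapsedMillis) {
        return 100.0 * totalGcMillis / elapsedMillis;
    }

    /** Sum cumulative GC time across all collectors in this JVM. */
    public static long totalGcTimeMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 when the collector does not report it
            if (t > 0) total += t;
        }
        return total;
    }

    public static void main(String[] args) {
        long uptime = ManagementFactory.getRuntimeMXBean().getUptime();
        double pct = overheadPercent(totalGcTimeMillis(), uptime);
        // Alert thresholds from the text: >5% latency-sensitive, >10% throughput-oriented
        System.out.printf("GC overhead: %.2f%%%n", pct);
    }
}
```

Because `getCollectionTime()` is cumulative, sample it at two points in time and diff both values to get overhead over a window rather than since JVM start.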
Interview Questions on This Topic
- QExplain the fundamental trade-off between G1, ZGC, and Shenandoah. Why can't one collector optimize all three axes (pause time, throughput, memory efficiency)?Each collector makes a different bet on which two axes to optimize. G1 optimizes throughput and memory efficiency by accepting longer pauses (region-based evacuation with remembered sets). ZGC optimizes pause time and compaction by accepting throughput overhead (load barriers on every object access) and memory overhead (no compressed oops, multi-mapping). Shenandoah optimizes pause time and throughput balance by accepting per-object memory overhead (8-byte Brooks pointers). The fundamental constraint is Amdahl's Law applied to concurrent collection β doing more work concurrently requires more coordination overhead, which either costs CPU (throughput) or memory (metadata structures).
- QYour payment service is running G1 with 16GB heap. During peak traffic, you see 'to-space exhausted' in GC logs followed by a 12-second full GC. What is happening and how do you fix it?To-space exhausted means G1 cannot find free regions to evacuate live objects into. Common causes: (1) Humongous objects consuming free regions faster than concurrent marking can reclaim them β check with -Xlog:gc+humongous=debug and increase region size. (2) IHOP is miscalibrated β concurrent marking starts too late, so mixed GCs cannot free enough regions before young GC needs them. (3) Allocation rate exceeds reclamation capacity. Fix: increase -XX:G1ReservePercent to 15, set -XX:G1HeapRegionSize to reduce humongous threshold, lower -XX:InitiatingHeapOccupancyPercent to 30-35, and investigate allocation patterns at the application level. Doubling heap size without fixing the allocation pattern just delays the same failure.
- Q: You are migrating a service from G1 to ZGC. After migration, p99 latency improved from 120ms to 8ms, but throughput dropped 15% and the service needs 25% more memory in Kubernetes. The team wants to revert. How do you evaluate this decision? A: First, determine whether the 15% throughput drop is acceptable given the latency improvement. If the SLA requires sub-10ms p99, ZGC is the only option and the throughput cost is the price of admission. If the SLA allows 120ms, reverting to G1 makes sense: you save infrastructure cost. For the memory issue, check whether generational ZGC is enabled (-XX:+ZGenerational on JDK 21+), which reduces allocation overhead. Also verify container memory is set to Xmx * 1.25 to account for ZGC's native overhead. The decision should be driven by SLA requirements, not by which collector 'feels better'.
- Q: What is the difference between ZGC's allocation stall and Shenandoah's pacing mechanism? Which creates a better user experience under load? A: ZGC's allocation stall is a hard backpressure mechanism: when the collector falls behind, allocating threads are blocked until it catches up, producing sharp latency spikes. Shenandoah's pacing adds proportional delays to allocations based on how far behind the collector is, creating a smooth degradation curve. For user experience, Shenandoah's pacing is generally better; users see gradually increasing latency rather than intermittent hard freezes. However, ZGC's stalls are more predictable for capacity planning because they are a clear signal that the service needs more heap or a lower allocation rate.
- Q: A service has a large on-heap cache holding 10GB of data with a 24-hour TTL. How does this affect each collector, and what would you recommend? A: A 10GB long-lived cache creates a large live data set that every collector must account for. G1: concurrent marking must scan 10GB of live data, lengthening the mark phase; mixed GCs will skip regions that are mostly live cache data, so set -XX:G1MixedGCLiveThresholdPercent=90 to skip nearly-full regions. ZGC: handles large live data sets well because marking is fully concurrent; no special tuning needed. Shenandoah: similar to ZGC, concurrent marking handles this well. For all collectors, the best optimization is moving the cache off-heap (e.g. Chronicle Map, or an external store like Redis) so the GC never has to scan the cache at all.
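The idea behind "move the cache off-heap" can be sketched with the JDK's own primitive, a direct ByteBuffer: its contents live in native memory, so the GC scans only the small buffer object itself, never the cached bytes. This is a minimal illustration, not a real cache; production off-heap stores such as Chronicle Map add indexing, eviction, and serialization on top of this primitive.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Minimal sketch: stash a value in native (off-heap) memory via a
// direct ByteBuffer, so the GC never marks through the payload bytes.
public class OffHeapSketch {
    static ByteBuffer store(String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocateDirect(bytes.length); // native memory
        buf.put(bytes).flip(); // make the buffer readable from position 0
        return buf;
    }

    static String load(ByteBuffer buf) {
        byte[] bytes = new byte[buf.remaining()];
        buf.duplicate().get(bytes); // duplicate() keeps the read repeatable
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer entry = store("cached-value");
        System.out.println(load(entry)); // prints cached-value
    }
}
```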
- Q: How do you calculate the right container memory limit for a JVM running ZGC with a 32GB heap? A: ZGC's native memory overhead comes from: (1) Multi-mapping: ZGC maps the heap at multiple virtual addresses for colored-pointer management, consuming extra virtual address space. (2) No compressed oops: on heaps that would otherwise use compressed oops, ZGC cannot, increasing reference size from 4 to 8 bytes. (3) GC-internal structures: page tables, marking stacks, relocation data structures. Formula: container_limit = Xmx * 1.25 = 32GB * 1.25 = 40GB. Additionally, account for thread stacks (500 threads * 1MB = 500MB), metaspace (~200MB), and direct byte buffers. Total recommended: 42-44GB container limit.
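The arithmetic above can be captured as a small helper. This is a sketch of the rule of thumb only; the 1.25 factor and the per-component sizes are the assumptions stated in the answer, not measured values, so confirm against `jcmd <pid> VM.native_memory summary` for your own service.

```java
// Sketch: container memory limit for a ZGC JVM, per the rule of thumb
// above (heap * 1.25, plus thread stacks, metaspace, direct buffers).
public class ZgcContainerSizing {
    static final long GIB = 1024L * 1024 * 1024;
    static final long MIB = 1024L * 1024;

    // heap * 1.25 covers ZGC's native overhead (multi-mapping,
    // uncompressed oops, GC metadata) -- an assumption, not a guarantee
    static long zgcFootprint(long heapBytes) {
        return (long) (heapBytes * 1.25);
    }

    // add the non-heap consumers listed in the answer
    static long containerLimit(long heapBytes, int threads, long stackBytes,
                               long metaspaceBytes, long directBufBytes) {
        return zgcFootprint(heapBytes)
                + (long) threads * stackBytes
                + metaspaceBytes
                + directBufBytes;
    }

    public static void main(String[] args) {
        // 32 GiB heap, 500 threads with 1 MiB stacks, ~200 MiB metaspace,
        // 1 GiB direct buffers -> ~41.7 GiB, which the 42-44 GiB
        // recommendation above rounds up for safety margin
        long limit = containerLimit(32 * GIB, 500, MIB, 200 * MIB, GIB);
        System.out.printf("recommended container limit: %.1f GiB%n",
                limit / (double) GIB);
    }
}
```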
- Q: Explain why setting -XX:MaxGCPauseMillis=200 does not guarantee a 200ms maximum pause with G1. A: MaxGCPauseMillis is a soft target that G1 uses to calibrate how many regions it collects per pause. G1 will collect fewer regions per pause to try to meet the target, but several conditions force it to overshoot: (1) Evacuation time is proportional to live data in the chosen regions, not region count, so a single dense region can blow the budget. (2) Humongous objects bypass the normal region collection model. (3) If to-space exhausted occurs, G1 falls back to a full GC, which ignores MaxGCPauseMillis entirely. (4) Remark phase duration depends on reference-processing workload, which is application-dependent. The flag influences G1's adaptive sizing decisions but does not impose a hard ceiling on any individual pause.
- Q: You need to support both a latency-sensitive API (p99 < 20ms) and a batch processing job in the same JVM. Which collector do you choose and why? A: This is a trap question; the correct answer is usually 'separate them into different JVMs.' A single JVM cannot optimize for both latency sensitivity and throughput simultaneously. If forced to use one JVM, choose ZGC with generational mode: the batch job's allocations will be short-lived and collected efficiently in the young generation, while the API's request-scoped objects benefit from ZGC's sub-10ms pauses. G1 would struggle because the batch job's high allocation rate would trigger frequent mixed GCs that impact API latency, and Shenandoah's pacing would slow down the batch job unnecessarily. But the real answer: isolate latency-sensitive and throughput-oriented workloads into separate JVMs with collector-specific tuning.
Frequently Asked Questions
Should I use G1, ZGC, or Shenandoah for my microservice?
Start with G1. If your p99 latency with tuned G1 exceeds your SLA, evaluate ZGC (for large heaps or ultra-low latency) or Shenandoah (for moderate heaps with memory constraints). Profile your actual workload; do not choose based on benchmarks or blog posts.
How much heap should I allocate in a Kubernetes container?
Set -Xmx to container_memory_limit / 1.15 for G1 or Shenandoah, or container_memory_limit / 1.25 for ZGC. Always set -Xms equal to -Xmx. The remaining memory covers thread stacks, metaspace, GC native structures, and direct byte buffers.
What is the difference between a young GC and a mixed GC in G1?
Young GC collects only eden and survivor regions; survivor regions hold objects that have lived through one or more young collections. Mixed GC collects young regions plus old regions identified as mostly garbage during the preceding concurrent marking cycle. Mixed GCs are how G1 reclaims old-generation space without a full GC.
Can I switch collectors without restarting the JVM?
No. The garbage collector is selected at JVM startup and cannot be changed at runtime. This is a fundamental JVM design constraint. If you need to test a different collector, deploy a separate instance with the new collector flags.
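Because the collector is fixed at startup, the most a running process can do is report which one was selected. A small sketch using the standard management API (bean names vary by collector, e.g. "G1 Young Generation"/"G1 Old Generation" under G1):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Report the collector chosen at JVM startup -- it cannot be changed
// afterwards, only inspected.
public class WhichCollector {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + " (collections: "
                    + gc.getCollectionCount() + ")");
        }
    }
}
```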
How do I know if my allocation rate is too high?
Calculate allocation rate from jstat output: (bytes allocated between samples) / time interval. If allocation rate consistently exceeds 2GB/sec and you are seeing GC pressure (frequent cycles, growing pause times), the rate is too high for comfortable GC operation. Profile with async-profiler or JFR to identify allocation hotspots.
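The calculation is simple enough to sketch. This assumes allocation lands in eden and that no young GC ran between the two samples; a real tool (or jstat itself) must also track the GC count, because a collection between samples makes the raw delta meaningless.

```java
// Sketch: allocation rate from two eden-occupancy samples, as you would
// take them from successive `jstat -gc` readings (converted to bytes).
public class AllocRate {
    // bytes allocated per second between two samples; assumes no GC
    // occurred in the interval (check the young-GC count to verify)
    static double allocationRateBytesPerSec(long edenUsedBefore,
                                            long edenUsedAfter,
                                            long intervalMillis) {
        long allocated = edenUsedAfter - edenUsedBefore;
        return allocated / (intervalMillis / 1000.0);
    }

    public static void main(String[] args) {
        // eden grew from 1 GiB to 3 GiB over a 1-second interval:
        double rate = allocationRateBytesPerSec(1L << 30, 3L << 30, 1000);
        // 2 GiB/s -- right at the "too high" threshold mentioned above
        System.out.printf("allocation rate: %.1f GiB/s%n", rate / (1L << 30));
    }
}
```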
Does ZGC work on ARM processors?
Yes. ZGC supports x86_64, AArch64 (ARM 64-bit), and other 64-bit architectures as of JDK 17+. Earlier JDK versions had limited ARM support. Verify your specific JDK version's platform support matrix.
What causes 'allocation stall' in ZGC logs?
Allocation stall means ZGC cannot keep up with the allocation rate. The JVM temporarily blocks allocating threads while the collector catches up. This is ZGC's backpressure mechanism. Fix by: increasing -XX:ConcGCThreads, reducing allocation rate at the application level, or increasing heap size / lowering SoftMaxHeapSize to trigger cycles earlier.
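As a sketch, the flag combination described above might look like this. The sizes and thread count are illustrative placeholders (as is app.jar), not recommendations; tune against your own allocation rate:

```shell
# More concurrent GC threads, plus a soft heap target below Xmx so
# ZGC starts cycles earlier instead of stalling allocators
java -XX:+UseZGC -XX:+ZGenerational \
     -Xmx32g -XX:SoftMaxHeapSize=28g \
     -XX:ConcGCThreads=6 \
     -Xlog:gc*:file=gc.log \
     -jar app.jar
```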
Is Shenandoah production-ready?
Yes. Shenandoah has been production-ready since JDK 12 and is actively maintained by Red Hat. It is used in production at scale by Red Hat's own infrastructure and by customers running OpenJDK. It is less widely adopted than G1 or ZGC but is a mature, reliable collector.
Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.