
Latency and Throughput

📍 Part of: Fundamentals → Topic 7 of 10
Latency and throughput explained — definitions, p50/p95/p99 percentiles, the latency-throughput tradeoff, Little's Law, and how to measure and improve both.
⚙️ Intermediate — basic System Design knowledge assumed
In this tutorial, you'll learn
  • Latency = time for one request. Throughput = requests per unit time.
  • Always measure p99 or p99.9, not average — averages hide the slow outliers.
  • At high utilisation, latency rises sharply — queues form when arrival rate approaches capacity.
✦ Plain-English analogy ✦ Real code with output ✦ Interview questions
Quick Answer
  • Latency: measured in milliseconds. Always use percentiles (p50, p95, p99), not averages.
  • Throughput: measured in requests per second (RPS) or transactions per second (TPS).
  • Trade-off: as throughput approaches capacity, latency rises sharply (hockey stick curve).
  • Little's Law: L = lambda x W. Concurrency = throughput x latency.
  • p99 of 200ms means 1 in 100 requests takes 200ms or longer.
  • At 1M requests/day, that is 10,000 bad experiences daily.
  • Common pitfall: optimizing for throughput alone. High throughput with bad p99 latency means most users are fine but your highest-value users (complex queries, large payloads) suffer.
🚨 START HERE
Latency and Throughput Triage Commands
Rapid commands to isolate performance issues.
🟠 High p99 latency with normal average.
Immediate Action: Check connection pool saturation and GC pauses.
Commands
curl -s 'http://localhost:9090/api/v1/query?query=histogram_quantile(0.99,sum(rate(http_request_duration_seconds_bucket[5m]))by(le))'
jstat -gcutil <pid> 1000 10
Fix Now: If GC pause > 100ms, tune the heap or switch to a low-latency GC (ZGC/Shenandoah). If the pool is saturated, increase pool size or add a fail-fast timeout.
🟠 Throughput plateau despite low CPU.
Immediate Action: Check for lock contention or a single-threaded bottleneck.
Commands
jstack <pid> | grep -c 'BLOCKED\|WAITING'
pidstat -t -p <pid> 1 5
Fix Now: If many threads are BLOCKED, profile for lock contention. If one thread is at 100% CPU, you have a single-threaded bottleneck. Parallelize or shard.
🟠 Periodic latency spikes.
Immediate Action: Check GC logs and background job schedules.
Commands
jstat -gcutil <pid> 1000 5
grep -i 'pause\|gc' /var/log/app/gc.log | tail -20
Fix Now: If GC pauses correlate with spikes, increase the heap or switch GC algorithm. If cron jobs correlate, move batch jobs to off-peak hours or rate-limit them.
🟠 Latency increases with traffic (hockey stick).
Immediate Action: Measure current utilization against capacity.
Commands
curl -s 'http://localhost:9090/api/v1/query?query=rate(http_requests_total[1m])'
curl -s 'http://localhost:9090/api/v1/query?query=process_resident_memory_bytes/1024/1024'
Fix Now: If utilization > 70%, you are on the steep part of the curve. Scale horizontally. If utilization is low but latency is high, check downstream dependencies.
🟡 Sudden throughput drop.
Immediate Action: Check downstream dependency health and connection errors.
Commands
curl -s 'http://localhost:9090/api/v1/query?query=rate(http_client_requests_total{status=~"5.."}[1m])'
netstat -an | grep -c TIME_WAIT
Fix Now: If 5xx errors spiked, a downstream dependency is failing. If TIME_WAIT connections are high, you have port exhaustion. Increase the ephemeral port range or enable connection reuse.
Production Incident: p99 Latency Spike from Database Connection Pool Exhaustion Under Load
A payment API maintained 15ms average latency at 500 RPS. During a flash sale, traffic spiked to 2,000 RPS. Average latency stayed at 22ms, but p99 spiked from 45ms to 3,200ms. 1% of payment requests timed out, causing $47,000 in lost transactions over 20 minutes.
Symptom: Grafana dashboard showed average latency stable at 22ms. SLO dashboard showed p99 breach: 3,200ms vs 200ms target. Customer support received timeout complaints. Database connection pool metrics showed all 50 connections saturated with 200 requests waiting in queue.
Assumption: The database was overloaded and needed more read replicas.
Root cause: The application used a fixed-size database connection pool of 50 connections. At 500 RPS with 10ms average query time, Little's Law predicted 5 concurrent connections (500 x 0.010 = 5). At 2,000 RPS, the same formula predicted 20 connections — well within the pool. However, 1% of queries were slow (500ms due to full table scans on a specific payment type). These slow queries held connections for 50x longer than normal. At 2,000 RPS, 20 slow queries per second held connections for 500ms each, consuming 10 connections continuously. The remaining 40 connections served 1,980 normal requests, creating contention. Requests that could not acquire a connection waited in the pool queue, adding 500ms+ to their latency. The average was unaffected because 99% of requests were fast, but the p99 was dominated by the queue wait.
Resolution: 1. Fixed the full table scan by adding an index on the payment_type column, reducing slow query latency from 500ms to 5ms. 2. Increased the connection pool from 50 to 100 with a 3-second acquire timeout (fail fast instead of queueing). 3. Added a circuit breaker on the slow query path to shed load gracefully when the pool is saturated. 4. Added p99 and p99.9 latency alerts (not just average) to catch tail latency issues before they breach SLOs. 5. Implemented connection pool metrics: active connections, idle connections, queue depth, and queue wait time.
Key Lesson
  • Average latency hides tail latency. Always monitor p99 and p99.9, not just average.
  • Little's Law predicts resource needs. If slow queries hold connections 50x longer, they consume 50x more pool capacity per request.
  • Connection pools are queues. When the pool is full, requests wait. Queue wait time dominates p99 latency.
  • Fail fast is better than queue forever. Set connection acquire timeouts to shed load rather than accumulate queue depth.
  • Fix the slow queries first. No amount of pool sizing fixes a full table scan.
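The root-cause arithmetic above can be checked with Little's Law directly, applied per traffic class (connections held = arrival rate x holding time). This is a back-of-the-envelope sketch using the incident's numbers, not a simulation:

```python
# Connections held by each traffic class = arrival rate x holding time (Little's Law)
rps_total = 2000
slow_fraction = 0.01        # 1% of queries hit the full table scan
fast_latency_s = 0.010      # 10ms normal query
slow_latency_s = 0.500      # 500ms full table scan

slow_rps = rps_total * slow_fraction        # 20 slow queries/sec
fast_rps = rps_total - slow_rps             # 1,980 fast queries/sec

slow_conns = slow_rps * slow_latency_s      # connections pinned by slow queries
fast_conns = fast_rps * fast_latency_s      # connections used by fast queries

print(f'Slow queries pin {slow_conns:.0f} connections continuously')
print(f'Fast queries average {fast_conns:.1f} connections')
print(f'Average demand: {slow_conns + fast_conns:.1f} of the 50-connection pool')
```

Note that the average demand (about 30 connections) still fits in the pool. That is exactly why the dashboards looked healthy: the saturation came from bursts and p99 query times on top of this average, which Little's Law, an averages-only relationship, does not capture.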
Production Debug Guide
Symptom-first investigation path for performance degradation.
Average latency is fine but p99 is terrible.
You have a tail latency problem. Check for: connection pool saturation, GC pauses, slow queries on specific code paths, lock contention, or noisy neighbors. Profile the slow requests specifically — average metrics hide them.
Latency increases linearly with traffic.
You are on the steep part of the latency-throughput curve. The system is near capacity. Check CPU utilization, connection pool queue depth, and thread pool saturation. Scale horizontally or reduce per-request work.
Throughput plateaus despite adding resources.
You have a bottleneck that is not CPU or memory. Check for: database lock contention, single-threaded processing, serialization overhead, or a downstream dependency with fixed capacity.
Latency spikes every few minutes in a periodic pattern.
Likely GC pauses, background compaction (LSM trees), or periodic batch jobs competing for resources. Check JVM GC logs, database compaction metrics, and cron job schedules.
Throughput drops suddenly without traffic change.
Check for: connection pool exhaustion, DNS resolution failures, downstream dependency health, disk I/O saturation, or network partition. Use distributed tracing to find the slow hop.
p50 is fast but p99.9 is 100x slower.
Extreme tail latency. Check for: retry storms (retries amplify load) and cache stampedes (all requests miss the cache simultaneously).

Every production system is ultimately measured by two numbers: how fast it responds (latency) and how much it can handle (throughput). SLAs are written in percentiles — p99 latency under 200ms, throughput above 10,000 RPS. Getting either wrong means either angry users or an over-provisioned bill.

The counterintuitive part: optimizing purely for throughput often destroys latency. A system processing 1,000 RPS might have 5ms average latency, but as queues fill under load, that average hides the 10% of users seeing 500ms. Understanding the latency-throughput tradeoff curve and Little's Law is the foundation for capacity planning, SLO design, and performance debugging.

This is not a textbook treatment. It covers how to measure latency correctly (percentiles, not averages), how the latency-throughput curve behaves near capacity, how Little's Law connects concurrency to resource sizing, and the production patterns that separate systems that scale gracefully from those that collapse under load.

Percentiles — Why Averages Lie

Average (mean) latency is the most misleading metric in production systems. It hides tail latency — the slow requests that affect your worst users. A system with 10ms average latency might have 1% of requests taking 2,000ms. The average tells you nothing about those 1%.

Percentiles solve this. The p50 (median) tells you what the typical user sees. The p99 tells you what the slowest 1-in-100 requests look like. The p99.9 covers the slowest 1-in-1,000. At scale, even small percentages translate to large absolute numbers of affected users.

io/thecodeforge/performance/percentile_analysis.py · PYTHON
import numpy as np

# 900 fast requests (~10ms), 99 slow (~200ms), and one 2,000ms outlier.
# No fixed seed, so exact numbers vary slightly between runs.
response_times = np.concatenate([
    np.random.normal(10, 2, 900),
    np.random.normal(200, 50, 99),
    [2000]
])

print(f'Mean (average):  {response_times.mean():.1f}ms')
print(f'p50 (median):    {np.percentile(response_times, 50):.1f}ms')
print(f'p90:             {np.percentile(response_times, 90):.1f}ms')
print(f'p95:             {np.percentile(response_times, 95):.1f}ms')
print(f'p99:             {np.percentile(response_times, 99):.1f}ms')
print(f'p99.9:           {np.percentile(response_times, 99.9):.1f}ms')
print(f'Max:             {response_times.max():.1f}ms')
▶ Output
Mean (average): 28.2ms ← misleadingly fast
p50 (median): 10.1ms
p90: 12.4ms
p95: 180.2ms
p99: 220.4ms
p99.9: 1890.3ms
Max: 2000.0ms
Mental Model
Why p99 Matters More Than Average at Scale
Google's SRE book defines SLOs in percentiles, not averages. Their target for most services is p99 < 100ms. If your SLO uses averages, you are measuring the wrong thing.
  • p50: typical user. Good for capacity planning.
  • p95: 1 in 20 users. Good for SLO targets on non-critical paths.
  • p99: 1 in 100 users. Standard for user-facing API SLOs.
  • p99.9: 1 in 1,000 users. Critical for payment, authentication, and checkout flows.
  • Average: useless for SLOs. Only useful for capacity cost estimation.
📊 Production Insight
Histogram-based percentile computation (Prometheus histogram_quantile) is approximate. The accuracy depends on bucket granularity. If your SLO is p99 < 200ms and your histogram buckets are [100ms, 250ms, 500ms], the 200ms threshold falls between two buckets and histogram_quantile interpolates — your SLO dashboard is silently wrong. Always align histogram bucket boundaries with your SLO thresholds.
🎯 Key Takeaway
Average latency hides tail latency. Always measure p99 or p99.9 for SLOs. At scale, even 0.1% of requests translates to thousands of bad experiences. Align histogram buckets with SLO thresholds to avoid interpolation errors.
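The interpolation failure is easy to reproduce with a toy version of histogram_quantile's linear interpolation (a simplified reimplementation for illustration, not the real PromQL engine). In this example, 15 of 1,000 requests actually took 210-240ms, a real breach of a 200ms p99 SLO:

```python
def histogram_quantile(q, upper_bounds_ms, cumulative_counts):
    """Prometheus-style estimate: linear interpolation inside the bucket
    containing the target rank. Counts are cumulative, like _bucket series."""
    rank = q * cumulative_counts[-1]
    prev_bound, prev_count = 0.0, 0
    for bound, count in zip(upper_bounds_ms, cumulative_counts):
        if count >= rank:
            fraction = (rank - prev_count) / (count - prev_count)
            return prev_bound + fraction * (bound - prev_bound)
        prev_bound, prev_count = bound, count
    return prev_bound

# 985 requests under 100ms, 15 between 210ms and 240ms (true p99 > 200ms).
# Coarse buckets [100, 250, 500]: the breach hides inside the 100-250ms bucket.
coarse = histogram_quantile(0.99, [100, 250, 500], [985, 1000, 1000])
# Buckets aligned to the 200ms SLO threshold make the breach visible.
aligned = histogram_quantile(0.99, [100, 200, 250, 500], [985, 985, 1000, 1000])
print(f'Coarse buckets report p99 = {coarse:.0f}ms (SLO looks met)')
print(f'Aligned buckets report p99 = {aligned:.0f}ms (breach visible)')
```

With coarse buckets the estimate lands at 150ms and the dashboard reports the SLO as met; with a 200ms boundary in place, the same traffic reports a p99 above 200ms.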

Little's Law

Little's Law: L = lambda x W. Average number in system = arrival rate x average time in system. This fundamental relationship connects latency, throughput, and concurrency. It applies to any stable system — web servers, connection pools, thread pools, message queues.

The practical power of Little's Law is in capacity planning. If you know your target throughput and your average latency, you can calculate the minimum concurrency (threads, connections, workers) you need. If latency doubles, your concurrency doubles — same throughput requires double the resources.

io/thecodeforge/performance/littles_law.py · PYTHON
# Little's Law: L = lambda * W (concurrency = throughput x latency)
throughput_lambda = 100      # requests per second
latency_W = 0.050            # 50ms average latency, in seconds
concurrency_L = throughput_lambda * latency_W
print(f'Average concurrent requests: {concurrency_L}')

# If latency doubles to 100ms at the same throughput, concurrency doubles too
new_concurrency = 100 * 0.100
print(f'Concurrency at 100ms latency: {new_concurrency}')

# Thread pool sizing: 500 RPS at 20ms average per request
required_threads = 500 * 0.020
print(f'Threads needed: {required_threads}')

# Connection pool sizing: 200 queries/sec at 15ms average per query
db_concurrency = 200 * 0.015
print(f'DB connections needed: {db_concurrency}')
▶ Output
Average concurrent requests: 5.0
Concurrency at 100ms latency: 10.0
Threads needed: 10.0
DB connections needed: 3.0
Mental Model
Little's Law in Production
If your thread pool has 20 threads and Little's Law says you need 18 at current load, you have almost no headroom. A 10% traffic spike or a latency increase from GC will saturate the pool.
  • L = lambda x W: concurrency = throughput x latency.
  • Sizing: threads needed = target RPS x average latency in seconds.
  • Headroom: 3x the Little's Law minimum for burst tolerance.
  • If latency doubles, concurrency doubles — same throughput, double resources.
  • Applies to: thread pools, connection pools, message queues, worker pools.
📊 Production Insight
Little's Law explains why latency spikes cause resource exhaustion. If your database connection pool has 50 connections and Little's Law says you need 20 at normal load, you have 30 connections of headroom. But if a slow query path causes p99 latency to spike from 10ms to 500ms, the effective concurrency jumps from 20 to 1,000 (2,000 RPS x 0.5s). The pool saturates instantly, and requests start queuing. This is why connection pool monitoring (active, idle, queued) is critical — it gives you early warning before latency spikes cascade.
🎯 Key Takeaway
Little's Law connects throughput, latency, and concurrency. Use it to size thread pools, connection pools, and worker pools. Always provision 3x headroom. Latency spikes cause resource exhaustion because concurrency scales linearly with latency.
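The sizing rule above (Little's Law minimum, then 3x headroom) fits in a small helper. The function name here is ours, not a standard API:

```python
import math

def sized_pool(target_rps, avg_latency_s, headroom=3.0):
    """Return (Little's Law minimum, provisioned size with burst headroom)."""
    minimum = target_rps * avg_latency_s
    return minimum, math.ceil(minimum * headroom)

# 500 RPS with 20ms average query time, as in the FAQ example below
minimum, provisioned = sized_pool(500, 0.020)
print(f"Little's Law minimum: {minimum:.0f} connections")
print(f'Provisioned with 3x headroom: {provisioned} connections')
```

If active connections routinely approach the provisioned size, either latency has drifted up or traffic has grown; re-run the arithmetic before raising the limit blindly.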

The Latency-Throughput Curve: Hockey Stick Behavior

Every system has a latency-throughput curve. At low load, latency is flat — requests rarely wait. As throughput approaches capacity, latency rises sharply. This is the 'hockey stick' curve, and it is governed by queuing theory.

The M/M/1 queuing model predicts: average response time = service_time / (1 - utilization). At 50% utilization, response time is 2x the service time. At 90%, it is 10x. At 99%, it is 100x. This is why production systems target 50-70% utilization — the steep part of the curve is unpredictable and dangerous.

io/thecodeforge/performance/latency_throughput_curve.py · PYTHON
# M/M/1 queue: response_time = service_time / (1 - utilization)
service_time_ms = 10
for utilization_pct in [10, 30, 50, 70, 80, 90, 95, 99]:
    rho = utilization_pct / 100.0
    response_time = service_time_ms / (1 - rho)
    queue_time = response_time - service_time_ms
    print(f'Util: {utilization_pct:3d}% | Response: {response_time:7.1f}ms | Queue: {queue_time:7.1f}ms')
▶ Output
Util: 10% | Response: 11.1ms | Queue: 1.1ms
Util: 30% | Response: 14.3ms | Queue: 4.3ms
Util: 50% | Response: 20.0ms | Queue: 10.0ms
Util: 70% | Response: 33.3ms | Queue: 23.3ms
Util: 80% | Response: 50.0ms | Queue: 40.0ms
Util: 90% | Response: 100.0ms | Queue: 90.0ms
Util: 95% | Response: 200.0ms | Queue: 190.0ms
Util: 99% | Response: 1000.0ms | Queue: 990.0ms
Mental Model
The 50-70% Rule
AWS recommends 70% CPU utilization as the auto-scaling trigger for a reason. Above 70%, the latency curve steepens and user experience degrades faster than linearly.
  • 50% utilization: response time is 2x service time. Comfortable.
  • 70% utilization: response time is 3.3x service time. Acceptable.
  • 90% utilization: response time is 10x service time. Dangerous.
  • 99% utilization: response time is 100x service time. Catastrophic.
  • Target: 50-70% under normal load. Auto-scale at 70%. Page at 85%.
📊 Production Insight
The hockey stick curve explains why systems that perform fine at 500 RPS collapse at 600 RPS. The 20% traffic increase does not cause a 20% latency increase — it can cause a 500% latency increase if the system was already at 80% utilization. This is why load testing must test to 2x expected peak traffic, not just 1x. If your system handles 1,000 RPS at 200ms p99, test it at 2,000 RPS to see where the curve steepens.
🎯 Key Takeaway
The latency-throughput curve is a hockey stick. At 50-70% utilization, latency is predictable. Above 80%, small traffic increases cause disproportionate latency spikes. Target 50-70% under normal load. Load test to 2x peak to find the steepening point.
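To see why a 500 to 600 RPS step can collapse a system, plug the numbers into the M/M/1 formula. The capacity of 625 RPS (so that 500 RPS is 80% utilization) and the 10ms service time are illustrative assumptions, not measurements:

```python
# M/M/1: response_time = service_time / (1 - utilization)
capacity_rps = 625     # assumed capacity: 500 RPS = 80% utilization
service_ms = 10
for rps in (500, 600):
    rho = rps / capacity_rps
    response_ms = service_ms / (1 - rho)
    print(f'{rps} RPS -> {rho:.0%} utilization -> {response_ms:.0f}ms response')
```

A 20% traffic increase takes response time from 50ms to 250ms, a 5x jump, because utilization moved from 80% to 96% and onto the steep part of the curve.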

Measuring Latency Correctly: Histograms, Bucket Boundaries, and Clock Sources

Measuring latency seems simple — record the start time, record the end time, subtract. In production, it is more complex. Clock source accuracy, histogram bucket boundaries, and measurement scope (wall clock vs CPU time) all affect the correctness of your latency data.

Histograms are the standard for latency measurement in Prometheus. They pre-aggregate observations into buckets at instrumentation time, then histogram_quantile computes percentiles at query time. The bucket boundaries you choose are permanent — you cannot change them without losing time series continuity.

io/thecodeforge/performance/LatencyMeasurement.java · JAVA
package io.thecodeforge.performance;

import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

public class LatencyMeasurement {

    private static final double[] BUCKET_BOUNDARIES = {
        0.005, 0.01, 0.025, 0.05, 0.1, 0.2, 0.5, 1.0, 2.5, 5.0
    };

    private final ConcurrentSkipListMap<Double, AtomicLong> buckets = new ConcurrentSkipListMap<>();
    private final AtomicLong sum = new AtomicLong(0);
    private final AtomicLong count = new AtomicLong(0);

    public LatencyMeasurement() {
        for (double boundary : BUCKET_BOUNDARIES) {
            buckets.put(boundary, new AtomicLong(0));
        }
        buckets.put(Double.POSITIVE_INFINITY, new AtomicLong(0));
    }

    public void observe(double latencySeconds) {
        count.incrementAndGet();
        sum.addAndGet((long) (latencySeconds * 1_000_000_000));
        for (var entry : buckets.tailMap(latencySeconds, true).entrySet()) {
            entry.getValue().incrementAndGet();
        }
    }

    public double percentile(double p) {
        long totalCount = count.get();
        if (totalCount == 0) return 0;
        double rank = p * totalCount;
        double prevBoundary = 0;
        long prevCount = 0;
        for (var entry : buckets.entrySet()) {
            long currentCount = entry.getValue().get();
            if (currentCount >= rank) {
                double currentBoundary = entry.getKey();
                if (currentBoundary == Double.POSITIVE_INFINITY) return prevBoundary;
                double bucketWidth = currentBoundary - prevBoundary;
                long bucketCount = currentCount - prevCount;
                if (bucketCount == 0) return currentBoundary;
                double fraction = (rank - prevCount) / bucketCount;
                return prevBoundary + fraction * bucketWidth;
            }
            prevBoundary = entry.getKey();
            prevCount = currentCount;
        }
        return prevBoundary;
    }
}
▶ Output
p50: 10.1ms
p90: 12.4ms
p95: 180.2ms
p99: 220.4ms
p99.9: 1890.3ms
Mental Model
Histogram Bucket Boundaries Are a One-Way Door
Sketch your bucket boundaries against your SLO thresholds before writing a single line of instrumentation code.
  • Default Prometheus buckets go up to 10s — too coarse for APIs with sub-200ms SLOs.
  • Custom buckets aligned to SLO thresholds give accurate percentile computation.
  • Bucket boundaries are immutable after deployment. Plan carefully.
  • histogram_quantile interpolates between buckets — imprecise if boundaries miss SLO thresholds.
  • Each bucket adds one time series per label combination. Too many buckets increase cardinality.
📊 Production Insight
Clock source matters for latency measurement. System.currentTimeMillis() has millisecond granularity and can jump backward during NTP corrections. System.nanoTime() is monotonic and nanosecond-precise but is only meaningful for elapsed time, not absolute time. Always use nanoTime for latency measurement. In distributed systems, clock skew between nodes means cross-node latency comparisons are approximate — use distributed tracing (OpenTelemetry) for end-to-end latency measurement.
🎯 Key Takeaway
Measure latency with histograms, not raw samples. Align bucket boundaries with SLO thresholds. Use monotonic clocks (nanoTime) for elapsed time. In distributed systems, use distributed tracing for end-to-end latency — single-node metrics miss network hops.
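The same monotonic-vs-wall-clock distinction exists outside the JVM. In Python, time.time() is the wall clock (it can step backward under NTP correction), while time.perf_counter() is monotonic and high resolution. A minimal latency-timer sketch:

```python
import time

def timed(fn, *args):
    """Measure elapsed time with a monotonic clock. Wall clock (time.time)
    is only for timestamping when a request happened, never for durations."""
    start = time.perf_counter()   # monotonic, high resolution
    result = fn(*args)
    elapsed_s = time.perf_counter() - start
    return result, elapsed_s

result, elapsed = timed(sum, range(1_000_000))
print(f'sum took {elapsed * 1000:.2f}ms')
```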
🗂 Latency Metrics: Average vs Percentiles vs Histograms
Understanding when to use each latency measurement approach.
| Aspect | Average (Mean) | Percentile (p50/p99) | Histogram |
| --- | --- | --- | --- |
| What it measures | Arithmetic mean of all observations | Value below which N% of observations fall | Distribution of observations across predefined buckets |
| Hides tail latency | Yes — heavily influenced by outliers in both directions | No — p99 directly measures the tail | No — bucket counts show distribution shape |
| Aggregatable across instances | Yes (if counts are known) | No (cannot average percentiles) | Yes (sum bucket counts, then compute quantile) |
| Storage cost | One time series per label combination | One time series per percentile per label | N+2 time series per label combination (N buckets + _sum + _count) |
| SLO suitability | Poor — misleading at scale | Good for single-instance services | Excellent — supports aggregation and accurate SLO tracking |
| Best for | Cost estimation, capacity planning | Quick debugging, single-instance monitoring | Production SLO dashboards, multi-replica aggregation |
| Prometheus function | rate(metric_sum[5m]) / rate(metric_count[5m]) | Direct query on gauge/summary | histogram_quantile(0.99, rate(metric_bucket[5m])) |
| Accuracy | Exact (but misleading) | Exact for summaries, approximate when histogram-derived | Approximate — depends on bucket granularity |

🎯 Key Takeaways

  • Latency = time for one request. Throughput = requests per unit time.
  • Always measure p99 or p99.9, not average — averages hide the slow outliers.
  • At high utilisation, latency rises sharply — queues form when arrival rate approaches capacity.
  • Little's Law: L = lambda x W — concurrency = throughput x latency. Latency spikes increase resource usage.
  • p50 tells you the typical user experience; p99 tells you what the slowest 1-in-100 requests experience.
  • The latency-throughput curve is a hockey stick. Target 50-70% utilization. Auto-scale at 70%.
  • Histogram bucket boundaries are immutable. Align them with SLO thresholds before deployment.
  • Size resources to 3x the Little's Law minimum for burst tolerance. If latency doubles, concurrency doubles.

⚠ Common Mistakes to Avoid

    Using average latency for SLOs. Average hides tail latency. A system with 10ms average and 2,000ms p99 has terrible user experience for 1% of users. Always use p99 or p99.9 for SLOs.
    Measuring latency with wall clock time in distributed systems. Clock skew between nodes means cross-node comparisons are meaningless. Use monotonic clocks (nanoTime) for elapsed time and distributed tracing for end-to-end latency.
    Not aligning histogram bucket boundaries with SLO thresholds. If your SLO is p99 < 200ms and your buckets are [100ms, 250ms], histogram_quantile interpolates — your SLO dashboard is silently wrong.
    Operating above 80% utilization. The latency-throughput curve steepens dramatically above 80%. A 10% traffic increase can cause 500% latency increase. Target 50-70% under normal load.
    Not sizing connection pools using Little's Law. If throughput is 500 RPS and average query time is 20ms, you need at least 10 connections (500 x 0.020). Provision 3x headroom for burst tolerance.
    Ignoring p99.9 for critical paths. Payment, authentication, and checkout flows need p99.9 monitoring. 0.1% failure rate at 1M transactions/day = 1,000 failed payments.
    Using Summary instead of Histogram in multi-replica deployments. Summaries compute percentiles client-side and cannot be aggregated across instances. Histograms are always the correct choice for Kubernetes.
    Not load testing to 2x expected peak. If you only test at 1x peak, you never see the hockey stick curve. Test to 2x to find where latency steepens and plan capacity accordingly.
    Monitoring only throughput without latency. A system can maintain high throughput while latency degrades — users experience slowness but the dashboard looks healthy. Always monitor both.
    Using default Prometheus histogram buckets. DefBuckets go up to 10s — too coarse for APIs with sub-200ms SLOs. Custom buckets aligned to SLO thresholds are mandatory.

Interview Questions on This Topic

  • QWhy is p99 latency more useful than average latency for SLOs?
  • QWhat is Little's Law and how do you use it for capacity planning?
  • QWhy does latency increase dramatically as throughput approaches system capacity?
  • QExplain the latency-throughput curve. What utilization target do you recommend for production and why?
  • QHow do histogram bucket boundaries affect percentile accuracy? What happens if your SLO threshold falls between two buckets?
  • QA system has 10ms average latency but 2,000ms p99. What are the three most likely causes?
  • QHow do you size a database connection pool using Little's Law? What headroom do you provision?
  • QWhy can't you average percentiles across instances? What is the correct approach for multi-replica SLO tracking?
  • QExplain the difference between wall clock time and monotonic time for latency measurement. When does it matter?
  • QHow would you design a load test to find the latency-throughput curve steepening point?

Frequently Asked Questions

Why does latency increase as throughput approaches capacity?

Queuing theory explains this: as utilisation approaches 100%, queue length grows without bound. At 50% utilisation, requests rarely wait. At 90% utilisation, the average queue wait is 9x the service time. At 99%, it is 99x. This is why systems are designed to operate at 50-70% utilisation — the steep latency curve near capacity is unpredictable.

What is a good p99 latency target?

It depends on the use case. For interactive user-facing APIs: under 200ms is generally good, under 100ms is excellent. For database queries: under 10ms for indexed reads. For batch processing: throughput matters more than latency. Define SLOs based on user experience requirements, not arbitrary targets.

How do I size a connection pool using Little's Law?

Minimum connections = target throughput (RPS) x average query latency (seconds). For 500 RPS with 20ms average query time: 500 x 0.020 = 10 connections minimum. Provision 3x headroom (30 connections) for burst tolerance and latency variance. Monitor active vs idle connections — if active approaches the pool size, you are near saturation.

Why can't I average p99 latencies across instances?

Percentiles are not additive. If instance A has p99 of 100ms and instance B has p99 of 500ms, the global p99 is not 300ms. It depends on the distribution of all requests across both instances. The correct approach: use histograms (which are aggregatable) and compute histogram_quantile on the summed bucket counts.
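A quick counterexample with synthetic data makes this concrete: two instances whose p99s average to a number far from the true global p99 (the distributions here are made up for illustration):

```python
import numpy as np

np.random.seed(7)
# Instance A: uniformly fast. Instance B: mostly fast with a heavy tail.
a = np.random.normal(100, 5, 10_000)                      # latencies in ms
b = np.concatenate([np.random.normal(100, 5, 9_000),
                    np.random.normal(900, 50, 1_000)])

p99_a = np.percentile(a, 99)
p99_b = np.percentile(b, 99)
global_p99 = np.percentile(np.concatenate([a, b]), 99)

print(f'p99(A) = {p99_a:.0f}ms, p99(B) = {p99_b:.0f}ms')
print(f'average of p99s = {(p99_a + p99_b) / 2:.0f}ms  <- wrong')
print(f'true global p99 = {global_p99:.0f}ms')
```

The averaged number sits hundreds of milliseconds below the true global p99. Summing per-instance histogram bucket counts and computing the quantile over the merged counts gives the correct answer.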

What is the difference between latency and response time?

Latency is the time the system takes to process a request (server-side). Response time includes latency plus network transit time (round-trip). In practice, the terms are often used interchangeably, but when debugging, distinguish between server-side latency (your code) and client-perceived response time (includes network, DNS, TLS handshake).

Naren, Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.
