Senior 3 min · March 30, 2026

gRPC vs REST — Unbounded Stream Buffers Cause OOM

gRPC streaming caused 4GB heap OOM every 6 hours from unbounded buffers in production.

N
Naren · Founder
Plain-English first. Then code. Then the interview question.
About
 ● Production Incident 🔎 Debug Guide
Quick Answer
  • gRPC uses Protocol Buffers (binary) over HTTP/2; REST uses JSON over HTTP/1.1 or HTTP/2
  • gRPC supports four streaming patterns natively; REST needs workarounds like WebSockets
  • gRPC is 2-5x faster for frequent small-payload calls; REST is baseline
  • gRPC requires code-generated clients; REST works with any HTTP client
  • Biggest mistake: using gRPC for public APIs without a REST gateway
Plain-English First

REST uses HTTP and JSON — the universal language of the web. Any browser, curl command, or HTTP client speaks it natively. gRPC uses Protocol Buffers and HTTP/2 — a binary format that's faster and more efficient but requires a code-generated client. REST is the front door everyone can use. gRPC is the dedicated freight entrance for services that need to move a lot of data fast.

I've built and maintained both REST and gRPC APIs in production. The decision is rarely about technical superiority — both work. It's about the consumers and their requirements. External API consumed by customers? REST. Internal microservice-to-microservice communication that processes 50,000 RPCs per second? gRPC. The mistake I see is teams defaulting to one or the other without asking 'who is consuming this and what do they need?'

In 2022, we migrated an internal order-processing service from REST to gRPC. The service handled inter-service communication between six microservices, averaging 30,000 requests per minute. After migration: 40% reduction in serialisation overhead, 60% reduction in request latency at P99. The consumer apps stayed on REST. The internal fabric went gRPC. Both in production, each doing what it's good at.

Protocol and Serialisation: Where the Performance Difference Comes From

REST typically sends JSON over HTTP/1.1. JSON is text — human readable, but verbose. Every field name is repeated as a string in every message. HTTP/1.1 opens a new TCP connection per request (or reuses one with keep-alive, but still has head-of-line blocking).

gRPC uses Protocol Buffers (protobuf) over HTTP/2. Protobuf is binary — fields are identified by integer tags, not string names. The same data that takes 200 bytes as JSON might take 40 bytes as protobuf. HTTP/2 multiplexes multiple requests over a single TCP connection and supports full-duplex streaming.

The performance difference is real but often overstated in benchmarks that don't reflect production conditions. The 3-5x latency improvement cited for gRPC over REST usually holds for high-frequency, small-payload inter-service calls. For large payloads or infrequent calls, the difference shrinks.

PaymentServiceGrpc.javaJAVA
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
// ── 1. Define the contract in .proto file ──────────────────────────────────
// File: src/main/proto/payment.proto
/*
syntax = "proto3";
option java_package = "io.thecodeforge.payment.grpc";
option java_outer_classname = "PaymentProto";

service PaymentService {
  // Unary RPC — one request, one response (like REST)
  rpc ProcessPayment(PaymentRequest) returns (PaymentResponse);

  // Server streaming — one request, stream of responses
  rpc StreamTransactions(TransactionFilter) returns (stream Transaction);

  // Client streaming — stream of requests, one response
  rpc BatchPayments(stream PaymentRequest) returns (BatchResult);

  // Bidirectional streaming — both sides stream
  rpc PaymentFeed(stream PaymentEvent) returns (stream PaymentEvent);
}

message PaymentRequest {
  string customer_id = 1;
  int64 amount_pence = 2;
  string currency    = 3;
}

message PaymentResponse {
  string payment_id = 1;
  string status     = 2;
  int64  timestamp  = 3;
}
*/

// ── 2. Generated server implementation ─────────────────────────────────────
package io.thecodeforge.payment.grpc;

import io.grpc.stub.StreamObserver;

public class PaymentServiceImpl
    extends PaymentServiceGrpc.PaymentServiceImplBase {

    @Override
    public void processPayment(
            PaymentRequest request,
            StreamObserver<PaymentResponse> responseObserver) {

        // Business logic
        String paymentId = processInternally(
            request.getCustomerId(),
            request.getAmountPence(),
            request.getCurrency()
        );

        // Build protobuf response (binary, not JSON)
        PaymentResponse response = PaymentResponse.newBuilder()
            .setPaymentId(paymentId)
            .setStatus("COMPLETED")
            .setTimestamp(System.currentTimeMillis())
            .build();

        responseObserver.onNext(response);
        responseObserver.onCompleted();
    }

    private String processInternally(String customerId, long amount, String currency) {
        // Payment processing logic
        return "pay-" + System.nanoTime();
    }
}

// ── 3. Client usage — generated stub handles serialisation ─────────────────
package io.thecodeforge.payment.grpc;

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class PaymentClient {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder
            .forAddress("payment-service.internal", 50051)
            .usePlaintext()
            .build();

        PaymentServiceGrpc.PaymentServiceBlockingStub stub =
            PaymentServiceGrpc.newBlockingStub(channel);

        PaymentRequest request = PaymentRequest.newBuilder()
            .setCustomerId("customer-42")
            .setAmountPence(10000)  // £100.00
            .setCurrency("GBP")
            .build();

        PaymentResponse response = stub.processPayment(request);
        System.out.println("Payment ID: " + response.getPaymentId());
        System.out.println("Status: "    + response.getStatus());
    }
}
Output
Payment ID: pay-1234567890123
Status: COMPLETED
# Wire comparison (same PaymentRequest):
# JSON: {"customer_id":"customer-42","amount_pence":10000,"currency":"GBP"} → 62 bytes
# Protobuf: binary encoded → 21 bytes
# ~3x smaller on the wire
Production Insight
json serialisation overhead becomes a real cost above ~10k req/min.
protobuf encoding is not just smaller — it avoids the string parsing that spikes CPU.
observe the p99 latency shift when switching from Jackson to protobuf in your call chain.
Key Takeaway
protobuf beats json on wire size and parsing speed.
the difference matters most at scale.
never benchmark without realistic payload sizes and concurrency patterns.
Serialisation Choice
IfPayloads under 1KB, high frequency (>100 req/s)
UsegRPC/protobuf gives 3-5x latency improvement
IfPayloads over 100KB, low frequency
UseREST/JSON is fine — protobuf encoding cost offsets size advantage

REST: Why It Still Wins for Most APIs

REST's dominance for public and external APIs isn't about technical merit — it's about ubiquity. Every HTTP client in every language speaks REST. Browsers speak REST natively. curl speaks REST. Postman speaks REST. Your customers' mobile app developers know REST. Your partners' integration teams know REST.

gRPC requires a generated client from the .proto definition. Your API consumers need to run protoc, add gRPC dependencies, and handle the generated code. For internal services under your control, this is a minor inconvenience. For external API consumers who might be using PHP, Ruby, or a legacy Java stack, it's a barrier.

REST also wins on tooling maturity: API gateways, load balancers, proxies, observability tools, and browsers all understand HTTP/JSON natively. gRPC on HTTP/2 requires specific support at every layer.

PaymentRestController.javaJAVA
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
package io.thecodeforge.payment.rest;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api/v1/payments")
public class PaymentRestController {

    private final PaymentService paymentService;

    public PaymentRestController(PaymentService paymentService) {
        this.paymentService = paymentService;
    }

    @PostMapping
    public ResponseEntity<PaymentResponse> processPayment(
            @RequestBody PaymentRequest request) {

        // REST: consumed by any HTTP client, browser, curl, Postman
        // No code generation required. Request/response in JSON.
        PaymentResult result = paymentService.process(
            request.getCustomerId(),
            request.getAmountPence(),
            request.getCurrency()
        );

        return ResponseEntity.ok(new PaymentResponse(
            result.getPaymentId(),
            result.getStatus().name(),
            result.getTimestamp()
        ));
    }

    @GetMapping("/{paymentId}")
    public ResponseEntity<PaymentResponse> getPayment(@PathVariable String paymentId) {
        return paymentService.findById(paymentId)
            .map(p -> ResponseEntity.ok(new PaymentResponse(p)))
            .orElse(ResponseEntity.notFound().build());
    }
}

// curl -X POST https://api.thecodeforge.io/v1/payments \
//   -H 'Content-Type: application/json' \
//   -d '{"customerId":"c-42","amountPence":10000,"currency":"GBP"}'
//
// Response:
// {"paymentId":"pay-1234","status":"COMPLETED","timestamp":1711756800000}
Output
HTTP/1.1 200 OK
Content-Type: application/json
{
"paymentId": "pay-1234567890",
"status": "COMPLETED",
"timestamp": 1711756800000
}
You can use both in the same system
A common production pattern: REST-facing API gateway that external consumers call, internally routing to gRPC microservices. The gateway translates HTTP/JSON to protobuf. External consumers get REST ergonomics. Internal services get gRPC performance. Tools like gRPC-Gateway or Envoy proxy make this straightforward. We run this exact architecture on a payments platform handling £2M daily volume.
Production Insight
forcing gRPC on external partners often results in abandonment or custom wrappers.
the cost of supporting protobuf clients for every consumer outweighs performance gains.
if you must expose gRPC externally, provide a REST proxy as a fallback.
Key Takeaway
REST wins on ecosystem penetration.
your api's success depends on consumers’ ability to consume it.
ubiquity trumps raw speed for public interfaces.

Streaming: The Capability REST Can't Match

gRPC's streaming support is its genuinely unique capability. REST over HTTP/1.1 is inherently request-response. Long polling, WebSockets, and Server-Sent Events are REST workarounds for streaming — they work but they're not native to the protocol.

Unary: one request, one response. Same as REST.

Server streaming: one request, stream of responses. Useful for: real-time notifications, progress updates, large dataset pagination without polling.

Client streaming: stream of requests, one response. Useful for: bulk ingestion, file upload with progress tracking.

Bidirectional streaming: both sides stream simultaneously. Useful for: real-time chat, collaborative editing, live dashboards.

If your use case involves any of these patterns, gRPC's native streaming is significantly cleaner than working around REST's limitations.

TransactionStreamingService.javaJAVA
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
package io.thecodeforge.payment.grpc;

import io.grpc.stub.StreamObserver;
import java.util.List;

public class TransactionStreamingServiceImpl
    extends TransactionServiceGrpc.TransactionServiceImplBase {

    private final TransactionRepository transactionRepo;

    public TransactionStreamingServiceImpl(TransactionRepository repo) {
        this.transactionRepo = repo;
    }

    // Server streaming: client sends one filter, server streams matching transactions
    // Useful for: exporting large datasets without pagination loops
    @Override
    public void streamTransactions(
            TransactionFilter filter,
            StreamObserver<Transaction> responseObserver) {

        // Fetch and stream — client receives each transaction as it's sent
        // No need to buffer the entire result set in memory
        transactionRepo.findByCustomerIdAndDateRange(
            filter.getCustomerId(),
            filter.getFromTimestamp(),
            filter.getToTimestamp()
        ).forEach(tx -> {
            Transaction proto = Transaction.newBuilder()
                .setTransactionId(tx.getId())
                .setAmountPence(tx.getAmountPence())
                .setCurrency(tx.getCurrency())
                .setTimestamp(tx.getTimestamp())
                .build();
            responseObserver.onNext(proto);  // Each transaction sent immediately
        });

        responseObserver.onCompleted();
    }

    // Client streaming: client sends batch of payments, server responds once
    @Override
    public StreamObserver<PaymentRequest> batchPayments(
            StreamObserver<BatchResult> responseObserver) {

        return new StreamObserver<>() {
            int processed = 0;
            int failed = 0;

            @Override
            public void onNext(PaymentRequest request) {
                // Process each payment as it arrives from client
                try {
                    processPayment(request);
                    processed++;
                } catch (Exception e) {
                    failed++;
                }
            }

            @Override
            public void onCompleted() {
                // Client done sending — send summary response
                responseObserver.onNext(BatchResult.newBuilder()
                    .setProcessed(processed)
                    .setFailed(failed)
                    .build());
                responseObserver.onCompleted();
            }

            @Override
            public void onError(Throwable t) {
                // Handle client-side error
            }
        };
    }
}
Output
// Server streaming output (client side):
// Received: Transaction{id='tx-001', amount=10000, currency='GBP'}
// Received: Transaction{id='tx-002', amount=25000, currency='GBP'}
// Received: Transaction{id='tx-003', amount=5500, currency='USD'}
// Stream completed. 3 transactions received.
// Batch payments output:
// BatchResult{processed=47, failed=3}
Production Insight
streaming without backpressure is a memory leak waiting to happen.
always test streaming under realistic concurrency — one slow client can crash the server.
monitor the number of open streams; set a maximum via an interceptor.
Key Takeaway
native streaming is gRPC's killer feature.
rest workarounds are brittle and add operational complexity.
if your data flows in one direction for >1 second, streaming is the right choice.

Error Handling and Status Codes: Differences That Matter in Production

REST uses HTTP status codes: 200 for success, 400 for bad request, 404 for not found, 500 for server error. These are well-understood but limited. Custom error messages go in the response body and are not standardised across implementations.

gRPC defines a set of status codes in protobuf (like INVALID_ARGUMENT, NOT_FOUND, INTERNAL, UNAVAILABLE) that are standard across all implementations. Every gRPC response includes a status code and an optional error message. Additionally, gRPC supports rich error models through the google.rpc.Status proto, including error details with structured information.

The critical difference is that gRPC's error model is consistent across all client languages — the same status code means the same thing in Java, Go, or Python. REST depends entirely on documentation and convention.

GrpcErrorHandling.javaJAVA
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
package io.thecodeforge.payment.grpc;

import io.grpc.Status;
import io.grpc.StatusRuntimeException;
import io.grpc.protobuf.StatusProto;
import com.google.rpc.Code;
import com.google.rpc.RetryInfo;
import com.google.protobuf.Duration;

public class GrpcErrorHandling {

    // ❌ Wrong: throwing generic Status.INTERNAL
    public void processPaymentBad(String id) {
        if (id == null) {
            throw Status.INTERNAL
                .withDescription("null customer id")
                .asRuntimeException();
        }
        // This gives no indication to client that it's a client error
    }

    // ✅ Correct: using the specific status code with rich error details
    public void processPaymentGood(String id) {
        if (id == null) {
            // Build a structured error with retry information
            com.google.rpc.Status status = com.google.rpc.Status.newBuilder()
                .setCode(Code.INVALID_ARGUMENT.getNumber())
                .setMessage("customer id must not be null")
                .addDetails(
                    RetryInfo.newBuilder()
                        .setRetryDelay(Duration.newBuilder()
                            .setSeconds(5)
                            .build())
                        .build()
                )
                .build();
            throw StatusProto.toStatusRuntimeException(status);
        }
    }

    // Client side: different codes lead to different recovery
    public static void main(String[] args) {
        try {
            // call grpc service
        } catch (StatusRuntimeException e) {
            switch (e.getStatus().getCode()) {
                case INVALID_ARGUMENT:
                    // Log and stop retrying — bad request
                    break;
                case UNAVAILABLE:
                    // Retry with backoff — transient
                    break;
                case DEADLINE_EXCEEDED:
                    // Increase timeout or check downstream
                    break;
            }
        }
    }
}
Output
// Client output for INVALID_ARGUMENT error:
// StatusRuntimeException: INVALID_ARGUMENT: customer id must not be null
// Retry info: retry after 5 seconds
Mistake-Proof Your gRPC Status Codes
  • INVALID_ARGUMENT: client sent bad data — don't retry.
  • UNAVAILABLE: server can't serve — retry with backoff.
  • DEADLINE_EXCEEDED: slow processing — reduce load or increase timeout.
  • INTERNAL: unexpected server error — log, alert, fix code.
  • Teach your team this mapping to avoid debugging incidents.
Production Insight
using INTERNAL for everything hides the root cause from clients.
without structured errors, clients cannot differentiate between retryable and fatal.
always map domain errors to the most specific gRPC status code.
Key Takeaway
use specific status codes to let clients react intelligently.
rest forces clients to guess from ad-hoc error bodies.
gRPC's standard codes make distributed error handling a solvable problem.

Tooling and Ecosystem: Where REST Still Leads

REST has decades of tooling maturity. Postman, Insomnia, curl, and browser dev tools all understand HTTP/JSON natively. API gateways like Kong, AWS API Gateway, and Nginx handle REST without configuration. Monitoring tools parse HTTP methods, status codes, and response times out of the box. Load balancers distribute REST requests without special setup.

gRPC tooling has improved but still lags. grpcurl and BloomRPC exist, but they're not as popular as Postman. API gateways need explicit HTTP/2 support and gRPC-web translation for browsers. Monitoring gRPC calls requires special instrumentation: you need the service name, method, and status code — not just HTTP metadata. Some tools now support gRPC natively (like gRPC reflection for service discovery), but the gap is real.

If your team is small or you need to onboard junior engineers quickly, the REST toolchain gives everyone a head start. gRPC requires learning protoc, generated stubs, and a new debugging workflow.

debug-tools.shBASH
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
#!/bin/bash
# Debugging tools comparison

# REST: universally available
curl https://api.thecodeforge.io/v1/payments/pay-123 -H 'Accept: application/json'

# gRPC: requires grpcurl and reflection or proto descriptors
grpcurl -plaintext payment-service.internal:50051 list
grpcurl -plaintext payment-service.internal:50051 io.thecodeforge.payment.grpc.PaymentService/ProcessPayment

# REST: easy to inspect in browser devtools
echo "Open Chrome DevTools → Network tab → any request"

# gRPC: need proxy or packet capture
echo "Use tcpdump or grpcurl with -d '{}' to send test calls"

# REST: metrics in every monitoring tool
curl http://prometheus:9090/api/v1/query?query=http_requests_total{handler="/api/v1/payments"}

# gRPC: metrics require custom interceptors (micrometer or prometheus client)
echo "Add gRPC server interceptor: new MetricsServerInterceptor(...)"
Output
// Example grpcurl output for server streaming:
.
Service: io.thecodeforge.payment.grpc.PaymentService
Method: ProcessPayment (Unary)
Method: StreamTransactions (Server streaming)
...
// Prometheus metric for gRPC:
grpc_server_handled_total{grpc_service="PaymentService", grpc_method="ProcessPayment", grpc_code="OK"} 1245
Production Insight
debugging a broken gRPC call without grpcurl is like debugging curl without httpie.
always deploy gRPC reflection in non-production environments — it saves hours of guessing.
invest in REST tooling for external debugging; invest in gRPC interceptors for internal observability.
Key Takeaway
rest eco-system is 10 years ahead on debugging and monitoring.
grpc is catching up but requires intentional tooling investment.
choose based on your current team's proficiency and support infrastructure.
● Production incidentPOST-MORTEMseverity: high

gRPC Service Outage Due to Unbounded Stream Buffers

Symptom
gRPC service running on Kubernetes with 4GB heap crashes every 6 hours with OutOfMemoryError. Pods restart, but during restart the downstream services get connection errors.
Assumption
The team assumed gRPC's built-in flow control would prevent memory issues. They thought HTTP/2's stream multiplexing would handle backpressure automatically.
Root cause
The server-side implementation queued all outgoing messages in an in-memory list before calling onNext(). Under high load, the client consumed messages slower than the server produced them. gRPC's flow control paused the stream, but the server continued buffering because the StreamObserver didn't have a feedback mechanism. Heap grew until OOM.
Fix
Replaced the in-memory buffer with a bounded blocking queue. Set a maximum buffer size per stream. Added a direct flow-control check: if queue is full, reject the request with status RESOURCE_EXHAUSTED. Also enabled per-stream memory limits in the gRPC interceptor.
Key lesson
  • gRPC flow control stops transmission, but it does not stop you from buffering on the server side. Always pair streaming with bounded buffers.
  • Monitor JVM heap per stream during development — a single misbehaving client can eat all memory.
  • Use gRPC's FlowControlHandler or custom interceptors to enforce per-stream backpressure.
Production debug guideSymptom→Action guide for common gRPC and REST production failures4 entries
Symptom · 01
gRPC call fails with UNAVAILABLE status
Fix
Check if server is reachable: grpcurl -plaintext <host>:<port> list. If unreachable, verify DNS and firewall. If reachable, check the load balancer's HTTP/2 support — some LBs downgrade to HTTP/1.1.
Symptom · 02
REST endpoint returns 502 Bad Gateway
Fix
Check upstream service health endpoint. If healthy, look at timeout configuration in the API gateway. Many 502s come from gateway timeout settings mismatched with upstream processing time.
Symptom · 03
gRPC streaming call slowly consumes more memory
Fix
Attach heap dump and look for large byte arrays held by netty or gRPC OutboundMessageQueue. Use jmap -heap:format=b and analyze with Eclipse MAT. Look for 'io.grpc.internal.MessageDeframer' instances.
Symptom · 04
REST API response times increase linearly with concurrent requests
Fix
Check connection pool exhaustion. Use netstat to see TIME_WAIT connections. Increase max connections or switch to HTTP/2 to multiplex.
★ Quick Debug Cheat Sheet: gRPC & RESTCommands and immediate actions for the most common production issues.
gRPC call returns DEADLINE_EXCEEDED
Immediate action
Check downstream service latency. Then verify deadline propagation across service chain.
Commands
grpcurl -plaintext -d '{}' <host>:<port> <service>/<method> | head -c 200
kubectl logs -l app=<service> --tail=100 | grep -i deadline
Fix now
Increase the default gRPC deadline from 5s to 15s in the client channel builder: .deadlineAfter(15, TimeUnit.SECONDS)
REST endpoint returns 429 Too Many Requests+
Immediate action
Check rate limiter configuration and current throughput.
Commands
tail -n 100 /var/log/nginx/access.log | grep ' 429' | wc -l
curl -I https://api.example.com/health | grep -i rate
Fix now
Temporarily increase rate limit from 100 to 200 requests per minute for critical clients: kubectl edit configmap rate-limiter-config
gRPC client shows 'channel in TRANSIENT_FAILURE'+
Immediate action
Verify server is listening and client can reach the target endpoint.
Commands
nc -zv <host> <port>
grpcurl -plaintext <host>:<port> list 2>&1
Fix now
Restart the gRPC server if no other errors. If behind k8s, restart pods: kubectl rollout restart deployment <name>
REST API returning partial data or missing fields+
Immediate action
Check if a new version of the API was deployed without updating the client contract.
Commands
diff <(curl -s <old-version-url>/endpoint) <(curl -s <new-version-url>/endpoint)
curl -H 'Accept: application/json;version=2' https://api.example.com/v2/endpoint
Fix now
Rollback the API to previous version and add proper version negotiation.
gRPC vs REST at a Glance
AspectRESTgRPC
ProtocolHTTP/1.1 or HTTP/2HTTP/2 only
SerialisationJSON (text)Protocol Buffers (binary)
ContractOpenAPI/Swagger (optional).proto file (required)
Code generationOptionalRequired (client + server stubs)
Browser supportNativeRequires grpc-web proxy
StreamingWorkarounds (SSE, WebSocket)Native (4 patterns)
PerformanceBaseline~2-5x faster for frequent calls
Error handlingHTTP status codes + custom bodyStandardised status codes + rich error model
ToolingUniversal (Postman, curl, etc.)Specialised (BloomRPC, grpcurl)
Best forPublic APIs, external consumersInternal microservices, high-frequency calls

Key takeaways

1
REST over HTTP/JSON for external APIs, public-facing endpoints, and any consumer you don't control
ubiquity and tooling win.
2
gRPC over protobuf/HTTP/2 for internal microservice communication, high-frequency calls, and when streaming is required.
3
gRPC is 2-5x faster for frequent small-payload calls. For large payloads or infrequent calls, the difference is smaller.
4
The .proto file is the source of truth for a gRPC API
it enforces a strict contract, enables code generation in any language, and prevents the schema drift common in JSON APIs.
5
You don't have to choose
REST gateway for external traffic routing to internal gRPC services is a standard production pattern.

Common mistakes to avoid

4 patterns
×

Using gRPC for a public API without a REST/JSON facade

Symptom
External developers cannot easily consume gRPC without generated clients. They complain about setup complexity. Some abandon the API entirely.
Fix
Provide a REST gateway that translates HTTP/JSON to protobuf. Deploy gRPC-Gateway or Envoy in front of the gRPC service. Offer both gRPC and REST endpoints documented side by side.
×

Choosing REST for high-frequency internal service calls because 'it's simpler'

Symptom
At 10,000+ RPCs per minute, CPU usage spikes due to JSON serialisation. P99 latency grows due to HTTP/1.1 connection overhead and head-of-line blocking.
Fix
Migrate to gRPC for internal services that exchange small payloads frequently. Use protobuf for the serialisation improvement and HTTP/2 for multiplexing. The migration effort pays off within weeks at that scale.
×

Not defining .proto contracts upfront for gRPC

Symptom
After a few field modifications, the .proto file no longer matches the actual data structure. Field numbers are reused or changed, causing silent data corruption. Clients break at runtime without obvious errors.
Fix
Treat the .proto file as the single source of truth. Never change field numbers or types after the contract is in use. Use reserved fields for retired fields. Enforce schema validation in CI to detect drift.
×

Ignoring gRPC's built-in deadline/timeout propagation

Symptom
A chain of microservices (A→B→C) where only A sets a deadline. B and C have no deadline context, so B waits indefinitely for C. A eventually times out, but B is stuck handling a request that A already gave up on. Cascading timeout storms follow.
Fix
Always propagate the gRPC deadline via the Context. Use Context.current().withDeadlineAfter(...) and pass it to downstream calls. Set a maximum deadline at the edge service. Never create a new Context without inheriting the deadline.
INTERVIEW PREP · PRACTICE MODE

Interview Questions on This Topic

Q01JUNIOR
What are the main differences between gRPC and REST? When would you choo...
Q02SENIOR
Explain how Protocol Buffers differ from JSON and what performance impli...
Q03SENIOR
What are the four communication patterns in gRPC and when would you use ...
Q04SENIOR
You're designing a microservices architecture with 8 internal services a...
Q05SENIOR
What is gRPC-Gateway and why would you use it?
Q01 of 05JUNIOR

What are the main differences between gRPC and REST? When would you choose each?

ANSWER
gRPC uses Protocol Buffers over HTTP/2, supports four streaming patterns, requires code-generated clients, and has standardised status codes. REST uses JSON over HTTP/1.1 (or HTTP/2), is request-response only, works with any HTTP client, and uses HTTP status codes. Choose gRPC for internal microservice communication, high-throughput scenarios (>10k req/min), and when you need native streaming. Choose REST for public APIs, any consumer you don't control, browser-facing services, or when onboarding simplicity matters more than raw performance.
FAQ · 5 QUESTIONS

Frequently Asked Questions

01
When should I use gRPC instead of REST?
02
Is gRPC faster than REST?
03
Can gRPC and REST coexist in the same system?
04
Does gRPC work in browsers?
05
What's the biggest pitfall when implementing gRPC streaming in production?
🔥

That's Components. Mark it forged?

3 min read · try the examples if you haven't

Previous
What is a Browser Cache? How It Works and When It Breaks Things
18 / 18 · Components
Next
Microservices Architecture