GraphQL vs REST: Which API Architecture Should You Actually Use?
Every modern app lives or dies by its API. Whether you're building a mobile app that has to load fast on a 3G connection in rural Brazil, or a dashboard that needs to stitch together data from five different services, the API architecture you choose on day one will haunt you — or reward you — for years. GraphQL and REST are the two dominant players in that space, and the choice between them is one of the most hotly debated decisions in backend engineering.
REST has been the industry standard for over two decades. It's predictable, HTTP-native, and every developer on earth has used it. But as UIs grew more complex and mobile clients became first-class citizens, REST started showing its cracks. Teams were hitting endpoints and getting mountains of data they didn't need, or worse, firing off six requests just to render a single screen. GraphQL was Facebook's answer to that pain, built internally in 2012 and open-sourced in 2015.
By the end of this article you'll be able to clearly explain the structural difference between REST and GraphQL, identify the specific scenarios where each one wins, spot the classic mistakes teams make when choosing between them, and walk into a system design interview with a confident, nuanced answer instead of a vague 'it depends.'
How REST Actually Works — and Where It Starts to Break
REST (Representational State Transfer) organizes your API around resources. A resource is a noun — a User, an Order, a Product. You expose that resource at a URL, and HTTP verbs tell the server what to do with it: GET to read, POST to create, PUT/PATCH to update, DELETE to remove. This mapping is clean, intuitive, and stateless by design.
The trouble starts when your UI needs don't map neatly onto individual resources. Say you're building a Twitter-style profile page. You need the user's name and avatar, their last three tweets, and their follower count. In REST, that's three separate endpoints: GET /users/:id, GET /users/:id/tweets, GET /users/:id/followers. Three round trips. On mobile, each round trip is expensive.
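To make the cost concrete, here's a sketch of the client-side orchestration those three round trips force on you. The `fetch*` helpers below are hypothetical stand-ins for real HTTP calls (returning canned data so the example is self-contained); the endpoint shapes follow the article's examples.

```javascript
// Stand-ins for fetch('/users/42'), fetch('/users/42/tweets?limit=3'),
// and fetch('/users/42/followers/count') — hypothetical, for illustration.
const fetchUser = async (id) =>
  ({ id, name: 'Maya Patel', avatar_url: 'https://...' });
const fetchTweets = async (id, limit) =>
  [{ id: 101, body: '...', likes: 14 }].slice(0, limit);
const fetchFollowerCount = async (id) => ({ follower_count: 8321 });

// The client has to know about all three endpoints and stitch the
// results together itself. Without Promise.all, the requests would run
// sequentially and triple the page's network latency.
async function loadProfilePage(userId) {
  const [user, tweets, followers] = await Promise.all([
    fetchUser(userId),
    fetchTweets(userId, 3),
    fetchFollowerCount(userId),
  ]);
  return {
    name: user.name,
    avatarUrl: user.avatar_url,
    tweets,
    followerCount: followers.follower_count,
  };
}
```

Even with parallel requests, the page can't render until the slowest of the three completes, and every new widget on the page means another helper and another round trip.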
The other side of that coin is overfetching. GET /users/:id might return 40 fields — date of birth, billing address, preferences, internal flags — when your profile page only needs four of them. You're downloading data you'll throw away, every single time.
These two problems — underfetching (too few fields, too many requests) and overfetching (too many fields, wasted bandwidth) — are not bugs in REST. They're structural consequences of organizing an API around fixed resource shapes instead of around what the client actually needs.
```shell
# Scenario: Render a user profile page using a typical REST API.
# We need: user name + avatar, their last 3 tweets, follower count.
# That means 3 separate HTTP requests.

# --- Request 1: Get the user's basic info ---
# Returns ~40 fields. We only need 'name' and 'avatar_url'.
curl https://api.example.com/users/42

# Response (truncated to show the overfetching problem):
# {
#   "id": 42,
#   "name": "Maya Patel",            <-- we need this
#   "avatar_url": "https://...",     <-- we need this
#   "email": "maya@example.com",     <-- not needed on this page
#   "date_of_birth": "1990-03-15",   <-- not needed on this page
#   "billing_address": {...},        <-- definitely not needed
#   "account_flags": [...],          <-- internal — should not even be here!
#   ... 34 more fields
# }

# --- Request 2: Get the user's recent tweets ---
# A separate round trip just to get 3 tweets.
# (URL is quoted so the shell doesn't mangle the '?' in the query string.)
curl "https://api.example.com/users/42/tweets?limit=3"

# Response:
# [ { "id": 101, "body": "...", "likes": 14 }, ... ]

# --- Request 3: Get follower count ---
# Yet another round trip for a single number.
curl https://api.example.com/users/42/followers/count

# Response:
# { "follower_count": 8321 }

# TOTAL: 3 HTTP requests, ~40 wasted fields downloaded and discarded.
# On a slow mobile connection, these requests are sequential or require
# client-side orchestration to run in parallel — adding code complexity.
```
Response body for /users/42 is ~2.1 KB. Actual data used: ~180 bytes.
Overfetch ratio: ~11x more data downloaded than needed.
How GraphQL Solves Overfetching — and the Trade-offs It Introduces
GraphQL flips the model on its head. Instead of the server deciding what data a response contains, the client declares exactly what it needs in a query. The server exposes a single endpoint (typically POST /graphql) and a typed schema that describes every piece of data available. The client sends a query document describing the shape it wants, and the server returns exactly that shape — nothing more, nothing less.
That same profile page that needed three REST requests? One GraphQL query handles it. The client asks for user name, avatar, their last three tweets, and follower count in a single request. The server resolves all of it and returns one response shaped exactly like the query.
But GraphQL isn't a free lunch. That flexibility comes with real costs. Caching gets harder — REST leverages HTTP caching at the URL level natively, while GraphQL's single endpoint makes URL-based caching useless by default (you need tools like Apollo Client's normalized cache or persisted queries). Query complexity is another concern: a malicious or poorly-written client can craft a deeply nested query that brings your database to its knees. You need query depth limiting and complexity analysis in production. The learning curve for schema design and resolvers is also steeper than standing up REST routes.
```graphql
# The same profile page data — now in a single GraphQL query.
# The client is 100% in control of the shape of the response.
query GetUserProfile($userId: ID!, $tweetLimit: Int!) {
  # Fetch the user by ID
  user(id: $userId) {
    # Ask for ONLY the fields this page actually renders
    name
    avatarUrl
    # Nested query for tweets — limit is passed as a variable
    tweets(limit: $tweetLimit) {
      id
      body
      likeCount
    }
    # A computed field on the User type — resolved server-side
    followerCount
  }
}

# Variables sent alongside the query (not hardcoded in the query string)
# {
#   "userId": "42",
#   "tweetLimit": 3
# }

# --- What the server returns ---
# {
#   "data": {
#     "user": {
#       "name": "Maya Patel",
#       "avatarUrl": "https://cdn.example.com/avatars/42.jpg",
#       "tweets": [
#         { "id": "101", "body": "Just shipped a new feature!", "likeCount": 14 },
#         { "id": "98", "body": "GraphQL resolvers are wild.", "likeCount": 31 },
#         { "id": "95", "body": "Coffee > sleep.", "likeCount": 7 }
#       ],
#       "followerCount": 8321
#     }
#   }
# }

# TOTAL: 1 HTTP request. Response contains EXACTLY the fields requested.
# Response body: ~310 bytes. Zero wasted data.
```
Response body: ~310 bytes — matches exactly what the UI needs.
No overfetching. No underfetching. No client-side orchestration needed.
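On the wire, that single request is just an HTTP POST with a JSON body holding the query document and its variables. The sketch below builds that request; the endpoint URL is illustrative, and the helper name is ours, not part of any library.

```javascript
// Build the fetch() options for a GraphQL request: one POST, with the
// query document and variables serialized together as JSON.
function buildGraphQLRequest(query, variables) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  };
}

const request = buildGraphQLRequest(
  'query GetUserProfile($userId: ID!, $tweetLimit: Int!) { user(id: $userId) { name } }',
  { userId: '42', tweetLimit: 3 }
);

// fetch('https://api.example.com/graphql', request) would send it —
// same URL for every query; only the body changes.
```

Keeping variables out of the query string is what makes the query document itself reusable (and hashable for persisted queries) across every user and page.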
Real-World Decision Framework — When to Actually Pick Each One
The honest answer isn't 'GraphQL is better' or 'REST is simpler.' The right answer depends on the shape of your problem. Here's how senior engineers actually think about this decision.
Choose REST when your data model is simple and resource-oriented, your clients are few and controlled (e.g., internal services talking to each other), you need aggressive HTTP caching (CDN caching of GET endpoints is trivially easy with REST), or your team is small and you want zero-overhead tooling. Public APIs that third-party developers will consume are also often better served by REST — it's more universally understood and doesn't require GraphQL client libraries.
Choose GraphQL when you have multiple client types with different data needs — a mobile app, a web app, and a third-party integration all hitting the same backend. It's a natural fit for a BFF (Backend for Frontend) layer. Also choose it when your product is iteration-heavy: adding a field to a GraphQL schema is non-breaking by default, whereas changing a REST response shape tends to trigger versioning discussions. Rapid product teams love this.
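That non-breaking evolution looks like this in schema definition language (field names here are illustrative): new fields are invisible to clients that don't request them, and old fields are retired gradually with the built-in `@deprecated` directive rather than a version bump.

```graphql
type User {
  id: ID!
  name: String!
  # New field: existing clients never request it, so nothing breaks.
  pronouns: String
  # Old field kept alive but flagged; introspection-aware tooling
  # warns any client that still queries it.
  fullName: String @deprecated(reason: "Use 'name' instead.")
}
```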
The hybrid approach is increasingly common: REST for simple CRUD microservices talking to each other, GraphQL at the edge as an API gateway that stitches them together for clients. This is the architecture used by major platforms like GitHub (which offers both APIs) and Shopify.
```javascript
// A minimal but production-aware GraphQL server using Apollo Server.
// Demonstrates: schema definition, resolvers, and the depth-limiting
// protection you MUST add before going to production.

const { ApolloServer, gql } = require('apollo-server');
const depthLimit = require('graphql-depth-limit'); // npm install graphql-depth-limit

// --- Step 1: Define the Schema ---
// The schema is a contract between server and client.
// Every field, type, and relationship is declared here.
const typeDefs = gql`
  type Tweet {
    id: ID!
    body: String!
    likeCount: Int!
  }

  type User {
    id: ID!
    name: String!
    avatarUrl: String!
    followerCount: Int!
    # The 'limit' argument lets the client control how many tweets to fetch
    tweets(limit: Int = 10): [Tweet!]!
  }

  type Query {
    user(id: ID!): User
  }
`;

// --- Step 2: Mock data (replace with DB calls in production) ---
const USERS = {
  '42': {
    id: '42',
    name: 'Maya Patel',
    avatarUrl: 'https://cdn.example.com/avatars/42.jpg',
    followerCount: 8321,
  },
};

const TWEETS_BY_USER = {
  '42': [
    { id: '101', body: 'Just shipped a new feature!', likeCount: 14 },
    { id: '98', body: 'GraphQL resolvers are wild.', likeCount: 31 },
    { id: '95', body: 'Coffee > sleep.', likeCount: 7 },
    { id: '91', body: 'Depth limits save lives.', likeCount: 22 },
  ],
};

// --- Step 3: Resolvers ---
// Each resolver is a function that returns data for one field in the schema.
// Apollo walks the query tree and calls the right resolver for each field.
const resolvers = {
  Query: {
    // Called when client queries: user(id: "42") { ... }
    user: (parent, args) => {
      const foundUser = USERS[args.id];
      if (!foundUser) throw new Error(`User ${args.id} not found`);
      return foundUser;
    },
  },
  User: {
    // Called for each User object to resolve its 'tweets' field.
    // 'parent' is the User object returned by the Query.user resolver.
    tweets: (parent, args) => {
      const allTweets = TWEETS_BY_USER[parent.id] || [];
      // Respect the 'limit' argument the client passed in
      return allTweets.slice(0, args.limit);
    },
  },
};

// --- Step 4: Server with depth limiting ---
// Without this, a client could send a query like:
//   { user { followers { following { tweets { author { followers { ... } } } } } } }
// ...and recursively query until your DB cries.
const server = new ApolloServer({
  typeDefs,
  resolvers,
  validationRules: [
    depthLimit(5), // Reject any query nested deeper than 5 levels
  ],
});

server.listen({ port: 4000 }).then(({ url }) => {
  console.log(`GraphQL API ready at ${url}`);
});
```
```shell
# Send this query via curl or Apollo Sandbox:
# query { user(id: "42") { name avatarUrl followerCount tweets(limit: 3) { body likeCount } } }
#
# Response:
# {
#   "data": {
#     "user": {
#       "name": "Maya Patel",
#       "avatarUrl": "https://cdn.example.com/avatars/42.jpg",
#       "followerCount": 8321,
#       "tweets": [
#         { "body": "Just shipped a new feature!", "likeCount": 14 },
#         { "body": "GraphQL resolvers are wild.", "likeCount": 31 },
#         { "body": "Coffee > sleep.", "likeCount": 7 }
#       ]
#     }
#   }
# }
```
| Feature / Aspect | REST | GraphQL |
|---|---|---|
| Data fetching model | Server defines fixed response shapes per endpoint | Client declares exactly what fields it needs |
| Number of endpoints | Many (one per resource/action) | One (typically POST /graphql) |
| Overfetching | Common — endpoint returns all fields regardless | Eliminated — client requests only what it needs |
| Underfetching / N requests | Common — often needs multiple round trips | Solved — nested queries fetch related data in one request |
| HTTP caching | Native and trivial (GET requests cache at URL level) | Requires extra tooling (persisted queries, Apollo cache) |
| Versioning | Requires versioning (/v1/, /v2/) as schema evolves | Non-breaking by default — add fields, deprecate old ones |
| Error handling | HTTP status codes (200, 404, 500) are the standard | Typically returns HTTP 200; errors live in the response body's `errors` array |
| Learning curve | Low — every dev knows GET/POST/PUT/DELETE | Medium — schema design, resolvers, and DataLoader take time |
| Tooling ecosystem | Mature — Swagger/OpenAPI, Postman, curl | Rich — Apollo, GraphiQL, code generation, introspection |
| Best for | Simple CRUD, public APIs, microservice-to-microservice | Complex clients, mobile apps, rapid iteration, BFF pattern |
| Real-world adopters | Twitter v1, Stripe, most microservices | GitHub v4, Shopify Storefront, Facebook, Airbnb |
🎯 Key Takeaways
- REST's core weakness isn't the protocol — it's that fixed resource shapes create a mismatch between what the server returns and what different clients actually need. GraphQL solves this by inverting control to the client.
- GraphQL's single endpoint breaks HTTP-layer caching. In REST, a CDN can cache GET /products/42 trivially. With GraphQL you need query-level caching strategies like Apollo's persisted queries or a normalized client-side cache.
- The N+1 problem is GraphQL's most dangerous production pitfall. Every GraphQL server resolving lists of related data must implement DataLoader batching, or a query for 100 users will silently fire 101 database queries.
- The smartest production architecture often uses both: REST for internal microservice communication (simple, fast, easy to cache) and a GraphQL gateway at the edge for client-facing APIs where different clients need different data shapes.
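The DataLoader batching called out above is easier to trust once you've seen the mechanism. The sketch below hand-rolls the core idea in a few lines (the real `dataloader` npm package adds per-request caching and error handling on top): every `.load()` call made in the same tick is queued, and one batch function runs for all of them.

```javascript
// Minimal sketch of DataLoader-style batching. All load() calls made
// in the same microtask tick are collected into one batch, so N field
// resolvers trigger 1 backend query instead of N.
function createBatchLoader(batchFn) {
  let queue = []; // pending { key, resolve } entries for this tick

  return {
    load(key) {
      return new Promise((resolve) => {
        queue.push({ key, resolve });
        if (queue.length === 1) {
          // First call this tick: schedule one flush for the whole batch.
          queueMicrotask(async () => {
            const batch = queue;
            queue = [];
            // In production this is one `SELECT ... WHERE id IN (...)`.
            const results = await batchFn(batch.map((entry) => entry.key));
            batch.forEach((entry, i) => entry.resolve(results[i]));
          });
        }
      });
    },
  };
}

// Usage: three loads from three resolvers, but batchFn runs exactly once.
let batchCalls = 0;
const userLoader = createBatchLoader(async (ids) => {
  batchCalls += 1;
  return ids.map((id) => ({ id, name: `user-${id}` }));
});
```

This is why the fix is architectural rather than a per-query optimization: the loader sits between resolvers and the database, and any resolver that fetches a related entity goes through it.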
⚠ Common Mistakes to Avoid
- ✕ Mistake 1: Using GraphQL for microservice-to-microservice communication — Symptom: Teams add a full GraphQL layer between internal services, adding schema overhead and resolver complexity where a simple REST or gRPC call would be 10x faster and easier. Fix: GraphQL shines at the client-facing edge (your API gateway or BFF). Between internal services that you control, REST or gRPC is almost always the right call. GraphQL's flexibility is only valuable when the consumer's needs are unpredictable — internal services usually have exactly one consumer with known needs.
- ✕ Mistake 2: Ignoring the N+1 query problem in GraphQL resolvers — Symptom: A query for 50 users and their tweets fires 51 database queries, and your API that seemed fast in development becomes dangerously slow under real traffic. You might not even notice until you load test it. Fix: Install and configure DataLoader from day one. Wrap every resolver that fetches a related entity in a DataLoader batch function. It batches all the individual ID lookups from a single request into one SQL IN (...) query. This is non-negotiable in production GraphQL.
- ✕ Mistake 3: Assuming REST endpoints always need versioning — Symptom: Teams pre-emptively build /v1/ into every REST API and then maintain parallel endpoint versions forever, even when changes are purely additive. Fix: Additive changes (new fields in a response, new optional query params) don't require versioning in REST either. Version only when you need to make a breaking change — removing a field, changing a field's type, or restructuring the response shape. GraphQL makes this easier with the @deprecated directive, but disciplined REST teams can avoid version sprawl too.
Interview Questions on This Topic
- Q: You're designing the API layer for an e-commerce platform with an iOS app, an Android app, a web storefront, and a third-party affiliate integration. Each client needs different subsets of product data. Walk me through how you'd decide between REST and GraphQL, and what architecture you'd land on.
- Q: GraphQL typically returns HTTP 200 even when an error occurs. How does error handling work differently in GraphQL vs REST, and what are the implications for monitoring and alerting in production?
- Q: A colleague says 'GraphQL is always better than REST because it eliminates overfetching.' What's wrong with that statement, and what specific scenarios would you push back with?
Frequently Asked Questions
Is GraphQL faster than REST?
Not inherently — GraphQL can actually be slower if resolvers aren't optimized, especially due to the N+1 problem. GraphQL can feel faster on the client side because it reduces the number of round trips and eliminates downloading unused data, but the server-side work of resolving a complex query can be more expensive than a simple REST endpoint hitting one database table. Speed depends almost entirely on how well your resolvers are implemented.
Can I use GraphQL and REST together in the same application?
Absolutely — and many production systems do exactly this. A common pattern is to run REST APIs between your internal microservices (where HTTP caching, simplicity, and speed matter most) while exposing a single GraphQL gateway to external clients like mobile apps and web frontends. This gives you the best of both: REST's cacheability and simplicity internally, GraphQL's flexibility externally.
Why does GraphQL always return HTTP 200 even for errors?
Because a single GraphQL request can partially succeed — it might successfully resolve three of the four fields you asked for and fail on the fourth. A single HTTP status code can't represent that nuance. Instead, GraphQL returns HTTP 200 with a response body that contains both a 'data' key (with whatever succeeded) and an 'errors' array (with structured error details for what failed). Transport-level problems — a malformed request, an auth failure at the gateway — can still surface as 400s or 500s, but execution errors won't. This means your monitoring must parse response bodies, not just watch for non-200 HTTP codes — a critical operational difference from REST.
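In practice that means your monitoring layer needs a small body classifier rather than a status-code check. The sketch below shows one way to do it; the function name and the three categories are our own, not part of any monitoring tool.

```javascript
// Classify a GraphQL response body for monitoring purposes.
// A 200 status tells you nothing — success, partial success, and total
// failure all arrive with the same status code.
function classifyGraphQLResponse(body) {
  const hasErrors = Array.isArray(body.errors) && body.errors.length > 0;
  // 'data' may exist but contain only nulls when every field failed.
  const hasData =
    body.data != null && Object.values(body.data).some((v) => v !== null);

  if (!hasErrors) return 'success';
  return hasData ? 'partial_success' : 'total_failure';
}
```

Feeding this classification into your metrics (instead of HTTP status counts) is what makes alerting on GraphQL error rates actually work.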
Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.