Fetch API and AJAX in JavaScript: How to Load Data Without Reloading the Page
- Fetch API is the modern Promise-based replacement for XMLHttpRequest — cleaner syntax, native async/await support
- fetch() resolves its Promise on ANY server response including 404/500 — only rejects on network failure
- You must check response.ok (true for 200-299) manually — this is the #1 production gotcha
- response.json() returns a Promise — always await it, never assign without await
- POST requests with JSON bodies require the Content-Type: application/json header; without it, most servers return 400 or 415
- Biggest mistake: assuming fetch() throws on HTTP errors — it does not, you must throw yourself
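That last point is mechanical enough to extract into a helper. A minimal sketch, with a hypothetical helper name (assertOk) and plain objects standing in for what fetch() resolves with:

```javascript
// fetch() resolves for 404/500 responses, so the ok check must be explicit.
// Extracting it into a tiny helper makes it hard to forget at call sites.
function assertOk(response) {
  if (!response.ok) {
    // fetch() will never throw this for you, you must do it yourself
    throw new Error(`HTTP ${response.status}: ${response.statusText}`);
  }
  return response;
}

// Stand-ins for what fetch() resolves with. Note that BOTH resolve:
const notFound = { ok: false, status: 404, statusText: 'Not Found' };
const success = { ok: true, status: 200, statusText: 'OK' };

console.log(assertOk(success).status); // 200

try {
  assertOk(notFound); // without the helper, this would be parsed as "valid" data
} catch (error) {
  console.log(error.message); // "HTTP 404: Not Found"
}
```

A real call site then reads naturally: `const data = await assertOk(await fetch(url)).json();`.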
Production Debug Guide

Common symptoms of fetch misuse in production — with diagnosis steps, not just observations.

Symptom: Fetch call fails silently — no error in console, no error state in UI, data looks wrong or stale.
Diagnose it:

```javascript
console.log(response.status, response.statusText, response.ok);
const body = await response.text();
console.log(body);
```

Symptom: POST request returns 400 Bad Request or 415 Unsupported Media Type.
Diagnose it:

```javascript
console.log(JSON.stringify(postData)); // Confirm it produces a JSON string, not undefined
console.log(requestOptions.headers);   // Confirm 'Content-Type': 'application/json' is present
```

Symptom: response.json() returns a Promise object instead of parsed data — TypeError on property access.
Diagnose it:

```javascript
const raw = await response.text();
console.log(typeof raw, raw.slice(0, 200));
const data = JSON.parse(raw);
console.log(typeof data, Object.keys(data));
```

Fix: the missing await on response.json(). The body parsing step is also asynchronous. Change const data = response.json() to const data = await response.json(). If you are using ESLint, add the no-floating-promises rule to catch this at lint time before it reaches production.

Symptom: Race condition — user triggers multiple requests and responses arrive out of order, so the UI shows wrong data.
Diagnose it:

```javascript
const controller = new AbortController();
fetch(url, { signal: controller.signal });
controller.abort(); // Call this before starting the replacement request
```

Related tip: use Promise.all() for independent requests. Only use sequential await when one request genuinely depends on the result of the previous.

Production Incident

The fetch() Promise resolved — not rejected — because the server did respond. It just responded with a 503.

The code had no response.ok check. It called response.json() on the 503 response body, which was a JSON error object the upstream API returns during maintenance: { "status": "unavailable", "message": "Service temporarily unavailable" }. There was no 'data' field in this error object.

The rendering code checked for the presence of a 'data' field and treated its absence as 'no new data available since last fetch' — a valid state that caused it to silently retain the previous render. The dashboard showed 6-hour-old metrics with no visual indicator of staleness until someone manually opened the browser DevTools and noticed every API call was returning 503.

The fix: check response.ok and throw on failure (throw new Error(`HTTP ${response.status}: ${response.statusText}`)). Built a centralized fetch wrapper that enforces the ok check and logs all non-2xx responses to the error tracking service with the full response body. Added a 'last successfully updated' timestamp to the dashboard UI rendered below every data visualization — users can now detect stale data visually without opening DevTools. Added a lightweight health check that polls the upstream API every 30 seconds and displays a banner when the upstream is unhealthy.

Every time you scroll Twitter and new posts appear without a page refresh, search Google and see suggestions appear as you type, or add something to your Amazon cart without being yanked to a new page — that is AJAX at work. It is one of the most visible features of the modern web, and understanding it deeply separates developers who build reactive, professional applications from those who are still forcing full page reloads for every user interaction.
Before AJAX, every action that needed server data meant a full round-trip: the browser requested a new HTML page, the server built it from scratch, and the user stared at a blank screen for a second or two. AJAX (Asynchronous JavaScript and XML — though JSON replaced XML years ago in practice) solved this by letting JavaScript make HTTP requests in the background while the rest of the page kept running. The Fetch API is the modern, Promise-based way to do exactly that — cleaner, more readable, and far more composable than its predecessor, XMLHttpRequest.
In this guide we cover why the Fetch API exists and what it replaced, how to make GET and POST requests with production-grade error handling, the streaming architecture that explains why response parsing is asynchronous, and how to build a working mini-app that fetches live data without the subtle async mistakes that silently break production code at scale.
Why XMLHttpRequest Existed — and Why Fetch Replaced It
To appreciate why Fetch exists, you need to feel the pain it solved. XMLHttpRequest was introduced by Microsoft in Internet Explorer 5 around 1999 and later standardized by the W3C. It gave JavaScript the ability to talk to a server without forcing a full page reload — genuinely revolutionary at the time. Gmail, Google Maps, and the first wave of dynamic web applications were built on it.
But XHR's API design reflects the era it came from. You create an object, attach separate event handlers for different request states, open a connection, and then send — all before you receive a single byte back. Error handling requires checking both readyState and status in the same callback. Nesting multiple XHR calls produces deeply indented callback chains that are nearly impossible to read, test, or debug six months later when something breaks at 2am.
The Fetch API, shipped in Chrome 42 in 2015 and now supported in every modern browser and Node.js 18+, uses Promises natively. This unlocks the entire Promise composition ecosystem: async/await for readable linear code, Promise.all for parallel requests, Promise.race for timeout patterns, and AbortController for cancellation. None of these work cleanly with XHR without substantial custom wrapper code.
In production, the migration from XHR to Fetch is not just about cleaner syntax. It is about composability and testability. XHR callbacks cannot be chained without nesting, cannot be raced with Promise.race without wrapping, and cannot be aborted cleanly without a non-trivial implementation. Fetch supports all of these natively, which means less custom infrastructure code to maintain and fewer places for subtle bugs to hide.
```javascript
/**
 * io.thecodeforge: Comparing Legacy XHR vs Modern Fetch
 *
 * Both examples fetch the same user from the same endpoint.
 * Compare the error handling surface area — XHR requires
 * manual readyState + status checks in every callback.
 * Fetch centralizes the check with response.ok.
 */

// ─── THE OLD WAY: XMLHttpRequest ─────────────────────────────────────────────
// Verbose, event-driven, impossible to compose with Promise.all or async/await
const xhr = new XMLHttpRequest();
xhr.open('GET', 'https://jsonplaceholder.typicode.com/users/1');
xhr.onreadystatechange = function () {
  // readyState 4 = DONE, status 200 = OK
  // You must check both — readyState 4 includes error states
  if (xhr.readyState === 4) {
    if (xhr.status === 200) {
      const user = JSON.parse(xhr.responseText);
      console.log('XHR result:', user.name);
    } else {
      // HTTP errors require manual status checking here
      console.error('XHR HTTP error:', xhr.status, xhr.statusText);
    }
  }
};
xhr.onerror = function () {
  // Network error — separate handler required
  console.error('XHR network error — no connection to server');
};
xhr.send();

// ─── THE MODERN WAY: Fetch API ───────────────────────────────────────────────
// Promise-based, composable with async/await, AbortController, Promise.all
async function fetchUser(userId) {
  try {
    const response = await fetch(
      `https://jsonplaceholder.typicode.com/users/${userId}`
    );
    // response.ok is true for 200-299 only
    // fetch() does NOT throw on 404 or 500 — you must check
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }
    // Body parsing is also async — always await
    const user = await response.json();
    console.log('Fetch result:', user.name);
    return user;
  } catch (error) {
    // Catches both network errors AND the manual throw above
    console.error('Fetch error:', error.message);
    return null;
  }
}

fetchUser(1);
```
How Fetch Really Works — Promises, Response Objects, and the Two-Step Parse
Here is the thing that trips up almost every developer the first time, and trips up experienced developers when they are moving fast: fetch() resolves its Promise as soon as the server responds with headers — even if the status code is 404 or 500. The Promise only rejects on a true network failure: DNS lookup failure, no internet connection, CORS rejection before the server is reached, or the connection being dropped mid-flight.
When fetch() resolves, you get a Response object — an envelope. The envelope has arrived, but you have not opened it yet. You call .json() on that envelope to read and parse the contents, and that parsing step also returns a Promise because reading the body stream takes time. Using async/await flattens this two-step process into readable linear code. Skipping await on either step gives you a Promise where you expected a value.
The two-step architecture exists because the Response body is a ReadableStream. The browser does not buffer the entire response into memory when fetch() resolves — it streams chunks as they arrive from the network. Calling response.json() reads the stream to completion and parses the accumulated bytes as JSON. This streaming design is what enables large file downloads without consuming proportional RAM, and it is why response.text(), response.blob(), and response.arrayBuffer() are all also asynchronous — they all read the same underlying stream.
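The streaming claim is easy to verify directly. A minimal sketch of consuming the body stream chunk by chunk, assuming a WHATWG Response, which all modern browsers and Node.js 18+ provide globally:

```javascript
// response.body is a ReadableStream: read it one chunk at a time instead of
// letting response.text() or response.json() buffer it for you.
async function readBodyInChunks(response) {
  const reader = response.body.getReader(); // locks the stream to this reader
  const decoder = new TextDecoder();
  let text = '';
  for (;;) {
    const { done, value } = await reader.read(); // one Uint8Array chunk
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text + decoder.decode(); // flush any buffered trailing bytes
}

// Response can be constructed directly, so this runs without a network:
readBodyInChunks(new Response('{"streamed": true}'))
  .then(body => console.log(body)); // {"streamed": true}
```

This loop is essentially what response.text() does internally: read the stream to completion, decode the bytes, and return the accumulated string.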
One important consequence: you can only read the body stream once. If you call response.json() and then try to call response.text() on the same response, the second call fails because the stream has already been consumed. If you need the raw body and the parsed version, call response.text() first, then JSON.parse() the result manually. This is useful during debugging when you want to log the raw response before parsing it.
```javascript
/**
 * io.thecodeforge: Production-grade Fetch wrapper pattern
 *
 * This wrapper centralizes four concerns that should never
 * be scattered across individual call sites:
 *   1. response.ok check (HTTP error detection)
 *   2. Body parsing (always async)
 *   3. Error logging (route to monitoring service)
 *   4. Request timeout (AbortController + setTimeout)
 */

const DEFAULT_TIMEOUT_MS = 10_000; // 10 seconds

/**
 * Centralized fetch wrapper.
 * Use this instead of raw fetch() throughout the application.
 * Throws on both network errors and HTTP errors.
 */
async function apiFetch(url, options = {}) {
  const controller = new AbortController();
  const timeoutId = setTimeout(
    () => controller.abort(),
    options.timeout ?? DEFAULT_TIMEOUT_MS
  );

  try {
    const response = await fetch(url, {
      ...options,
      signal: controller.signal,
    });

    // CRITICAL: fetch() does NOT throw on 404/500/503
    // response.ok is true for status codes 200-299 only
    if (!response.ok) {
      // Read the error body for additional context
      // response.text() avoids a double-parse failure if body is not valid JSON
      const errorBody = await response.text();
      const error = new Error(
        `HTTP ${response.status}: ${response.statusText}`
      );
      error.status = response.status;
      error.body = errorBody;
      throw error;
    }

    // Body parsing is the second async step — the stream must be read
    return await response.json();
  } catch (error) {
    if (error.name === 'AbortError') {
      // Could be our timeout or an external abort signal
      throw new Error(
        `Request timed out after ${options.timeout ?? DEFAULT_TIMEOUT_MS}ms`
      );
    }
    // Re-throw with context — let the caller decide how to handle
    throw error;
  } finally {
    clearTimeout(timeoutId);
  }
}

/**
 * Application-level function using the wrapper.
 * Business logic is clean — no fetch internals visible here.
 */
async function fetchUserProfile(userId) {
  try {
    const user = await apiFetch(
      `https://jsonplaceholder.typicode.com/users/${userId}`
    );
    return {
      name: user.name,
      email: user.email,
      city: user.address.city,
    };
  } catch (error) {
    // error.status is available for HTTP errors from apiFetch
    console.error(
      `[fetchUserProfile] Failed for userId=${userId}:`,
      error.message
    );
    // In production: report to Sentry or Datadog here
    return null;
  }
}

fetchUserProfile(1).then(data => console.log(data));
```
Best practice: wrap fetch() in a single apiFetch(url, options) utility for your entire application. Put the response.ok check, body parsing, error logging, and timeout logic inside it. Every call site gets correct behavior automatically — no one can accidentally omit the ok check because the wrapper enforces it structurally. This is the single highest-leverage change you can make to a codebase that uses raw fetch() calls scattered across components.

If you need both the raw body and the parsed version, call response.text() and then JSON.parse() manually.

The two-step parse (await fetch(), then await response.json()) exists because the body is a ReadableStream that must be consumed asynchronously. Always check response.ok before parsing. Always await response.json(). And always wrap fetch in a utility that enforces both — structural enforcement beats disciplinary enforcement every time.

Choosing how to read the body:
- response.json() — the standard happy path for JSON APIs
- response.text() or response.blob() instead of response.json() for non-JSON bodies — calling response.json() on a non-JSON body throws a SyntaxError
- Inspect headers before calling response.json() — headers are available as soon as fetch() resolves, before the body stream is read

Making POST Requests — Sending Data to a Server
GET requests retrieve data. POST requests send it. With Fetch, a POST request uses an options object that specifies the method, headers, and body. The body must be a string — Fetch does not serialize JavaScript objects automatically — so you serialize with JSON.stringify() before passing it.
You must set the Content-Type header to application/json when sending JSON. Without it, most modern backends — Express, FastAPI, Spring Boot, Rails — cannot determine the body format and return a 400 Bad Request or 415 Unsupported Media Type. The request fails, and the error message from the server often does not clearly say 'missing Content-Type', which sends developers chasing the wrong cause.
This is the most common POST bug in production, and it happens with a specific pattern: developers familiar with Axios switch to Fetch and forget that Axios automatically sets Content-Type to application/json when the body is an object. Fetch does not. This single behavioral difference between the two libraries accounts for a disproportionate number of API integration failures. The fix is always the same — add the header explicitly — but the debugging path is not always obvious.
A second concern unique to POST requests is double-submission. When a user submits a form and the network is slow, the button remains active and a second click fires a second identical request. Both requests reach the server, both create records, and the user has two orders, two accounts, or two payments. The prevention is straightforward: disable the submit button the moment the fetch starts, and re-enable it in the finally block — not the try block — so it re-enables regardless of whether the request succeeded or failed.
```javascript
/**
 * io.thecodeforge: POST request with full production safety
 *
 * Three things this example enforces that most tutorials omit:
 *   1. Content-Type header — required, not auto-set by Fetch
 *   2. response.ok check — POST can return 400/422/500 silently
 *   3. Double-submission prevention — button disabled during in-flight request
 */
async function createBlogPost(postData, submitButton = null) {
  // Prevent double-submission: disable button before the request starts
  if (submitButton) submitButton.disabled = true;

  try {
    const response = await fetch('https://jsonplaceholder.typicode.com/posts', {
      method: 'POST',
      headers: {
        // Required — Fetch does NOT auto-set this like Axios does
        // Without it: server returns 400 Bad Request or 415 Unsupported Media Type
        'Content-Type': 'application/json',
        // Authorization header — replace with your actual token mechanism
        // In production: read from a token store, not a hardcoded string
        'Authorization': `Bearer ${getAuthToken()}`,
      },
      // body must be a string — passing a raw object sends '[object Object]'
      body: JSON.stringify(postData),
    });

    // POST requests can return 400/422/500 — check response.ok here too
    if (!response.ok) {
      // Attempt to read the server error body for better error messages
      let serverMessage = response.statusText;
      try {
        const errorData = await response.json();
        serverMessage = errorData.message ?? errorData.error ?? serverMessage;
      } catch {
        // Server returned a non-JSON error body — fall back to statusText
      }
      throw new Error(
        `Failed to create post: HTTP ${response.status} — ${serverMessage}`
      );
    }

    const result = await response.json();
    console.log('Resource created with ID:', result.id);
    return result;
  } catch (error) {
    console.error('[createBlogPost] POST error:', error.message);
    // In production: report to Sentry with postData context (redact PII first)
    return null;
  } finally {
    // Re-enable in finally — not try — so it re-enables on both success and failure
    if (submitButton) submitButton.disabled = false;
  }
}

function getAuthToken() {
  // In production: read from secure storage, not a hardcoded literal
  return sessionStorage.getItem('auth_token') ?? '';
}

// Usage — pass the button reference for double-submission prevention
const submitBtn = document.querySelector('#submit-post');
submitBtn.addEventListener('click', () => {
  createBlogPost(
    { title: 'Async JS in 2026', body: 'Fetch is the baseline.', userId: 1 },
    submitBtn
  );
});
```
Fetching Data and Updating the DOM — A Real Mini-App
Theory only gets you so far. The real test is wiring everything together into a working pattern: a page that fetches a list of posts and renders them into the DOM, handles loading states, and recovers gracefully from errors without leaving the UI in a broken state.
The three-layer pattern — Data Layer, Render Layer, and Controller — is not organizational aesthetic. It is a testing strategy. You can unit-test the data layer with a mocked fetch that returns controlled payloads. You can unit-test the render layer with sample data without needing a network. You can integration-test the controller with both. If these concerns are interleaved — if a single function both fetches data and manipulates the DOM — testing requires a live DOM and a live server. That means slow, flaky, environment-dependent tests that developers stop trusting and eventually stop running.
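To make the testing claim concrete, here is a sketch of a data-layer unit test with a stubbed global fetch. The function mirrors the mini-app's fetchPosts() data layer; the stub, URL, and payload are hypothetical test fixtures:

```javascript
// Data-layer function under test: it fetches only, never touches the DOM
async function fetchPostsUnderTest(limit = 3) {
  const response = await fetch(`https://example.invalid/posts?_limit=${limit}`);
  if (!response.ok) throw new Error(`Failed to load posts: HTTP ${response.status}`);
  return response.json();
}

// Stub global fetch with a controlled payload: no server, no DOM, no network
globalThis.fetch = async () => ({
  ok: true,
  status: 200,
  json: async () => [{ id: 1, title: 'stubbed post' }],
});

fetchPostsUnderTest(1).then(posts => {
  console.log(posts.length, posts[0].title); // → 1 stubbed post
});
```

In a real test runner you would restore the original fetch afterward, or use the runner's own stubbing helpers instead of assigning to globalThis directly.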
Two common mistakes appear in almost every tutorial implementation. First: the loading spinner hide is placed inside the try block, which means a failed request leaves the spinner spinning indefinitely. Users see a loading indicator with no data and no error message — the worst possible state. The fix is the finally block. Second: multiple independent fetches are awaited sequentially when they could run in parallel. If your page needs user data, product data, and recommendation data on load, awaiting them one at a time means total load time is the sum of all three. Promise.all fires all three simultaneously and waits for the slowest one — total time is the maximum, not the sum.
```javascript
/**
 * io.thecodeforge: Three-layer fetch + DOM pattern
 *
 * Data Layer:   fetchPosts()  — knows how to talk to the API
 * Render Layer: renderPosts() — knows how to update the DOM
 * Controller:   initApp()     — orchestrates them, manages UI state
 *
 * Each layer is independently testable.
 * fetchPosts can be tested with a mocked fetch.
 * renderPosts can be tested with sample data and a real DOM node.
 * initApp integration test uses both.
 */

// ─── DATA LAYER ──────────────────────────────────────────────────────────────
async function fetchPosts(limit = 3) {
  const response = await fetch(
    `https://jsonplaceholder.typicode.com/posts?_limit=${limit}`
  );
  if (!response.ok) {
    throw new Error(`Failed to load posts: HTTP ${response.status}`);
  }
  return response.json();
}

// ─── RENDER LAYER ────────────────────────────────────────────────────────────
function renderPosts(posts, container) {
  if (!posts.length) {
    container.innerHTML = '<p class="empty-state">No posts available.</p>';
    return;
  }
  // Template literals — sanitize user-generated content in production
  // Use DOMPurify or textContent assignment for untrusted data
  container.innerHTML = posts
    .map(
      post => `
        <article class="card" data-post-id="${post.id}">
          <h4 class="card__title">${post.title}</h4>
          <p class="card__body">${post.body}</p>
          <footer class="card__meta">Post #${post.id}</footer>
        </article>
      `
    )
    .join('');
}

function renderError(message, container) {
  container.innerHTML = `
    <div class="error-state" role="alert">
      <p>⚠ ${message}</p>
      <button onclick="initApp()">Retry</button>
    </div>
  `;
}

function setLoadingState(isLoading, container, spinner) {
  // Spinner and container managed separately for accessibility
  spinner.hidden = !isLoading;
  spinner.setAttribute('aria-busy', String(isLoading));
  if (isLoading) container.innerHTML = '';
}

// ─── CONTROLLER ──────────────────────────────────────────────────────────────
async function initApp() {
  // In a real browser environment:
  // const container = document.getElementById('post-list');
  // const spinner = document.getElementById('spinner');
  // const timestamp = document.getElementById('last-updated');

  // Simulated DOM nodes for demonstration
  const container = { innerHTML: '', setAttribute: () => {} };
  const spinner = { hidden: false, setAttribute: () => {} };

  setLoadingState(true, container, spinner);
  try {
    const posts = await fetchPosts(3);
    renderPosts(posts, container);
    // Show last-updated timestamp — lets users detect stale data
    // timestamp.textContent = `Last updated: ${new Date().toLocaleTimeString()}`;
    console.log('[App] Posts loaded at', new Date().toLocaleTimeString());
  } catch (error) {
    renderError(error.message, container);
    console.error('[App] Failed to load posts:', error.message);
    // In production: report to Sentry with request context
  } finally {
    // CRITICAL: always in finally — not try
    // A failed request must hide the spinner too
    setLoadingState(false, container, spinner);
  }
}

initApp();

// ─── PARALLEL FETCH PATTERN ──────────────────────────────────────────────────
// When a page needs multiple independent data sources on load,
// use Promise.all — not sequential awaits.
//
// Sequential (slow — total time = sum of all requests):
//   const users = await fetchUsers();
//   const posts = await fetchPosts();
//   const comments = await fetchComments();
//
// Parallel (fast — total time = slowest single request):
async function initDashboard() {
  try {
    const [users, posts, comments] = await Promise.all([
      fetch('https://jsonplaceholder.typicode.com/users?_limit=5').then(r => {
        if (!r.ok) throw new Error(`Users: HTTP ${r.status}`);
        return r.json();
      }),
      fetch('https://jsonplaceholder.typicode.com/posts?_limit=5').then(r => {
        if (!r.ok) throw new Error(`Posts: HTTP ${r.status}`);
        return r.json();
      }),
      fetch('https://jsonplaceholder.typicode.com/comments?_limit=5').then(r => {
        if (!r.ok) throw new Error(`Comments: HTTP ${r.status}`);
        return r.json();
      }),
    ]);
    console.log('Dashboard data loaded:', {
      users: users.length,
      posts: posts.length,
      comments: comments.length,
    });
  } catch (error) {
    // Promise.all rejects on the first failure
    // Use Promise.allSettled() if you want partial success
    console.error('[Dashboard] Parallel fetch failed:', error.message);
  }
}

initDashboard();
```
Use Promise.all() for parallel fetches. Use Promise.allSettled() if partial success is acceptable and you want to render whatever succeeded.

| Feature / Aspect | XMLHttpRequest (XHR) | Fetch API |
|---|---|---|
| Syntax style | Callback-based, event-driven, verbose — attach handlers before sending | Promise-based, async/await compatible, linear code structure |
| Error handling | Manual readyState + status check in onreadystatechange, separate onerror handler for network failures | Checks response.ok for HTTP errors; Promise only rejects on network failure — must check ok manually |
| Response parsing | Manual JSON.parse(xhr.responseText) — synchronous, available when readyState === 4 | Built-in response.json() — asynchronous, returns a Promise, must be awaited |
| Request cancellation | xhr.abort() — simple but not composable with async/await patterns | AbortController signal — reusable, composable, works cleanly with async/await |
| Composability | Cannot participate in Promise.all, Promise.race, or async/await without custom wrappers | Native Promise — composes directly with all Promise combinators and async/await |
| Upload progress | Supported via xhr.upload.onprogress — shows bytes uploaded in real time | Not directly supported — a long-standing Fetch limitation (streamed request bodies exist but have only limited browser support) |
| Request timeout | Built-in xhr.timeout property — simple integer in milliseconds | No built-in timeout — implement with AbortController + setTimeout, clear in finally |
| Cookie handling | xhr.withCredentials = true to include cookies on cross-origin requests | credentials: 'include' in the options object for cross-origin cookie inclusion |
🎯 Key Takeaways
- fetch() resolves on any server response — always check response.ok or response.status before parsing the body. A 404 or 500 response that is parsed as valid data is a silent failure, and silent failures are worse than visible errors.
- The two-step parse exists because the response body is a ReadableStream — always await response.json(). Missing await is silent at runtime and produces a Promise where downstream code expects an object.
- Separate data-fetching and DOM-rendering into independent functions — the separation is a testing strategy, not just code organization. Functions that both fetch and render require a live DOM and a live server to test.
- The Content-Type header is not optional for POST requests — it defines the contract between client and server about how the body is encoded. Fetch does not auto-set it. Always set it explicitly.
- The finally block is the only correct place for UI state resets — spinner hiding, button re-enabling, loading flag clearing. A try block that handles success but not failure leaves users in broken states.
⚠ Common Mistakes to Avoid

- Assuming fetch() throws on HTTP errors. It does not: check response.ok and throw yourself.
- Assigning const data = response.json() without await, which hands downstream code a Promise instead of an object.
- Forgetting the Content-Type: application/json header on POST requests; most servers respond with 400 or 415.
- Hiding the loading spinner in the try block instead of finally, so a failed request leaves the spinner spinning indefinitely.
- Awaiting independent requests sequentially instead of using Promise.all(), making total load time the sum of all requests rather than the slowest one.
- Leaving the submit button enabled while a POST is in flight, which invites double-submission on slow networks.
Interview Questions on This Topic
- Q: Why does the Fetch API's promise resolve even when the server returns a 404 or 500 status code? (Junior)
- Q: Explain the difference between a 'Network Error' and an 'HTTP Error' in the context of the Fetch API. (Mid-level)
- Q: Why does response.json() return a Promise instead of the parsed object directly? (Mid-level)
- Q: How do you handle a race condition where multiple fetch requests are fired in sequence and the responses arrive out of order? (Senior)
- Q: What is an AbortController and how would you use it to cancel a fetch request when a user navigates away from a page? (Junior)
Frequently Asked Questions
Is Fetch better than Axios?
Fetch is built into every modern browser and Node.js 18+ — no dependency, no bundle size impact, no version management. For straightforward GET and POST requests with manual error handling, Fetch is the right default.
Axios adds features that Fetch does not have out of the box: automatic JSON serialization and Content-Type setting, request and response interceptors for global auth header injection, automatic throws on non-2xx status codes, and built-in request cancellation with cleaner syntax. For complex applications that need interceptors — adding auth tokens to every request, logging every response, retrying on 429 rate limit — Axios pays for its dependency.
For new projects: start with Fetch wrapped in a utility function. Add Axios if you find yourself rebuilding interceptor logic. Do not add Axios as a default dependency before you know you need it.
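For example, the most common interceptor use case, injecting an auth header into every request, is a few lines in a wrapper. A sketch with hypothetical names (createApiClient, getToken):

```javascript
// "Interceptor-like" behavior without Axios: every request made through the
// returned client gets the Authorization header injected automatically.
function createApiClient(getToken) {
  return async function apiRequest(url, options = {}) {
    const response = await fetch(url, {
      ...options,
      headers: {
        'Content-Type': 'application/json',
        ...options.headers,
        'Authorization': `Bearer ${getToken()}`, // injected on every call
      },
    });
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    return response.json();
  };
}

// Usage sketch:
// const api = createApiClient(() => sessionStorage.getItem('auth_token'));
// const profile = await api('https://example.invalid/me');
```

When a wrapper like this starts growing retry logic, response logging, and per-route config, that is the point at which Axios starts paying for its dependency.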
How do I handle timeouts with Fetch?
Fetch has no built-in timeout property. Implement it with AbortController and setTimeout:
```javascript
async function fetchWithTimeout(url, options = {}, timeoutMs = 10000) {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), timeoutMs);

  try {
    const response = await fetch(url, {
      ...options,
      signal: controller.signal,
    });
    return response;
  } catch (error) {
    if (error.name === 'AbortError') {
      throw new Error(`Request timed out after ${timeoutMs}ms`);
    }
    throw error;
  } finally {
    clearTimeout(timeoutId); // Always clear to prevent memory leaks
  }
}
```
Clear the timeout in the finally block — if you only clear on success, a failed request leaves the timeout running until it fires and aborts a request that has already completed.
Can I use Fetch in a Node.js environment?
Yes — as of Node.js 18, the Fetch API is available as a global without any import. Node.js 21 stabilized the implementation further. For Node.js 16 and earlier, install the node-fetch package (npm install node-fetch) and import it explicitly.
One behavioral difference worth noting: Node.js Fetch does not handle cookies the same way browser Fetch does — the browser's cookie jar is not available in Node. For server-side requests that need cookie management, you will need to handle Cookie headers manually or use a library like got or undici that provides native Node.js HTTP handling.
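A sketch of that manual cookie handling in Node.js 18+ (global fetch, no import needed); the URL and session value are placeholders:

```javascript
// In Node there is no cookie jar, so attach the Cookie header yourself —
// the browser would normally send this automatically.
async function fetchWithSession(url, sessionId, options = {}) {
  return fetch(url, {
    ...options,
    headers: {
      ...options.headers,
      'Cookie': `session_id=${sessionId}`, // manual cookie attachment
      'Accept': 'application/json',
    },
  });
}

// Usage sketch:
// const response = await fetchWithSession('https://example.invalid/api', 'abc123');
```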
How do I cancel a fetch request when a React component unmounts?
Create an AbortController inside useEffect and call abort() in the cleanup function. The cleanup runs when the component unmounts and also before the effect re-runs when dependencies change:
```javascript
useEffect(() => {
  const controller = new AbortController();

  async function loadData() {
    try {
      const response = await fetch(url, { signal: controller.signal });
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      const data = await response.json();
      setData(data);
    } catch (error) {
      // AbortError is expected on unmount — do not set error state
      if (error.name !== 'AbortError') {
        setError(error.message);
      }
    }
  }

  loadData();

  return () => controller.abort(); // Cleanup: abort on unmount or url change
}, [url]);
```
This prevents the React warning about state updates on unmounted components and cancels wasted bandwidth on requests whose results will never be rendered.
Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.