
React Performance Optimisation: memoisation, lazy loading and the reconciler explained

In Plain English 🔥
Imagine a restaurant kitchen where every time one customer changes their order, the chef throws away every dish on the pass and starts cooking the entire menu again from scratch. That's what an un-optimised React app does — it re-renders every component even if nothing about it changed. React performance optimisation is the set of techniques that tell the chef: 'Table 4 only changed their dessert — leave the starters alone.' The goal isn't to make your code faster in theory; it's to stop React doing work it doesn't need to do.

React's declarative model is a gift — you describe what the UI should look like and React figures out how to get there. But that abstraction has a cost. In a large production app with hundreds of components, deeply nested state, and real-time data flowing in from WebSockets, React can easily end up doing tens of thousands of unnecessary renders per second. Users notice this as janky animations, sluggish inputs and frames dropping below 60fps. That's when 'React is fast enough' stops being true.

The root cause is almost always the same: developers treat re-renders as free. They're not. Every re-render means React has to call your component function, build a new virtual DOM tree, diff it against the previous one (reconciliation), and then commit any changes to the real DOM. Most of the time, the diff shows nothing changed — yet you still paid the cost of the function call and the tree construction. The optimisation techniques in this article all share a single goal: give React the information it needs to skip that work entirely.

By the end of this article you'll understand exactly how the React reconciler decides what to re-render and why, when to reach for React.memo, useMemo and useCallback (and critically, when NOT to), how to use code splitting and virtualisation to handle scale, and the production gotchas that bite even experienced engineers. Every example is pulled from patterns we've seen in real codebases.

How the React reconciler actually decides what to re-render

Before you optimise anything, you need a mental model of what React is actually doing. When state or props change, React re-renders the component that owns that state — and by default, every child of that component re-renders too, regardless of whether their own props changed. This is called a cascading re-render and it's the single biggest source of preventable work in a React app.

React's reconciler (the Fiber architecture since React 16) works in two phases. The render phase is pure and interruptible — React calls your component functions and builds a Fiber tree representing the new UI. The commit phase is synchronous and side-effectful — React applies DOM mutations, runs layout effects, then passive effects. Only DOM nodes that actually changed get touched in the commit phase. But you still paid the full render phase cost for every component in the subtree.

The key insight is this: React uses referential equality (===) to decide whether props changed. A plain object literal {} created inside a parent component is a new reference every render, even if its contents are identical. That's why memoisation isn't just about expensive calculations — it's primarily about reference stability. Unstable references are the root cause of most unnecessary re-renders.
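That referential check is easy to see outside React entirely — the comparison below is plain JavaScript, the same `===` React applies to each prop:

```javascript
// Two object literals with identical contents are still different references.
const a = { id: 1, name: 'Mechanical Keyboard', price: 149 };
const b = { id: 1, name: 'Mechanical Keyboard', price: 149 };

console.log(a === b);           // false — different references
console.log(a === a);           // true  — same reference
console.log(a.name === b.name); // true  — primitives compare by value
```

A prop object recreated inside the parent on every render always fails that check, no matter how identical its contents look.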

ReconcilerDemo.jsx · JAVASCRIPT
import React, { useState } from 'react';

// A child component with a render log so we can see exactly when it fires
function ProductCard({ product, onAddToCart }) {
  // This log fires every time React calls this function — pay attention to how often it prints
  console.log(`[ProductCard] rendering: ${product.name}`);

  return (
    <div className="product-card">
      <h3>{product.name}</h3>
      <p>${product.price}</p>
      <button onClick={() => onAddToCart(product.id)}>Add to cart</button>
    </div>
  );
}

// The parent holds unrelated state — a search query the ProductCard doesn't care about
function ProductListPage() {
  const [searchQuery, setSearchQuery] = useState('');
  const [cartCount, setCartCount] = useState(0);

  // This object is recreated on every render of ProductListPage.
  // Even though the values haven't changed, it's a brand-new reference.
  const featuredProduct = { id: 1, name: 'Mechanical Keyboard', price: 149 };

  // Same problem: new function reference every render
  const handleAddToCart = (productId) => {
    console.log(`Adding product ${productId} to cart`);
    setCartCount(prev => prev + 1);
  };

  return (
    <div>
      <p>Cart: {cartCount} items</p>
      {/* Typing in this input triggers a re-render of ProductListPage,
          which causes ProductCard to re-render even though the product
          data and the handler haven't meaningfully changed */}
      <input
        value={searchQuery}
        onChange={e => setSearchQuery(e.target.value)}
        placeholder="Search products..."
      />
      <ProductCard
        product={featuredProduct}
        onAddToCart={handleAddToCart}
      />
    </div>
  );
}

export default ProductListPage;
▶ Output
// Type 'key' into the search input — watch the console:
[ProductCard] rendering: Mechanical Keyboard // initial render
[ProductCard] rendering: Mechanical Keyboard // typed 'k'
[ProductCard] rendering: Mechanical Keyboard // typed 'ke'
[ProductCard] rendering: Mechanical Keyboard // typed 'key'
// ProductCard re-rendered 3 times for state it doesn't own or care about
⚠️ Watch Out: Install the React DevTools browser extension and enable 'Highlight updates when components render'. Run it on your production app before reaching for any optimisation API. You'll almost certainly find components re-rendering 10x more than you expected — but now you'll know exactly where to look.

React.memo, useMemo and useCallback — what each one actually does

These three APIs are frequently used interchangeably by developers who've half-read the docs. They solve different problems and conflating them leads to both under-optimised and over-engineered code.

React.memo is a higher-order component that wraps a component and tells React: 'Only re-render this component if its props have changed.' It does a shallow equality check on the props object. If every prop passes ===, React reuses the last rendered output entirely — it doesn't even call your component function.
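The shallow check itself is simple enough to sketch in a few lines. This is a rough approximation, not React's actual implementation (which also handles some internal fast paths), but the behaviour on nested objects is the same:

```javascript
// Roughly what React.memo's default comparison does: shallow equality
// over the props object, one level deep.
function shallowEqual(prevProps, nextProps) {
  if (Object.is(prevProps, nextProps)) return true;
  const prevKeys = Object.keys(prevProps);
  const nextKeys = Object.keys(nextProps);
  if (prevKeys.length !== nextKeys.length) return false;
  // Object.is handles NaN and -0 edge cases a plain === would miss
  return prevKeys.every(key => Object.is(prevProps[key], nextProps[key]));
}

shallowEqual({ id: 1, label: 'Hub' }, { id: 1, label: 'Hub' });
// true — primitive values match

shallowEqual({ product: { id: 1 } }, { product: { id: 1 } });
// false — the nested objects are different references
```

Note the second call: identical-looking nested objects still fail, which is exactly why reference stability matters more than deep equality.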

useCallback memoises a function reference. It returns the same function object across renders as long as its dependency array hasn't changed. Its primary job is to create stable references to pass as props to memoised children — without it, React.memo is nearly useless because a new function reference counts as a changed prop.

useMemo memoises the return value of a computation. Use it for expensive calculations that shouldn't run on every render, or to stabilise object and array references that get passed as props. The dependency array works identically to useCallback — the memoised value is only recomputed when a dependency changes.

The rule of thumb: if you're passing a callback to a memoised child, use useCallback. If you're passing a derived object or doing expensive maths, use useMemo. Don't use either just because you can.

MemoisedProductCard.jsx · JAVASCRIPT
import React, { useState, useCallback, useMemo } from 'react';

// React.memo wraps the component — React will now do a shallow props comparison
// before deciding whether to call this function again
const ProductCard = React.memo(function ProductCard({ product, onAddToCart }) {
  console.log(`[ProductCard] rendering: ${product.name}`);

  return (
    <div className="product-card">
      <h3>{product.name}</h3>
      {/* Derived display value — formatted inside the component because it's cheap */}
      <p>${product.price.toFixed(2)}</p>
      <button onClick={() => onAddToCart(product.id)}>Add to cart</button>
    </div>
  );
});

function ProductListPage() {
  const [searchQuery, setSearchQuery] = useState('');
  const [cartCount, setCartCount] = useState(0);
  const [inventory] = useState([
    { id: 1, name: 'Mechanical Keyboard', price: 149, category: 'peripherals' },
    { id: 2, name: 'USB-C Hub', price: 49, category: 'peripherals' },
    { id: 3, name: 'Monitor Arm', price: 89, category: 'accessories' },
  ]);

  // useMemo stabilises the object reference — same object is returned as long as
  // inventory[0] hasn't changed. Without this, React.memo on ProductCard is worthless
  // because {id:1, name:'...', price:149} would be a new object every render.
  const featuredProduct = useMemo(
    () => inventory.find(item => item.id === 1),
    [inventory] // only recompute if inventory array reference changes
  );

  // useMemo for a genuinely expensive calculation — filtering + sorting a large list
  const filteredInventory = useMemo(() => {
    console.log('[useMemo] recomputing filtered inventory');
    return inventory
      .filter(item =>
        item.name.toLowerCase().includes(searchQuery.toLowerCase())
      )
      .sort((a, b) => a.price - b.price);
  }, [inventory, searchQuery]); // recompute only when these two change

  // useCallback stabilises the function reference passed to the memoised ProductCard.
  // If we didn't use this, ProductCard would re-render on every keystroke anyway
  // because handleAddToCart would be a new function reference each time.
  const handleAddToCart = useCallback((productId) => {
    console.log(`Adding product ${productId} to cart`);
    setCartCount(prev => prev + 1); // using functional update avoids stale closures
  }, []); // no dependencies — setCartCount identity is stable

  return (
    <div>
      <p>Cart: {cartCount} items</p>
      <input
        value={searchQuery}
        onChange={e => setSearchQuery(e.target.value)}
        placeholder="Search products..."
      />

      {/* featuredProduct reference is stable — ProductCard will NOT re-render
          when searchQuery changes, because none of its props changed */}
      <ProductCard
        product={featuredProduct}
        onAddToCart={handleAddToCart}
      />

      <h2>All Products ({filteredInventory.length})</h2>
      {filteredInventory.map(product => (
        // Each card in the list also gets the stable handler
        <ProductCard
          key={product.id}
          product={product}
          onAddToCart={handleAddToCart}
        />
      ))}
    </div>
  );
}

export default ProductListPage;
▶ Output
// Initial render:
[useMemo] recomputing filtered inventory
[ProductCard] rendering: Mechanical Keyboard
[ProductCard] rendering: USB-C Hub
[ProductCard] rendering: Monitor Arm

// Type 'key' into the search input:
[useMemo] recomputing filtered inventory // searchQuery changed — correct
[ProductCard] rendering: Mechanical Keyboard // only this one matches 'key'
// USB-C Hub and Monitor Arm did NOT re-render — React.memo worked
// The featured ProductCard at the top also did NOT re-render
🔥 Interview Gold: Interviewers love asking 'when would you NOT use useMemo?' The honest answer: most of the time. Memoisation has overhead — it allocates memory for the cache, runs dependency comparisons, and adds cognitive load. For cheap calculations (string concatenation, simple arithmetic), the memoisation cost often exceeds the savings. Only reach for it when React DevTools shows a genuine problem.

Code splitting with React.lazy, Suspense and dynamic imports

Memoisation fights unnecessary re-renders. Code splitting fights the other performance killer: loading too much JavaScript upfront. In a typical React SPA, bundlers like Webpack or Vite produce a single JavaScript bundle. A user visiting your landing page downloads code for your admin dashboard, your analytics charts, and every other route — most of which they'll never touch. This bloats the initial bundle, delays Time to Interactive, and kills Lighthouse scores.

Code splitting lets you split your bundle into smaller chunks that load on demand. React.lazy and Suspense are the built-in APIs for this. React.lazy takes a function that calls a dynamic import() — a browser-native API that returns a Promise resolving to a module. React defers rendering that component until the module has loaded, and Suspense defines what to show in the meantime.

The sweet spot for lazy loading is route-level splitting — each page becomes its own chunk. But it's also valuable for heavy third-party components like rich text editors, PDF viewers, or chart libraries that aren't needed on initial load. The rule: if a user's critical path doesn't need it within the first three seconds, consider lazy loading it.

A critical gotcha: React.lazy only works with default exports. Named exports require a small wrapper. Also, always wrap lazy components high enough in the tree that the Suspense fallback doesn't cause layout shift.

AppWithCodeSplitting.jsx · JAVASCRIPT
import React, { Suspense, lazy, useState } from 'react';

// The dynamic import() tells the bundler to put AnalyticsDashboard in a separate chunk.
// This module (and all its dependencies — chart libraries etc.) will only download
// when a user actually navigates to the analytics section.
const AnalyticsDashboard = lazy(() =>
  import('./pages/AnalyticsDashboard')
);

// Named export wrapper — React.lazy requires the Promise to resolve to
// { default: Component }, so we reshape named exports accordingly
const RichTextEditor = lazy(() =>
  import('./components/RichTextEditor').then(module => ({
    default: module.RichTextEditor // map named export to default
  }))
);

// A skeleton fallback that matches the approximate shape of the loading content.
// This prevents layout shift — use real dimensions matching the lazy component.
function DashboardSkeleton() {
  return (
    <div aria-busy="true" aria-label="Loading analytics dashboard">
      <div style={{ height: 48, background: '#e5e7eb', borderRadius: 8, marginBottom: 16 }} />
      <div style={{ height: 300, background: '#e5e7eb', borderRadius: 8 }} />
    </div>
  );
}

function App() {
  const [activeView, setActiveView] = useState('home');

  return (
    <div>
      <nav>
        <button onClick={() => setActiveView('home')}>Home</button>
        <button onClick={() => setActiveView('analytics')}>Analytics</button>
        <button onClick={() => setActiveView('editor')}>Editor</button>
      </nav>

      {activeView === 'home' && (
        <main>
          <h1>Welcome to the Dashboard</h1>
          <p>Select a section to get started.</p>
        </main>
      )}

      {/* Suspense boundary catches the lazy component's loading Promise.
          fallback renders while the chunk is downloading. */}
      {activeView === 'analytics' && (
        <Suspense fallback={<DashboardSkeleton />}>
          {/* AnalyticsDashboard chunk only downloads when this branch renders */}
          <AnalyticsDashboard dateRange="last-30-days" />
        </Suspense>
      )}

      {activeView === 'editor' && (
        <Suspense fallback={<div>Loading editor...</div>}>
          <RichTextEditor initialContent="Start writing..." />
        </Suspense>
      )}
    </div>
  );
}

export default App;

// Vite / webpack bundle output (approximate):
// index.js          → 42 kB   (main app, always loaded)
// AnalyticsDashboard.js → 180 kB  (chart library + dashboard, loaded on demand)
// RichTextEditor.js → 95 kB   (editor, loaded on demand)
▶ Output
// Browser Network tab when user clicks 'Analytics' for the first time:
// GET /assets/AnalyticsDashboard-Bx3kP2qR.js 180 kB (downloaded once, cached after)
// Skeleton renders immediately, then dashboard appears when chunk is ready

// Subsequent clicks on 'Analytics':
// No network request — chunk is already in browser cache
// Dashboard renders immediately (no Suspense fallback shown)
⚠️ Pro Tip: Use prefetching to eliminate the loading delay for predictable navigation. Add a mouseenter handler on nav links that calls import('./pages/AnalyticsDashboard') — this preloads the chunk while the user is still moving their cursor. By the time they click, the chunk is already cached and Suspense resolves instantly. With webpack you can also mark the import with the /* webpackPrefetch: true */ magic comment for automatic prefetching; Vite has no equivalent magic comment, so the manual import() call is the portable approach.

Virtualisation for long lists — only render what the user can see

No amount of memoisation saves you if you're trying to render 10,000 DOM nodes at once. The browser has to create, style, and layout each one. A list with 5,000 items might take 2–3 seconds just to mount — and that's before any interaction.

Virtualisation (also called windowing) solves this by only rendering the DOM nodes currently visible in the viewport, plus a small overscan buffer above and below. As the user scrolls, React mounts the rows entering the window and unmounts the ones leaving it, so the rendered set slides along with the viewport. The DOM stays small (typically 20–50 nodes) regardless of how many items are in the data set.
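The core window calculation is just arithmetic when rows have a fixed height. The helper below (`visibleRange` is a hypothetical name for this sketch — the library handles measurement, scroll events and clamping for you) shows where the "visible plus overscan" numbers come from:

```javascript
// Back-of-envelope windowing maths for fixed-height rows.
function visibleRange(scrollTop, viewportHeight, rowHeight, itemCount, overscan) {
  const firstVisible = Math.floor(scrollTop / rowHeight);
  const lastVisible = Math.ceil((scrollTop + viewportHeight) / rowHeight);
  return {
    start: Math.max(0, firstVisible - overscan),          // clamp at list start
    end: Math.min(itemCount - 1, lastVisible + overscan), // clamp at list end
  };
}

// 600px viewport, 56px rows, 10,000 items, overscan of 5:
visibleRange(0, 600, 56, 10_000, 5);
// → { start: 0, end: 16 } — 17 rows in the DOM, 9,983 skipped

visibleRange(100_000, 600, 56, 10_000, 5);
// → { start: 1780, end: 1802 } — same window size, deep into the list
```

Whatever the scroll position, the window stays roughly the same size — which is why mount time and memory stay flat as the dataset grows.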

@tanstack/react-virtual is the modern, framework-agnostic choice. It's headless — it gives you the calculations, you control the markup. react-window and react-virtualized are older but still widely used in production and worth knowing for legacy codebases.

Critical consideration: virtualisation breaks native browser 'find in page' (Ctrl+F) because non-rendered items aren't in the DOM. For accessibility and search-critical content, consider server-side pagination instead. Also, fixed-height rows are significantly simpler to implement than variable-height rows — @tanstack/react-virtual supports both, but dynamic measurement has its own complexity.

VirtualisedProductList.jsx · JAVASCRIPT
import React, { useRef, useMemo } from 'react';
import { useVirtualizer } from '@tanstack/react-virtual';

// Generate a large dataset to demonstrate the problem clearly
function generateProductCatalogue(count) {
  return Array.from({ length: count }, (_, index) => ({
    id: index + 1,
    name: `Product ${index + 1}`,
    price: parseFloat((Math.random() * 500 + 10).toFixed(2)),
    category: ['electronics', 'clothing', 'books', 'tools'][index % 4],
    inStock: index % 7 !== 0, // every 7th item is out of stock
  }));
}

function ProductRow({ product, style }) {
  return (
    // The style prop from the virtualiser sets position:absolute, top, height
    // — this is how items are positioned within the scroll container
    <div
      style={{
        ...style,
        display: 'flex',
        alignItems: 'center',
        padding: '0 16px',
        borderBottom: '1px solid #e5e7eb',
        background: product.inStock ? '#fff' : '#fef2f2',
      }}
    >
      <span style={{ flex: 1, fontWeight: 600 }}>{product.name}</span>
      <span style={{ width: 100, color: '#6b7280' }}>{product.category}</span>
      <span style={{ width: 80, textAlign: 'right' }}>${product.price}</span>
      <span style={{ width: 80, textAlign: 'right', color: product.inStock ? '#16a34a' : '#dc2626' }}>
        {product.inStock ? 'In stock' : 'Sold out'}
      </span>
    </div>
  );
}

function VirtualisedProductList() {
  // 10,000 items — this would destroy performance without virtualisation
  const allProducts = useMemo(() => generateProductCatalogue(10_000), []);

  // The scroll container ref — the virtualiser needs to measure scroll position
  const scrollContainerRef = useRef(null);

  const virtualiser = useVirtualizer({
    count: allProducts.length,      // total number of items
    getScrollElement: () => scrollContainerRef.current, // what element scrolls
    estimateSize: () => 56,         // estimated row height in px (exact = better performance)
    overscan: 5,                    // render 5 extra rows above and below viewport as buffer
  });

  // virtualiser.getTotalSize() returns the full scrollable height as if all items were rendered
  // This makes the scrollbar feel correct even though most items don't exist in the DOM
  const totalScrollHeight = virtualiser.getTotalSize();

  // virtualiser.getVirtualItems() returns only the items currently in the render window
  const visibleItems = virtualiser.getVirtualItems();

  console.log(`Rendering ${visibleItems.length} of ${allProducts.length} items in DOM`);

  return (
    <div>
      <h2>Product Catalogue ({allProducts.length.toLocaleString()} items)</h2>

      {/* Fixed-height scroll container — must have overflow:auto and a defined height */}
      <div
        ref={scrollContainerRef}
        style={{ height: 600, overflow: 'auto', border: '1px solid #e5e7eb', borderRadius: 8 }}
      >
        {/* Inner div is sized to the FULL list height so the scrollbar is accurate */}
        <div style={{ height: totalScrollHeight, position: 'relative' }}>
          {visibleItems.map(virtualItem => {
            const product = allProducts[virtualItem.index];
            return (
              <ProductRow
                key={product.id}
                product={product}
                style={{
                  position: 'absolute',
                  top: 0,
                  left: 0,
                  width: '100%',
                  // transform is used instead of top offset for GPU-composited scrolling
                  transform: `translateY(${virtualItem.start}px)`,
                  height: `${virtualItem.size}px`,
                }}
              />
            );
          })}
        </div>
      </div>
    </div>
  );
}

export default VirtualisedProductList;
▶ Output
// In the browser console as page loads:
Rendering 16 of 10,000 items in DOM

// After scrolling halfway down:
Rendering 16 of 10,000 items in DOM

// DOM inspection in DevTools shows only ~16 ProductRow divs exist at any time
// Page mount time: ~12ms (vs ~2400ms without virtualisation)
// Memory usage: ~8 MB (vs ~180 MB without virtualisation)
⚠️ Watch Out: Never virtualise a list of fewer than ~100 items. The position:absolute layout removes items from normal document flow, which breaks things like CSS grid siblings, sticky headers within the list, and native Ctrl+F search. The overhead of setting up a virtualiser on a 50-item list is pure cost with no measurable benefit.
Technique comparison

React.memo
  What it prevents: Unnecessary re-renders when props haven't changed
  When to use: Memoised children receiving stable props from a frequently-re-rendering parent
  Hidden cost: Shallow props comparison on every render — wasted if props change often

useMemo
  What it prevents: Expensive recalculations and unstable object/array references
  When to use: Computationally heavy derivations, or objects/arrays passed as props to memoised children
  Hidden cost: Memory allocation for the cache, dependency array comparison, cognitive overhead

useCallback
  What it prevents: New function references causing memoised children to re-render
  When to use: Callbacks passed as props to React.memo'd components or as dependencies in other hooks
  Hidden cost: Same memory overhead as useMemo — pointless without React.memo on the receiving component

React.lazy + Suspense
  What it prevents: Oversized initial JS bundle delaying Time to Interactive
  When to use: Route-level splits, heavy optional features (charts, editors, PDF viewers)
  Hidden cost: Extra network request on first use — mitigate with prefetching

Virtualisation
  What it prevents: Rendering thousands of DOM nodes at once causing slow mount and scroll
  When to use: Lists with 100+ dynamic items where pagination isn't viable
  Hidden cost: Breaks Ctrl+F search, complicates variable-height rows, removes items from document flow

🎯 Key Takeaways

  • React re-renders children by default whenever a parent re-renders — React.memo opts a component out of this by caching its output and only re-rendering when props change via shallow equality (===).
  • useCallback and useMemo are reference-stabilisation tools first, and computation-savers second — their most important job is preventing new object/function references from invalidating React.memo.
  • Code splitting with React.lazy + dynamic import() reduces initial bundle size at the route or feature level — combine it with mouseenter prefetching to eliminate visible loading delays on predictable navigation.
  • Virtualisation with @tanstack/react-virtual keeps the DOM node count constant regardless of dataset size — mount time and memory usage stay flat whether your list has 100 or 100,000 items.

⚠ Common Mistakes to Avoid

  • Mistake 1: Defining objects or arrays inline inside JSX props on a memoised component — an inline literal such as style={{ margin: 8 }} creates a new object reference every render, so React.memo's shallow comparison always returns false and the component re-renders every time. Fix: move static objects outside the component, or wrap dynamic ones in useMemo.
  • Mistake 2: Using useCallback with a stale closure by omitting state variables from the dependency array — the callback captures the initial state value and never sees updates, causing subtle bugs where your handler always operates on stale data. Fix: either include all referenced state in the dependency array, or use the functional updater form (setState(prev => prev + 1)) which doesn't need to close over the current value.
  • Mistake 3: Wrapping everything in useMemo and useCallback 'just in case' — this is cargo-cult optimisation. Memoisation hooks are not free: they allocate cache entries, run dependency comparisons every render, and make code significantly harder to read and refactor. The React team itself says premature memoisation is an antipattern. Fix: profile with React DevTools Profiler first, identify components with high render times or render counts, then apply memoisation surgically where the measurement shows a real gain.
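Mistake 2 is easiest to see stripped of React entirely. In this framework-free sketch, each call to render() stands in for one React render: it gets its own count binding, just as each component render captures that render's state value:

```javascript
// Each "render" produces handlers that close over that render's count.
function render(count) {
  return {
    // closes over THIS render's count — the value is frozen in
    addStale: () => count + 1,
    // functional-updater style: asks for the current value instead
    addFresh: (current) => current + 1,
  };
}

// Imagine useCallback(fn, []) caching the handlers from the first render:
const cached = render(0);

// ...many renders later, the real state is 5, but the cached handler
// still sees the value it captured:
cached.addStale();  // 1 — stale: still based on count = 0
cached.addFresh(5); // 6 — correct: computed from the value passed in
```

This is exactly why setCartCount(prev => prev + 1) in the earlier examples is safe inside a useCallback with an empty dependency array: the updater receives the current state instead of closing over it.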

Interview Questions on This Topic

  • QIf React.memo does a shallow comparison of props, how would you handle a component that receives a deeply nested object as a prop where only a nested field changes — and why is the standard advice 'flatten your state' rather than 'use a custom comparator'?
  • QExplain the relationship between useCallback and React.memo. Can React.memo ever skip a re-render if you're not using useCallback for the callbacks you pass to it? Walk me through why.
  • QA colleague says useMemo is always a good idea because 'it can only help, never hurt'. What would you tell them — and what specific scenario would you use to prove them wrong?

Frequently Asked Questions

Does React.memo do a deep comparison of props?

No — React.memo performs a shallow (===) comparison by default. For primitive props like strings and numbers this is effectively a value comparison. For objects and arrays it compares references, so a new object literal with identical contents still counts as 'changed'. You can pass a custom comparison function as the second argument to React.memo for deep comparison, but this is rarely the right solution — flattening props or stabilising references with useMemo is almost always cleaner.

When should I use useCallback vs useMemo?

Use useCallback when you need a stable function reference — typically a callback you're passing as a prop to a React.memo'd child or using as a dependency in another hook. Use useMemo when you need a stable non-function value — an expensive calculation result, a derived object, or an array that shouldn't change reference between renders. Internally they're the same mechanism; useCallback(fn, deps) is literally equivalent to useMemo(() => fn, deps).

Does splitting code with React.lazy hurt SEO?

It can if you're relying on client-side rendering. A lazy-loaded component that hasn't rendered yet has no HTML for search engine crawlers to index. The solution is server-side rendering (Next.js handles this transparently — lazy loaded components are still rendered on the server). For pure CSR apps, avoid lazy loading content that needs to be indexed, and use it primarily for behind-authentication pages and heavy interactive features.

TheCodeForge Editorial Team

Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.
