Memoisation in JavaScript: Caching, Closures and Performance
Every senior JavaScript developer has hit the wall where a perfectly correct function becomes a production liability — not because the logic is wrong, but because it's being asked the same question thousands of times per second and recomputing the answer from scratch every single time. In data-heavy UIs, recursive algorithms, and real-time search filtering, this kind of redundant computation quietly kills performance while your profiler screams at you.
Memoisation solves this by turning a pure function into a self-learning one. The first call does the real work. Every subsequent call with identical arguments short-circuits straight to a cached result. It's not magic — it's a deliberate trade-off: you spend memory to buy speed. Understanding exactly when that trade-off pays off, and when it absolutely doesn't, separates developers who reach for memoisation reflexively from those who use it surgically.
By the end of this article you'll be able to write a production-grade memoisation utility from scratch, understand the closure and Map internals that make it tick, handle non-primitive arguments correctly, recognise the subtle bugs that bite even experienced engineers, and explain the whole thing confidently in an interview setting.
How Memoisation Works Under the Hood — Closures and the Cache
Memoisation relies on two JavaScript fundamentals working in tandem: closures and a key-value store (traditionally an object, better as a Map).
When you call a memoisation wrapper around your function, that wrapper creates a cache object in its own scope and returns a new function. That returned function closes over the cache — meaning every future call has access to the same cache, even though the outer wrapper has long since finished executing. This is the closure doing its job.
On each invocation, the inner function serialises its arguments into a cache key, checks whether that key already exists, and either returns the stored result immediately or calls the original function, stores the result, then returns it.
The critical insight is that the cache persists for the lifetime of the memoised function reference. If you create a new memoised function, you get a fresh cache. If you keep a reference to the same memoised function, the cache accumulates results across every call site that uses it.
Using a plain object as the cache works for string and number arguments, but it silently coerces every key to a string — meaning the integer 1 and the string '1' collide. A Map avoids this because it compares keys using the SameValueZero algorithm (essentially strict equality, except that NaN matches itself), which is why production implementations prefer it.
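The key-coercion difference is easy to demonstrate in isolation — a minimal sketch, separate from the memoisation utility itself:

```javascript
// A plain object coerces property keys to strings, so the number 1 and
// the string '1' land in the SAME slot — the second write silently
// overwrites the first.
const objectCache = {};
objectCache[1] = 'result for the NUMBER 1';
objectCache['1'] = 'result for the STRING "1"'; // collides with the line above

console.log(objectCache[1]);                  // 'result for the STRING "1"'
console.log(Object.keys(objectCache).length); // 1 — only one slot exists

// A Map keeps the number 1 and the string '1' as distinct keys.
const mapCache = new Map();
mapCache.set(1, 'result for the NUMBER 1');
mapCache.set('1', 'result for the STRING "1"');

console.log(mapCache.get(1)); // 'result for the NUMBER 1'
console.log(mapCache.size);   // 2 — no collision
```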
```javascript
// A foundational memoisation utility using a Map for type-safe key storage.
// This version handles single-argument functions to keep the mechanics visible.
function memoise(expensiveFunction) {
  // The cache lives inside this closure — it persists across all future calls
  // to the returned memoisedFunction, but is invisible to the outside world.
  const resultCache = new Map();

  return function memoisedFunction(argument) {
    // Check whether we've already computed a result for this exact argument.
    if (resultCache.has(argument)) {
      console.log(`[CACHE HIT] argument=${argument}`);
      return resultCache.get(argument); // Return the stored result immediately
    }

    console.log(`[CACHE MISS] argument=${argument} — computing now...`);
    // We haven't seen this argument before, so run the real function.
    const computedResult = expensiveFunction(argument);
    // Store the result so the next identical call skips this work entirely.
    resultCache.set(argument, computedResult);
    return computedResult;
  };
}

// Simulates an expensive calculation — in reality this could be a complex
// mathematical transform, a tree traversal, or a regex-heavy string parse.
function computeSquare(number) {
  return number * number;
}

const memoisedSquare = memoise(computeSquare);

console.log(memoisedSquare(6)); // First call — must compute
console.log(memoisedSquare(6)); // Second call — served from cache
console.log(memoisedSquare(9)); // New argument — must compute
console.log(memoisedSquare(6)); // Back to 6 — still in cache
console.log(memoisedSquare(9)); // 9 is now cached too
```
```
[CACHE MISS] argument=6 — computing now...
36
[CACHE HIT] argument=6
36
[CACHE MISS] argument=9 — computing now...
81
[CACHE HIT] argument=6
36
[CACHE HIT] argument=9
81
```
Handling Multiple Arguments — The Serialisation Problem
Real functions rarely take a single argument. The moment you add a second parameter, memoisation has a key-generation problem: how do you turn an arbitrary list of arguments into a single, unambiguous cache key?
The naive solution is JSON.stringify on the argument list, or joining the arguments with a delimiter. Both have traps. JSON.stringify silently drops functions and undefined values from objects (and converts them to null inside arrays), meaning two logically different argument sets can produce the same key — and it throws outright on circular references.
The delimiter approach — joining with a pipe character | — breaks when an argument itself contains that delimiter: fn('a|b', 'c') and fn('a', 'b|c') both produce 'a|b|c'.
The most robust production approach is using a nested Map tree (a trie structure), where each argument level corresponds to one Map. This avoids serialisation entirely, uses strict equality, and handles any argument type correctly including objects — as long as you're passing the same object reference each time.
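A minimal sketch of that trie idea — `memoiseTrie` and the `RESULT` sentinel are illustrative names, not a standard API:

```javascript
// Nested-Map (trie) memoisation: one Map level per argument position.
// No serialisation step, so keys are compared by Map's own semantics
// (SameValueZero) — objects match by reference identity.
function memoiseTrie(targetFunction) {
  const root = new Map();
  const RESULT = Symbol('result'); // sentinel key marking "value stored here"

  return function (...args) {
    let node = root;
    // Walk (or build) one trie level per argument.
    for (const arg of args) {
      if (!node.has(arg)) node.set(arg, new Map());
      node = node.get(arg);
    }
    if (!node.has(RESULT)) {
      node.set(RESULT, targetFunction.apply(this, args));
    }
    return node.get(RESULT);
  };
}

// Object arguments are cached by reference: same reference, same trie path.
const config = { mode: 'fast' };
let calls = 0;
const lookup = memoiseTrie((obj, n) => { calls++; return `${obj.mode}:${n}`; });

console.log(lookup(config, 2)); // 'fast:2' — computed
console.log(lookup(config, 2)); // 'fast:2' — cached, same reference
console.log(calls);             // 1
```

A fresh object literal with identical contents would be a cache miss — which is exactly the reference-identity caveat noted above.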
For the common case of JSON-serialisable primitives, a separator that is vanishingly unlikely to appear in the data (like \0 — the null character) gives you a simple and highly performant key with minimal collision risk.
```javascript
// Production-grade memoisation for functions with multiple arguments.
// Uses a null-byte separator for primitives and serialises object arguments.
function memoise(targetFunction) {
  const resultCache = new Map();

  return function memoisedFunction(...argumentList) {
    // Build a cache key from all arguments.
    // The null byte (\0) is used as a separator because it cannot appear
    // naturally in typical string arguments, minimising collision risk.
    const cacheKey = argumentList
      .map(arg => {
        if (arg !== null && typeof arg === 'object') {
          // Objects are serialised — note this won't handle circular refs.
          // For object-heavy use cases, prefer the trie approach instead.
          return JSON.stringify(arg);
        }
        return String(arg);
      })
      .join('\0');

    if (resultCache.has(cacheKey)) {
      return resultCache.get(cacheKey);
    }

    const result = targetFunction.apply(this, argumentList);
    resultCache.set(cacheKey, result);
    return result;
  };
}

// A function that blends two paint colours with a mixing ratio.
// Imagine this hits a real colour-science algorithm in production.
function blendColours(colourA, colourB, ratio) {
  // Simplified stand-in for a complex colour blend computation
  return `blend(${colourA}, ${colourB}, ratio=${ratio})`;
}

const memoisedBlend = memoise(blendColours);

// First calls — all cache misses, real work happens
console.log(memoisedBlend('red', 'blue', 0.5));
console.log(memoisedBlend('red', 'blue', 0.7));
console.log(memoisedBlend('red', 'blue', 0.5)); // Cache hit — same three args

// Demonstrates why argument ORDER matters for the key
console.log(memoisedBlend('blue', 'red', 0.5)); // Miss — different order

// Demonstrates the null-byte separator preventing collisions:
// Without it, ('a|b', 'c') and ('a', 'b|c') would collide.
const memoisedConcat = memoise((a, b) => `${a}+${b}`);
console.log(memoisedConcat('a|b', 'c')); // key: 'a|b\0c'
console.log(memoisedConcat('a', 'b|c')); // key: 'a\0b|c' — different, correct!
```
```
blend(red, blue, ratio=0.5)
blend(red, blue, ratio=0.7)
blend(red, blue, ratio=0.5)
blend(blue, red, ratio=0.5)
a|b+c
a+b|c
```
Recursive Memoisation — Fibonacci and the Stack Trap
Fibonacci is the canonical memoisation example for good reason: its naive recursive form has O(2ⁿ) time complexity because it recomputes the same sub-problems exponentially. With memoisation it drops to O(n). But there's a subtlety most tutorials skip entirely.
If you wrap a recursive function with a memoiser and the function calls itself by its original name internally, the recursive calls bypass the memoised wrapper entirely. The outer call hits the cache correctly, but every internal recursive call goes straight to the unwrapped function — you get no caching benefit on sub-problems.
The fix is to make the function reference itself through the memoised version, not the original. You can achieve this by either: (a) reassigning the function variable to its memoised version before it calls itself, or (b) passing the memoised function into itself as a parameter using a Y-combinator-style approach.
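Option (b) can be sketched like this — `memoiseRecursive` and `self` are illustrative names; the core idea is that the function body never names itself, so recursion always flows through whatever wrapper it is handed:

```javascript
// "Open recursion": the wrapped function receives its recursion target as
// a parameter, so we can inject the memoised wrapper from outside.
function memoiseRecursive(openRecursiveFn) {
  const cache = new Map();

  function memoised(n) {
    if (cache.has(n)) return cache.get(n);
    // Pass the memoised wrapper back in as the recursion target,
    // so every sub-problem call goes through the cache.
    const result = openRecursiveFn(memoised, n);
    cache.set(n, result);
    return result;
  }

  return memoised;
}

// The body recurses through `self`, never through its own name.
const fib = memoiseRecursive(function (self, n) {
  return n <= 1 ? n : self(n - 1) + self(n - 2);
});

console.log(fib(20)); // 6765 — every sub-problem cached
```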
For production-scale Fibonacci or similar dynamic programming problems in JavaScript, an iterative bottom-up approach with a plain array (or just two rolling variables) beats memoised recursion on both speed and call-stack safety. Memoised recursion is still liable to hit the engine's call stack limit for inputs above ~10,000, depending on the runtime. Memoisation is most valuable when the recursion tree is wide but shallow — stack depth stays manageable while repeated sub-problems are abundant.
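For reference, the bottom-up version is only a few lines — a sketch using two rolling variables instead of a full table, giving O(n) time and O(1) space:

```javascript
// Iterative bottom-up Fibonacci: no recursion, no call-stack risk.
function fibonacciIterative(n) {
  if (n <= 1) return n;
  let previous = 0;
  let current = 1;
  for (let i = 2; i <= n; i++) {
    // Slide the window up one step: (prev, curr) -> (curr, prev + curr)
    [previous, current] = [current, previous + current];
  }
  return current;
}

console.log(fibonacciIterative(10)); // 55
console.log(fibonacciIterative(20)); // 6765
// Caveat: above roughly n = 78 the result exceeds Number.MAX_SAFE_INTEGER,
// so switch to BigInt for larger inputs.
```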
```javascript
// Demonstrates the self-reference trap in recursive memoisation
// and shows both the broken and the correct pattern side by side.

function memoise(fn) {
  const cache = new Map();
  return function memoised(...args) {
    const key = args.join('\0');
    if (cache.has(key)) return cache.get(key);
    const result = fn.apply(this, args);
    cache.set(key, result);
    return result;
  };
}

// --- BROKEN PATTERN ---
// The recursive calls inside go to the original, un-memoised function.
function naiveFibonacci(n) {
  if (n <= 1) return n;
  return naiveFibonacci(n - 1) + naiveFibonacci(n - 2); // calls original!
}

// The outer call IS memoised, but sub-calls go to naiveFibonacci — no benefit.
const brokenMemoisedFib = memoise(naiveFibonacci);

// --- CORRECT PATTERN ---
// We declare the variable first, then assign it — so the function body
// can reference the memoised version of itself through the variable.
// A counter tracks how many real computations happen, to prove it works.
let computationCount = 0;

let fibonacci;
fibonacci = memoise(function (n) {
  computationCount++;
  if (n <= 1) return n;
  // 'fibonacci' here refers to the memoised wrapper, not the raw function.
  // This means every sub-problem call gets cached correctly.
  return fibonacci(n - 1) + fibonacci(n - 2);
});

const result = fibonacci(10);
console.log(`fibonacci(10) = ${result}`);
console.log(`Total computations performed: ${computationCount}`);
// Without memoisation this would make 177 function calls for n=10.
// With correct recursive memoisation it computes exactly 11 times (0 through 10).

// Second call — should need ZERO new computations
computationCount = 0;
console.log(`fibonacci(10) again = ${fibonacci(10)}`);
console.log(`Computations on second call: ${computationCount}`);
```
```
fibonacci(10) = 55
Total computations performed: 11
fibonacci(10) again = 55
Computations on second call: 0
```
Cache Invalidation, Memory Leaks, and Production Patterns
Memoisation without boundaries is a memory leak waiting to happen. A cache that grows unbounded will quietly consume heap space until Node.js OOMs or the browser tab crashes. In production you need one of two strategies: a TTL (time-to-live) policy that expires stale entries, or a capacity limit using an LRU (Least Recently Used) eviction policy.
An LRU cache evicts the entry that was accessed least recently when capacity is reached. This is the right choice when you have a large input space but a hot subset — your cache stays small and covers the calls that actually matter.
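The TTL alternative can be sketched in a few lines — `memoiseTTL` and `ttlMs` are illustrative names, and the key-building follows the null-byte pattern from earlier:

```javascript
// TTL-based memoisation: each entry carries an expiry timestamp and is
// recomputed once it goes stale. Stale entries are overwritten lazily,
// on the next call that touches them.
function memoiseTTL(targetFunction, ttlMs = 60_000) {
  const cache = new Map(); // key -> { value, expiresAt }

  return function (...args) {
    const key = args.map(String).join('\0');
    const entry = cache.get(key);

    if (entry && entry.expiresAt > Date.now()) {
      return entry.value; // still fresh — serve from cache
    }

    const value = targetFunction.apply(this, args);
    cache.set(key, { value, expiresAt: Date.now() + ttlMs });
    return value;
  };
}

let computeCount = 0;
const slowDouble = memoiseTTL(n => { computeCount++; return n * 2; }, 5000);

console.log(slowDouble(21)); // 42 — computed
console.log(slowDouble(21)); // 42 — cached, TTL not yet expired
console.log(computeCount);   // 1
```

One design note: lazy expiry keeps the implementation simple, but truly dead entries linger until touched — pair it with a size cap if the key space is unbounded.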
Beyond memory, there's a correctness issue: memoisation is only safe for pure functions — those whose output depends solely on their inputs and which have no side effects. Memoising a function that reads from a database, calls Date.now(), or modifies external state will silently serve stale or wrong results. This is the single most dangerous production misuse of memoisation.
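Here is a compact illustration of that failure mode — the tax-rate scenario is invented for demonstration:

```javascript
// Memoising a function that reads external mutable state freezes the
// first answer forever: the cache key never sees the hidden input.
let taxRate = 0.2; // external mutable state — a red flag for memoisation

function memoise(fn) {
  const cache = new Map();
  return (...args) => {
    const key = args.join('\0');
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  };
}

const priceWithTax = memoise(net => net * (1 + taxRate)); // impure!

console.log(priceWithTax(100)); // 120 — correct today
taxRate = 0.25;                 // external state changes...
console.log(priceWithTax(100)); // still 120 — stale! the cache never noticed

// The fix: make every input explicit, so it becomes part of the cache key.
const priceWithTaxPure = memoise((net, rate) => net * (1 + rate));
console.log(priceWithTaxPure(100, 0.25)); // 125 — correct
```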
For React developers: useMemo and useCallback are component-scoped memoisation hooks. Their cached value survives re-renders but not the component's unmount, and their cache size is always 1 — they only remember the result for the most recent dependency array. They primarily solve a different problem (referential stability) rather than raw computation speed. Don't conflate them with a general memoisation utility.
```javascript
// A memoisation utility with LRU eviction — safe for production use
// where the input space is large or unbounded.
function memoiseLRU(targetFunction, maxCacheSize = 100) {
  // We use a Map for the cache because Map preserves insertion order,
  // which makes implementing LRU eviction straightforward.
  const lruCache = new Map();

  return function memoisedWithLRU(...argumentList) {
    const cacheKey = argumentList.map(String).join('\0');

    if (lruCache.has(cacheKey)) {
      // LRU trick: delete and re-insert to move this entry to the end
      // (most recently used position) of the Map's iteration order.
      const cachedValue = lruCache.get(cacheKey);
      lruCache.delete(cacheKey);
      lruCache.set(cacheKey, cachedValue);
      return cachedValue;
    }

    const freshResult = targetFunction.apply(this, argumentList);

    // If we're at capacity, evict the least recently used entry.
    // Map.keys().next().value gives us the first (oldest) key in iteration order.
    if (lruCache.size >= maxCacheSize) {
      const oldestKey = lruCache.keys().next().value;
      lruCache.delete(oldestKey);
      console.log(`[LRU EVICT] Removed entry for key: "${oldestKey}"`);
    }

    lruCache.set(cacheKey, freshResult);
    return freshResult;
  };
}

// Simulate an expensive prime-check — in reality this could be a
// complex data transformation or a third-party library call.
function isPrime(num) {
  if (num < 2) return false;
  for (let divisor = 2; divisor <= Math.sqrt(num); divisor++) {
    if (num % divisor === 0) return false;
  }
  return true;
}

// Tiny cache of 3 to demonstrate eviction clearly
const memoisedIsPrime = memoiseLRU(isPrime, 3);

console.log(memoisedIsPrime(7));  // Miss — cache: [7]
console.log(memoisedIsPrime(11)); // Miss — cache: [7, 11]
console.log(memoisedIsPrime(13)); // Miss — cache: [7, 11, 13]
console.log(memoisedIsPrime(7));  // Hit — moves 7 to end: [11, 13, 7]
console.log(memoisedIsPrime(17)); // Miss, cache full — evicts 11: [13, 7, 17]
console.log(memoisedIsPrime(13)); // Hit — still in cache
```
```
true
true
true
true
[LRU EVICT] Removed entry for key: "11"
true
true
```
| Aspect | Memoisation (top-down) | Tabulation (bottom-up) |
|---|---|---|
| Approach | Recursive with a result cache | Iterative, fills table from base case up |
| Sub-problems computed | Only the ones actually needed | All sub-problems, even unused ones |
| Call stack risk | Yes — deep recursion can overflow | None — fully iterative |
| Code readability | Mirrors the mathematical definition closely | More explicit, less immediately intuitive |
| Memory usage | Cache grows with unique inputs seen | Fixed table size known upfront |
| Best for | Sparse problem spaces (not all sub-problems needed) | Dense problem spaces (most sub-problems needed) |
| Debugging | Harder — recursive call chains obscure flow | Easier — step through the table directly |
| React equivalent | useMemo / useCallback (scope: single render) | No direct equivalent — manual state management |
🎯 Key Takeaways
- Memoisation is a closure-powered cache: the cache object lives inside the closure of the wrapper function and persists for the entire lifetime of that memoised function reference — not just for one call.
- Use Map, not a plain object, for your cache. Plain objects coerce keys to strings, causing silent collisions between the integer 1 and the string '1'. Map compares keys with SameValueZero (near-strict equality) and avoids this entirely.
- Memoising a recursive function by wrapping it externally only caches the outermost call. For sub-problems to be cached, the function must call itself through the memoised reference — reassign the variable to its memoised form before the function body executes.
- In production, always put a bound on your cache. An unbounded cache is a memory leak. Use an LRU eviction strategy (Map-based, delete-and-reinsert for recency tracking) when the input space is large or when you can't predict the number of unique arguments.
⚠ Common Mistakes to Avoid
- ✕ Mistake 1: Memoising a function that closes over external mutable state — If your function reads a variable from outer scope (e.g. a config object or a counter), the cache stores the result from the first call's context. Future calls return that stale cached result even when the external state has changed. The symptom is correct output the first time, then inexplicably wrong output afterwards. Fix: ensure the function is truly pure — every input that affects the output must be an explicit argument.
- ✕ Mistake 2: Using a plain object as the cache and passing numeric keys — Plain objects coerce all keys to strings, so `cache[1]` and `cache['1']` are the same slot. A function that should return different results for integer `1` and string `'1'` will silently return the first-computed result for both. The symptom is hard-to-spot incorrect return values for arguments that look different but share a string coercion. Fix: always use a Map as your cache.
- ✕ Mistake 3: Memoising a method on a class instance without binding `this` correctly — When you wrap an instance method with a memoiser, the inner function may lose its `this` context, causing "Cannot read property of undefined" errors or the method operating on the wrong object. Fix: either use `.apply(this, args)` inside the memoiser (as shown in the examples above) or bind the method before wrapping it: `this.computeScore = memoise(this.computeScore.bind(this))`.
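The binding fix for instance methods can be sketched like this — `ScoreCalculator` and `computeScore` are invented example names:

```javascript
function memoise(fn) {
  const cache = new Map();
  return function (...args) {
    const key = args.join('\0');
    if (!cache.has(key)) cache.set(key, fn.apply(this, args));
    return cache.get(key);
  };
}

class ScoreCalculator {
  constructor(multiplier) {
    this.multiplier = multiplier;
    this.computations = 0;
    // Bind first, THEN wrap: the memoised function permanently sees this
    // instance, and each instance gets its own private cache.
    this.computeScore = memoise(this.computeScore.bind(this));
  }

  computeScore(base) {
    this.computations++;
    // Depends on `this` — without binding, `this.multiplier` would blow up.
    // Safe to memoise only because multiplier never changes after construction.
    return base * this.multiplier;
  }
}

const calc = new ScoreCalculator(3);
console.log(calc.computeScore(5)); // 15 — computed
console.log(calc.computeScore(5)); // 15 — cached
console.log(calc.computations);    // 1
```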
Interview Questions on This Topic
- Q: Can you implement a memoisation function from scratch in JavaScript, and explain what data structure you'd use for the cache and why — specifically why Map is preferable to a plain object?
- Q: Memoisation is sometimes described as a trade-off. What exactly are you trading, what are the conditions under which that trade-off is worthwhile, and what type of function is unsafe to memoise?
- Q: If you memoise a recursive function by wrapping it externally — like `const memFib = memoise(fibonacci)` — does the caching apply to the internal recursive calls? Why or why not, and how would you fix it if not?
Frequently Asked Questions
What is the difference between memoisation and caching in JavaScript?
Caching is the general concept of storing computed results for reuse. Memoisation is a specific form of caching that is tied directly to a function — it caches the return value of a pure function keyed by its input arguments, and the cache is scoped to that function's closure. General caching might involve HTTP responses, database queries, or arbitrary data stores with manual invalidation logic.
Is JavaScript's useMemo the same as writing a memoisation function?
Not quite. React's useMemo is component-scoped and has a cache size of exactly one: it only remembers the result from the most recent render, and only reuses it if the dependency array hasn't changed. A general memoisation utility grows its cache across every unique set of arguments it has ever seen. useMemo is optimised for referential stability between renders, not for avoiding repeated expensive computations across different inputs.
Does memoisation always make a function faster?
No. Memoisation adds overhead: it must compute the cache key, perform a Map lookup, and potentially serialise arguments on every single call — even cache misses. For functions that are already very fast (simple arithmetic, single property access), this overhead can actually make them slower than the un-memoised version. Memoisation pays off when the function being wrapped is significantly more expensive than the cache lookup, and when the same arguments are genuinely repeated across calls.
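If you want to observe this yourself, here is a rough (and deliberately unscientific) timing sketch — results vary by engine and machine, so treat it as an illustration rather than a benchmark:

```javascript
// Comparing a trivial function against its memoised version. The memoised
// loop pays for key building + a Map lookup on every call, which typically
// costs more than the addition it saves.
function memoise(fn) {
  const cache = new Map();
  return (...args) => {
    const key = args.join('\0');
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  };
}

const addOne = n => n + 1; // trivial work — memoisation likely loses here
const memoAddOne = memoise(addOne);

console.time('plain addOne x1M');
for (let i = 0; i < 1_000_000; i++) addOne(i % 10);
console.timeEnd('plain addOne x1M');

console.time('memoised addOne x1M');
for (let i = 0; i < 1_000_000; i++) memoAddOne(i % 10);
console.timeEnd('memoised addOne x1M');
// On most engines the memoised loop comes out slower, despite a 100%
// cache hit rate after the first ten iterations.
```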
Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.