
LRU Cache Implementation Using a Doubly Linked List and HashMap

📍 Part of: Linked List → Topic 9 of 10
LRU Cache implementation explained deeply — how the doubly linked list + HashMap combo works, O(1) operations, edge cases, and production gotchas in Java.
🔥 Advanced — solid DSA foundation required
In this tutorial, you'll learn
  • LRU Cache provides O(1) get and put by combining a hash map (lookup) with a DLL (order maintenance).
  • DLL head = most recently used (MRU), tail = least recently used (LRU).
  • On get: move accessed node to head. On put: insert at head, evict tail.prev if over capacity.
✦ Plain-English analogy ✦ Real code with output ✦ Interview questions
Quick Answer
  • Core data structures: HashMap for O(1) key lookup + Doubly Linked List for O(1) order maintenance.
  • DLL head = most recently used (MRU). DLL tail = least recently used (LRU).
  • On get: move accessed node to head. On put: insert at head, evict tail.prev if over capacity.
  • Sentinel head/tail nodes eliminate all null-check edge cases in insert/remove.
  • Production risk: forgetting to update both the map and the DLL during eviction causes memory leaks or stale references.
  • Real-world: CPU caches, Redis, CDN edge nodes, database buffer pools all use LRU eviction.
🚨 START HERE
LRU Cache Triage Cheat Sheet
Fast commands to diagnose common production issues with LRU cache implementations.
🟡 Memory grows unboundedly — eviction appears broken.
Immediate Action: Heap dump to compare DLL node count vs HashMap entry count.
Commands
jcmd <pid> GC.heap_dump /tmp/heap.hprof
jmap -histo <pid> | grep Node (count Node instances)
Fix Now: If DLL node count >> HashMap size, eviction is not removing from DLL. Check the eviction code path for a missing remove(node) call.
🟡 NullPointerException in cache get/put — sporadic crashes.
Immediate Action: Thread dump to confirm stack trace location.
Commands
jcmd <pid> Thread.print
grep -r 'sentinel\|dummy' src/ (verify sentinel initialization)
Fix Now: Ensure sentinel head and tail are initialized in the constructor and never set to null.
🟡 Cache hit rate drops to near zero after deployment.
Immediate Action: Add metrics: cache.hits, cache.misses, cache.evictions. Compare pre/post deployment.
Commands
Check monitoring dashboard for hit_rate metric
jcmd <pid> GC.class_histogram (check for unexpected object growth)
Fix Now: Verify get() calls remove(node) then insertFront(node) in that order. Missing either breaks the promotion-to-MRU logic.
Production Incident: The Session Cache That Leaked 4GB of Memory
A web application's session cache used an LRU cache to store user sessions. After a deployment, the cache grew unboundedly, consuming 4GB of heap and triggering OOM kills every 6 hours. The eviction path was broken — the DLL was never trimmed.
Symptom: Heap usage grows linearly after deployment. OOM kills every 6 hours. Thread dump shows cache put operations succeeding but no evictions. Heap dump shows millions of SessionNode objects unreachable from the HashMap but still referenced by the DLL.
Assumption: The team initially blamed a memory leak in the session serialization layer, because the heap dump showed millions of SessionNode objects. The root cause was misattributed to a third-party library.
Root cause: A recent refactoring changed the eviction logic. The original code removed the LRU node from both the DLL and the HashMap during eviction. The refactored code only removed the node from the HashMap (del self.map[lru.key]) but forgot to call _remove(lru) on the DLL. The DLL grew unboundedly, keeping every evicted node alive through its next/prev pointers. The HashMap was correct, but the DLL leaked memory.
Fix: 1. Restored the _remove(lru) call before del self.map[lru.key] in the eviction path. 2. Added an invariant check: assert len(self.map) == self._dll_size() in debug builds. 3. Added a metric: cache.dll.size vs cache.map.size — alert if they diverge. 4. Added integration tests that verify DLL and HashMap sizes match after 100K put operations with capacity=100.
Key Lesson
Every eviction must update BOTH the HashMap and the DLL — missing either causes a memory leak or stale references. After any refactoring of cache eviction logic, verify that the DLL and HashMap remain in sync by comparing their sizes. Add invariant assertions in debug builds: DLL size must equal HashMap size. Heap dumps showing objects unreachable from the HashMap but alive in the DLL are a strong signal of a broken eviction path.
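The kind of invariant test described in the incident's fix can be sketched in this article's language, Java. The condensed cache and names below are illustrative, not the incident's actual code — the point is the size comparison after a large number of puts:

```java
import java.util.HashMap;
import java.util.Map;

// Condensed int->int LRU cache (same HashMap + sentinel-DLL design as the
// full implementation in this article), plus the invariant check from the
// fix: after any sequence of operations, DLL size must equal map size.
public class LruInvariantCheck {
    static class Node { int key, value; Node prev, next; Node(int k, int v) { key = k; value = v; } }

    final int capacity;
    final Map<Integer, Node> map = new HashMap<>();
    final Node head = new Node(0, 0), tail = new Node(0, 0); // sentinels

    LruInvariantCheck(int capacity) { this.capacity = capacity; head.next = tail; tail.prev = head; }

    void remove(Node n) { n.prev.next = n.next; n.next.prev = n.prev; }
    void insertFront(Node n) { n.next = head.next; n.prev = head; head.next.prev = n; head.next = n; }

    void put(int key, int value) {
        Node n = map.get(key);
        if (n != null) { n.value = value; remove(n); insertFront(n); return; }
        n = new Node(key, value);
        map.put(key, n);
        insertFront(n);
        if (map.size() > capacity) {
            Node lru = tail.prev;
            remove(lru);          // DLL first...
            map.remove(lru.key);  // ...then map — skipping either breaks the invariant
        }
    }

    int dllSize() { int c = 0; for (Node n = head.next; n != tail; n = n.next) c++; return c; }

    public static void main(String[] args) {
        LruInvariantCheck cache = new LruInvariantCheck(100);
        java.util.Random rnd = new java.util.Random(42);
        for (int i = 0; i < 100_000; i++) {
            cache.put(rnd.nextInt(10_000), i);
            if (cache.map.size() != cache.dllSize())
                throw new AssertionError("DLL and map diverged at op " + i);
        }
        System.out.println("100K puts: map=" + cache.map.size() + ", dll=" + cache.dllSize());
    }
}
```

Deleting the `remove(lru)` line reproduces the incident: the map stays at 100 entries while `dllSize()` grows without bound, and the check fires on the first eviction.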
Production Debug Guide
Symptom-driven investigation paths for cache-related incidents.
Memory grows unboundedly despite fixed cache capacity. → Check if eviction removes nodes from BOTH the HashMap and the DLL. A missing DLL removal causes the DLL to grow while the HashMap stays correct.
Cache hit rate drops after a deployment. → Verify that get() moves the accessed node to the head (MRU position). If the move-to-head logic is broken, recently accessed items are evicted prematurely.
NullPointerException during get or put. → Check if sentinel head/tail nodes are properly initialized. Missing sentinel initialization causes null dereference on the first insert or remove.
Duplicate keys in the DLL but not in the HashMap. → Check if put() for an existing key removes the old node from the DLL before inserting the new one. Missing the remove step creates duplicate DLL entries.
Cache returns stale values after put with existing key. → Verify that put() with an existing key updates the node's value AND moves it to the head. Updating the value without moving to head preserves stale ordering.

LRU (Least Recently Used) eviction is the industry-standard cache replacement policy. It works because most workloads exhibit temporal locality — data accessed recently is more likely to be accessed again soon. CPU L1/L2 caches, Redis, database buffer pools, and CDN edge nodes all rely on LRU.

The challenge: both get and put must run in O(1) time. A plain array requires O(n) to find and move elements. A single linked list requires O(n) to find a node by key. The solution pairs a HashMap (O(1) lookup by key) with a doubly linked list (O(1) insertion/deletion at any known node). Neither structure alone suffices.

A common misconception is that LinkedHashMap alone implements LRU. It can, but only with accessOrder=true and careful override of removeEldestEntry. A from-scratch implementation demonstrates understanding of the pointer manipulation that LinkedHashMap hides internally.
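For reference, here is a minimal sketch of that LinkedHashMap approach (the class name LruMap is illustrative): passing accessOrder=true makes iteration follow access recency, and overriding removeEldestEntry triggers eviction on put.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU via LinkedHashMap: accessOrder=true reorders entries on get,
// and removeEldestEntry evicts once size exceeds capacity.
class LruMap<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruMap(int capacity) {
        super(16, 0.75f, true); // true = access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the LRU entry automatically on put
    }
}

public class LruMapDemo {
    public static void main(String[] args) {
        LruMap<Integer, String> cache = new LruMap<>(2);
        cache.put(1, "A");
        cache.put(2, "B");
        cache.get(1);      // promotes key 1 to MRU
        cache.put(3, "C"); // evicts key 2 (the LRU entry)
        System.out.println(cache.containsKey(2)); // false
        System.out.println(cache.get(1));         // A
    }
}
```

This hides all pointer manipulation inside LinkedHashMap, which is exactly why a from-scratch version is the better interview exercise.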

What is LRU Cache? — Plain English

An LRU (Least Recently Used) Cache evicts the least recently accessed item when the cache is full and a new item must be inserted. It is a fixed-capacity key-value store where both get and put run in O(1).

Real-world use: CPU caches, DNS resolver caches, web browser caches, database buffer pools. When storage is limited, you want to keep the items most likely to be accessed again — which are usually the recently used ones.

Implementation: combine a hash map (for O(1) key lookup) with a doubly linked list (for O(1) order maintenance). The DLL head = most recently used, DLL tail = least recently used.

Mental Model
The Desk Analogy
Why LRU matches real-world access patterns
  • Your desk holds 5 folders. You work on folder A, then B, then C.
  • When folder D arrives and the desk is full, you evict the oldest untouched folder.
  • The folders you touched recently are the ones you will likely need again.
  • This temporal locality is why LRU outperforms random eviction in most workloads.
📊 Production Insight
LRU is not always the best eviction policy. For workloads with scan patterns (e.g., full-table scans in databases), LRU thrashes — the scan evicts all hot data. In these cases, variants like LRU-K (tracks last K accesses) or ARC (Adaptive Replacement Cache) perform better. Know your access pattern before choosing LRU.
🎯 Key Takeaway
LRU evicts the least recently accessed item. It works because most workloads exhibit temporal locality. The O(1) requirement forces a dual-structure design: HashMap + Doubly Linked List.

How LRU Cache Works — Step by Step

Data structures
  • hash map: key -> node in DLL
  • DLL: maintains access order (head=MRU, tail=LRU)
  • Use sentinel head and tail nodes to simplify edge cases

get(key) O(1): 1. If key not in map: return -1. 2. Move the node to the front of the DLL (most recently used). 3. Return the node's value.

put(key, value) O(1): 1. If key already in map: update value, move node to front. 2. Else: create new node, insert at front, add to map. 3. If cache exceeds capacity: remove the LRU node (tail.prev), delete from map.

🔥Why Two Structures?
  • HashMap alone: O(1) lookup but no ordering — cannot find the LRU item in O(1).
  • DLL alone: O(1) insert/remove at known positions but O(n) to find a node by key.
  • Combined: HashMap finds the node in O(1). DLL maintains eviction order in O(1).
  • The HashMap value is a pointer to the DLL node — this is the bridge between the two structures.
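That bridge can be sketched in a few lines (class and variable names here are illustrative; the full implementation appears later in this article): the map's value is the DLL node itself, so a key lookup lands directly on the node to unlink and relink.

```java
import java.util.HashMap;
import java.util.Map;

// The bridge between the two structures: the HashMap's VALUE is the DLL
// node, so a key lookup yields the node whose prev/next pointers allow
// O(1) unlinking — no list traversal is ever needed.
public class Bridge {
    static class Node {
        int key, value;
        Node prev, next; // DLL links give O(1) unlink at any known node
        Node(int k, int v) { key = k; value = v; }
    }

    public static void main(String[] args) {
        Map<Integer, Node> map = new HashMap<>();
        Node n = new Node(1, 100);
        map.put(1, n);                        // O(1): key -> node
        System.out.println(map.get(1).value); // 100, without touching the list
    }
}
```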
📊 Production Insight
The eviction path is the most failure-prone operation. It must atomically update both the HashMap and the DLL. If the HashMap removal succeeds but the DLL removal fails (or is skipped), the evicted node remains reachable through the DLL, causing a memory leak. Always remove from the DLL first, then from the HashMap.
🎯 Key Takeaway
The two-structure design is non-negotiable for O(1) operations. The HashMap bridges key lookup to DLL node pointers. The DLL maintains eviction order. The eviction path must update both structures atomically.
LRU Operation Decision Flow
If get(key): key exists in HashMap → Remove node from DLL, insert at head (MRU). Return value.
If get(key): key not in HashMap → Return -1 (cache miss). No state change.
If put(key, value): key exists → Remove old node from DLL. Update value. Insert at head. No eviction.
If put(key, value): key is new, cache not full → Create node, insert at head, add to HashMap. No eviction.
If put(key, value): key is new, cache is full → Evict tail.prev (LRU): remove from DLL and HashMap. Then insert new node at head.

Worked Example — Tracing the Algorithm

LRU Cache capacity=3. Operations: put(1,A), put(2,B), put(3,C), get(1), put(4,D), get(2).

put(1,A): cache=[1:A]. DLL: head<->1<->tail.
put(2,B): cache=[1:A, 2:B]. DLL: head<->2<->1<->tail. (2 is MRU)
put(3,C): cache=[1:A, 2:B, 3:C]. DLL: head<->3<->2<->1<->tail.
get(1): 1 exists. Move 1 to front. DLL: head<->1<->3<->2<->tail. Return A.
put(4,D): cache full (size=3). Evict LRU=tail.prev=2: remove 2 from map and DLL. Insert 4 at front. cache=[1:A, 3:C, 4:D]. DLL: head<->4<->1<->3<->tail.
get(2): 2 not in cache. Return -1. (Was evicted.)

Key insight: get(1) promoted key 1 to MRU, saving it from eviction when key 4 was inserted.
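The same trace can be replayed with a stock java.util.LinkedHashMap in access order — a shortcut for verifying the expected eviction order, not the from-scratch implementation. With accessOrder=true, iteration order runs LRU to MRU, so the first key is always the eviction candidate:

```java
import java.util.LinkedHashMap;

// Replay the worked example: capacity 3, operations
// put(1,A), put(2,B), put(3,C), get(1), put(4,D), get(2).
public class LruTraceDemo {
    public static void main(String[] args) {
        int capacity = 3;
        LinkedHashMap<Integer, String> cache = new LinkedHashMap<>(16, 0.75f, true);

        cache.put(1, "A");
        cache.put(2, "B");
        cache.put(3, "C");
        System.out.println(cache.keySet()); // [1, 2, 3] (LRU -> MRU)

        cache.get(1);                       // promotes 1 to MRU
        System.out.println(cache.keySet()); // [2, 3, 1]

        cache.put(4, "D");
        if (cache.size() > capacity) {      // evict LRU = first key in iteration order
            cache.remove(cache.keySet().iterator().next());
        }
        System.out.println(cache.keySet()); // [3, 1, 4] — key 2 was evicted
        System.out.println(cache.get(2));   // null (a from-scratch cache returns -1)
    }
}
```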

💡Pro Tip: Trace With Concrete Values
When debugging LRU cache issues, draw the DLL state after each operation. Label each node with its key and mark the head (MRU) and tail.prev (LRU). This visual trace immediately reveals ordering bugs that are invisible in code.
📊 Production Insight
The worked example demonstrates the promotion effect: get(1) moved key 1 to MRU, which saved it from eviction. Without this promotion, key 1 would have been evicted instead of key 2. This is the core behavior that distinguishes LRU from FIFO — access recency, not insertion order, determines eviction.
🎯 Key Takeaway
Tracing with concrete values reveals the promotion mechanic that distinguishes LRU from FIFO. The get operation is not just a lookup — it is a state mutation that changes eviction priority.

Implementation

Sentinel head and tail nodes eliminate all null-pointer edge cases. remove unlinks a node by connecting its neighbours directly. insertFront places a node right after the sentinel head (MRU position). get moves the accessed node to the front. put inserts at the front and evicts tail.prev when over capacity. The hash map (map) gives O(1) node lookup; the DLL gives O(1) order updates.

LRUCache.java · JAVA
package io.thecodeforge.cache;

import java.util.HashMap;
import java.util.Map;

/**
 * LRU Cache using HashMap + Doubly Linked List.
 * O(1) get and put. Sentinel nodes eliminate null-check edge cases.
 */
public class LRUCache {

    // --- Internal node for the doubly linked list ---
    static class Node {
        int key;
        int value;
        Node prev;
        Node next;

        Node(int key, int value) {
            this.key = key;
            this.value = value;
        }
    }

    private final int capacity;
    private final Map<Integer, Node> map;
    private final Node head; // sentinel — head.next is MRU
    private final Node tail; // sentinel — tail.prev is LRU

    public LRUCache(int capacity) {
        this.capacity = capacity;
        this.map = new HashMap<>();

        // Sentinel nodes: never null, always present
        this.head = new Node(0, 0);
        this.tail = new Node(0, 0);
        head.next = tail;
        tail.prev = head;
    }

    // --- Remove a node from the DLL (O(1)) ---
    private void remove(Node node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    // --- Insert a node at the front (MRU position) (O(1)) ---
    private void insertFront(Node node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }

    /**
     * Get value by key. Moves accessed node to MRU position.
     * Returns -1 if key not found.
     */
    public int get(int key) {
        if (!map.containsKey(key)) {
            return -1;
        }
        Node node = map.get(key);
        remove(node);          // unlink from current position
        insertFront(node);     // promote to MRU
        return node.value;
    }

    /**
     * Insert or update key-value pair.
     * If cache is full, evicts the LRU item first.
     */
    public void put(int key, int value) {
        if (map.containsKey(key)) {
            // Key exists: update value and promote to MRU
            Node existing = map.get(key);
            existing.value = value;
            remove(existing);
            insertFront(existing);
        } else {
            // New key: create node and insert at MRU
            Node newNode = new Node(key, value);
            map.put(key, newNode);
            insertFront(newNode);

            // Evict if over capacity
            if (map.size() > capacity) {
                Node lru = tail.prev; // least recently used
                remove(lru);          // remove from DLL FIRST
                map.remove(lru.key);  // then remove from map
            }
        }
    }

    // --- Debug: verify DLL and map are in sync ---
    public int dllSize() {
        int count = 0;
        Node current = head.next;
        while (current != tail) {
            count++;
            current = current.next;
        }
        return count;
    }

    public boolean isConsistent() {
        return map.size() == dllSize();
    }

    public static void main(String[] args) {
        LRUCache cache = new LRUCache(3);

        cache.put(1, 100);
        cache.put(2, 200);
        cache.put(3, 300);
        System.out.println("After 3 puts: " + cache.map.keySet());
        System.out.println("DLL size: " + cache.dllSize() + ", Map size: " + cache.map.size());
        System.out.println("Consistent: " + cache.isConsistent());

        System.out.println("get(1): " + cache.get(1)); // promotes 1 to MRU

        cache.put(4, 400); // evicts 2 (LRU)
        System.out.println("After put(4): " + cache.map.keySet());
        System.out.println("get(2): " + cache.get(2)); // -1 (evicted)
        System.out.println("get(1): " + cache.get(1)); // 100 (was promoted)
        System.out.println("Consistent: " + cache.isConsistent());

        // Edge case: capacity 1
        LRUCache tiny = new LRUCache(1);
        tiny.put(1, 10);
        tiny.put(2, 20); // evicts 1
        System.out.println("\nCapacity 1 cache:");
        System.out.println("get(1): " + tiny.get(1)); // -1
        System.out.println("get(2): " + tiny.get(2)); // 20
        System.out.println("Consistent: " + tiny.isConsistent());
    }
}
▶ Output
After 3 puts: [1, 2, 3]
DLL size: 3, Map size: 3
Consistent: true
get(1): 100
After put(4): [1, 3, 4]
get(2): -1
get(1): 100
Consistent: true

Capacity 1 cache:
get(1): -1
get(2): 20
Consistent: true
⚠ Watch Out: Eviction Order Matters
During eviction, remove the node from the DLL BEFORE removing it from the HashMap. If you remove from the HashMap first and an exception occurs during DLL removal, the DLL has a dangling node that is unreachable from the HashMap but still connected in the list — a silent memory leak. DLL-first removal is the safer order.
📊 Production Insight
The isConsistent() method — which verifies DLL size equals HashMap size — should be enabled in debug builds and integration tests. It catches the most common LRU bug: DLL and HashMap falling out of sync due to a missing removal in one of the two structures.
🎯 Key Takeaway
Sentinel nodes eliminate null checks. The eviction order (DLL first, then HashMap) prevents silent memory leaks. The isConsistent() invariant check catches sync bugs between the two structures.
Eviction Path: What Must Happen
If cache is at capacity and a new key is inserted → Identify LRU node (tail.prev). Remove from DLL. Remove from HashMap. Then insert new node at head.
If an existing key is updated via put → No eviction. Update value in place. Remove node from DLL. Insert at head (MRU).
If capacity is 1 → Every put of a new key evicts the current single entry. head.next == tail.prev always.
If the same key is put twice consecutively → Second put updates the value and promotes to MRU. No eviction. DLL and map sizes unchanged.
🗂 Cache Eviction Policies Compared
Trade-offs for different access patterns and workloads.
| Policy | Eviction Rule | Best For | Weakness |
| LRU (Least Recently Used) | Evict the item not accessed for the longest time | Temporal locality workloads (web caches, session stores) | Thrashes on scan patterns — a full scan evicts all hot data |
| LFU (Least Frequently Used) | Evict the item accessed the fewest times | Frequency-based workloads (DNS caching, popular content) | Slow to adapt — a previously popular item may linger long after it stops being accessed |
| FIFO (First In First Out) | Evict the oldest inserted item regardless of access | Simple bounded queues, streaming buffers | Ignores access patterns — a frequently accessed item is evicted if it was inserted early |
| ARC (Adaptive Replacement Cache) | Dynamically balances LRU and LFU based on workload | Mixed workloads with unpredictable access patterns | More complex to implement — requires two LRU lists and two ghost lists |
| LRU-K | Evict based on the K-th most recent access time | Database buffer pools (K=2 is common) | Higher per-item metadata overhead than plain LRU |

🎯 Key Takeaways

  • LRU Cache provides O(1) get and put by combining a hash map (lookup) with a DLL (order maintenance).
  • DLL head = most recently used (MRU), tail = least recently used (LRU).
  • On get: move accessed node to head. On put: insert at head, evict tail.prev if over capacity.
  • Sentinel head/tail nodes eliminate null checks in insert and remove operations.
  • Python's OrderedDict or Java's LinkedHashMap can also implement LRU in fewer lines.

⚠ Common Mistakes to Avoid

    Forgetting to remove from the DLL during eviction
    Symptom: HashMap stays at capacity but the DLL grows unboundedly, causing a memory leak.
    Fix: Always call remove(lru) on the DLL before map.remove(lru.key). Both structures must be updated atomically.

    Not using sentinel nodes
    Symptom: NullPointerException when the cache has 0 or 1 elements, because head.prev or tail.next is null.
    Fix: Initialize sentinel head and tail nodes in the constructor. They are never removed, never null.

    Forgetting to move the node to the front on get
    Symptom: get() returns the correct value but does not update recency, so frequently accessed items are evicted as if they were cold.
    Fix: Always call remove(node) then insertFront(node) inside get().

    Not handling put() with an existing key correctly
    Symptom: Duplicate entries in the DLL for the same key, causing a size mismatch between map and DLL.
    Fix: If the key exists, remove the old node from the DLL, update the value, then insert at front. Do not create a new node.

    Evicting before inserting on put
    Symptom: If the new key already exists in the cache, you evict an unrelated item unnecessarily.
    Fix: Check if the key exists first. Only evict when inserting a NEW key that exceeds capacity.

    No invariant check between DLL and HashMap
    Symptom: Silent divergence between the two structures that only manifests as a memory leak weeks later.
    Fix: Add an isConsistent() method that verifies DLL size equals HashMap size. Enable it in debug builds and integration tests.

Interview Questions on This Topic

  • Q: What two data structures power an O(1) LRU cache?
  • Q: Why are sentinel head and tail nodes useful in the LRU cache DLL?
  • Q: How does the LRU cache handle a put operation when the cache is at capacity?
  • Q: Walk me through what happens when you call get() on a key that exists — what exact pointer operations occur?
  • Q: How would you implement a thread-safe LRU cache? What locking strategy would you use?
  • Q: How does LRU differ from LFU, and when would you choose one over the other?
  • Q: What happens if the cache capacity is 1? Trace through a sequence of put(1,A), put(2,B), get(1).
  • Q: How would you extend this to support TTL (time-to-live) expiration in addition to LRU eviction?

Frequently Asked Questions

Why use sentinel head and tail nodes?

Sentinel nodes eliminate null checks in insert and remove operations. Without them, every insert/remove must check 'is this node the head?' and 'is this node the tail?' With sentinels, head.next is always the MRU node and tail.prev is always the LRU node — the code is cleaner and less error-prone.

What is the difference between LRU, LFU, and FIFO caches?

LRU (Least Recently Used) evicts the item not accessed for the longest time. LFU (Least Frequently Used) evicts the item accessed the fewest times — useful when access frequency is a better predictor than recency. FIFO (First In First Out) evicts the oldest inserted item regardless of access pattern. LRU is the most common in practice because it adapts well to temporal locality.

Can Python's OrderedDict implement an LRU cache?

Yes. Python's collections.OrderedDict maintains insertion order and supports move_to_end(key) in O(1). Use it as: get moves key to end (MRU), put moves to end and pops the first item (LRU) if over capacity. Python's functools.lru_cache decorator implements this pattern exactly.

How is Java's LinkedHashMap related to LRU cache?

Java's LinkedHashMap can implement LRU by setting accessOrder=true in the constructor and overriding removeEldestEntry(). However, this hides the pointer manipulation internally. Implementing from scratch with HashMap + custom DLL demonstrates understanding of the underlying mechanics — which is what interviewers test.

How do you make an LRU cache thread-safe?

Wrap all get and put operations in a synchronized block, or guard them with a single ReentrantLock. A ReadWriteLock is less useful than it looks here: in a strict LRU, get mutates the recency order (it moves the node to the head), so it needs exclusive access too, unless recency updates are buffered and applied in batches. For high contention, consider a sharded LRU cache where keys are partitioned across multiple independent LRU caches, each with its own lock.
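A minimal sketch of the coarse-grained approach, using a LinkedHashMap-backed cache for brevity (class and field names are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Coarse-grained locking: one lock guards the whole cache. Even get()
// mutates recency order in an access-ordered map, so both operations
// take the same exclusive lock.
public class ConcurrentLruCache<K, V> {
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final LinkedHashMap<K, V> map;

    public ConcurrentLruCache(int capacity) {
        this.capacity = capacity;
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) { // access order
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > ConcurrentLruCache.this.capacity;
            }
        };
    }

    public V get(K key) {
        lock.lock();
        try {
            return map.get(key); // reorders internally -> must hold the lock
        } finally {
            lock.unlock();
        }
    }

    public void put(K key, V value) {
        lock.lock();
        try {
            map.put(key, value); // may evict the LRU entry
        } finally {
            lock.unlock();
        }
    }
}
```

The try/finally pattern guarantees the lock is released even if a map operation throws, which is what keeps the two structures from diverging under concurrent failures.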

What happens when cache capacity is 1?

Every put of a new key evicts the current entry. The DLL always has exactly one node between head and tail. head.next == tail.prev. This is a valid edge case that must be tested explicitly.

Naren · Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.
