LRU Cache Implementation Using a Doubly Linked List and HashMap
- LRU Cache provides O(1) get and put by combining a hash map (lookup) with a DLL (order maintenance).
- DLL head = most recently used (MRU), tail = least recently used (LRU).
- On get: move accessed node to head. On put: insert at head, evict tail.prev if over capacity.
- Sentinel head/tail nodes eliminate all null-check edge cases in insert/remove.
- Production risk: forgetting to update both the map and the DLL during eviction causes memory leaks or stale references.
- Real-world: CPU caches, Redis, CDN edge nodes, database buffer pools all use LRU eviction.
Production Debug Guide
Symptom-driven investigation paths for cache-related incidents.

Symptom: Memory grows unboundedly — eviction appears broken.
- jcmd <pid> GC.heap_dump /tmp/heap.hprof
- jmap -histo <pid> | grep Node (count Node instances)

Symptom: NullPointerException in cache get/put — sporadic crashes.
- jcmd <pid> Thread.print
- grep -r 'sentinel\|dummy' src/ (verify sentinel initialization)

Symptom: Cache hit rate drops to near zero after deployment.
- Check monitoring dashboard for hit_rate metric
- jcmd <pid> GC.class_histogram (check for unexpected object growth)

Production Incident
_dll_size() in debug builds.
3. Added a metric: cache.dll.size vs cache.map.size — alert if they diverge.
4. Added integration tests that verify DLL and HashMap sizes match after 100K put operations with capacity=100.
Common implementation mistakes:
- get() moves the accessed node to the head (MRU position). If the move-to-head logic is broken, recently accessed items are evicted prematurely.
- put() for an existing key removes the old node from the DLL before inserting the new one. Missing the remove step creates duplicate DLL entries.
- put() with an existing key updates the node's value AND moves it to the head. Updating the value without moving to head preserves stale ordering.

LRU (Least Recently Used) eviction is the industry-standard cache replacement policy. It works because most workloads exhibit temporal locality — data accessed recently is more likely to be accessed again soon. CPU L1/L2 caches, Redis, database buffer pools, and CDN edge nodes all rely on LRU.
The challenge: both get and put must run in O(1) time. A plain array requires O(n) to find and move elements. A single linked list requires O(n) to find a node by key. The solution pairs a HashMap (O(1) lookup by key) with a doubly linked list (O(1) insertion/deletion at any known node). Neither structure alone suffices.
A common misconception is that LinkedHashMap alone implements LRU. It can, but only with accessOrder=true and careful override of removeEldestEntry. A from-scratch implementation demonstrates understanding of the pointer manipulation that LinkedHashMap hides internally.
What is LRU Cache? — Plain English
An LRU (Least Recently Used) Cache evicts the least recently accessed item when the cache is full and a new item must be inserted. It is a fixed-capacity key-value store where both get and put run in O(1).
Real-world use: CPU caches, DNS resolver caches, web browser caches, database buffer pools. When storage is limited, you want to keep the items most likely to be accessed again — which are usually the recently used ones.
Implementation: combine a hash map (for O(1) key lookup) with a doubly linked list (for O(1) order maintenance). The DLL head = most recently used, DLL tail = least recently used.
- Your desk holds 5 folders. You work on folder A, then B, then C.
- When folder D arrives and the desk is full, you evict the oldest untouched folder.
- The folders you touched recently are the ones you will likely need again.
- This temporal locality is why LRU outperforms random eviction in most workloads.
How LRU Cache Works — Step by Step
- hash map: key -> node in DLL
- DLL: maintains access order (head=MRU, tail=LRU)
- Use sentinel head and tail nodes to simplify edge cases
get(key) O(1):
1. If key not in map: return -1.
2. Move the node to the front of the DLL (most recently used).
3. Return the node's value.

put(key, value) O(1):
1. If key already in map: update value, move node to front.
2. Else: create new node, insert at front, add to map.
3. If cache exceeds capacity: remove the LRU node (tail.prev), delete from map.
- HashMap alone: O(1) lookup but no ordering — cannot find the LRU item in O(1).
- DLL alone: O(1) insert/remove at known positions but O(n) to find a node by key.
- Combined: HashMap finds the node in O(1). DLL maintains eviction order in O(1).
- The HashMap value is a pointer to the DLL node — this is the bridge between the two structures.
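To make the bridge concrete, here is a minimal, illustrative sketch (the names Node, map, head, and tail mirror the full implementation later in the article; this fragment only shows the map-to-node link, not the whole cache):

```java
import java.util.HashMap;
import java.util.Map;

public class Sketch {
    static class Node {
        int key, value;
        Node prev, next;
        Node(int k, int v) { key = k; value = v; }
    }

    public static void main(String[] args) {
        Map<Integer, Node> map = new HashMap<>(); // key -> DLL node (the bridge)
        Node head = new Node(0, 0), tail = new Node(0, 0); // sentinels
        head.next = tail;
        tail.prev = head;

        // Insert key 1 at the MRU position and record the node in the map.
        Node n = new Node(1, 100);
        n.next = head.next;
        n.prev = head;
        head.next.prev = n;
        head.next = n;
        map.put(1, n);

        // O(1): the map hands back the exact DLL node -- no list traversal.
        System.out.println(map.get(1) == head.next); // true
    }
}
```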
Worked Example — Tracing the Algorithm
LRU Cache capacity=3. Operations: put(1,A), put(2,B), put(3,C), get(1), put(4,D), get(2).
put(1,A): cache=[1:A]. DLL: head<->1<->tail.
put(2,B): cache=[1:A, 2:B]. DLL: head<->2<->1<->tail. (2 is MRU)
put(3,C): cache=[1:A, 2:B, 3:C]. DLL: head<->3<->2<->1<->tail.
get(1): 1 exists. Move 1 to front. DLL: head<->1<->3<->2<->tail. Return A.
put(4,D): cache full (size=3). Evict LRU = tail.prev = 2: remove 2 from map and DLL. Insert 4 at front. cache=[1:A, 3:C, 4:D]. DLL: head<->4<->1<->3<->tail.
get(2): 2 not in cache. Return -1. (Was evicted.)
Key insight: get(1) promoted key 1 to MRU, saving it from eviction when key 4 was inserted.
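As a quick sanity check (not the article's from-scratch class), the trace above can be replayed against Java's built-in LinkedHashMap in access-order mode, which implements the same LRU policy:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TraceCheck {
    public static void main(String[] args) {
        // accessOrder=true: iteration order runs LRU -> MRU
        Map<Integer, String> cache = new LinkedHashMap<Integer, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
                return size() > 3; // capacity = 3
            }
        };
        cache.put(1, "A");
        cache.put(2, "B");
        cache.put(3, "C");
        System.out.println(cache.get(1));   // A (promotes 1 to MRU)
        cache.put(4, "D");                  // evicts 2, the LRU
        System.out.println(cache.get(2));   // null (evicted)
        System.out.println(cache.keySet()); // [3, 1, 4], LRU -> MRU
    }
}
```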
Implementation
Sentinel head and tail nodes eliminate all null-pointer edge cases. remove unlinks a node by connecting its neighbours directly. insertFront places a node right after the sentinel head (MRU position). get moves the accessed node to the front. put inserts at the front and evicts tail.prev when over capacity. The hash map (map) gives O(1) node lookup; the DLL gives O(1) order updates.
```java
package io.thecodeforge.cache;

import java.util.HashMap;
import java.util.Map;

/**
 * LRU Cache using HashMap + Doubly Linked List.
 * O(1) get and put. Sentinel nodes eliminate null-check edge cases.
 */
public class LRUCache {

    // --- Internal node for the doubly linked list ---
    static class Node {
        int key;
        int value;
        Node prev;
        Node next;

        Node(int key, int value) {
            this.key = key;
            this.value = value;
        }
    }

    private final int capacity;
    private final Map<Integer, Node> map;
    private final Node head; // sentinel — head.next is MRU
    private final Node tail; // sentinel — tail.prev is LRU

    public LRUCache(int capacity) {
        this.capacity = capacity;
        this.map = new HashMap<>();
        // Sentinel nodes: never null, always present
        this.head = new Node(0, 0);
        this.tail = new Node(0, 0);
        head.next = tail;
        tail.prev = head;
    }

    // --- Remove a node from the DLL (O(1)) ---
    private void remove(Node node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    // --- Insert a node at the front (MRU position) (O(1)) ---
    private void insertFront(Node node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }

    /**
     * Get value by key. Moves accessed node to MRU position.
     * Returns -1 if key not found.
     */
    public int get(int key) {
        if (!map.containsKey(key)) {
            return -1;
        }
        Node node = map.get(key);
        remove(node);      // unlink from current position
        insertFront(node); // promote to MRU
        return node.value;
    }

    /**
     * Insert or update key-value pair.
     * If cache is full, evicts the LRU item first.
     */
    public void put(int key, int value) {
        if (map.containsKey(key)) {
            // Key exists: update value and promote to MRU
            Node existing = map.get(key);
            existing.value = value;
            remove(existing);
            insertFront(existing);
        } else {
            // New key: create node and insert at MRU
            Node newNode = new Node(key, value);
            map.put(key, newNode);
            insertFront(newNode);
            // Evict if over capacity
            if (map.size() > capacity) {
                Node lru = tail.prev; // least recently used
                remove(lru);          // remove from DLL FIRST
                map.remove(lru.key);  // then remove from map
            }
        }
    }

    // --- Debug: verify DLL and map are in sync ---
    public int dllSize() {
        int count = 0;
        Node current = head.next;
        while (current != tail) {
            count++;
            current = current.next;
        }
        return count;
    }

    public boolean isConsistent() {
        return map.size() == dllSize();
    }

    public static void main(String[] args) {
        LRUCache cache = new LRUCache(3);
        cache.put(1, 100);
        cache.put(2, 200);
        cache.put(3, 300);
        System.out.println("After 3 puts: " + cache.map.keySet());
        System.out.println("DLL size: " + cache.dllSize() + ", Map size: " + cache.map.size());
        System.out.println("Consistent: " + cache.isConsistent());

        System.out.println("get(1): " + cache.get(1)); // promotes 1 to MRU
        cache.put(4, 400);                             // evicts 2 (LRU)
        System.out.println("After put(4): " + cache.map.keySet());
        System.out.println("get(2): " + cache.get(2)); // -1 (evicted)
        System.out.println("get(1): " + cache.get(1)); // 100 (was promoted)
        System.out.println("Consistent: " + cache.isConsistent());

        // Edge case: capacity 1
        LRUCache tiny = new LRUCache(1);
        tiny.put(1, 10);
        tiny.put(2, 20); // evicts 1
        System.out.println("\nCapacity 1 cache:");
        System.out.println("get(1): " + tiny.get(1)); // -1
        System.out.println("get(2): " + tiny.get(2)); // 20
        System.out.println("Consistent: " + tiny.isConsistent());
    }
}
```
Output:
After 3 puts: [1, 2, 3]
DLL size: 3, Map size: 3
Consistent: true
get(1): 100
After put(4): [1, 3, 4]
get(2): -1
get(1): 100
Consistent: true
Capacity 1 cache:
get(1): -1
get(2): 20
Consistent: true
| Policy | Eviction Rule | Best For | Weakness |
|---|---|---|---|
| LRU (Least Recently Used) | Evict the item not accessed for the longest time | Temporal locality workloads (web caches, session stores) | Thrashes on scan patterns — a full scan evicts all hot data |
| LFU (Least Frequently Used) | Evict the item accessed the fewest times | Frequency-based workloads (DNS caching, popular content) | Slow to adapt — a previously popular item may linger after its popularity fades |
| FIFO (First In First Out) | Evict the oldest inserted item regardless of access | Simple bounded queues, streaming buffers | Ignores access patterns — a frequently accessed item is evicted if it was inserted early |
| ARC (Adaptive Replacement Cache) | Dynamically balances LRU and LFU based on workload | Mixed workloads with unpredictable access patterns | More complex to implement — requires two LRU lists and two ghost lists |
| LRU-K | Evict based on the K-th most recent access time | Database buffer pools (K=2 is common) | Higher per-item metadata overhead than plain LRU |
🎯 Key Takeaways
- LRU Cache provides O(1) get and put by combining a hash map (lookup) with a DLL (order maintenance).
- DLL head = most recently used (MRU), tail = least recently used (LRU).
- On get: move accessed node to head. On put: insert at head, evict tail.prev if over capacity.
- Sentinel head/tail nodes eliminate null checks in insert and remove operations.
- Python's OrderedDict or Java's LinkedHashMap can also implement LRU in fewer lines.
⚠ Common Mistakes to Avoid
Interview Questions on This Topic
- Q: What two data structures power an O(1) LRU cache?
- Q: Why are sentinel head and tail nodes useful in the LRU cache DLL?
- Q: How does the LRU cache handle a put operation when the cache is at capacity?
- Q: Walk me through what happens when you call get() on a key that exists — what exact pointer operations occur?
- Q: How would you implement a thread-safe LRU cache? What locking strategy would you use?
- Q: How does LRU differ from LFU, and when would you choose one over the other?
- Q: What happens if the cache capacity is 1? Trace through a sequence of put(1,A), put(2,B), get(1).
- Q: How would you extend this to support TTL (time-to-live) expiration in addition to LRU eviction?
Frequently Asked Questions
Why use sentinel head and tail nodes?
Sentinel nodes eliminate null checks in insert and remove operations. Without them, every insert/remove must check 'is this node the head?' and 'is this node the tail?' With sentinels, head.next is always the MRU node and tail.prev is always the LRU node — the code is cleaner and less error-prone.
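A minimal sketch of the branch-free unlink (illustrative only, not the article's full class):

```java
// With sentinel head/tail, every real node always has non-null prev and
// next, so unlinking needs no "is this the head/tail?" branches.
public class SentinelDemo {
    static class Node {
        int key;
        Node prev, next;
        Node(int key) { this.key = key; }
    }

    public static void main(String[] args) {
        Node head = new Node(-1), tail = new Node(-1); // sentinels
        head.next = tail;
        tail.prev = head;

        // Insert a single real node between the sentinels.
        Node a = new Node(1);
        a.next = tail;
        a.prev = head;
        head.next = a;
        tail.prev = a;

        // Remove it: two pointer writes, no null checks.
        a.prev.next = a.next;
        a.next.prev = a.prev;

        System.out.println(head.next == tail); // true: list is empty again
    }
}
```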
What is the difference between LRU, LFU, and FIFO caches?
LRU (Least Recently Used) evicts the item not accessed for the longest time. LFU (Least Frequently Used) evicts the item accessed the fewest times — useful when access frequency is a better predictor than recency. FIFO (First In First Out) evicts the oldest inserted item regardless of access pattern. LRU is the most common in practice because it adapts well to temporal locality.
Can Python's OrderedDict implement an LRU cache?
Yes. Python's collections.OrderedDict maintains insertion order and supports move_to_end(key) in O(1). Use it as: get moves key to end (MRU), put moves to end and pops the first item (LRU) if over capacity. Python's functools.lru_cache decorator implements this pattern exactly.
How is Java's LinkedHashMap related to LRU cache?
Java's LinkedHashMap can implement LRU by setting accessOrder=true in the constructor and overriding removeEldestEntry(). However, this hides the pointer manipulation internally. Implementing from scratch with HashMap + custom DLL demonstrates understanding of the underlying mechanics — which is what interviewers test.
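A minimal version of that LinkedHashMap approach might look like this (class name and capacity handling are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache in a few lines via LinkedHashMap's access-order mode.
public class LinkedHashMapLRU<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LinkedHashMapLRU(int capacity) {
        super(16, 0.75f, true); // accessOrder=true: get() reorders entries
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the LRU entry when over capacity
    }

    public static void main(String[] args) {
        LinkedHashMapLRU<Integer, String> cache = new LinkedHashMapLRU<>(2);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);                       // promotes 1 to MRU
        cache.put(3, "c");                  // evicts 2
        System.out.println(cache.keySet()); // [1, 3]
    }
}
```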
How do you make an LRU cache thread-safe?
Wrap all get and put operations in a synchronized block or a single lock. Note that in an LRU cache even get mutates state (it reorders the recency list), so a plain ReentrantReadWriteLock where get takes only the read lock is unsafe unless recency updates are deferred or handled separately. For high contention, consider a sharded LRU cache where keys are partitioned across multiple independent LRU caches, each with its own lock.
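A sketch of the simplest strategy, coarse-grained locking via Collections.synchronizedMap (sharding would layer on top of this; the class name is illustrative):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Coarse-grained thread safety: every operation takes one mutex.
public class SynchronizedLRU {
    private final Map<Integer, Integer> cache;

    public SynchronizedLRU(int capacity) {
        // synchronizedMap serializes all calls, including get(), which is
        // required because accessOrder=true means get() mutates the map.
        this.cache = Collections.synchronizedMap(
            new LinkedHashMap<Integer, Integer>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                    return size() > capacity;
                }
            });
    }

    public Integer get(int key) { return cache.get(key); }

    public void put(int key, int value) { cache.put(key, value); }

    public static void main(String[] args) {
        SynchronizedLRU c = new SynchronizedLRU(2);
        c.put(1, 10);
        c.put(2, 20);
        c.get(1);                  // promote 1
        c.put(3, 30);              // evicts 2
        System.out.println(c.get(2)); // null (evicted)
        System.out.println(c.get(1)); // 10
    }
}
```

Iteration over such a map still requires manual synchronization; this sketch only guards individual get/put calls.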
What happens when cache capacity is 1?
Every put of a new key evicts the current entry. The DLL always has exactly one node between head and tail. head.next == tail.prev. This is a valid edge case that must be tested explicitly.