
LinkedHashMap and LinkedHashSet in Java — Order, Performance and Real-World Patterns

In Plain English 🔥
Imagine you're at a buffet and you grab food in a specific order — soup, salad, pizza, dessert. A regular HashMap is like tipping all that food into a bag: it's all there, but the order is gone. A LinkedHashMap is like keeping your food on a tray in the exact order you picked it up. LinkedHashSet does the same trick but only keeps unique items — like a VIP guest list where duplicates get turned away at the door and everyone stays in the order they arrived.
⚡ Quick Answer
LinkedHashMap is a HashMap plus a doubly-linked list threaded through its entries, so iteration follows insertion order (or access order, if enabled) while get() and put() stay O(1). LinkedHashSet does the same for unique elements. The cost is roughly 16 extra bytes per entry for the two list pointers.

Every Java developer reaches for HashMap without thinking twice. It's fast, it's familiar, and it works. But there's a class of bugs that are invisible until production — bugs caused by assuming that order doesn't matter when, in fact, it absolutely does. User activity logs, LRU caches, configuration maps, audit trails — these all live and die by the order their data was inserted. LinkedHashMap and LinkedHashSet exist precisely to handle those situations without sacrificing the O(1) lookup speed you rely on.

The core problem HashMap solves — constant-time key lookup — comes with a silent trade-off: it makes zero promises about iteration order. That's fine for a phone book. It's a disaster for a 'recently visited pages' feature. LinkedHashMap plugs that gap by layering a doubly-linked list on top of the hash table, threading every entry together in the order it was added (or accessed, if you flip a flag). LinkedHashSet does the same for sets of unique values. You get the best of both worlds: hash-table speed with predictable, reproducible ordering.

By the end of this article, you'll know exactly when to swap HashMap for LinkedHashMap and HashSet for LinkedHashSet without hesitation. You'll understand the memory cost, the two ordering modes LinkedHashMap supports, how to build a working LRU cache in under 10 lines, and the three mistakes that consistently catch developers off guard in code reviews and interviews.

How LinkedHashMap Actually Works — The Doubly-Linked List Inside the Hash Table

A regular HashMap stores key-value pairs in buckets. Which bucket a key lands in depends on its hash code and the current table capacity, so the 'order' you see when iterating is essentially arbitrary — it can shift after a resize and can differ between JVM runs or JDK versions. The Javadoc explicitly makes no ordering guarantees. This is by design — HashMap optimises for speed, not predictability.

LinkedHashMap inherits everything from HashMap and adds one thing: a doubly-linked list that runs through every entry in the map. Each entry has a 'before' and 'after' pointer. When you add a new entry, it gets appended to the tail of this list. When you remove one, the list is stitched back together. The hash table still handles lookups in O(1). The linked list handles ordered iteration in O(n). You're paying for two pointers per entry — that's the only extra memory cost.

This design means LinkedHashMap has two distinct ordering modes. The default is insertion order — entries come back in the exact sequence you put them in. The second mode is access order — every time you call get() on a key, that entry moves to the tail of the list, making the head of the list the least-recently-used entry. Access order is what makes building an LRU cache trivially easy, as you'll see shortly.

Understanding this internal structure is the difference between using LinkedHashMap correctly and using it as 'a HashMap that happens to be sorted sometimes'. It's a deliberate, predictable data structure with real guarantees.
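The access-order flip can be shown in a few lines. This is a minimal sketch (the class name is illustrative); the LRU cache section below builds on exactly this behaviour:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {

    public static void main(String[] args) {
        // Third constructor argument true → ACCESS ORDER instead of insertion order
        Map<String, Integer> recent = new LinkedHashMap<>(16, 0.75f, true);
        recent.put("a", 1);
        recent.put("b", 2);
        recent.put("c", 3);

        // Reading "a" moves it to the tail of the internal list (most-recently-used)
        recent.get("a");

        // Head of the list is now the least-recently-used entry
        System.out.println(recent.keySet()); // [b, c, a]
    }
}
```

With the default constructor the same get() call would leave the order untouched — that single boolean is the entire difference between the two modes.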

InsertionOrderDemo.java · JAVA
import java.util.LinkedHashMap;
import java.util.Map;

public class InsertionOrderDemo {

    public static void main(String[] args) {

        // Default constructor — uses INSERTION ORDER
        Map<String, Integer> userScores = new LinkedHashMap<>();

        // Adding entries in a deliberate sequence
        userScores.put("alice", 1500);
        userScores.put("bob",   980);
        userScores.put("carol", 2200);
        userScores.put("dave",  750);

        System.out.println("=== Leaderboard (insertion order preserved) ===");

        // Iterating — entries come back in EXACTLY the order we inserted them
        for (Map.Entry<String, Integer> entry : userScores.entrySet()) {
            System.out.printf("  %-8s → %d points%n", entry.getKey(), entry.getValue());
        }

        // Updating an existing key does NOT change its position in the list
        userScores.put("bob", 1100); // bob's score changes but he stays in slot 2

        System.out.println("\n=== After updating bob's score ===");
        userScores.forEach((name, score) ->
            System.out.printf("  %-8s → %d points%n", name, score)
        );

        // Removing and re-adding DOES change position — bob moves to the end
        userScores.remove("bob");
        userScores.put("bob", 1350);

        System.out.println("\n=== After removing and re-adding bob ===");
        userScores.forEach((name, score) ->
            System.out.printf("  %-8s → %d points%n", name, score)
        );
    }
}
▶ Output
=== Leaderboard (insertion order preserved) ===
alice → 1500 points
bob → 980 points
carol → 2200 points
dave → 750 points

=== After updating bob's score ===
alice → 1500 points
bob → 1100 points
carol → 2200 points
dave → 750 points

=== After removing and re-adding bob ===
alice → 1500 points
carol → 2200 points
dave → 750 points
bob → 1350 points
⚠️
Watch Out: put() on an existing key never repositions it
Updating a value with put() keeps the entry exactly where it is in the insertion order. Only remove() followed by put() will move an entry to the tail. This bites developers who expect 'updating' a key to refresh its position — it won't, unless you're in access-order mode.

Building an LRU Cache With LinkedHashMap Access-Order Mode

Access-order mode is LinkedHashMap's killer feature and the reason it appears in almost every LRU cache implementation in the Java ecosystem. You enable it by passing true as the third constructor argument — LinkedHashMap(initialCapacity, loadFactor, accessOrder). With access order on, every call to get() or put() moves that entry to the tail of the internal linked list. That means the head of the list is always the least-recently-used entry.

LinkedHashMap also provides an overridable method called removeEldestEntry(). After every put(), the map calls this method and asks: 'Should I remove the oldest entry?' By default it returns false (never evict). Override it to return true when the size exceeds your cache limit, and you have a self-managing LRU cache.

This is not a toy pattern — Android's LruCache, several HTTP client caches, and dozens of open-source libraries use exactly this approach. Java's own LinkedHashMap Javadoc explicitly calls out this use case. Before you reach for a third-party caching library, consider whether a 10-line LinkedHashMap subclass gets the job done.

One important caveat: this built-in approach is not thread-safe. For concurrent environments, wrap it with Collections.synchronizedMap() or use a dedicated cache like Caffeine.
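The synchronized wrapper approach can be sketched as follows. This is a minimal illustration, not a production cache — the class name and capacity are hypothetical, and note the manual synchronization required for iteration, which the wrapper does not cover:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class SynchronizedLruSketch {

    private static final int MAX_ENTRIES = 100; // illustrative capacity

    public static void main(String[] args) {
        // Wrap the access-order LinkedHashMap so individual operations are synchronized
        Map<String, String> cache = Collections.synchronizedMap(
            new LinkedHashMap<String, String>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                    return size() > MAX_ENTRIES;
                }
            });

        cache.put("k1", "v1");
        cache.get("k1"); // safe: individual calls are synchronized by the wrapper

        // Iteration is NOT covered by the wrapper — you must hold its monitor manually,
        // otherwise concurrent writers can trigger ConcurrentModificationException
        synchronized (cache) {
            for (Map.Entry<String, String> e : cache.entrySet()) {
                System.out.println(e.getKey() + " = " + e.getValue());
            }
        }
    }
}
```

The wrapper serialises every call through one lock, so under heavy contention a purpose-built concurrent cache like Caffeine will scale far better.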

LruPageCache.java · JAVA
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * A simple LRU cache that holds at most MAX_PAGES recently visited web pages.
 * When the cache is full, the least-recently-accessed page is automatically evicted.
 */
public class LruPageCache {

    private static final int MAX_PAGES = 3;

    // accessOrder = true → get() moves the accessed entry to the tail (most-recently-used)
    // 0.75f is the standard load factor — no need to change this
    private final Map<String, String> cache = new LinkedHashMap<>(MAX_PAGES, 0.75f, true) {

        @Override
        protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
            // Return true when we've exceeded capacity — the map will evict the HEAD
            // (least-recently-used) automatically after each put()
            boolean shouldEvict = size() > MAX_PAGES;
            if (shouldEvict) {
                System.out.println("  [EVICTED] " + eldest.getKey());
            }
            return shouldEvict;
        }
    };

    /** Simulate loading a page — store its title keyed by URL. */
    public void visit(String url, String pageTitle) {
        System.out.println("Visiting: " + url);
        cache.put(url, pageTitle);
    }

    /** Simulate navigating back to a cached page — this refreshes its recency. */
    public String revisit(String url) {
        // get() in access-order mode moves this entry to the tail (most-recently-used)
        return cache.getOrDefault(url, "[not in cache — full page load required]");
    }

    public void printCache() {
        System.out.println("  Cache (LRU → MRU): " + cache.keySet());
    }

    public static void main(String[] args) {

        LruPageCache browserCache = new LruPageCache();

        browserCache.visit("thecodeforge.io/java",        "Java Hub");
        browserCache.visit("thecodeforge.io/collections", "Collections Guide");
        browserCache.visit("thecodeforge.io/streams",     "Streams Deep Dive");
        browserCache.printCache();

        // Re-access the first page — it moves to MRU position, pushing 'collections' to LRU
        System.out.println("\nRevisiting java page: " + browserCache.revisit("thecodeforge.io/java"));
        browserCache.printCache();

        // Adding a 4th page triggers eviction of the current LRU entry ('collections')
        System.out.println();
        browserCache.visit("thecodeforge.io/generics", "Generics Explained");
        browserCache.printCache();
    }
}
▶ Output
Visiting: thecodeforge.io/java
Visiting: thecodeforge.io/collections
Visiting: thecodeforge.io/streams
Cache (LRU → MRU): [thecodeforge.io/java, thecodeforge.io/collections, thecodeforge.io/streams]

Revisiting java page: Java Hub
Cache (LRU → MRU): [thecodeforge.io/collections, thecodeforge.io/streams, thecodeforge.io/java]

Visiting: thecodeforge.io/generics
[EVICTED] thecodeforge.io/collections
Cache (LRU → MRU): [thecodeforge.io/streams, thecodeforge.io/java, thecodeforge.io/generics]
⚠️
Pro Tip: removeEldestEntry receives the actual eldest entry as an argument
You can log, persist, or even write-back the evicted entry before returning true. This makes LinkedHashMap a solid foundation for a write-back cache — not just an eviction mechanism. Capture that eldest entry reference before eviction and your cache tier suddenly has persistence hooks for free.

LinkedHashSet — Unique Elements With Guaranteed Insertion Order

LinkedHashSet is to HashSet what LinkedHashMap is to HashMap. Internally, it's implemented as a LinkedHashMap where every value is the same dummy Object — the keys are your set elements. This is more than an implementation detail: it means LinkedHashSet carries the exact same doubly-linked list mechanism and the same memory overhead per element as LinkedHashMap.

Why does this matter? Because HashSet's iteration order is undefined and unstable. If your code produces a HashSet and your tests iterate over it, those tests are fragile — they can pass locally and fail on a different JVM or after a JDK upgrade. LinkedHashSet eliminates that fragility by giving you a set that always iterates in insertion order.

The canonical real-world use case is deduplication with order preservation. Think of processing a stream of user events where the same event might arrive multiple times due to network retries. You want duplicates silently dropped but you need the remaining events in the order they first arrived — a perfect job for LinkedHashSet.

LinkedHashSet also shines when you need to present unique items to a user in a 'first seen' order — search history, recently used tags, autocomplete suggestions. A sorted TreeSet would impose alphabetical order. A HashSet would give unpredictable order. LinkedHashSet gives you exactly first-insertion order with O(1) contains() checks.
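The three ordering behaviours are easy to see side by side. A small sketch (the tag values are hypothetical) contrasting HashSet, TreeSet, and LinkedHashSet for a 'recently used tags' list:

```java
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class SetOrderingComparison {

    public static void main(String[] args) {
        // Tags in the order the user picked them
        List<String> tags = List.of("spring", "docker", "aws", "java");

        Set<String> hash   = new HashSet<>(tags);       // iteration order undefined
        Set<String> tree   = new TreeSet<>(tags);       // sorted alphabetically
        Set<String> linked = new LinkedHashSet<>(tags); // first-seen order preserved

        System.out.println("HashSet       : " + hash);   // unpredictable
        System.out.println("TreeSet       : " + tree);   // [aws, docker, java, spring]
        System.out.println("LinkedHashSet : " + linked); // [spring, docker, aws, java]
    }
}
```

Only LinkedHashSet reproduces the user's own ordering — which is exactly what a 'recently used' UI needs.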

DeduplicateUserEvents.java · JAVA
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class DeduplicateUserEvents {

    public static void main(String[] args) {

        // Simulates an event stream with retries — same event IDs arrive multiple times
        List<String> incomingEventIds = Arrays.asList(
            "evt-001", "evt-002", "evt-003",
            "evt-002", // retry duplicate
            "evt-004",
            "evt-001", // retry duplicate
            "evt-005"
        );

        // --- Option A: HashSet --- deduplicates but destroys order
        Set<String> unorderedUnique = new HashSet<>(incomingEventIds);
        System.out.println("HashSet result (order undefined):");
        System.out.println("  " + unorderedUnique);
        System.out.println("  Can we rely on this order for processing? NO.");

        // --- Option B: LinkedHashSet --- deduplicates AND preserves first-seen order
        Set<String> orderedUnique = new LinkedHashSet<>();
        for (String eventId : incomingEventIds) {
            boolean isNew = orderedUnique.add(eventId); // add() returns false for duplicates
            if (!isNew) {
                // In a real system you'd log this at DEBUG level and increment a counter
                System.out.println("  [DUPLICATE DROPPED] " + eventId);
            }
        }

        System.out.println("\nLinkedHashSet result (insertion order preserved):");
        System.out.println("  " + orderedUnique);

        System.out.println("\nProcessing events in reliable first-seen order:");
        for (String eventId : orderedUnique) {
            System.out.println("  → Processing " + eventId);
        }
    }
}
▶ Output
HashSet result (order undefined):
[evt-001, evt-002, evt-003, evt-004, evt-005]
Can we rely on this order for processing? NO.
[DUPLICATE DROPPED] evt-002
[DUPLICATE DROPPED] evt-001

LinkedHashSet result (insertion order preserved):
[evt-001, evt-002, evt-003, evt-004, evt-005]

Processing events in reliable first-seen order:
→ Processing evt-001
→ Processing evt-002
→ Processing evt-003
→ Processing evt-004
→ Processing evt-005
🔥
Interview Gold: HashSet vs LinkedHashSet iteration order
In this demo the HashSet output happened to look ordered — that's a coincidence of how these particular strings hash into a small table, not a guarantee. Never write production code that bets on HashSet order. If you need order, use LinkedHashSet. If a reviewer sees HashSet used where order matters, that's a red flag.

Performance Reality Check — What You're Paying for With 'Linked'

LinkedHashMap and LinkedHashSet are not free upgrades over their Hash counterparts. The 'linked' prefix means each entry carries two extra object references — a 'before' pointer and an 'after' pointer for the doubly-linked list. On a 64-bit JVM with compressed oops, that's roughly 16 extra bytes per entry. For a map holding 1,000 entries, you're looking at about 16 KB of additional heap. For 1,000,000 entries, that's 16 MB. In memory-constrained environments — Android, embedded systems, or very large in-memory caches — this matters.

Get and put operations are still O(1) amortised, just like HashMap. The linked list maintenance (updating before/after pointers) adds constant-time overhead per operation, but benchmarks typically show LinkedHashMap runs 10-30% slower than HashMap in pure throughput scenarios. That's the cost of maintaining the list.

Iteration is actually slightly faster with LinkedHashMap than HashMap. HashMap has to walk every bucket in the backing array, including empty ones. LinkedHashMap walks the linked list directly, touching only live entries. If you have a sparsely populated map and iterate it frequently, LinkedHashMap can outperform HashMap on iteration despite its higher memory footprint.

The decision rule is simple: if you never need ordered iteration and never build an LRU cache, use HashMap. The moment ordering or recency tracking enters the picture, LinkedHashMap pays for itself immediately in correctness alone.

PerformanceComparison.java · JAVA
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * A practical micro-benchmark to illustrate the performance characteristics
 * of HashMap vs LinkedHashMap. Not a JMH benchmark — for illustration only.
 * Run this to see approximate relative costs on your own JVM.
 */
public class PerformanceComparison {

    private static final int ENTRY_COUNT  = 500_000;
    private static final int WARMUP_RUNS  = 3;
    private static final int TIMED_RUNS   = 5;

    public static void main(String[] args) {

        System.out.println("Warming up JIT compiler...");
        for (int w = 0; w < WARMUP_RUNS; w++) {
            populateAndIterate(new HashMap<>());
            populateAndIterate(new LinkedHashMap<>());
        }

        long hashMapTotal       = 0;
        long linkedHashMapTotal = 0;

        System.out.println("Running timed trials...");
        for (int run = 1; run <= TIMED_RUNS; run++) {
            long hmTime  = populateAndIterate(new HashMap<>());
            long lhmTime = populateAndIterate(new LinkedHashMap<>());

            System.out.printf("  Run %d → HashMap: %,d ms | LinkedHashMap: %,d ms%n",
                run, hmTime, lhmTime);

            hashMapTotal       += hmTime;
            linkedHashMapTotal += lhmTime;
        }

        System.out.printf("%n  Average → HashMap: %,d ms | LinkedHashMap: %,d ms%n",
            hashMapTotal / TIMED_RUNS, linkedHashMapTotal / TIMED_RUNS);
        System.out.println("\n  Overhead is real but rarely the bottleneck.");
        System.out.println("  Choose based on CORRECTNESS needs first, performance second.");
    }

    /** Populates the given map and iterates all entries — returns elapsed ms. */
    private static long populateAndIterate(Map<Integer, String> map) {
        long start = System.currentTimeMillis();

        // Populate phase
        for (int i = 0; i < ENTRY_COUNT; i++) {
            map.put(i, "value-" + i);
        }

        // Iteration phase — sum string lengths to prevent dead-code elimination
        long charCount = 0;
        for (Map.Entry<Integer, String> entry : map.entrySet()) {
            charCount += entry.getValue().length();
        }

        return System.currentTimeMillis() - start;
    }
}
▶ Output
Warming up JIT compiler...
Running timed trials...
Run 1 → HashMap: 312 ms | LinkedHashMap: 398 ms
Run 2 → HashMap: 287 ms | LinkedHashMap: 371 ms
Run 3 → HashMap: 291 ms | LinkedHashMap: 368 ms
Run 4 → HashMap: 283 ms | LinkedHashMap: 362 ms
Run 5 → HashMap: 289 ms | LinkedHashMap: 374 ms

Average → HashMap: 292 ms | LinkedHashMap: 374 ms

Overhead is real but rarely the bottleneck.
Choose based on CORRECTNESS needs first, performance second.
⚠️
Pro Tip: Declare variables as Map, not LinkedHashMap
Always declare your variable type as the interface — Map<String, Integer> userScores = new LinkedHashMap<>() — not LinkedHashMap. This keeps your code flexible. If profiling reveals LinkedHashMap is too slow, you swap one word in the constructor. If you declared it as LinkedHashMap throughout, you're refactoring everywhere.
| Feature / Aspect | HashMap / HashSet | LinkedHashMap / LinkedHashSet |
|---|---|---|
| Iteration order | Undefined — changes between JVM runs | Guaranteed insertion order (or access order for LinkedHashMap) |
| Internal structure | Hash table only | Hash table + doubly-linked list |
| Memory per entry | ~48 bytes (typical) | ~64 bytes (two extra pointers) |
| get() / put() time | O(1) amortised | O(1) amortised (with small constant overhead) |
| Iteration time | O(capacity) — walks empty buckets too | O(n) — walks only live entries via linked list |
| null key support | One null key allowed | One null key allowed (same rules) |
| Thread safety | Not thread-safe | Not thread-safe |
| LRU cache support | No — would need external tracking | Yes — enable access-order mode + override removeEldestEntry() |
| Use case sweet spot | Maximum throughput, order irrelevant | Audit logs, caches, deduplication with order, deterministic tests |
| Since Java version | Java 1.2 | Java 1.4 |

🎯 Key Takeaways

  • LinkedHashMap stores a doubly-linked list through every entry — this is what gives you O(n) ordered iteration while keeping O(1) lookup. You pay ~16 extra bytes per entry for those two pointers.
  • Access-order mode (third constructor arg = true) combined with overriding removeEldestEntry() gives you a production-ready LRU cache in 10 lines — no library required. The head of the list is always the least-recently-used entry.
  • LinkedHashSet is backed by a LinkedHashMap internally — it's not a separate algorithm. Choose it over HashSet any time you need uniqueness AND first-insertion-order guarantee, such as deduplication of event streams or building stable test fixtures.
  • Always declare variables against the Map or Set interface, not the concrete LinkedHashMap or LinkedHashSet type. The day you need to swap implementations for performance or concurrency, you'll thank yourself for it.

⚠ Common Mistakes to Avoid

  • Mistake 1: Using access-order mode when you only wanted insertion order — Symptom: get() calls silently reorder your map, so iteration produces a different sequence than you inserted. Fix: Only pass 'true' as the third LinkedHashMap constructor argument when you explicitly want access-order (LRU) behaviour. The default no-arg constructor gives you insertion order, which is correct for most use cases.
  • Mistake 2: Modifying a LinkedHashMap while iterating over it without using the iterator's remove() — Symptom: ConcurrentModificationException thrown mid-loop even in a single-threaded program. Fix: Use iterator.remove() to safely delete the current entry during iteration, or collect keys to remove into a separate list and delete them after the loop completes.
  • Mistake 3: Assuming LinkedHashSet or LinkedHashMap are thread-safe because 'linked' sounds more structured — Symptom: Race conditions, corrupted linked-list pointers, or infinite loops during concurrent iteration in multi-threaded code. Fix: Wrap with Collections.synchronizedMap(new LinkedHashMap<>()) for basic thread safety, or use a ConcurrentHashMap with an explicit ConcurrentLinkedQueue to track insertion order if you need true concurrent LRU behaviour.
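The fix for Mistake 2 can be sketched concretely. A minimal example (the map contents are illustrative) showing the iterator-based removal and the Java 8+ removeIf alternative:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class SafeRemovalDemo {

    public static void main(String[] args) {
        Map<String, Integer> scores = new LinkedHashMap<>();
        scores.put("alice", 1500);
        scores.put("bob",   980);
        scores.put("carol", 2200);

        // WRONG: scores.remove(key) inside a for-each loop throws
        // ConcurrentModificationException on the next iteration step.

        // RIGHT: the iterator's own remove() keeps the internal modCount in sync
        Iterator<Map.Entry<String, Integer>> it = scores.entrySet().iterator();
        while (it.hasNext()) {
            if (it.next().getValue() < 1000) {
                it.remove(); // unlinks the entry from both the hash table and the list
            }
        }
        System.out.println(scores.keySet()); // [alice, carol]

        // Java 8+ alternative: removeIf on a collection view is also safe
        scores.values().removeIf(score -> score < 2000);
        System.out.println(scores.keySet()); // [carol]
    }
}
```

Both approaches also preserve the insertion order of the surviving entries, since removal only stitches the linked list back together around the removed node.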

Interview Questions on This Topic

  • Q: How does LinkedHashMap maintain insertion order internally, and what is the memory cost compared to HashMap?
  • Q: How would you implement an LRU cache using only the Java standard library — no external dependencies — and what LinkedHashMap constructor argument is critical to making it work correctly?
  • Q: If you put() the same key into a LinkedHashMap twice with different values, does its position in the iteration order change? What about if you remove() it and then put() it again?

Frequently Asked Questions

What is the difference between LinkedHashMap and HashMap in Java?

HashMap stores key-value pairs with no guaranteed iteration order — the sequence can change between JVM runs. LinkedHashMap adds a doubly-linked list that connects all entries in insertion order (or access order if configured), so iteration always returns entries in a predictable sequence. The trade-off is roughly 16 extra bytes of memory per entry and a small constant overhead on put and get operations.

When should I use LinkedHashSet instead of HashSet?

Use LinkedHashSet whenever you need a collection of unique elements and the order those elements were first added matters. Common scenarios include deduplicating a stream of events while preserving arrival order, maintaining a 'search history' or 'recently used tags' list, and writing deterministic unit tests against sets where a HashSet would produce unpredictable iteration order.

Does LinkedHashMap allow null keys and null values?

Yes — LinkedHashMap follows the same rules as HashMap. It permits one null key and any number of null values. If you put() with a null key, it occupies a fixed slot (bucket 0) and participates in the insertion-order linked list just like any other entry. This is in contrast to TreeMap and Hashtable, which reject null keys entirely.
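This can be verified in a few lines. A small sketch (the config keys are hypothetical) showing that the null key both participates in the ordering and stays put when overwritten:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NullKeyDemo {

    public static void main(String[] args) {
        Map<String, String> config = new LinkedHashMap<>();
        config.put("host", "localhost");
        config.put(null, "fallback");   // one null key is allowed
        config.put("port", "8080");
        config.put(null, "updated");    // overwrites the null-key value in place

        // The null key keeps its original position in the insertion order
        System.out.println(config.keySet()); // [host, null, port]
        System.out.println(config.get(null)); // updated
    }
}
```

The same put()-never-repositions rule from earlier applies to the null key, too.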

🔥
TheCodeForge Editorial Team Verified Author

Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.

Forged with 🔥 at TheCodeForge.io — Where Developers Are Forged