
How HashMap Works Internally

A walk through Java HashMap internals: hashing, the bucket array, and collision handling with linked lists and red-black TreeNodes.

Core Internals

| Concept | Value / Behaviour | Notes |
|---|---|---|
| Default initial capacity | 16 buckets | Always a power of 2 |
| Default load factor | 0.75 | Resize when 75% full |
| Resize threshold | capacity × load factor | 16 × 0.75 = 12 entries triggers a resize |
| Resize multiplier | New capacity = old × 2 | Doubling keeps the capacity a power of 2 |
| Bucket structure (< 8 entries) | Singly linked list | O(n) worst-case lookup |
| Bucket structure (≥ 8 entries) | Red-black tree (TreeNode) | O(log n) worst-case lookup; only treeifies when table capacity ≥ 64, otherwise the table is resized instead |
| Tree → list conversion threshold | ≤ 6 entries | Untreeified back to a linked list when the bin shrinks during a resize |
| Null key | Allowed, stored in bucket 0 | Only one null key permitted |
| Null values | Allowed, any number of them | Use containsKey() to tell a null value from an absent key |
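The null-key and null-value rules above are easy to see in practice; a minimal sketch:

```java
import java.util.HashMap;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put(null, "null-key value"); // the one allowed null key, lands in bucket 0
        map.put("a", null);              // null values are fine, any number of them
        map.put("b", null);

        // get() returning null is ambiguous: absent key vs. present-but-null value
        System.out.println(map.get("a"));               // null
        System.out.println(map.get("missing"));         // null
        // containsKey() disambiguates the two cases
        System.out.println(map.containsKey("a"));       // true
        System.out.println(map.containsKey("missing")); // false
    }
}
```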

Hashing Pipeline

| Step | Operation | Code / Detail |
|---|---|---|
| 1. hashCode() | Called on the key | User-defined or the Object default |
| 2. Spread hash | hash ^ (hash >>> 16) | XORs the upper 16 bits into the lower 16 to reduce collisions |
| 3. Bucket index | hash & (capacity - 1) | Fast modulo, valid because capacity is a power of 2 |
| 4. Key equality | equals() on keys already in the bucket | Override hashCode() and equals() together |
| 5. Insert / update | Replace the value if the key exists | put() returns the old value, or null for a new key |
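The pipeline above can be reproduced by hand. The spread() helper mirrors the spreading step in the Java 8+ HashMap source; the key and capacity below are arbitrary example values:

```java
public class HashPipelineDemo {
    // Same bit-spreading step that HashMap.hash() applies in Java 8+
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        String key = "hello";
        int capacity = 16;               // default table size, always a power of 2
        int h = spread(key.hashCode());  // steps 1 and 2
        int bucket = h & (capacity - 1); // step 3: fast modulo for power-of-2 sizes
        System.out.println("bucket = " + bucket);

        // The mask keeps the result in [0, capacity) even for negative hashes,
        // which plain % would not do.
    }
}
```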

Time Complexity

| Operation | Average Case | Worst Case (all keys in one bucket) |
|---|---|---|
| get(key) | O(1) | O(n) list / O(log n) tree |
| put(key, value) | O(1) | O(n) list / O(log n) tree |
| remove(key) | O(1) | O(n) list / O(log n) tree |
| containsKey(key) | O(1) | O(n) list / O(log n) tree |
| containsValue(value) | O(n) | O(n), a full scan in every case |
| resize() | O(n) | Rehashes all entries into the new array |

Common Gotchas

| Gotcha | Problem | Fix |
|---|---|---|
| Mutable key | hashCode() changes after insert, stranding the entry in the wrong bucket | Use only immutable keys (String, Integer, enums) |
| equals() without hashCode() | Equal keys get distinct identity hashes, so lookups silently miss | Always override hashCode() whenever you override equals() |
| ConcurrentModificationException | Structural change during iteration | Remove via Iterator.remove(), or use ConcurrentHashMap |
| Not thread-safe | Race conditions on concurrent put() | Use ConcurrentHashMap; HashMap is NOT synchronized |
| Initial capacity too small | Frequent resizes degrade performance | Set capacity = expectedSize / loadFactor + 1 |
| LinkedHashMap vs HashMap | HashMap gives no iteration-order guarantee | Use LinkedHashMap for insertion-order iteration |
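The mutable-key gotcha is easy to demonstrate. Point here is a hypothetical mutable key type written just for this sketch:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class MutableKeyDemo {
    // Deliberately mutable key type -- exactly what the table warns against
    static class Point {
        int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        @Override public boolean equals(Object o) {
            if (!(o instanceof Point)) return false;
            Point other = (Point) o;
            return other.x == x && other.y == y;
        }
        @Override public int hashCode() { return Objects.hash(x, y); }
    }

    public static void main(String[] args) {
        Map<Point, String> map = new HashMap<>();
        Point key = new Point(1, 2);
        map.put(key, "treasure");

        key.x = 99; // hashCode() changes -> the stored entry is now unreachable

        System.out.println(map.get(key));             // null: new hash points at a different bucket
        System.out.println(map.get(new Point(1, 2))); // null: right bucket, but equals() now fails
        System.out.println(map.size());               // 1: the entry still occupies memory
    }
}
```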

HashMap vs Alternatives

| Class | Ordered? | Thread-Safe? | Null Keys | Use When |
|---|---|---|---|---|
| HashMap | No | No | Yes (1) | Default choice: single-threaded, unordered |
| LinkedHashMap | Insertion order | No | Yes (1) | Need predictable iteration order |
| TreeMap | Sorted by key | No | No | Need sorted keys; backed by a red-black tree |
| ConcurrentHashMap | No | Yes | No | Multi-threaded access without external locking |
| Hashtable | No | Yes (synchronized) | No | Legacy; prefer ConcurrentHashMap instead |
| EnumMap | Enum order | No | No | Keys are enum constants; extremely fast |
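The ordering differences in the table show up as soon as you iterate; a quick comparison with arbitrary sample keys:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

public class OrderingDemo {
    public static void main(String[] args) {
        String[] keys = {"banana", "apple", "cherry"};

        Map<String, Integer> hash = new HashMap<>();
        Map<String, Integer> linked = new LinkedHashMap<>();
        Map<String, Integer> tree = new TreeMap<>();
        for (String k : keys) {
            hash.put(k, 1);
            linked.put(k, 1);
            tree.put(k, 1);
        }

        System.out.println(hash.keySet());   // no guaranteed order
        System.out.println(linked.keySet()); // [banana, apple, cherry] -- insertion order
        System.out.println(tree.keySet());   // [apple, banana, cherry] -- sorted by key
    }
}
```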

ConcurrentHashMap Internals

| Concept | Java 8+ Behaviour | Notes |
|---|---|---|
| Locking strategy | Synchronizes on the individual bucket head, not the whole map | Java 7 used segment locks (16 segments); Java 8+ uses per-bin CAS plus synchronized |
| Read operations | Lock-free via volatile reads | get() never locks; reads are always concurrent |
| Write operations | CAS for empty bins; synchronized for non-empty bins | Only the affected bin is locked, giving high concurrency |
| Null keys/values | Not allowed; throws NullPointerException | A null result would be ambiguous with "key not present" |
| size() accuracy | Approximate; maintained in distributed LongAdder-style counter cells | Prefer mappingCount() for large maps: it returns a long, not an int |
| putIfAbsent / compute | Atomic; safe for counter/accumulator patterns | computeIfAbsent(key, k -> new ArrayList<>()).add(val) |
| Treeification | Same as HashMap: 8 entries in a bin → red-black tree | Applies per bin, independently |
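The atomic computeIfAbsent pattern from the table, sketched as a word count. Using LongAdder for the per-key counter is one common choice, not the only one:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class WordCountDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, LongAdder> counts = new ConcurrentHashMap<>();
        String[] words = {"a", "b", "a", "a"};

        // computeIfAbsent is atomic: two threads racing on the same absent key
        // will observe exactly one LongAdder being created
        for (String w : words) {
            counts.computeIfAbsent(w, k -> new LongAdder()).increment();
        }

        System.out.println(counts.get("a").sum()); // 3
        System.out.println(counts.mappingCount()); // 2 -- returns long, preferred over size()
    }
}
```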

Sizing Formula

| Goal | Formula | Example |
|---|---|---|
| Avoid resizes for n entries | initialCapacity = (int)(n / loadFactor) + 1 | 100 entries, LF = 0.75 → capacity = 134 |
| Default resize trigger | size > capacity × loadFactor | Default: 16 × 0.75 = 12 entries → resize to 32 |
| Memory per entry (approx.) | ~48 bytes for the Node object plus table slot | 1M entries ≈ 48 MB of overhead, excluding the key and value objects themselves |
| Custom load factor trade-off | Lower LF → fewer collisions, more memory | LF = 0.5 for read-heavy; LF = 0.9 for memory-constrained |
| Power-of-2 capacity | Requested capacity is rounded up to the next power of 2 | new HashMap<>(100) → internal capacity = 128 |
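The presizing formula as code. capacityFor is a hypothetical helper named for this sketch:

```java
import java.util.HashMap;
import java.util.Map;

public class PresizeDemo {
    // Smallest initial capacity that avoids any resize while
    // inserting expectedSize entries (hypothetical helper)
    static int capacityFor(int expectedSize, float loadFactor) {
        return (int) (expectedSize / loadFactor) + 1;
    }

    public static void main(String[] args) {
        int n = 100;
        int cap = capacityFor(n, 0.75f);
        System.out.println(cap); // 134 -- HashMap rounds this up to 256 internally

        // All 100 inserts fit under the resize threshold, so the
        // table is never rehashed
        Map<String, Integer> map = new HashMap<>(cap);
        for (int i = 0; i < n; i++) {
            map.put("k" + i, i);
        }
        System.out.println(map.size()); // 100
    }
}
```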