synchronized Keyword in Java: Intrinsic Locks and Thread Safety
- synchronized provides both mutual exclusion AND memory visibility via the happens-before guarantee — it solves two concurrency problems, not one. Synchronizing only writes but not reads is insufficient.
- The happens-before guarantee is the most overlooked property: without synchronized on reads, one thread's writes may be invisible to another even if the write itself was synchronized.
- Always lock on a private final Object — never 'this' (exposes lock to external code), never a mutable field (reference can change), never String literals or boxed Integers (JVM-wide sharing).
- synchronized is Java's built-in mutex — it ensures only one thread executes a critical section at a time
- Every Java object has an intrinsic lock (monitor) — a thread must acquire it before entering a synchronized block
- Performance cost: uncontended locks are cheap (lightweight CAS on JDK 15+), contended locks are expensive (OS-level blocking)
- Biggest production trap: synchronizing on the wrong object — if threads lock on different monitors, synchronization is silently bypassed with no error or warning
- For simple atomic operations (counters, flags), AtomicInteger outperforms synchronized by avoiding lock overhead entirely via hardware CAS instructions
Production Debug Guide: Common Symptoms When synchronized Goes Wrong

| Symptom | Diagnostic commands |
|---|---|
| Application appears hung or deadlocked | jstack <pid> > thread_dump.txt — or — jcmd <pid> Thread.print > thread_dump2.txt |
| High lock contention — threads spending significant time waiting | jcmd <pid> JFR.start duration=60s filename=lock_profile.jfr settings=profile, then jcmd <pid> JFR.dump filename=lock_profile.jfr |
| Need to see which threads hold which locks right now | jstack -l <pid> \| grep -A5 'locked' — or — jcmd <pid> Thread.print -l \| grep -B3 'ownable synchronizer' |
The synchronized keyword is Java's native mechanism for mutual exclusion. It solves two distinct problems that are easy to conflate but genuinely separate: race conditions (multiple threads corrupting shared state by interleaving their operations) and memory visibility (one thread's writes being invisible to other threads due to CPU caching and compiler reordering).
Every Java object carries an intrinsic lock — also called a monitor. When a thread enters a synchronized method or block, it acquires that monitor. All other threads attempting to acquire the same monitor are blocked until the owner releases it. This mechanism also establishes a happens-before relationship: all writes performed by the releasing thread are guaranteed to be visible to the thread that subsequently acquires the same monitor.
The cost is contention. Under high concurrency, synchronized blocks become bottlenecks because blocked threads consume no CPU but still delay throughput — and every context switch costs time. Understanding when synchronized is the right tool, and when lock-free alternatives like AtomicInteger, LongAdder, or concurrent collections are better choices, is what separates code that passes code review from code that runs well in production under real load.
In 2026, with virtual threads (JEP 444, stable since JDK 21) changing the cost model of blocking operations, understanding the fundamentals of Java's synchronization model matters even more. On JDK 21 through 23, virtual threads pin to their carrier threads when blocked on a synchronized monitor, which can exhaust the carrier thread pool under contention; JEP 491 (JDK 24) removes this pinning. The recommendation for concurrent code that must run on JDK 21–23 is to prefer ReentrantLock over synchronized when you expect contention — but you need to understand synchronized first to know when that trade-off applies.
What Is the synchronized Keyword and Why Does It Exist?
The synchronized keyword is Java's native implementation of an intrinsic lock, also known as a monitor. It was designed to solve two distinct concurrency problems that frequently occur together: race conditions, where multiple threads interleave their operations on shared data and corrupt it, and memory visibility, where writes by one thread are not guaranteed to be seen by other threads without explicit coordination.
Every Java object carries a monitor — a JVM-managed structure that tracks which thread currently owns the lock, which threads are blocked waiting to acquire it, and which threads are waiting inside it via wait(). When a thread enters a synchronized method or block, it acquires that object's monitor. If the monitor is free, acquisition succeeds immediately. If another thread holds it, the current thread transitions to the BLOCKED state and waits until the owner releases it on exit.
The second problem synchronized solves — memory visibility — is the one that trips up developers who think about synchronization only in terms of mutual exclusion. Modern CPUs have per-core caches, and compilers reorder instructions for performance. Without explicit synchronization, a value written by thread A may remain in thread A's CPU cache and never be flushed to main memory. Thread B reading the same variable may get a stale cached value. synchronized solves this via the happens-before guarantee: everything thread A wrote before releasing the monitor is guaranteed visible to thread B when it acquires the same monitor. This is why synchronizing only writes but not reads is always wrong — you need the happens-before relationship on both sides.
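To make the visibility half concrete, here is a minimal sketch (class and method names are illustrative, not from this article's examples): the writer and the reader synchronize on the same monitor, and that shared monitor is what creates the happens-before edge.

```java
public class VisibilityDemo {
    private final Object lock = new Object();
    private boolean ready = false; // shared mutable state — guarded by 'lock'

    // Writer: publishes the flag under the monitor.
    public void markReady() {
        synchronized (lock) {
            ready = true;
        }
    }

    // Reader: MUST also synchronize — acquiring the same monitor establishes
    // the happens-before edge that makes the writer's update visible.
    public boolean isReady() {
        synchronized (lock) {
            return ready;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        VisibilityDemo demo = new VisibilityDemo();
        Thread writer = new Thread(demo::markReady);
        writer.start();
        writer.join();
        // Both sides lock on 'lock', so this read observes the write.
        System.out.println("ready = " + demo.isReady()); // prints: ready = true
    }
}
```

If isReady() dropped its synchronized block, the memory model would no longer guarantee that a reader thread ever observes the write. (In this tiny main, Thread.join() happens to provide its own happens-before edge — the monitor version is what generalizes to long-lived reader threads.)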
In JDK 21 through 23, there is an important interaction to be aware of with virtual threads. When a virtual thread blocks on a synchronized monitor, it pins to its carrier thread rather than unmounting and freeing the carrier. Under high contention with many virtual threads, this can exhaust the carrier thread pool. (JEP 491, delivered in JDK 24, reimplements monitor acquisition so virtual threads no longer pin.) The practical guidance for code targeting JDK 21–23 is to prefer ReentrantLock over synchronized in performance-sensitive code paths that may be called from virtual threads.
```java
package io.thecodeforge.concurrency;

import java.util.concurrent.atomic.AtomicInteger;

/**
 * Thread-safe counter implementation demonstrating three approaches:
 * synchronized method, synchronized block, and lock-free AtomicInteger.
 *
 * io.thecodeforge: Production counters should use AtomicInteger for
 * single-variable operations. synchronized is shown here for educational
 * contrast — use it when protecting multi-step compound operations.
 */
public class ForgeCounter {

    // ---- Approach 1: plain int — NOT thread-safe ----
    // count++ compiles to three bytecode instructions: load, add, store.
    // Two threads can interleave between any two of them and lose an update.
    private int unsafeCount = 0;

    // ---- Approach 2: synchronized method — thread-safe, coarser granularity ----
    // Guarded by the 'this' monitor (the synchronized methods below).
    private int syncCount = 0;

    // ---- Approach 2b: synchronized block — guarded by the private 'lock' monitor ----
    // A field must be guarded by exactly one monitor — never mix two.
    private int blockCount = 0;

    // ---- Approach 3: AtomicInteger — thread-safe, no lock overhead ----
    // Preferred for single-variable counters in production.
    private final AtomicInteger atomicCount = new AtomicInteger(0);

    // ---- Private final lock object — never synchronize on 'this' in production ----
    // Using 'this' exposes your lock to external code. A private final Object
    // is invisible outside this class and its reference never changes.
    private final Object lock = new Object();

    /** NOT thread-safe — shown only to illustrate the lost-update problem. */
    public void incrementUnsafe() {
        unsafeCount++;
    }

    /**
     * Synchronized method — uses the 'this' instance monitor.
     * All synchronized methods on this instance share the same lock.
     * Only one can execute at a time across all threads.
     */
    public synchronized void incrementSync() {
        syncCount++; // now atomic: only one thread enters at a time
    }

    /**
     * Synchronized block on a private lock — preferred over synchronized method.
     * Minimizes the locked region: only the write to blockCount is protected.
     * Any non-shared computation happens outside the lock.
     */
    public void incrementWithBlock(String callerInfo) {
        // Non-shared computation outside the lock — no need to hold it here
        String logLine = "Increment requested by: " + callerInfo;
        synchronized (lock) {
            // Only the critical section is locked
            blockCount++;
        }
        // Logging outside the lock — no need to hold the monitor for this
        System.out.println(logLine + " completed.");
    }

    /**
     * Lock-free increment using AtomicInteger.
     * Uses a hardware CAS instruction — no lock, no blocking, scales better.
     * Preferred for simple counter increments in production.
     */
    public int incrementAtomic() {
        return atomicCount.incrementAndGet();
    }

    /**
     * Synchronized read — required if writes are synchronized.
     * Without synchronizing the read, the happens-before guarantee does not apply
     * and the reader may see a stale cached value from its CPU core.
     */
    public synchronized int getSyncCount() {
        return syncCount;
    }

    /** Read under the same 'lock' monitor that guards the writes. */
    public int getBlockCount() {
        synchronized (lock) {
            return blockCount;
        }
    }

    public int getAtomicCount() {
        return atomicCount.get(); // volatile read — always sees the latest value
    }

    /**
     * Demonstrates reentrancy: a thread holding the monitor can re-acquire it.
     * Without reentrancy, this call chain would deadlock.
     */
    public synchronized void reentrantOuter() {
        System.out.println("Outer: hold count is now 1");
        reentrantInner(); // re-acquires the same monitor — hold count becomes 2
        System.out.println("Outer: hold count is back to 1 after inner returns");
    }

    public synchronized void reentrantInner() {
        System.out.println("Inner: same thread re-acquired the lock without deadlock");
        // hold count returns to 1 when this method exits
    }
}
```
Sample output:

```
Increment requested by: Thread-1 completed.
Outer: hold count is now 1
Inner: same thread re-acquired the lock without deadlock
Outer: hold count is back to 1 after inner returns
```

Thread-safe: all increments are counted, no lost updates.
- Every Java object has exactly one monitor (intrinsic lock) — it does not matter how many synchronized methods or blocks reference it
- Entering synchronized means acquiring the monitor — threads that cannot acquire it immediately transition to BLOCKED state and wait
- Exiting synchronized — whether normally or via an uncaught exception — releases the monitor, allowing one of the blocked threads to acquire it
- The happens-before guarantee means all writes by the releasing thread are guaranteed visible to the next thread that acquires the same monitor
- Reentrancy: a thread already holding a monitor can re-enter any synchronized block on the same monitor — the JVM tracks a hold count, not a simple owned/free flag
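The exception behaviour in the list above is worth seeing in code. A small sketch (class and method names are illustrative): thread A throws out of a synchronized block, and the monitor is still available afterwards — nothing leaks.

```java
public class ExceptionReleasesLock {
    private final Object lock = new Object();

    public void failInsideLock() {
        synchronized (lock) {
            throw new IllegalStateException("boom"); // monitor released on unwind
        }
    }

    public String acquireAfterFailure() {
        synchronized (lock) { // would block forever here if the monitor had leaked
            return "acquired";
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExceptionReleasesLock demo = new ExceptionReleasesLock();
        Thread a = new Thread(() -> {
            try {
                demo.failInsideLock();
            } catch (IllegalStateException e) {
                System.out.println("Thread A threw: " + e.getMessage());
            }
        });
        a.start();
        a.join();
        // The JVM released the monitor when the exception unwound the block,
        // so this acquisition succeeds immediately instead of blocking.
        System.out.println("Thread B " + demo.acquireAfterFailure());
    }
}
```

Note the lock is released, but any half-finished mutation of shared state is not rolled back — the FAQ at the end of the article covers that distinction.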
Common Mistakes and How to Avoid Them
Even experienced engineers stumble on synchronization nuances. The failures tend to be silent — the code compiles and runs, but provides no actual thread safety.
Over-synchronization is the performance failure. Locking an entire method that performs database queries, HTTP calls, or logging serialises all of that work — threads wait while one thread does work that could have happened concurrently. The fix is to minimise the locked region to only the lines that actually read or write shared mutable state. Everything else — I/O, computation, logging — should happen outside the synchronized block.
Synchronizing on the wrong object is the correctness failure, and it is the most dangerous because it is completely silent. If method A locks on 'this' and method B locks on a private Object, they protect the same shared data but use different monitors. Threads executing A and B concurrently will not block each other. No error is thrown. The data corruption happens as if there were no synchronization at all. The fix is to establish one canonical lock object for each unit of shared state and use it everywhere.
Synchronizing on 'this' exposes your lock to external code. Any caller who holds a reference to your object can synchronize on it for their own unrelated purpose, accidentally contending with your internal operations or causing deadlock. A private final Object lock is invisible outside the class and its reference never changes — these two properties are what make it safe.
Never synchronize on String literals or boxed Integer constants. Java interns String literals and caches small Integer values (-128 to 127) — these are shared across the entire JVM. Code in a completely unrelated class synchronizing on Integer.valueOf(0) is locking on the same object as your code. The resulting deadlocks are nearly impossible to diagnose because the contending code has no apparent relationship.
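A quick sketch grounding those two claims — String literals are interned and small boxed Integers are cached, so two apparently independent locks can be the very same object (the identity results below assume default JVM settings):

```java
public class SharedLockIdentity {
    public static void main(String[] args) {
        String a = "forge-lock";          // interned literal
        String b = "forge-lock";          // the same interned object, JVM-wide
        System.out.println(a == b);       // true — synchronizing on either locks the same monitor

        Integer x = Integer.valueOf(0);   // from the Integer cache (-128..127)
        Integer y = Integer.valueOf(0);
        System.out.println(x == y);       // true — same cached object

        Integer big1 = Integer.valueOf(1000); // outside the cache range
        Integer big2 = Integer.valueOf(1000);
        System.out.println(big1 == big2); // false with default settings — distinct objects
    }
}
```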
```java
package io.thecodeforge.concurrency;

import java.util.ArrayList;
import java.util.List;

/**
 * Demonstrates correct and incorrect lock objects.
 * io.thecodeforge: The lock object identity is the entire mechanism.
 * If threads lock on different objects, synchronization is silently bypassed.
 */
public class LockingPitfall {

    // ---- CORRECT: private final Object as the lock ----
    // Private: external code cannot synchronize on it.
    // Final: the reference never changes — threads always lock on the same object.
    private final Object forgeLock = new Object();

    private final List<String> sharedList = new ArrayList<>();

    /**
     * CORRECT: both methods use forgeLock — mutual exclusion is guaranteed.
     */
    public void addDataCorrect(String data) {
        synchronized (forgeLock) {
            sharedList.add(data);
        }
    }

    public int getSizeCorrect() {
        synchronized (forgeLock) {
            return sharedList.size(); // must synchronize reads too
        }
    }

    /**
     * WRONG #1: mixing 'this' and forgeLock for the same shared data.
     * addDataCorrect() and this method do NOT block each other.
     * Both can run concurrently, corrupting sharedList.
     */
    public void addDataWrongLock(String data) {
        synchronized (this) { // different monitor from forgeLock
            sharedList.add(data); // no mutual exclusion with addDataCorrect
        }
    }

    /**
     * WRONG #2: synchronizing on a mutable field.
     * If this.mutableLock is reassigned between method calls, threads lock
     * on different objects. The reassignment can happen at any time.
     */
    private Object mutableLock = new Object(); // NOT final — dangerous

    public void addDataMutableLock(String data) {
        synchronized (mutableLock) { // mutableLock may have been reassigned
            sharedList.add(data); // no longer safe
        }
    }

    public void reassignLock() {
        mutableLock = new Object(); // now threads will lock on different objects
    }

    /**
     * WRONG #3: synchronizing on a String literal.
     * Java interns String literals — "forge-lock" is the same object everywhere
     * in the JVM. Any other class using synchronized ("forge-lock") contends
     * with this method, even if it has nothing to do with this class.
     */
    public void addDataStringLock(String data) {
        synchronized ("forge-lock") { // NEVER do this
            sharedList.add(data);
        }
    }

    /**
     * Demonstrates the lock identity diagnostic.
     * Use System.identityHashCode to verify all synchronized blocks
     * reference the same object instance.
     */
    public void diagnoseLockIdentity() {
        System.out.println("forgeLock identity: " + System.identityHashCode(forgeLock));
        System.out.println("this identity: " + System.identityHashCode(this));
        // If these are different, mixed use of forgeLock and 'this' is a bug.
    }
}
```
Sample output (identity hash values vary per run):

```
forgeLock identity: <value varies>
this identity: 2018699554
```

Different identity hashes confirm that forgeLock and 'this' are separate monitors. Any code path using synchronized(this) does NOT block code using synchronized(forgeLock).
Always lock on a dedicated private final Object() — never synchronize on 'this', which is visible to external code.

JVM Lock Implementation: Lightweight and Heavyweight Locking
The JVM does not use a single locking strategy. It implements an escalation model that minimises the cost of synchronization when contention is low or absent, and escalates to OS-level primitives only when necessary. Understanding this model explains why synchronized is cheaper than most developers assume — until contention appears.
Before JDK 15, the JVM implemented three tiers: biased locking (zero-cost for single-thread access), lightweight locking (CAS-based for low contention), and heavyweight locking (OS mutex for high contention). Biased locking was deprecated and disabled by default in JDK 15 via JEP 374 because its revocation mechanism was a source of JVM pauses and complexity that was rarely justified in modern concurrent applications, and the implementation was later removed entirely. On current JDKs, the JVM starts with lightweight locking from the first acquisition.
Lightweight locking activates when a thread acquires an uncontended monitor. The JVM stores the current thread's identity in the object's Mark Word using a Compare-And-Swap (CAS) instruction — a single atomic CPU operation that either succeeds or fails without any kernel involvement. If the CAS succeeds, the lock is acquired. If another thread attempts to acquire the lock while it is held, the JVM may spin briefly (adaptive spinning) — this is a bet that the current owner will release soon and the cost of a context switch is not worth paying.
Heavyweight locking is the fallback when adaptive spinning does not resolve contention. The JVM inflates the lock to a full OS-level mutex. The contending thread is suspended via the OS scheduler and added to a wait queue. When the owner releases the lock, the OS is asked to wake a waiting thread. Context switches cost 1–10 microseconds each, and this overhead compounds under many contending threads — each additional thread adds both its own blocking time and CPU cache pressure from the scheduling activity.
For JDK 21 virtual threads specifically: when a virtual thread blocks on a synchronized monitor, it pins to its carrier thread. The carrier thread is blocked for the duration. If many virtual threads pin simultaneously, the ForkJoinPool carrier thread pool becomes saturated, which is functionally equivalent to running out of threads in a traditional thread pool. This is the concrete reason the JDK 21 documentation recommends ReentrantLock over synchronized for code that runs on virtual threads and expects any meaningful contention. (JEP 491, delivered in JDK 24, reimplements monitor support so virtual threads no longer pin on synchronized — this caveat applies to JDK 21 through 23.)
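For paths that must run on JDK 21–23 virtual threads, the ReentrantLock alternative looks like this — a sketch assuming JDK 21+ (the class name is illustrative). A virtual thread blocked on lock() unmounts from its carrier instead of pinning it.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.locks.ReentrantLock;

public class VirtualThreadFriendlyCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private long count = 0;

    // Virtual threads blocked here unmount from their carrier thread,
    // so contention does not exhaust the carrier pool.
    public void increment() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock(); // always release in finally
        }
    }

    public long get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        VirtualThreadFriendlyCounter counter = new VirtualThreadFriendlyCounter();
        // One virtual thread per task — cheap to create, JDK 21+.
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                exec.submit(counter::increment);
            }
        } // close() waits for all submitted tasks to finish
        System.out.println("count = " + counter.get()); // prints: count = 10000
    }
}
```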
```java
package io.thecodeforge.concurrency;

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Demonstrates JVM lock escalation and the practical performance difference
 * between synchronized (blocking) and ReentrantLock (interruptible, timeout).
 *
 * io.thecodeforge: Use JFR to observe lock contention in production.
 * Command: jcmd <pid> JFR.start duration=60s filename=locks.jfr settings=profile
 */
public class LockEscalationDemo {

    private final Object monitor = new Object();
    private final ReentrantLock reentrantLock = new ReentrantLock();
    private int sharedCounter = 0;

    /**
     * Uncontended access: lightweight locking (CAS) — fast.
     * A single thread repeatedly acquiring and releasing the same lock
     * will use only CAS operations — no OS involvement.
     */
    public void singleThreadAccess() {
        for (int i = 0; i < 1_000_000; i++) {
            synchronized (monitor) {
                sharedCounter++;
            }
        }
        System.out.println("Single-thread result: " + sharedCounter);
    }

    /**
     * Contended access: the JVM escalates to heavyweight locking under pressure.
     * 16 threads competing for the same monitor — context switches dominate.
     * Compare throughput to the uncontended case above.
     */
    public void heavilyContendedSync() throws InterruptedException {
        sharedCounter = 0;
        Thread[] threads = new Thread[16];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) {
                    synchronized (monitor) {
                        sharedCounter++;
                    }
                }
            }, "ForgeWorker-" + t);
        }
        long start = System.nanoTime();
        for (Thread thread : threads) thread.start();
        for (Thread thread : threads) thread.join();
        long elapsed = System.nanoTime() - start;
        System.out.printf("synchronized (16 threads): result=%d, elapsed=%dms%n",
                sharedCounter, elapsed / 1_000_000);
    }

    /**
     * ReentrantLock with tryLock — avoids blocking indefinitely.
     * If the lock is unavailable after 100ms, the thread records a miss
     * and moves on rather than blocking the caller thread.
     *
     * This is the pattern to use when synchronized's lack of timeout support
     * would cause unacceptable latency spikes under contention.
     */
    public void tryLockExample() throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            final int id = t;
            threads[t] = new Thread(() -> {
                try {
                    if (reentrantLock.tryLock(100, TimeUnit.MILLISECONDS)) {
                        try {
                            sharedCounter++;
                            System.out.println("Thread-" + id + " acquired lock and incremented");
                        } finally {
                            reentrantLock.unlock(); // always unlock in finally
                        }
                    } else {
                        System.out.println("Thread-" + id + " could not acquire lock in 100ms — skipping");
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "ForgeTryLock-" + id);
        }
        for (Thread thread : threads) thread.start();
        for (Thread thread : threads) thread.join();
    }

    /**
     * Diagnosing lock contention at runtime.
     * Use this pattern in a health-check or metrics endpoint.
     */
    public void reportLockContention() {
        System.out.println("ReentrantLock queue length: " + reentrantLock.getQueueLength());
        System.out.println("ReentrantLock hold count: " + reentrantLock.getHoldCount());
        System.out.println("Is lock held by any thread: " + reentrantLock.isLocked());
        // Intrinsic monitors expose no such API — another advantage of ReentrantLock.
    }
}
```
Sample output (timings and thread interleaving vary per run):

```
synchronized (16 threads): result=1600000, elapsed=312ms
Thread-0 acquired lock and incremented
Thread-1 acquired lock and incremented
Thread-2 could not acquire lock in 100ms — skipping
Thread-3 acquired lock and incremented
ReentrantLock queue length: 0
ReentrantLock hold count: 0
Is lock held by any thread: false
```
| Aspect | synchronized | AtomicInteger / LongAdder | ReentrantLock |
|---|---|---|---|
| Mutual exclusion | Yes — one thread in the critical section at a time | No — single-variable atomicity only, not multi-step operations | Yes — same mutual exclusion guarantee as synchronized |
| Memory visibility | Yes — full happens-before guarantee on acquire and release | Yes — volatile semantics on every read and write | Yes — full happens-before guarantee on lock and unlock |
| Performance (uncontended) | Fast — lightweight CAS on JDK 15+ (not zero-cost but close) | Fastest — single CAS instruction, no lock data structure | Similar to synchronized — slightly more overhead from the AQS framework |
| Performance (high contention) | Degrades — heavyweight OS blocking, context switches compound | Scales well — LongAdder stripes across CPU cores to reduce CAS contention | Similar degradation to synchronized, but tryLock avoids indefinite blocking |
| Timeout support | No — blocked threads wait indefinitely | Not applicable — no blocking | Yes — tryLock(long, TimeUnit) returns false instead of blocking |
| Interruptible | No — BLOCKED threads cannot be interrupted | Not applicable | Yes — lockInterruptibly() responds to Thread.interrupt() |
| Fairness policy | No guarantee — any waiting thread may acquire after release | Not applicable | Configurable — new ReentrantLock(true) enforces FIFO ordering (with overhead) |
| Condition variables | Built-in — wait(), notify(), notifyAll() on any object | Not applicable | Explicit — lock.newCondition() for multiple independent wait sets |
| Contention visibility | None — no API to inspect waiting threads or hold count | Not applicable | Full — getQueueLength(), getHoldCount(), isLocked() for monitoring and alerting |
| Virtual thread pinning (JDK 21–23) | Yes — blocked virtual threads pin their carrier thread; fixed by JEP 491 in JDK 24 | No — no blocking, no pinning | No — virtual threads unmount cleanly when blocked on ReentrantLock |
| Typical use case | General-purpose critical sections protecting compound operations on any JDK | Single-variable counters, accumulators, and atomic flags | Complex locking with timeouts, fairness, or virtual thread compatibility on JDK 21+ |
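The LongAdder row deserves a concrete sketch (names are illustrative): under write-heavy contention, LongAdder stripes increments across internal cells so contending threads rarely hit the same CAS target, then folds the cells on read.

```java
import java.util.concurrent.atomic.LongAdder;

public class LongAdderExample {

    // Runs 'perThread' increments on each of 'threads' threads against a
    // single LongAdder and returns the folded total.
    public static long countWith(int threads, int perThread) throws InterruptedException {
        LongAdder hits = new LongAdder();
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < workers.length; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    hits.increment(); // contended writes land on different cells
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join();
        }
        // sum() folds the per-cell counts. LongAdder trades slightly more
        // expensive reads for much better write throughput under contention.
        return hits.sum();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("total = " + countWith(8, 100_000)); // prints: total = 800000
    }
}
```

Use AtomicInteger when you need the current value on every update (incrementAndGet returns it); use LongAdder when you mostly write and only occasionally read the total, such as metrics counters.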
🎯 Key Takeaways
- synchronized provides both mutual exclusion AND memory visibility via the happens-before guarantee — it solves two concurrency problems, not one. Synchronizing only writes but not reads is insufficient.
- The happens-before guarantee is the most overlooked property: without synchronized on reads, one thread's writes may be invisible to another even if the write itself was synchronized.
- Always lock on a private final Object — never 'this' (exposes lock to external code), never a mutable field (reference can change), never String literals or boxed Integers (JVM-wide sharing).
- For simple atomic operations on a single variable, AtomicInteger outperforms synchronized by using hardware CAS instructions with no lock data structure. Use LongAdder for write-heavy accumulators.
- Biased locking was deprecated and disabled in JDK 15 (JEP 374) and later removed entirely — do not rely on performance benchmarks from JDK 8 or 11. On JDK 21 through 23, synchronized causes virtual thread pinning under contention — prefer ReentrantLock for virtual-thread-aware code on those releases (JEP 491 in JDK 24 removes the pinning).
- The biggest production trap: synchronizing on the wrong object. If threads lock on different monitors, they do not block each other — data corruption occurs silently with no exception or warning.
Interview Questions on This Topic
- (Junior) What is the difference between a synchronized method and a synchronized block?
- (Junior) Can two threads execute different synchronized instance methods on the same object at the same time?
- (Mid-level) What happens when a thread calls sleep() inside a synchronized block versus wait()?
- (Mid-level) What does 'intrinsic lock' or 'monitor' mean in the context of Java synchronization?
- (Mid-level) Is the synchronized keyword reentrant? Explain why this is important for recursive calls and inheritance.
- (Senior) Explain the performance difference between lightweight locking and heavyweight locking in the JVM, and what changed in JDK 15 and JDK 21.
Frequently Asked Questions
What happens if a synchronized method throws an exception?
When an uncaught exception propagates out of a synchronized method or block, the JVM automatically releases the intrinsic lock. Other waiting threads are unblocked and can compete for the monitor. However, releasing the lock does not restore the shared state to a consistent condition — if the exception occurred mid-modification, the shared data may be partially updated. Callers who then acquire the lock will see corrupted state. The fix is to design synchronized operations so they either complete fully or leave state unchanged — prepare new state in local variables and commit to shared state only on success.
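The prepare-then-commit advice can be sketched like this (class and method names are illustrative): all fallible work happens on a local copy outside the lock, and the only step performed under the monitor is a reference assignment that cannot throw.

```java
import java.util.ArrayList;
import java.util.List;

public class PrepareThenCommit {
    private final Object lock = new Object();
    private List<String> items = new ArrayList<>();

    // If normalization threw halfway through a direct in-place update,
    // 'items' would be left partially modified and the released lock would
    // expose corrupted state. Here, all fallible work runs on a local list.
    public void replaceAll(List<String> rawInputs) {
        List<String> prepared = new ArrayList<>();
        for (String raw : rawInputs) {
            prepared.add(raw.trim().toLowerCase()); // may throw — no lock held, shared state untouched
        }
        synchronized (lock) {
            items = prepared; // commit is a single, exception-free step
        }
    }

    public int size() {
        synchronized (lock) {
            return items.size();
        }
    }
}
```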
Are static synchronized methods different from instance synchronized methods?
Yes. A static synchronized method locks on the Class object — MyClass.class — rather than on any particular instance. An instance synchronized method locks on the specific instance (this). This means one thread can execute a static synchronized method and a different thread can execute an instance synchronized method on the same class simultaneously, because they acquire different monitors. It also means all instances of a class share the same class-level lock for static synchronized methods — a class with many instances has a single contentious lock for static operations.
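A sketch of the two monitors (illustrative names, using Thread.holdsLock to show which lock each method actually holds while executing):

```java
public class StaticVsInstanceLocks {

    // Locks on StaticVsInstanceLocks.class — one lock shared by all instances.
    public static synchronized String staticCritical() {
        return "held class lock: " + Thread.holdsLock(StaticVsInstanceLocks.class);
    }

    // Locks on 'this' — a completely different monitor.
    public synchronized String instanceCritical() {
        return "held instance lock: " + Thread.holdsLock(this);
    }

    public static void main(String[] args) {
        StaticVsInstanceLocks obj = new StaticVsInstanceLocks();
        System.out.println(staticCritical());       // held class lock: true
        System.out.println(obj.instanceCritical()); // held instance lock: true
        // Inside instanceCritical(), holdsLock(StaticVsInstanceLocks.class)
        // would be false — proof that the two methods use different monitors
        // and therefore do not block each other.
    }
}
```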
Is synchronized reentrant in Java?
Yes. If a thread already holds a monitor, it can acquire the same monitor again — for example, a synchronized method calling another synchronized method on the same object — without deadlocking. The JVM maintains a hold count per thread per monitor. The first acquisition sets the count to 1, re-entry increments it, and each exit decrements it. The monitor is released only when the count reaches zero. Without reentrancy, any synchronized method that calls another synchronized method on the same object would deadlock.
When should I use synchronized versus AtomicInteger?
Use AtomicInteger for single-variable atomic operations: incrementing a counter, compare-and-setting a flag, or atomically updating one value. These use hardware CAS instructions with no lock overhead and scale better than synchronized under contention. Use synchronized when you need to protect compound operations involving multiple variables — transferring a balance between two accounts requires both accounts to be updated atomically, which a single AtomicInteger cannot provide. AtomicInteger only guarantees atomicity on a single variable — it cannot protect a multi-step sequence.
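The transfer example can be sketched as follows (class and field names are illustrative). A single lock guards the compound operation; per-account locks would also work, but would require a consistent acquisition order to avoid deadlock.

```java
public class BankTransfer {

    // Illustrative sketch: a transfer must update two balances atomically —
    // no single AtomicInteger can keep the pair consistent.
    static class Account {
        private long balance;
        Account(long initial) { balance = initial; }
    }

    private final Object transferLock = new Object(); // one lock for the compound op

    public boolean transfer(Account from, Account to, long amount) {
        synchronized (transferLock) {
            if (from.balance < amount) {
                return false;          // reject without touching either balance
            }
            from.balance -= amount;    // both updates happen under one monitor,
            to.balance += amount;      // so no thread can observe the mid-state
            return true;
        }
    }

    public long balanceOf(Account account) {
        synchronized (transferLock) {  // reads need the same monitor for visibility
            return account.balance;
        }
    }
}
```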
Does synchronized guarantee fairness?
No. When a synchronized monitor is released, any thread in the entry set may acquire it — there is no FIFO ordering guarantee. A thread that has waited for a long time can be bypassed by a thread that just arrived. This is starvation. If fairness is required, use ReentrantLock with the fair constructor: new ReentrantLock(true). A fair lock uses FIFO ordering for waiting threads. The trade-off: fair locks have higher overhead per acquisition because the scheduler must honour the queue order rather than simply waking the most convenient thread.
Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.