
synchronized Keyword in Java: Intrinsic Locks and Thread Safety

📍 Part of: Concurrency → Topic 3 of 6
Master the synchronized keyword in Java.
⚙️ Intermediate — basic Java knowledge assumed
In this tutorial, you'll learn
  • synchronized provides both mutual exclusion AND memory visibility via the happens-before guarantee — it solves two concurrency problems, not one. Synchronizing only writes but not reads is insufficient.
  • The happens-before guarantee is the most overlooked property: without synchronized on reads, one thread's writes may be invisible to another even if the write itself was synchronized.
  • Always lock on a private final Object — never 'this' (exposes lock to external code), never a mutable field (reference can change), never String literals or boxed Integers (JVM-wide sharing).
Quick Answer
  • synchronized is Java's built-in mutex — it ensures only one thread executes a critical section at a time
  • Every Java object has an intrinsic lock (monitor) — a thread must acquire it before entering a synchronized block
  • synchronized provides both mutual exclusion AND memory visibility via the happens-before guarantee
  • Performance cost: uncontended locks are cheap (a single lightweight CAS acquisition), contended locks are expensive (OS-level blocking)
  • Biggest production trap: synchronizing on the wrong object — if threads lock on different monitors, synchronization is silently bypassed with no error or warning
  • For simple atomic operations (counters, flags), AtomicInteger outperforms synchronized by avoiding lock overhead entirely via hardware CAS instructions
🚨 START HERE
Synchronization Debug Cheat Sheet
Quick commands to diagnose synchronization issues in production Java applications
🟡 Application appears hung or deadlocked
Immediate Action: Capture thread dumps to identify the deadlock chain before restarting
Commands
jstack <pid> > thread_dump.txt
jcmd <pid> Thread.print > thread_dump2.txt
Fix Now: Search for 'BLOCKED' and 'waiting to lock' in the dump — the JVM marks deadlock cycles explicitly. Identify which thread holds what monitor and draw the wait graph.
🟡 High lock contention — threads spending significant time waiting
Immediate Action: Record a Java Flight Recorder profile focused on lock events — this is the most precise tool available
Commands
jcmd <pid> JFR.start duration=60s filename=lock_profile.jfr settings=profile
jcmd <pid> JFR.dump filename=lock_profile.jfr
Fix Now: Open in JDK Mission Control — navigate to the Lock Instances view and sort by blocked time. The monitor with the highest total blocked duration is your bottleneck.
🟡 Need to see which threads hold which locks right now
Immediate Action: Get a thread dump with lock information and filter for held monitors
Commands
jstack -l <pid> | grep -A5 'locked'
jcmd <pid> Thread.print -l | grep -B3 'ownable synchronizer'
Fix Now: Map each locked monitor address to its owning thread. The thread that holds the most contested monitor is the bottleneck — look at what work it is doing inside the critical section.
Production Incident: Payment service double-charge — unsynchronized counter caused duplicate transaction IDs
A fintech payment service used a non-atomic counter to generate transaction IDs. Under peak load, two threads read the same counter value, incremented it, and wrote back the same ID. Two separate payments were assigned the same transaction ID, causing the downstream ledger to reject one and the customer to be double-charged.
Symptom: Customer support tickets spike with double-charge complaints within 20 minutes of a traffic surge. Ledger reconciliation shows duplicate transaction IDs for payments that completed successfully in the application logs. The counter value in the logs skips numbers — 1047 followed by 1049 with no 1048 — which is the telltale sign of lost updates under concurrent access.
Assumption: The counter is just an int increment — it is fast enough that threads will not collide. The team had tested the service at low concurrency during QA and never observed duplicates.
Root cause: The increment operation count++ is not atomic at the bytecode level. It compiles to three separate instructions: load count from memory, add 1 to the loaded value, store the result back to count. Two threads can interleave between any of these instructions — both reading the same value, both adding 1, both writing back the same result. One increment is silently lost. The counter was a plain int field with no synchronization, no volatile, and no AtomicInteger. The race window is nanoseconds per occurrence, but at 500 requests per second across 40 threads, it occurs multiple times per hour without any logging indication until a duplicate reaches the ledger.
Fix:
  • Immediate: replaced the plain int counter with AtomicInteger and used incrementAndGet() for all transaction ID generation — one line change, zero lock overhead, correct under arbitrary concurrency.
  • Short-term: added a database UNIQUE constraint on transaction_id so that any duplicate ID hitting the persistence layer is rejected with a constraint violation rather than silently accepted and causing a ledger inconsistency.
  • Long-term: audited all shared mutable state in the payment service for similar unsynchronized access — found three additional instances involving status flags and retry counters that were also unprotected.
Key Lesson
  • count++ is NOT atomic — it is a read-modify-write operation across three bytecode instructions, and any two threads can interleave between any pair of those instructions
  • Any shared mutable counter must use synchronized or AtomicInteger — no exceptions, and no relying on the operation being 'fast enough' that collisions seem unlikely
  • Defence in depth: even with correct synchronisation in the application layer, add a database unique constraint as a safety net for the inevitable case where some code path bypasses the synchronisation
  • Race conditions that seem impossible to reproduce in testing occur multiple times per hour in production under real concurrency — the test environment rarely matches production thread count and request rate simultaneously
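The lost-update mechanics described above can be reproduced in a few lines. This is a minimal sketch, not the incident's actual code; the class and method names are invented for illustration:

```java
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Minimal reproduction of a lost-update race: many threads incrementing
 * a plain int (unsafe) and an AtomicInteger (safe) side by side.
 * All names here are illustrative.
 */
public class LostUpdateDemo {

    private static int plainCount = 0;                                    // no synchronization: updates can be lost
    private static final AtomicInteger atomicCount = new AtomicInteger(); // CAS-based: never loses an update

    /** Runs the race and returns {plain result, atomic result}. */
    public static int[] run(int threads, int incrementsPerThread) throws InterruptedException {
        plainCount = 0;
        atomicCount.set(0);
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < incrementsPerThread; i++) {
                    plainCount++;                  // read-modify-write across three bytecode ops
                    atomicCount.incrementAndGet(); // single atomic hardware operation
                }
            });
        }
        for (Thread w : workers) w.start();
        for (Thread w : workers) w.join();         // join() establishes happens-before for the reads below
        return new int[] { plainCount, atomicCount.get() };
    }

    public static void main(String[] args) throws InterruptedException {
        int[] result = run(8, 100_000);
        System.out.println("plain int:     " + result[0] + " (expected 800000, often less)");
        System.out.println("AtomicInteger: " + result[1] + " (always 800000)");
    }
}
```

The plain counter can only lose increments, never gain them, so its result is at most the expected total; the AtomicInteger result is always exact.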
Production Debug Guide: Common symptoms when synchronized goes wrong
Throughput drops to near-zero under high concurrency — threads piling up
A synchronized block is too coarse — it covers I/O, database calls, or heavy computation that should happen outside the critical section. Profile with jstack to identify threads in BLOCKED state and see which monitor they are waiting on. Reduce the synchronized scope to only the lines that actually touch shared mutable state. Move any I/O or computation outside the lock.
Data corruption despite using synchronized — lost updates, duplicate IDs, inconsistent state
Threads are synchronizing on different objects. This is a silent failure — both paths compile and run without error, but they provide no mutual exclusion. Print System.identityHashCode(lockObject) from each synchronized block to verify they lock on the same instance. Use a single private final Object as the canonical lock for all access to the same shared data.
Application hangs completely — no progress, no errors, no timeouts
Likely deadlock. Take thread dumps immediately: jstack <pid> or jcmd <pid> Thread.print. Look for threads in BLOCKED state where thread A is waiting on a monitor held by thread B, and thread B is waiting on a monitor held by thread A. The JVM will identify the deadlock cycle in the thread dump output. Resolve by establishing a consistent lock acquisition order across all code paths.
Latency spikes that correlate with thread count increases — gets worse as traffic grows
Monitor contention is growing with thread count, which is the expected failure mode of coarse-grained synchronization. Use JFR (Java Flight Recorder) to measure lock contention time — the Java Monitor Blocked event shows exactly which monitors are hot and which threads are waiting longest. Consider replacing synchronized with ReentrantLock, lock-free data structures, or partitioned data to reduce contention surface area.
Stale data reads despite synchronized writes — reads see old values
Reads are not synchronized. The happens-before guarantee only applies when both the write and the read acquire the same lock — synchronising only the write side is insufficient. Add synchronized to the read path as well, using the same lock object. For simple boolean flags or single-variable visibility without mutual exclusion, volatile is sufficient and cheaper than synchronized.
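For the single-variable visibility case mentioned in the last item, a volatile flag is the standard cheap fix. A minimal sketch, with illustrative names:

```java
/**
 * Visibility via volatile: the reader is guaranteed to observe the writer's
 * update. With a plain boolean field, the reader could legally spin forever
 * on a stale cached value. All names here are illustrative.
 */
public class VolatileFlagDemo {

    private volatile boolean shutdownRequested = false; // volatile write/read pair gives visibility

    /** Spins until the flag is observed or the timeout expires. Returns true if observed. */
    public boolean awaitShutdown(long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!shutdownRequested) {               // volatile read: never indefinitely stale
            if (System.currentTimeMillis() > deadline) return false;
            Thread.onSpinWait();                   // hint to the CPU that this is a spin loop
        }
        return true;
    }

    public void requestShutdown() {
        shutdownRequested = true;                  // volatile write: becomes visible to all readers
    }

    public static void main(String[] args) throws Exception {
        VolatileFlagDemo demo = new VolatileFlagDemo();
        Thread reader = new Thread(() -> {
            try {
                System.out.println("reader observed flag: " + demo.awaitShutdown(5_000));
            } catch (InterruptedException ignored) { }
        });
        reader.start();
        demo.requestShutdown();
        reader.join();
    }
}
```

Note that volatile gives visibility only, not mutual exclusion: it is the right tool for a flag, and the wrong tool for count++.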

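For the deadlock symptom above, the consistent-ordering fix can be sketched as follows. The account-transfer scenario and every name here are invented for illustration; the point is that both transfer directions acquire the two monitors in the same global order, so a wait cycle cannot form:

```java
/**
 * Deadlock avoidance by consistent lock ordering. A naive transfer that
 * locks 'from' then 'to' can deadlock when two threads transfer in opposite
 * directions; transferSafe always locks the lower-id account first.
 * All names here are illustrative.
 */
public class OrderedTransfer {

    public static final class Account {
        final int id;                          // global ordering key for lock acquisition
        final Object lock = new Object();      // private monitor for this account
        private long balance;

        Account(int id, long balance) { this.id = id; this.balance = balance; }

        public long balance() {
            synchronized (lock) { return balance; } // reads take the same lock for visibility
        }
    }

    /** Safe: acquire locks in id order regardless of transfer direction. */
    public static void transferSafe(Account from, Account to, long amount) {
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        synchronized (first.lock) {            // always the lower id first
            synchronized (second.lock) {       // then the higher id: no cycle is possible
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }
}
```

With opposite-direction transfers running concurrently, both threads contend but neither can hold one lock while waiting for the other in reverse order, so the program always terminates with the total balance conserved.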
The synchronized keyword is Java's native mechanism for mutual exclusion. It solves two distinct problems that are easy to conflate but genuinely separate: race conditions (multiple threads corrupting shared state by interleaving their operations) and memory visibility (one thread's writes being invisible to other threads due to CPU caching and compiler reordering).

Every Java object carries an intrinsic lock — also called a monitor. When a thread enters a synchronized method or block, it acquires that monitor. All other threads attempting to acquire the same monitor are blocked until the owner releases it. This mechanism also establishes a happens-before relationship: all writes performed by the releasing thread are guaranteed to be visible to the thread that subsequently acquires the same monitor.

The cost is contention. Under high concurrency, synchronized blocks become bottlenecks because blocked threads consume no CPU but still delay throughput — and every context switch costs time. Understanding when synchronized is the right tool, and when lock-free alternatives like AtomicInteger, LongAdder, or concurrent collections are better choices, is what separates code that passes code review from code that runs well in production under real load.

In 2026, with virtual threads (JEP 444, stable since JDK 21) changing the cost model of blocking operations, understanding the fundamentals of Java's synchronisation model matters even more. On JDK 21 through 23, virtual threads pin to carrier threads when blocked on a synchronized monitor, which can exhaust the carrier thread pool under contention; JEP 491 (JDK 24) removed this limitation. For new concurrent code on the JDK 21 LTS, the recommendation is still to prefer ReentrantLock over synchronized when you expect contention — but you need to understand synchronized first to know when that trade-off applies.
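The ReentrantLock pattern that recommendation refers to looks roughly like the following. This is a minimal illustrative sketch, not a canonical JDK example; only java.util.concurrent.locks.ReentrantLock itself is a real API here:

```java
import java.util.concurrent.locks.ReentrantLock;

/**
 * ReentrantLock equivalent of a synchronized counter. Unlike synchronized,
 * ReentrantLock supports tryLock with a timeout and interruptible acquisition,
 * and on JDK 21-23 it does not pin virtual threads to their carriers.
 * Names here are illustrative.
 */
public class GuardedCounter {

    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();          // blocks until acquired, like entering a synchronized block
        try {
            count++;          // critical section
        } finally {
            lock.unlock();    // always unlock in finally: an exception must not leak the lock
        }
    }

    public int get() {
        lock.lock();          // reads take the same lock for the happens-before guarantee
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```

The lock()/try/finally/unlock() shape is the price of ReentrantLock's flexibility: synchronized releases its monitor automatically on every exit path, while ReentrantLock makes that your responsibility.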

What Is the synchronized Keyword and Why Does It Exist?

The synchronized keyword is Java's native implementation of an intrinsic lock, also known as a monitor. It was designed to solve two distinct concurrency problems that frequently occur together: race conditions, where multiple threads interleave their operations on shared data and corrupt it, and memory visibility, where writes by one thread are not guaranteed to be seen by other threads without explicit coordination.

Every Java object carries a monitor — a JVM-managed structure that tracks which thread currently owns the lock, which threads are blocked waiting to acquire it, and which threads are waiting inside it via wait(). When a thread enters a synchronized method or block, it acquires that object's monitor. If the monitor is free, acquisition succeeds immediately. If another thread holds it, the current thread transitions to the BLOCKED state and waits until the owner releases it on exit.

The second problem synchronized solves — memory visibility — is the one that trips up developers who think about synchronization only in terms of mutual exclusion. Modern CPUs have per-core caches, and compilers reorder instructions for performance. Without explicit synchronisation, a value written by thread A may remain in thread A's CPU cache and never be flushed to main memory. Thread B reading the same variable may get a stale cached value. synchronized solves this via the happens-before guarantee: everything thread A wrote before releasing the monitor is guaranteed visible to thread B when it acquires the same monitor. This is why synchronising only writes but not reads is always wrong — you need the happens-before relationship on both sides.

In JDK 21 through 23, there is an important interaction to be aware of with virtual threads. When a virtual thread blocks on a synchronized monitor, it pins to its carrier thread rather than unmounting and freeing the carrier. Under high contention with many virtual threads, this can exhaust the carrier thread pool. JEP 491 (JDK 24) removed this pinning, but on the JDK 21 LTS the practical guidance is to prefer ReentrantLock over synchronized in performance-sensitive code paths that may be called from virtual threads.

io/thecodeforge/concurrency/ForgeCounter.java · JAVA
package io.thecodeforge.concurrency;

import java.util.concurrent.atomic.AtomicInteger;

/**
 * Thread-safe counter implementation demonstrating three approaches:
 * synchronized method, synchronized block, and lock-free AtomicInteger.
 *
 * io.thecodeforge: Production counters should use AtomicInteger for
 * single-variable operations. synchronized is shown here for educational
 * contrast — use it when protecting multi-step compound operations.
 */
public class ForgeCounter {

    // ---- Approach 1: plain int — NOT thread-safe ----
    // count++ compiles to three bytecode instructions: load, add, store.
    // Two threads can interleave between any two of them and lose an update.
    private int unsafeCount = 0;

    // ---- Approach 2: synchronized method — thread-safe, coarser granularity ----
    private int syncCount = 0;

    // ---- Approach 3: AtomicInteger — thread-safe, no lock overhead ----
    // Preferred for single-variable counters in production.
    private final AtomicInteger atomicCount = new AtomicInteger(0);

    // ---- Private final lock object — never synchronize on 'this' in production ----
    // Using 'this' exposes your lock to external code. A private final Object
    // is invisible outside this class and its reference never changes.
    private final Object lock = new Object();

    /**
     * Synchronized method — uses the 'this' instance monitor.
     * All synchronized methods on this instance share the same lock.
     * Only one can execute at a time across all threads.
     */
    public synchronized void incrementSync() {
        syncCount++; // now atomic: only one thread enters at a time
    }

    /**
     * Synchronized block on a private lock — preferred over synchronized method.
     * Minimises the locked region: only the write to syncCount is protected.
     * Any non-shared computation happens outside the lock.
     */
    public void incrementWithBlock(String callerInfo) {
        // Non-shared computation outside the lock — no need to hold it here
        String logLine = "Increment requested by: " + callerInfo;

        synchronized (lock) {
            // Only the critical section is locked
            syncCount++;
        }

        // Logging outside the lock — no need to hold the monitor for this
        System.out.println(logLine + " completed.");
    }

    /**
     * Lock-free increment using AtomicInteger.
     * Uses a hardware CAS instruction — no lock, no blocking, scales better.
     * Preferred for simple counter increments in production.
     */
    public int incrementAtomic() {
        return atomicCount.incrementAndGet();
    }

    /**
     * Synchronized read — required if writes are synchronized.
     * Without synchronizing the read, the happens-before guarantee does not apply
     * and the reader may see a stale cached value from its CPU core.
     */
    public synchronized int getSyncCount() {
        return syncCount;
    }

    public int getAtomicCount() {
        return atomicCount.get(); // volatile read — always sees the latest value
    }

    /**
     * Demonstrates reentrancy: a thread holding the monitor can re-acquire it.
     * Without reentrancy, this call chain would deadlock.
     */
    public synchronized void reentrantOuter() {
        System.out.println("Outer: hold count is now 1");
        reentrantInner(); // re-acquires the same monitor — hold count becomes 2
        System.out.println("Outer: hold count is back to 1 after inner returns");
    }

    public synchronized void reentrantInner() {
        System.out.println("Inner: same thread re-acquired the lock without deadlock");
        // hold count returns to 1 when this method exits
    }
}
▶ Output
// Increment requested by: Thread-0 completed.
// Increment requested by: Thread-1 completed.
// Inner: same thread re-acquired the lock without deadlock
// Outer: hold count is back to 1 after inner returns
// Thread-safe: all increments are counted, no lost updates.
Mental Model
The synchronized Mental Model
synchronized is a bathroom key — only one thread holds it at a time, everyone else waits outside, and when you enter you see a fully up-to-date room, not a cached snapshot from before the last occupant.
  • Every Java object has exactly one monitor (intrinsic lock) — it does not matter how many synchronized methods or blocks reference it
  • Entering synchronized means acquiring the monitor — threads that cannot acquire it immediately transition to BLOCKED state and wait
  • Exiting synchronized — whether normally or via an uncaught exception — releases the monitor and wakes one waiting thread
  • The happens-before guarantee means all writes by the releasing thread are guaranteed visible to the next thread that acquires the same monitor
  • Reentrancy: a thread already holding a monitor can re-enter any synchronized block on the same monitor — the JVM tracks a hold count, not a simple owned/free flag
📊 Production Insight
synchronized solves two problems simultaneously: mutual exclusion (only one thread in the critical section) and memory visibility (writes are visible to the next lock holder). Most developers think only about the first.
Without synchronizing reads as well as writes, the happens-before guarantee does not protect you — thread B may read a stale cached value even if thread A's write was synchronized.
In JDK 21 through 23, virtual threads pin to carrier threads when blocking on synchronized monitors (removed by JEP 491 in JDK 24) — under high contention with many virtual threads, this can exhaust the carrier pool. Prefer ReentrantLock in virtual-thread-heavy code on those JDK versions.
Rule: always synchronize both reads and writes on the same monitor — synchronizing only the write side is a common and dangerous mistake.
🎯 Key Takeaway
synchronized provides both mutual exclusion AND memory visibility — it solves two concurrency problems with one mechanism. The happens-before guarantee is the most overlooked half: without it, reads may see stale cached values even if writes were synchronized. Reentrancy means a thread can re-enter a synchronized block on a lock it already holds — critical for any synchronized method that calls another synchronized method on the same object.
When to Use synchronized
IfShared mutable state accessed by multiple threads
UseUse synchronized or a lock-free alternative — the choice depends on whether the operation is a single variable (lock-free) or compound (synchronized)
IfSimple atomic increment, decrement, or compare-and-set on a single variable
UseUse AtomicInteger or AtomicLong — avoids lock overhead entirely via hardware CAS instructions and scales better under contention
IfCompound operations involving multiple variables or check-then-act logic
UseUse synchronized — atomic primitives only protect single-variable operations, not multi-step sequences
IfNeed timeout-based locking, interruptible waiting, or fairness guarantees
UseUse ReentrantLock — synchronized supports none of these, and on JDK 21+ ReentrantLock also avoids virtual thread pinning
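The compound check-then-act row above can be sketched both ways. This is an illustrative class, not a canonical implementation: the synchronized version makes the check and the act atomic together, while the lock-free version retries compareAndSet until it wins or the limit is reached:

```java
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Check-then-act: "increment only if below a limit" spans a read and a write,
 * so a bare AtomicInteger.incrementAndGet() cannot enforce the limit.
 * Either synchronize the whole sequence, or use a CAS retry loop.
 * Names here are illustrative.
 */
public class BoundedCounter {

    private int syncValue = 0;
    private final AtomicInteger casValue = new AtomicInteger(0);

    /** Compound operation under one monitor: check and act cannot be interleaved. */
    public synchronized boolean incrementIfBelowSync(int limit) {
        if (syncValue < limit) {   // check
            syncValue++;           // act: no other thread can run between check and act
            return true;
        }
        return false;
    }

    /** Lock-free version: retry until the CAS wins or the limit is reached. */
    public boolean incrementIfBelowCas(int limit) {
        while (true) {
            int current = casValue.get();
            if (current >= limit) return false;                            // check
            if (casValue.compareAndSet(current, current + 1)) return true; // act, atomically
            // CAS failed: another thread changed the value; re-read and retry
        }
    }

    public synchronized int syncValue() { return syncValue; }
    public int casValue() { return casValue.get(); }
}
```

Under arbitrary concurrency, both versions stop exactly at the limit; a naive get()-then-incrementAndGet() sequence would overshoot it.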

Common Mistakes and How to Avoid Them

Even experienced engineers stumble on synchronization nuances. The failures tend to be silent — the code compiles and runs, but provides no actual thread safety.

Over-synchronization is the performance failure. Locking an entire method that performs database queries, HTTP calls, or logging serialises all of that work — threads wait while one thread does work that could have happened concurrently. The fix is to minimise the locked region to only the lines that actually read or write shared mutable state. Everything else — I/O, computation, logging — should happen outside the synchronized block.

Synchronizing on the wrong object is the correctness failure, and it is the most dangerous because it is completely silent. If method A locks on 'this' and method B locks on a private Object, they protect the same shared data but use different monitors. Threads executing A and B concurrently will not block each other. No error is thrown. The data corruption happens as if there were no synchronization at all. The fix is to establish one canonical lock object for each unit of shared state and use it everywhere.

Synchronizing on 'this' exposes your lock to external code. Any caller who holds a reference to your object can synchronize on it for their own unrelated purpose, accidentally contending with your internal operations or causing deadlock. A private final Object lock is invisible outside the class and its reference never changes — these two properties are what make it safe.

Never synchronize on String literals or boxed Integer constants. Java interns String literals and caches small Integer values (-128 to 127) — these are shared across the entire JVM. Code in a completely unrelated class synchronizing on Integer.valueOf(0) is locking on the same object as your code. The resulting deadlocks are nearly impossible to diagnose because the contending code has no apparent relationship.

io/thecodeforge/concurrency/LockingPitfall.java · JAVA
package io.thecodeforge.concurrency;

import java.util.ArrayList;
import java.util.List;

/**
 * Demonstrates correct and incorrect lock objects.
 * io.thecodeforge: The lock object identity is the entire mechanism.
 * If threads lock on different objects, synchronization is silently bypassed.
 */
public class LockingPitfall {

    // ---- CORRECT: private final Object as the lock ----
    // Private: external code cannot synchronize on it.
    // Final: the reference never changes — threads always lock on the same object.
    private final Object forgeLock = new Object();
    private final List<String> sharedList = new ArrayList<>();

    /**
     * CORRECT: both methods use forgeLock — mutual exclusion is guaranteed.
     */
    public void addDataCorrect(String data) {
        synchronized (forgeLock) {
            sharedList.add(data);
        }
    }

    public int getSizeCorrect() {
        synchronized (forgeLock) {
            return sharedList.size(); // must synchronize reads too
        }
    }

    /**
     * WRONG #1: mixing 'this' and forgeLock for the same shared data.
     * addDataCorrect() and this method do NOT block each other.
     * Both can run concurrently, corrupting sharedList.
     */
    public void addDataWrongLock(String data) {
        synchronized (this) {         // different monitor from forgeLock
            sharedList.add(data);     // no mutual exclusion with addDataCorrect
        }
    }

    /**
     * WRONG #2: synchronizing on a mutable field.
     * If mutableLock is reassigned between method calls, threads lock
     * on different objects. The reassignment can happen at any time.
     */
    private Object mutableLock = new Object(); // NOT final — dangerous

    public void addDataMutableLock(String data) {
        synchronized (mutableLock) {   // mutableLock may have been reassigned
            sharedList.add(data);      // no longer safe
        }
    }

    public void reassignLock() {
        mutableLock = new Object();    // now threads will lock on different objects
    }

    /**
     * WRONG #3: synchronizing on a String literal.
     * Java interns String literals — "forge-lock" is the same object everywhere
     * in the JVM. Any other class using synchronized("forge-lock") contends
     * with this method, even if it has nothing to do with this class.
     */
    public void addDataStringLock(String data) {
        synchronized ("forge-lock") { // NEVER do this
            sharedList.add(data);
        }
    }

    /**
     * Demonstrates the lock identity diagnostic.
     * Use System.identityHashCode to verify all synchronized blocks
     * reference the same object instance.
     */
    public void diagnoseLockIdentity() {
        System.out.println("forgeLock identity: " + System.identityHashCode(forgeLock));
        System.out.println("this identity:      " + System.identityHashCode(this));
        // If these are different, mixed use of forgeLock and 'this' is a bug.
    }
}
▶ Output
// forgeLock identity: 1829164700
// this identity: 2018699554
// Different identity hashes confirm that forgeLock and 'this' are separate monitors.
// Any code path using synchronized(this) does NOT block code using synchronized(forgeLock).
⚠ Lock Identity Is the Entire Mechanism — Different Objects Mean No Synchronization
The most common production mistake with synchronized is locking on different objects for the same shared data. If method A synchronizes on 'this' and method B synchronizes on a private lock object, they protect the same data but use different monitors — threads executing A and B concurrently will not block each other. No exception is thrown. The data corruption happens silently. Use System.identityHashCode(lockObject) in a diagnostic log to verify that all synchronized blocks protecting the same data are locking on the same instance. Always use a single private final Object as the canonical lock for each unit of shared state.
📊 Production Insight
Synchronizing on 'this' exposes your lock to external code — any caller with a reference to your instance can synchronize on it for an unrelated purpose, causing accidental contention or deadlock.
Synchronizing on a mutable field means the reference can be replaced between calls — subsequent acquires lock on a different object, silently bypassing mutual exclusion.
Synchronizing on String literals or boxed Integers shares your lock with the entire JVM — Integer.valueOf(42) is the same cached object in your code, in a library, and in a framework.
Rule: one private final Object lock per logical unit of shared state — never 'this', never mutable, never interned.
🎯 Key Takeaway
Lock on a private final Object — never 'this', never a mutable field, never Strings, never boxed Integers. The identity of the lock object is the entire mechanism — if two synchronized blocks reference different objects, they provide zero mutual exclusion for each other. Use System.identityHashCode(lock) to verify lock identity when debugging unexplained data corruption despite synchronization.
Choosing the Right Lock Object
IfProtecting instance-level state from concurrent access
UseUse private final Object lock = new Object() — never synchronize on 'this', which is visible to external code
IfProtecting static (class-level) shared state
UseSynchronize on a private static final Object or on MyClass.class — class-level lock, not instance-level
IfTempted to synchronize on a String literal or a boxed Integer constant
UseNever do this — Java interns Strings and caches small Integers, sharing these instances across the entire JVM with all unrelated code
IfMultiple methods need to protect the same shared data
UseUse the same private final Object in every synchronized block that touches that data — one lock per logical data domain
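The class-level locking row above can be sketched as follows; the class and field names are illustrative:

```java
/**
 * Class-level (static) state needs a class-level lock. An instance-level
 * lock would let two different instances mutate the shared static field
 * concurrently. Names here are illustrative.
 */
public class ForgeRegistry {

    // One lock for the whole class, shared by every instance and static method.
    private static final Object CLASS_LOCK = new Object();
    private static int registeredCount = 0;

    public static void register() {
        synchronized (CLASS_LOCK) {   // not synchronized(this): there is no instance here
            registeredCount++;
        }
    }

    public static int count() {
        synchronized (CLASS_LOCK) {   // reads take the same lock for visibility
            return registeredCount;
        }
    }
}
```

Synchronizing on ForgeRegistry.class would also work (it is the monitor a static synchronized method uses implicitly), but a private static final Object keeps the lock invisible to external code, for the same reason instance code avoids locking on 'this'.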

JVM Lock Implementation: Lightweight and Heavyweight Locking

The JVM does not use a single locking strategy. It implements an escalation model that minimises the cost of synchronization when contention is low or absent, and escalates to OS-level primitives only when necessary. Understanding this model explains why synchronized is cheaper than most developers assume — until contention appears.

On JDK 14 and earlier, the JVM implemented three tiers: biased locking (zero-cost for repeated single-thread access), lightweight locking (CAS-based for low contention), and heavyweight locking (OS mutex for high contention). Biased locking was deprecated and disabled by default in JDK 15 via JEP 374, because its revocation mechanism was a source of JVM pauses and complexity that was rarely justified in modern concurrent applications; the implementation was removed entirely in JDK 18. On current JDKs, the JVM starts with lightweight locking from the first acquisition.

Lightweight locking activates when a thread acquires an uncontended monitor. The JVM stores the current thread's identity in the object's Mark Word using a Compare-And-Swap (CAS) instruction — a single atomic CPU operation that either succeeds or fails without any kernel involvement. If the CAS succeeds, the lock is acquired. If another thread attempts to acquire the lock while it is held, the JVM may spin briefly (adaptive spinning) — this is a bet that the current owner will release soon and the cost of a context switch is not worth paying.

Heavyweight locking is the fallback when adaptive spinning does not resolve contention. The JVM inflates the lock to a full OS-level mutex. The contending thread is suspended via the OS scheduler and added to a wait queue. When the owner releases the lock, the OS is asked to wake a waiting thread. Context switches cost 1–10 microseconds each, and this overhead compounds under many contending threads — each additional thread adds both its own blocking time and CPU cache pressure from the scheduling activity.

For virtual threads on JDK 21 through 23 specifically: when a virtual thread blocks on a synchronized monitor, it pins to its carrier thread. The carrier thread is blocked for the duration. If many virtual threads pin simultaneously, the ForkJoinPool carrier thread pool becomes saturated, which is functionally equivalent to running out of threads in a traditional thread pool. This is the concrete reason the JDK 21 documentation recommends ReentrantLock over synchronized for code that runs on virtual threads and expects any meaningful contention. JEP 491 (JDK 24) removed this pinning behaviour, but the guidance still applies on the JDK 21 LTS.

io/thecodeforge/concurrency/LockEscalationDemo.java · JAVA
package io.thecodeforge.concurrency;

import java.util.concurrent.locks.ReentrantLock;

/**
 * Demonstrates JVM lock escalation and the practical performance difference
 * between synchronized (blocking) and ReentrantLock (interruptible, timeout).
 *
 * io.thecodeforge: Use JFR to observe lock contention in production.
 * Command: jcmd <pid> JFR.start duration=60s filename=locks.jfr settings=profile
 */
public class LockEscalationDemo {

    private final Object monitor = new Object();
    private final ReentrantLock reentrantLock = new ReentrantLock();
    private int sharedCounter = 0;

    /**
     * Uncontended access: lightweight locking (CAS) — fast.
     * A single thread repeatedly acquiring and releasing the same lock
     * will use only CAS operations — no OS involvement.
     */
    public void singleThreadAccess() {
        for (int i = 0; i < 1_000_000; i++) {
            synchronized (monitor) {
                sharedCounter++;
            }
        }
        System.out.println("Single-thread result: " + sharedCounter);
    }

    /**
     * Contended access: JVM escalates to heavyweight locking under pressure.
     * 16 threads competing for the same monitor — context switches dominate.
     * Compare throughput to the lock-free version below.
     */
    public void heavilyContendedSync() throws InterruptedException {
        sharedCounter = 0;
        Thread[] threads = new Thread[16];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) {
                    synchronized (monitor) {
                        sharedCounter++;
                    }
                }
            }, "ForgeWorker-" + t);
        }
        long start = System.nanoTime();
        for (Thread thread : threads) thread.start();
        for (Thread thread : threads) thread.join();
        long elapsed = System.nanoTime() - start;
        System.out.printf("synchronized (16 threads): result=%d, elapsed=%dms%n",
                sharedCounter, elapsed / 1_000_000);
    }

    /**
     * ReentrantLock with tryLock — avoids blocking indefinitely.
     * If the lock is unavailable after 100ms, the thread records a miss
     * and moves on rather than blocking the caller thread.
     *
     * This is the pattern to use when synchronized's lack of timeout support
     * would cause unacceptable latency spikes under contention.
     */
    public void tryLockExample() throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            final int id = t;
            threads[t] = new Thread(() -> {
                try {
                    if (reentrantLock.tryLock(100, java.util.concurrent.TimeUnit.MILLISECONDS)) {
                        try {
                            sharedCounter++;
                            System.out.println("Thread-" + id + " acquired lock and incremented");
                        } finally {
                            reentrantLock.unlock(); // always unlock in finally
                        }
                    } else {
                        System.out.println("Thread-" + id + " could not acquire lock in 100ms — skipping");
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "ForgeTryLock-" + id);
        }
        for (Thread thread : threads) thread.start();
        for (Thread thread : threads) thread.join();
    }

    /**
     * Diagnosing lock contention at runtime.
     * Use this pattern in a health-check or metrics endpoint.
     */
    public void reportLockContention() {
        System.out.println("ReentrantLock queue length: " + reentrantLock.getQueueLength());
        System.out.println("ReentrantLock hold count:   " + reentrantLock.getHoldCount());
        System.out.println("Is lock held by any thread: " + reentrantLock.isLocked());
        // synchronized monitors cannot be inspected this way — another advantage of ReentrantLock
    }
}
▶ Output (timings and tryLock outcomes vary by run)
Single-thread result: 1000000
synchronized (16 threads): result=1600000, elapsed=312ms
Thread-0 acquired lock and incremented
Thread-1 acquired lock and incremented
Thread-2 could not acquire lock in 100ms — skipping
Thread-3 acquired lock and incremented
ReentrantLock queue length: 0
ReentrantLock hold count: 0
Is lock held by any thread: false
🔥 JVM Lock Escalation — What Changed in JDK 15 and JDK 21
Before JDK 15, the JVM had three lock tiers: biased (zero-cost for single-thread access), lightweight (CAS for low contention), and heavyweight (OS mutex for high contention). Biased locking was removed in JDK 15 (JEP 374) because its revocation mechanism caused JVM pauses and added complexity without proportional benefit in modern JVMs. On JDK 15+, the first lock acquisition uses lightweight locking (CAS) — still fast, but not zero-cost. On JDK 21+, virtual threads add a new consideration: a virtual thread blocked on a synchronized monitor pins its carrier thread, potentially exhausting the ForkJoinPool. For virtual-thread-heavy code, prefer ReentrantLock over synchronized.
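The pinning behaviour described above can be observed directly at runtime. The commands below are a sketch: `app.jar` is a placeholder for your own application, and both the `-Djdk.tracePinnedThreads` flag and the `jdk.VirtualThreadPinned` JFR event are as documented for JDK 21 (later releases may change or remove the flag).

```shell
# JDK 21: print a stack trace each time a virtual thread blocks while pinned
# inside a synchronized block. app.jar is a placeholder for your application.
java -Djdk.tracePinnedThreads=full -jar app.jar

# Alternatively, record pinning events with JFR and inspect them offline:
jcmd <pid> JFR.start duration=60s filename=pinning.jfr
jfr print --events jdk.VirtualThreadPinned pinning.jfr
```

If the trace or the recording shows frequent pinning on a hot path, that path is a candidate for converting synchronized blocks to ReentrantLock.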
📊 Production Insight
Biased locking was removed in JDK 15 — do not rely on performance benchmarks from JDK 8 or 11 that measured biased lock overhead. Re-benchmark on your target JDK.
On JDK 21+, synchronized causes virtual thread pinning under contention — this is a scalability problem specific to the virtual thread model, not a correctness problem.
Adaptive spinning is the JVM's first defence against heavyweight escalation — it helps for very short critical sections but adds CPU burn if the owner holds the lock for longer than a few microseconds.
Rule: profile first with JFR lock events before deciding synchronized is a bottleneck — the overhead is often not where you expect it.
🎯 Key Takeaway
The JVM implements lock escalation from lightweight CAS to heavyweight OS mutex based on contention — this is invisible in code but determines whether your synchronization costs nanoseconds or microseconds per acquisition. On JDK 15+, biased locking is gone — re-benchmark if your performance assumptions came from older JDK measurements. On JDK 21+, synchronized causes virtual thread pinning under contention — prefer ReentrantLock for code that runs on virtual threads.
Lock Strategy by Contention Level and JDK Version
  • If: single thread or effectively uncontended access on any JDK → Use: synchronized with lightweight CAS — near-zero overhead, correct, and the simplest code
  • If: low contention (2–4 threads, short critical sections under 1ms) → Use: synchronized — CAS contention is manageable and code complexity stays low
  • If: high contention (16+ threads, or critical sections with I/O) on traditional threads → Use: ReentrantLock with tryLock(timeout) — reduces blocking time and provides contention metrics via getQueueLength()
  • If: any contention on code called from virtual threads (JDK 21+) → Use: ReentrantLock — synchronized pins virtual threads to carrier threads and can exhaust the ForkJoinPool under load
🗂 Synchronized vs Lock-Free Alternatives
Choosing the right concurrency primitive for your use case
| Aspect | synchronized | AtomicInteger / LongAdder | ReentrantLock |
| --- | --- | --- | --- |
| Mutual exclusion | Yes — one thread in the critical section at a time | No — single-variable atomicity only, not multi-step operations | Yes — same mutual exclusion guarantee as synchronized |
| Memory visibility | Yes — full happens-before guarantee on acquire and release | Yes — volatile semantics on every read and write | Yes — full happens-before guarantee on lock and unlock |
| Performance (uncontended) | Fast — lightweight CAS on JDK 15+ (not zero-cost but close) | Fastest — single CAS instruction, no lock data structure | Similar to synchronized — slightly more overhead from the AQS framework |
| Performance (high contention) | Degrades — heavyweight OS blocking, context switches compound | Scales well — LongAdder stripes across CPU cores to reduce CAS contention | Similar degradation to synchronized, but tryLock avoids indefinite blocking |
| Timeout support | No — blocked threads wait indefinitely | Not applicable — no blocking | Yes — tryLock(long, TimeUnit) returns false instead of blocking |
| Interruptible | No — BLOCKED threads cannot be interrupted | Not applicable | Yes — lockInterruptibly() responds to Thread.interrupt() |
| Fairness policy | No guarantee — any waiting thread may acquire after release | Not applicable | Configurable — new ReentrantLock(true) enforces FIFO ordering (with overhead) |
| Condition variables | Built-in — wait(), notify(), notifyAll() on any object | Not applicable | Explicit — lock.newCondition() for multiple independent wait sets |
| Contention visibility | None — no API to inspect waiting threads or hold count | Not applicable | Full — getQueueLength(), getHoldCount(), isLocked() for monitoring and alerting |
| Virtual thread pinning (JDK 21+) | Yes — blocked virtual threads pin their carrier thread | No — no blocking, no pinning | No — virtual threads unmount cleanly when blocked on ReentrantLock |
| Typical use case | General-purpose critical sections protecting compound operations on any JDK | Single-variable counters, accumulators, and atomic flags | Complex locking with timeouts, fairness, or virtual thread compatibility on JDK 21+ |
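The counting rows of the comparison can be exercised side by side. This sketch (class name and iteration counts are illustrative) runs the same 8-thread increment workload against a synchronized block, an AtomicLong, and a LongAdder; all three finish at the same total and differ only in how they absorb contention.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

public class CounterChoices {
    static final Object LOCK = new Object();
    static long syncCount = 0;
    static final AtomicLong atomicCount = new AtomicLong();
    static final LongAdder adderCount = new LongAdder();

    // Run the given task on 8 threads and wait for all of them to finish.
    static void onEightThreads(Runnable task) throws InterruptedException {
        Thread[] threads = new Thread[8];
        for (int t = 0; t < threads.length; t++) threads[t] = new Thread(task);
        for (Thread th : threads) th.start();
        for (Thread th : threads) th.join();
    }

    public static void main(String[] args) throws InterruptedException {
        onEightThreads(() -> {
            for (int i = 0; i < 50_000; i++) {
                synchronized (LOCK) { syncCount++; }  // mutual exclusion per increment
            }
        });
        onEightThreads(() -> {
            for (int i = 0; i < 50_000; i++) atomicCount.incrementAndGet(); // one CAS per increment
        });
        onEightThreads(() -> {
            for (int i = 0; i < 50_000; i++) adderCount.increment(); // striped cells, summed on read
        });
        // All three counted 8 × 50,000 = 400,000.
        System.out.println(syncCount + " " + atomicCount.get() + " " + adderCount.sum());
    }
}
```

Wrap this in a proper JMH benchmark before drawing performance conclusions; a plain loop like this only demonstrates correctness, not relative throughput.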

🎯 Key Takeaways

  • synchronized provides both mutual exclusion AND memory visibility via the happens-before guarantee — it solves two concurrency problems, not one. Synchronizing only writes but not reads is insufficient.
  • The happens-before guarantee is the most overlooked property: without synchronized on reads, one thread's writes may be invisible to another even if the write itself was synchronized.
  • Always lock on a private final Object — never 'this' (exposes lock to external code), never a mutable field (reference can change), never String literals or boxed Integers (JVM-wide sharing).
  • For simple atomic operations on a single variable, AtomicInteger outperforms synchronized by using hardware CAS instructions with no lock data structure. Use LongAdder for write-heavy accumulators.
  • Biased locking was removed in JDK 15 — do not rely on performance benchmarks from JDK 8 or 11. On JDK 21+, synchronized causes virtual thread pinning under contention — prefer ReentrantLock for virtual-thread-aware code.
  • The biggest production trap: synchronizing on the wrong object. If threads lock on different monitors, they do not block each other — data corruption occurs silently with no exception or warning.

⚠ Common Mistakes to Avoid

    Over-synchronizing — locking the entire method when only two lines need protection
    Symptom

    Throughput drops proportionally as thread count increases — adding threads makes things worse rather than better. Threads spend 90%+ of their time in BLOCKED state waiting for a lock that is held while doing I/O, logging, or long-running computation. API p99 latency spikes under moderate load.

    Fix

    Reduce the synchronized scope to only the critical section — the lines that read or write shared mutable state. Move I/O, computation, and logging outside the synchronized block. Use a synchronized block rather than a synchronized method so you can control the exact scope. The goal is to hold the lock for the shortest time the correctness of the operation allows.
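A sketch of that fix (the class and its expensiveTransform method are invented for illustration): only the two lines that touch shared mutable state sit inside the synchronized block, while the computation and the I/O run without the lock held.

```java
public class NarrowScope {
    private final Object lock = new Object();
    private long total = 0;

    public void record(int rawValue) {
        int processed = expensiveTransform(rawValue); // no lock held during computation
        long snapshot;
        synchronized (lock) {                         // critical section: two lines only
            total += processed;
            snapshot = total;
        }
        System.out.println("total is now " + snapshot); // I/O outside the lock
    }

    public long total() {
        synchronized (lock) { return total; }         // reads must synchronize too
    }

    private int expensiveTransform(int v) {
        return v * 2;                                 // stand-in for real work
    }
}
```

Note the snapshot variable: the value is captured inside the lock so the log line is consistent, but the logging itself happens after release.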

    Synchronizing on different objects for the same shared data
    Symptom

    Data corruption persists despite synchronized blocks. Lost updates, duplicate IDs, or inconsistent state between related fields. Thread dumps show threads acquiring monitors at the same time that should be mutually exclusive — they are not blocking each other because they are locking on different instances.

    Fix

    Use a single private final Object as the canonical lock for all access to the same shared data. Audit every code path that reads or writes the shared state — they must all synchronize on the same lock instance. Use System.identityHashCode(lock) as a diagnostic to verify lock identity during debugging.
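A minimal sketch of that diagnostic (class and method names invented): every code path that claims to guard the same shared data should report the same System.identityHashCode for its lock.

```java
public class LockIdentityCheck {
    // One canonical lock for all access to the shared data.
    // BUG pattern for contrast: synchronized (new Object()) { ... } protects
    // nothing, because every caller locks a different monitor.
    private final Object canonicalLock = new Object();

    public int lockIdSeenByWriter() {
        return System.identityHashCode(canonicalLock);
    }

    public int lockIdSeenByReader() {
        return System.identityHashCode(canonicalLock); // same field, same identity
    }

    public static void main(String[] args) {
        LockIdentityCheck check = new LockIdentityCheck();
        // If this ever prints false, the two paths lock different monitors.
        System.out.println(check.lockIdSeenByWriter() == check.lockIdSeenByReader());
    }
}
```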

    Synchronizing on 'this' and exposing the lock to external code
    Symptom

    Deadlock occurs when external code synchronizes on your object instance for an unrelated reason — two subsystems that have nothing to do with each other accidentally contend on the same monitor because both use synchronized(sameInstance). The deadlock is intermittent and nearly impossible to reproduce under controlled conditions.

    Fix

    Replace synchronized(this) and synchronized methods with synchronized blocks on a private final Object. The lock becomes invisible to external code — they cannot reference it, and therefore cannot accidentally contend on it or cause deadlock through unsolicited synchronization.

    Synchronizing on String literals or boxed Integer constants
    Symptom

    Unrelated parts of the application block each other with no apparent connection. Thread dumps show threads waiting on a monitor whose owner is doing something completely unrelated to the waiting thread's operation. Intermittent deadlocks that are impossible to reproduce in isolation because the bug requires both code paths to be active simultaneously.

    Fix

    Never synchronize on interned String objects or cached Integer values (-128 to 127). The JVM shares these instances across the entire process — your lock is shared with every other class and framework that happens to use the same literal. Always use a dedicated private final Object created specifically for your locking purpose.
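The sharing is easy to verify. This sketch (class name invented) shows that two independently declared "CONFIG" literals are one interned object, that the Integer cache covers only the small-value range, and that a dedicated Object never collides. The last assertion assumes the default autobox cache size.

```java
public class SharedLockHazard {
    // Two unrelated classes that both write synchronized ("CONFIG")
    // would contend on this single interned instance, JVM-wide.
    static final String LOCK_A = "CONFIG";
    static final String LOCK_B = "CONFIG";

    public static void main(String[] args) {
        System.out.println(LOCK_A == LOCK_B);                               // true: one interned object
        System.out.println(Integer.valueOf(100) == Integer.valueOf(100));   // true: cached (-128..127)
        System.out.println(Integer.valueOf(1000) == Integer.valueOf(1000)); // false: outside the cache
        System.out.println(new Object() == new Object());                   // false: dedicated locks are distinct
    }
}
```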

    Using synchronized when AtomicInteger or LongAdder would suffice
    Symptom

    Unnecessary lock contention on a simple counter or flag. Under high thread counts, the synchronized block becomes a bottleneck even though the protected operation is a single increment or compare-and-set that a hardware CAS instruction handles in nanoseconds without any lock.

    Fix

    Replace synchronized int counters with AtomicInteger.incrementAndGet(). Replace synchronized boolean flags with AtomicBoolean.compareAndSet(). Replace synchronized accumulators that are written from many threads with LongAdder — it stripes updates across internal cells (roughly one per contending core) to reduce CAS contention, then sums them on read. LongAdder outperforms AtomicLong significantly under write-heavy workloads.
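A sketch of those three replacements (field names invented for illustration):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.LongAdder;

public class AtomicReplacements {
    static final AtomicInteger requestCount = new AtomicInteger(); // was: synchronized int++
    static final AtomicBoolean started = new AtomicBoolean(false); // was: synchronized boolean flag
    static final LongAdder bytesWritten = new LongAdder();         // was: synchronized accumulator

    public static void main(String[] args) {
        requestCount.incrementAndGet();                          // atomic increment, no lock
        boolean wonStartup = started.compareAndSet(false, true); // exactly one caller wins
        boolean secondTry = started.compareAndSet(false, true);  // already set, so this fails
        bytesWritten.add(1024);                                  // striped under write-heavy load
        bytesWritten.add(512);
        System.out.println(requestCount.get() + " " + wonStartup + " "
                + secondTry + " " + bytesWritten.sum());         // prints: 1 true false 1536
    }
}
```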

    Not handling exceptions inside synchronized blocks — leaving shared state partially modified
    Symptom

    Shared state is left in an inconsistent or half-modified condition after an exception. The lock is released automatically, so other threads can enter and observe corrupted state. Subsequent operations on the corrupted state cause cascading failures that are difficult to trace back to the original exception.

    Fix

    Design operations inside synchronized blocks to either complete fully or leave state unchanged — use local variables to prepare the new state and assign to the shared variable only on success. For complex multi-step operations, consider rollback logic in the catch block. Where immutability is feasible, prefer immutable data structures — partial modification becomes structurally impossible.
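A sketch of the prepare-then-commit shape (the class is invented): all mutation happens on a local copy, and the shared field changes only via one reference assignment that runs after nothing can fail.

```java
import java.util.ArrayList;
import java.util.List;

public class PrepareThenCommit {
    private final Object lock = new Object();
    private List<String> entries = new ArrayList<>();

    public void addAll(List<String> newEntries) {
        synchronized (lock) {
            List<String> next = new ArrayList<>(entries); // prepare on a local copy
            for (String e : newEntries) {
                if (e == null || e.isBlank()) {
                    // Fails mid-operation, but only the local copy was touched:
                    // the shared 'entries' list is exactly as it was.
                    throw new IllegalArgumentException("blank entry");
                }
                next.add(e);
            }
            entries = next; // commit: one assignment, only on full success
        }
    }

    public int size() {
        synchronized (lock) { return entries.size(); }
    }
}
```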

Interview Questions on This Topic

  • QWhat is the difference between a synchronized method and a synchronized block?JuniorReveal
    A synchronized method locks on the entire method body using 'this' as the monitor for instance methods, or the Class object for static methods. A synchronized block locks on a specific object chosen by the developer and only covers the code inside the block. Synchronized blocks are preferred in production code for two reasons: they minimise lock hold time (you lock only the critical section, not the entire method including I/O or computation), and they allow you to choose which object to lock on — typically a private final Object rather than 'this'. Using 'this' in a synchronized method exposes your lock to external code, which can synchronize on your instance for unrelated reasons and cause contention or deadlock.
  • QCan two threads execute different synchronized instance methods on the same object at the same time?JuniorReveal
    No. All synchronized instance methods on the same object acquire the same monitor — the 'this' instance's intrinsic lock. When thread A enters synchronized method foo() on an object, it acquires that object's monitor. Thread B attempting to enter synchronized method bar() on the same object will be placed in BLOCKED state until thread A exits foo() and releases the monitor. This is the fundamental property of intrinsic locks: one monitor, one owner at a time, regardless of which synchronized method is being invoked. This is also why over-synchronizing entire methods is a performance problem — all synchronized methods serialise against each other even when they protect unrelated state.
  • QWhat happens when a thread calls sleep() inside a synchronized block versus wait()?Mid-levelReveal
    sleep() does NOT release the monitor. The thread holds the intrinsic lock while sleeping, blocking every other thread from entering any synchronized block on the same monitor for the entire sleep duration. This is almost always a bug — it turns a brief critical section into a multi-second bottleneck. wait() DOES release the monitor. The thread atomically releases the lock and adds itself to the object's wait set. Other threads can then acquire the monitor and do their work. When notify() or notifyAll() is called, the waiting thread moves from the wait set to the entry set, waits to re-acquire the monitor, and then continues execution after the wait() call. The key distinction: Thread.sleep() is a Thread-level operation that is completely unaware of monitors. Object.wait() is a monitor-level operation that explicitly coordinates with the lock mechanism.
  • QWhat does 'intrinsic lock' or 'monitor' mean in the context of Java synchronization?Mid-levelReveal
    Every Java object has an associated intrinsic lock — also called a monitor. It is a data structure maintained by the JVM that tracks three things: which thread currently owns the lock, a wait set of threads that called wait() and are suspended until notified, and an entry set of threads that are BLOCKED waiting to acquire the lock. When you write synchronized(obj), the JVM attempts to associate the current thread with obj's monitor. If the monitor is free, the association succeeds and the thread enters the critical section. If another thread owns it, the current thread is placed in the entry set and suspended. When the owner exits the synchronized block, it releases the monitor and the JVM selects a thread from the entry set to compete for ownership. This entire mechanism is what makes mutual exclusion possible at the JVM level.
  • QIs the synchronized keyword reentrant? Explain why this is important for recursive calls and inheritance.Mid-levelReveal
    Yes, synchronized is reentrant. If a thread already holds a monitor, it can re-enter any synchronized block on the same monitor without blocking. The JVM maintains a hold count per thread per monitor: the first acquisition sets the count to 1, each re-entry increments it, and each exit decrements it. The monitor is fully released only when the count reaches zero. Reentrancy is critical in two scenarios. First, recursive synchronized methods would deadlock without it — the method would attempt to acquire a monitor it already holds and wait for itself forever. Second, inheritance — if a subclass overrides a synchronized method and calls super.method(), the same thread re-enters the same monitor. Without reentrancy, the super call would deadlock. ReentrantLock has the same reentrancy property by design, but adds the ability to introspect the hold count via getHoldCount().
  • QExplain the performance difference between lightweight locking and heavyweight locking in the JVM, and what changed in JDK 15 and JDK 21.SeniorReveal
    The JVM implements lock escalation to minimise the cost of synchronization under low or no contention. Lightweight locking uses a Compare-And-Swap (CAS) operation on the object's Mark Word — a single atomic CPU instruction that requires no OS involvement. If the CAS succeeds, the lock is acquired in nanoseconds. If it fails because another thread holds the lock, the JVM may adaptive-spin briefly before escalating. Heavyweight locking inflates the lock to a full OS-level mutex — the contending thread is suspended by the OS scheduler and added to a kernel-managed wait queue. Resuming it requires a context switch costing 1–10 microseconds. Under many contending threads, context switches dominate and throughput collapses. In JDK 15, biased locking was removed via JEP 374 — it was a zero-cost optimization for single-thread access but its revocation mechanism caused JVM pauses and complexity. All lock acquisitions now start with lightweight CAS. In JDK 21, virtual threads introduced a new consideration: a virtual thread blocked on a synchronized monitor pins to its carrier thread for the blocking duration, potentially exhausting the ForkJoinPool carrier pool under contention. For virtual-thread-heavy code, ReentrantLock is now recommended over synchronized when any contention is expected.
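The sleep-versus-wait distinction from the questions above fits in a few lines. In this sketch (class name invented), the waiter releases the monitor inside wait(), which is the only reason the signalling thread can enter its own synchronized block; with Thread.sleep() in place of wait(), the signaller could never acquire the monitor and the program would hang.

```java
public class WaitReleasesMonitor {
    private final Object monitor = new Object();
    private boolean ready = false;

    public void awaitReady() throws InterruptedException {
        synchronized (monitor) {
            while (!ready) {       // guard against spurious wakeups
                monitor.wait();    // atomically releases the monitor and waits
            }
        }
    }

    public void signalReady() {
        synchronized (monitor) {   // acquirable only because the waiter released it
            ready = true;
            monitor.notifyAll();   // moves waiters from the wait set to the entry set
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WaitReleasesMonitor gate = new WaitReleasesMonitor();
        Thread waiter = new Thread(() -> {
            try {
                gate.awaitReady();
                System.out.println("waiter resumed");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        waiter.start();
        Thread.sleep(100);         // give the waiter time to reach wait()
        gate.signalReady();
        waiter.join();
    }
}
```

The while loop around wait() also works if signalReady() happens to run first: the waiter then sees ready == true and never waits at all.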

Frequently Asked Questions

What happens if a synchronized method throws an exception?

When an uncaught exception propagates out of a synchronized method or block, the JVM automatically releases the intrinsic lock. Other waiting threads are unblocked and can compete for the monitor. However, releasing the lock does not restore the shared state to a consistent condition — if the exception occurred mid-modification, the shared data may be partially updated. Callers who then acquire the lock will see corrupted state. The fix is to design synchronized operations so they either complete fully or leave state unchanged — prepare new state in local variables and commit to shared state only on success.

Are static synchronized methods different from instance synchronized methods?

Yes. A static synchronized method locks on the Class object — MyClass.class — rather than on any particular instance. An instance synchronized method locks on the specific instance (this). This means one thread can execute a static synchronized method and a different thread can execute an instance synchronized method on the same class simultaneously, because they acquire different monitors. It also means all instances of a class share the same class-level lock for static synchronized methods — a class with many instances has a single contentious lock for static operations.
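A sketch of the two monitors (class name invented): holding the class-level lock does not block an instance-level synchronized call, because they acquire different monitors.

```java
public class TwoMonitors {
    static int staticOps = 0;
    int instanceOps = 0;

    public static synchronized void staticOp() {  // locks TwoMonitors.class
        staticOps++;
    }

    public synchronized void instanceOp() {       // locks this
        instanceOps++;
    }

    public static void main(String[] args) {
        TwoMonitors t = new TwoMonitors();
        synchronized (TwoMonitors.class) {
            // The class monitor is held here, but the instance monitor is
            // free: this call proceeds without blocking.
            t.instanceOp();
        }
        staticOp();
        System.out.println(staticOps + " " + t.instanceOps); // prints: 1 1
    }
}
```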

Is synchronized reentrant in Java?

Yes. If a thread already holds a monitor, it can acquire the same monitor again — for example, a synchronized method calling another synchronized method on the same object — without deadlocking. The JVM maintains a hold count per thread per monitor. The first acquisition sets the count to 1, re-entry increments it, and each exit decrements it. The monitor is released only when the count reaches zero. Without reentrancy, any synchronized method that calls another synchronized method on the same object would deadlock.
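The hold count is easiest to see with a recursive synchronized method. A sketch (class invented): each recursive call re-enters the monitor the thread already owns, so the recursion completes instead of deadlocking against itself.

```java
public class ReentrancyDemo {
    public synchronized long factorial(int n) {
        // Thread.holdsLock(this) is true at every recursion depth: the same
        // thread re-acquires the monitor it already holds, incrementing the
        // hold count rather than blocking.
        if (!Thread.holdsLock(this)) throw new IllegalStateException();
        return n <= 1 ? 1 : n * factorial(n - 1); // re-entrant acquisition
    }

    public static void main(String[] args) {
        System.out.println(new ReentrancyDemo().factorial(5)); // prints: 120
    }
}
```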

When should I use synchronized versus AtomicInteger?

Use AtomicInteger for single-variable atomic operations: incrementing a counter, compare-and-setting a flag, or atomically updating one value. These use hardware CAS instructions with no lock overhead and scale better than synchronized under contention. Use synchronized when you need to protect compound operations involving multiple variables — transferring a balance between two accounts requires both accounts to be updated atomically, which a single AtomicInteger cannot provide. AtomicInteger only guarantees atomicity on a single variable — it cannot protect a multi-step sequence.
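A sketch of the compound case (class and field names invented): the invariant is that the two balances always sum to the same total, which requires both updates inside one critical section. No single atomic variable can provide that.

```java
public class AccountTransfer {
    private final Object lock = new Object();
    private int checking = 100;
    private int savings = 0;

    public boolean transferToSavings(int amount) {
        synchronized (lock) {            // both updates in one critical section
            if (checking < amount) return false;
            checking -= amount;
            savings += amount;
            return true;                 // invariant held: the sum is unchanged
        }
    }

    public int total() {
        synchronized (lock) { return checking + savings; } // always 100 here
    }
}
```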

Does synchronized guarantee fairness?

No. When a synchronized monitor is released, any thread in the entry set may acquire it — there is no FIFO ordering guarantee. A thread that has waited for a long time can be bypassed by a thread that just arrived. This is starvation. If fairness is required, use ReentrantLock with the fair constructor: new ReentrantLock(true). A fair lock uses FIFO ordering for waiting threads. The trade-off: fair locks have higher overhead per acquisition because the scheduler must honour the queue order rather than simply waking the most convenient thread.
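Fairness is fixed at construction and queryable afterwards; it cannot be toggled on an existing lock. A minimal sketch:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessChoice {
    public static void main(String[] args) {
        ReentrantLock barging = new ReentrantLock();  // default: non-fair, higher throughput
        ReentrantLock fifo = new ReentrantLock(true); // fair: waiters acquire in FIFO order
        System.out.println(barging.isFair() + " " + fifo.isFair()); // prints: false true
    }
}
```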

🔥
Naren · Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.

← Previous: Java Executor Service and Thread Pools · Next: Java Locks and ReentrantLock →
Forged with 🔥 at TheCodeForge.io — Where Developers Are Forged