
Java Synchronization Deep Dive — Locks, Monitors and Memory Visibility

In Plain English 🔥
Imagine a single bathroom in a busy office. If two people walk in at the same time, chaos happens. So you put a lock on the door — one person goes in, locks it, does their thing, then unlocks it for the next person. Java synchronization is exactly that lock for your data. Without it, multiple threads crash into each other's work and corrupt everything silently.

Every production Java system eventually faces the same invisible enemy: two threads touching shared data at the exact same moment. The symptoms are maddening — a counter that's off by one, a bank balance that quietly goes negative, a cache that returns stale data for a random 0.1% of requests. These bugs don't crash your app loudly; they corrupt it silently, only surfacing in production under load, impossible to reproduce in your IDE. That's what makes concurrency bugs the most expensive kind.

How the JVM Monitor Actually Works Under the Hood

Every Java object carries an invisible header — roughly 8 to 16 bytes depending on your JVM and its flags — that contains what's called a mark word. That mark word encodes the object's identity hash code, GC age, and, critically for us, its lock state. When a thread enters a synchronized block, the JVM doesn't immediately go to the OS for a heavyweight mutex. On JVMs with biased locking (the default until JDK 15, when JEP 374 deprecated and disabled it), it first tries a biased lock — it literally writes the thread ID into the mark word and assumes ownership. If that same thread comes back, it re-enters for free: zero CAS operations, zero OS involvement.

If a second thread shows up and contends for the lock, the JVM upgrades to a thin lock using a Compare-And-Swap (CAS) on the mark word. Still no OS involvement — pure user-space spin. Only when contention is high does it escalate to a fat lock (an inflated monitor object backed by a real OS mutex), which is expensive because it can cause a thread context switch.

Understanding this escalation path matters in production. It's why briefly-held locks on uncontended objects are nearly free, but high-contention synchronized blocks can devastate throughput. The JVM can never downgrade from a fat lock back to biased locking on the same object without a Stop-The-World safepoint — a painful detail that affects long-running server applications.

MonitorLockDemo.java · JAVA
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Demonstrates the real cost of monitor contention.
 * Run this and compare the timing between the contended
 * and uncontended scenarios to see the fat-lock overhead.
 */
public class MonitorLockDemo {

    // Shared counter — this is our 'bathroom everyone wants to use'
    private int ticketsSold = 0;

    // The synchronized method — one thread at a time enters this 'room'
    public synchronized void sellTicket() {
        // Everything inside here is protected by 'this' object's monitor.
        // The JVM ensures mutual exclusion across all threads calling this method.
        ticketsSold++;
    }

    public int getTicketsSold() {
        // Safe to read after the latch opens: await() happens-after every countDown(),
        // so all the worker threads' writes are visible to the main thread.
        return ticketsSold;
    }

    public static void main(String[] args) throws InterruptedException {
        MonitorLockDemo box = new MonitorLockDemo();
        int threadCount = 10;
        int salesPerThread = 100_000;

        ExecutorService pool = Executors.newFixedThreadPool(threadCount);
        // CountDownLatch lets us wait for ALL threads to finish before reading results
        CountDownLatch allDone = new CountDownLatch(threadCount);

        long startTime = System.currentTimeMillis();

        for (int i = 0; i < threadCount; i++) {
            pool.submit(() -> {
                for (int sale = 0; sale < salesPerThread; sale++) {
                    box.sellTicket(); // Each call must acquire and release the monitor
                }
                allDone.countDown(); // Signal: this thread finished its work
            });
        }

        allDone.await(); // Block main thread until all seller threads are done
        long elapsed = System.currentTimeMillis() - startTime;

        System.out.println("Expected tickets sold : " + (threadCount * salesPerThread));
        System.out.println("Actual tickets sold   : " + box.getTicketsSold());
        System.out.println("Time taken            : " + elapsed + "ms");
        // Without synchronization, 'actual' would be LESS than 'expected'
        // because threads would overwrite each other's increments

        pool.shutdown();
    }
}
▶ Output
Expected tickets sold : 1000000
Actual tickets sold : 1000000
Time taken : 312ms
🔥
JVM Internals: On JDK 8 you can observe lock inflation in action by running your JVM with -XX:+PrintSafepointStatistics and -XX:+TraceBiasedLocking (on JDK 11+ use -Xlog:safepoint instead, and note that biased locking itself was deprecated and disabled by default in JDK 15). In high-contention apps you'll see surprising safepoint pauses caused entirely by biased lock revocation — a real production performance trap.

volatile vs synchronized — They Solve Different Problems

This is the most dangerously misunderstood topic in Java concurrency. Developers often reach for volatile as a 'lightweight synchronized' and ship race conditions to production. Let's be precise about what each one actually guarantees.

volatile gives you two things: visibility and ordering. A write to a volatile variable is guaranteed to be visible to every subsequent read of that variable by any thread, and it establishes a happens-before relationship — all writes made before the volatile write become visible to any thread that subsequently reads the volatile variable. What volatile does NOT give you is atomicity. For plain (non-volatile) long and double fields, the JMM even allows a 64-bit value to be read or written as two separate 32-bit operations on 32-bit JVMs — a torn read; marking the field volatile is exactly what restores atomic 64-bit access. More critically, volatile doesn't protect compound actions like check-then-act (if count == 0 then reset it) or read-modify-write (count++) — those sequences are still race conditions.

synchronized gives you atomicity, visibility, AND mutual exclusion. Only one thread can execute the synchronized block at a time. The memory semantics are stronger: entering a synchronized block refreshes all variables from main memory; exiting flushes all writes. Use volatile for simple boolean flags and single-variable state changes where atomicity isn't needed. Use synchronized (or AtomicXxx classes) the moment you have a compound action.
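For the case where volatile genuinely is the right tool — a single flag written by one thread and read by another, with no compound check-then-act — a minimal sketch (class and field names are illustrative):

```java
import java.util.concurrent.TimeUnit;

public class ShutdownFlagDemo {
    // volatile fits here: one writer, one reader, a single boolean, no compound action.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long iterations = 0;
            while (running) {   // every loop iteration re-reads the latest value
                iterations++;
            }
            System.out.println("Worker stopped after " + iterations + " iterations");
        });
        worker.start();

        TimeUnit.MILLISECONDS.sleep(100);
        running = false;        // guaranteed visible to the worker thread
        worker.join(1000);
        System.out.println("Worker alive? " + worker.isAlive()); // prints false
    }
}
```

Without volatile, the JIT is allowed to hoist the read of `running` out of the loop, and the worker may spin forever.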

VolatileVsSynchronized.java · JAVA
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Proves that volatile does NOT fix compound-action race conditions.
 * The check-then-act pattern is unsafe even with volatile.
 */
public class VolatileVsSynchronized {

    // volatile ensures visibility: every thread sees the latest value.
    // But it does NOT make the increment operation atomic!
    private volatile int volatileCounter = 0;

    // This is the correct tool for atomic compound actions
    private int synchronizedCounter = 0;

    // WRONG: looks safe, isn't. The read and write are two separate operations.
    // Between the read and write, another thread can swoop in and also read the
    // old value, causing both threads to write the same incremented value — a lost update.
    public void unsafeIncrement() {
        volatileCounter++; // This compiles to: READ volatileCounter, ADD 1, WRITE back
                           // Three steps — not atomic even with volatile!
    }

    // RIGHT: synchronized turns the entire read-modify-write into one atomic operation.
    // No other thread can enter while we're inside here.
    public synchronized void safeIncrement() {
        synchronizedCounter++;
    }

    public static void main(String[] args) throws InterruptedException {
        VolatileVsSynchronized demo = new VolatileVsSynchronized();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        int iterations = 50_000;

        for (int i = 0; i < 8; i++) {
            pool.submit(() -> {
                for (int j = 0; j < iterations; j++) {
                    demo.unsafeIncrement();
                    demo.safeIncrement();
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);

        int expected = 8 * iterations;
        System.out.println("Expected count             : " + expected);
        // volatile counter will almost certainly be LESS than expected
        // because increments were lost in concurrent reads
        System.out.println("volatile counter (UNSAFE)  : " + demo.volatileCounter);
        // synchronized counter will always equal expected
        System.out.println("synchronized counter (SAFE): " + demo.synchronizedCounter);

        boolean volatileFailed = demo.volatileCounter < expected;
        System.out.println("\nvolatile lost updates      : " + volatileFailed);
    }
}
▶ Output
Expected count : 400000
volatile counter (UNSAFE) : 387431
synchronized counter (SAFE): 400000

volatile lost updates : true
⚠️
Watch Out: The Java Memory Model permits reads and writes of plain (non-volatile) long and double to be non-atomic on 32-bit platforms. This means a 64-bit value can be 'torn' — you read the high 32 bits from one write and the low 32 bits from another. Declaring the field volatile guarantees atomic 64-bit reads and writes; for atomic compound updates like increments, reach for AtomicLong or synchronized.
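To make the tradeoff concrete, here's a small sketch contrasting the three options for a shared 64-bit counter (names are illustrative; for counters, AtomicLong is usually the right default):

```java
import java.util.concurrent.atomic.AtomicLong;

public class SharedCounters {
    // Plain long: on 32-bit JVMs, reads/writes may tear into two 32-bit halves.
    private long unsafeTotal;

    // volatile long: the JMM guarantees atomic 64-bit reads and writes —
    // but increments are still a non-atomic read-modify-write sequence.
    private volatile long visibleTotal;

    // AtomicLong: atomic reads, writes, AND increments via CAS — no lock needed.
    private final AtomicLong atomicTotal = new AtomicLong();

    public void record(long amount) {
        atomicTotal.addAndGet(amount); // lock-free atomic read-modify-write
    }

    public long total() {
        return atomicTotal.get();
    }

    public static void main(String[] args) {
        SharedCounters c = new SharedCounters();
        c.record(40);
        c.record(2);
        System.out.println(c.total()); // prints 42
    }
}
```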

ReentrantLock — When synchronized Isn't Enough

The synchronized keyword is elegant but inflexible. You can't attempt to acquire a lock and fail fast — a blocked thread waits indefinitely. You can't back off and retry when acquiring multiple locks, which is the natural way to sidestep deadlock. You can't interrupt a thread that's blocked waiting for a lock. java.util.concurrent.locks.ReentrantLock solves all of this.

ReentrantLock is explicit — you call lock() and you must call unlock() yourself, typically in a finally block. It's reentrant just like synchronized, meaning the same thread can acquire it multiple times without deadlocking itself (it keeps a hold count). The critical extras are tryLock() — which returns false immediately if the lock is unavailable instead of blocking — and tryLock(timeout, unit) — which blocks for at most a given duration. lockInterruptibly() lets another thread cancel a waiting thread via Thread.interrupt(), which is impossible with synchronized.
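A minimal sketch of the fail-fast and timed acquisition styles described above (class and method names are hypothetical):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final ReentrantLock lock = new ReentrantLock();

    // Fail fast: return immediately if another thread holds the lock,
    // instead of blocking indefinitely the way synchronized would.
    public boolean tryUpdate() {
        if (!lock.tryLock()) {
            return false; // lock busy — caller decides whether to retry or skip
        }
        try {
            // ... critical section ...
            return true;
        } finally {
            lock.unlock(); // always release in finally
        }
    }

    // Bounded wait: block at most 200ms, then give up.
    public boolean tryUpdateWithTimeout() throws InterruptedException {
        if (!lock.tryLock(200, TimeUnit.MILLISECONDS)) {
            return false;
        }
        try {
            // ... critical section ...
            return true;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TryLockDemo demo = new TryLockDemo();
        // Single-threaded, so both acquisitions succeed immediately
        System.out.println("Uncontended tryLock: " + demo.tryUpdate());
        System.out.println("Timed tryLock: " + demo.tryUpdateWithTimeout());
    }
}
```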

ReentrantLock also supports fairness mode via new ReentrantLock(true). In fair mode, threads acquire the lock in the order they requested it (FIFO queue), preventing thread starvation. The tradeoff is lower throughput — the JVM can't do lock batching or barging optimizations. Use fair mode only when you have a specific correctness requirement around ordering, not as a default.

Condition objects from ReentrantLock replace wait/notify with named, granular signals — one of the most powerful concurrency patterns in Java.

BoundedTicketQueue.java · JAVA
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

/**
 * A bounded queue where producers wait when full and consumers wait when empty.
 * This is the classic producer-consumer problem solved correctly with
 * ReentrantLock and Condition variables — two named signals instead of one
 * generic monitor. This is cleaner and more efficient than wait/notifyAll.
 */
public class BoundedTicketQueue {

    private final Queue<String> tickets = new ArrayDeque<>();
    private final int maxCapacity;
    private final ReentrantLock lock = new ReentrantLock();

    // Two distinct Conditions on the SAME lock — this is the key advantage
    // over synchronized + wait/notify where you only have one monitor signal
    private final Condition notFull  = lock.newCondition(); // producers wait here
    private final Condition notEmpty = lock.newCondition(); // consumers wait here

    public BoundedTicketQueue(int maxCapacity) {
        this.maxCapacity = maxCapacity;
    }

    public void produce(String ticket) throws InterruptedException {
        lock.lock(); // Acquire the lock explicitly — must ALWAYS pair with unlock()
        try {
            // While the queue is full, make the producer wait on the 'notFull' condition.
            // await() atomically releases the lock and suspends the thread.
            while (tickets.size() == maxCapacity) {
                System.out.println(Thread.currentThread().getName()
                        + " waiting — queue full");
                notFull.await();
            }
            tickets.offer(ticket);
            System.out.println(Thread.currentThread().getName()
                    + " produced: " + ticket + " | Queue size: " + tickets.size());
            // Signal ONLY the consumers — not producers.
            // This is impossible with a single notify(); you'd have to use notifyAll()
            // and wake everyone up unnecessarily.
            notEmpty.signal();
        } finally {
            lock.unlock(); // ALWAYS in finally — even if an exception is thrown
        }
    }

    public String consume() throws InterruptedException {
        lock.lock();
        try {
            while (tickets.isEmpty()) {
                System.out.println(Thread.currentThread().getName()
                        + " waiting — queue empty");
                notEmpty.await();
            }
            String ticket = tickets.poll();
            System.out.println(Thread.currentThread().getName()
                    + " consumed: " + ticket + " | Queue size: " + tickets.size());
            notFull.signal(); // Wake up a waiting producer
            return ticket;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        BoundedTicketQueue queue = new BoundedTicketQueue(3);

        Thread producer = new Thread(() -> {
            String[] events = {"Concert-A", "Concert-B", "Concert-C", "Concert-D", "Concert-E"};
            for (String event : events) {
                try {
                    queue.produce(event);
                    Thread.sleep(50);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "TicketProducer");

        Thread consumer = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                try {
                    queue.consume();
                    Thread.sleep(150); // Consumer is slower — will cause producer to wait
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "TicketConsumer");

        producer.start();
        consumer.start();
    }
}
▶ Output
TicketProducer produced: Concert-A | Queue size: 1
TicketProducer produced: Concert-B | Queue size: 2
TicketProducer produced: Concert-C | Queue size: 3
TicketProducer waiting — queue full
TicketConsumer consumed: Concert-A | Queue size: 2
TicketProducer produced: Concert-D | Queue size: 3
TicketProducer waiting — queue full
TicketConsumer consumed: Concert-B | Queue size: 2
TicketProducer produced: Concert-E | Queue size: 3
TicketConsumer consumed: Concert-C | Queue size: 2
TicketConsumer consumed: Concert-D | Queue size: 1
TicketConsumer consumed: Concert-E | Queue size: 0
⚠️
Pro Tip: Always use the while-loop pattern (while (condition) { await(); }) — never if. Spurious wakeups are real: the Java specification explicitly permits a thread to wake from await() or wait() without being signalled. Using if instead of while means you proceed on a spurious wakeup and corrupt your invariants. This is one of the most common senior-level concurrency bugs.

Deadlock — How It Happens and How to Prevent It Systematically

Deadlock is when two or more threads each hold a lock the other needs, so they all wait forever. No exception is thrown. No log line appears. The threads just freeze silently. In production this manifests as a hung service that passes health checks (the health check endpoint runs on a different thread) but stops processing work.

Deadlock requires four conditions simultaneously, known as Coffman's conditions: mutual exclusion (locks can only be held by one thread), hold-and-wait (a thread holds one lock while waiting for another), no preemption (you can't take a lock away from a thread), and circular wait (thread A waits for thread B's lock, and thread B waits for thread A's lock). Remove any one condition and deadlock becomes impossible.

The most practical prevention technique is lock ordering: always acquire multiple locks in a globally consistent order across all code paths. If every thread acquires lock A before lock B, circular wait is impossible. ReentrantLock's tryLock(timeout) is a second line of defence — you can back off and retry if you can't get all the locks you need. Java's thread dump (kill -3 on Linux, jstack, or VisualVM) will show DEADLOCK detected and print the exact lock cycle — learn to read them.

DeadlockPrevention.java · JAVA
import java.util.concurrent.locks.ReentrantLock;

/**
 * Shows the deadlock-prone pattern and then fixes it with lock ordering.
 * Run the 'unsafe' transfer method and you'll see a hang.
 * The 'safe' transfer uses System.identityHashCode to enforce consistent lock order.
 */
public class DeadlockPrevention {

    static class BankAccount {
        private final String owner;
        private double balance;
        private final ReentrantLock lock = new ReentrantLock();

        BankAccount(String owner, double initialBalance) {
            this.owner = owner;
            this.balance = initialBalance;
        }

        // Package-private intentionally — only used inside transfer logic
        ReentrantLock getLock() { return lock; }
        String getOwner()      { return owner; }
        double getBalance()    { return balance; }
        void debit(double amount)  { balance -= amount; }
        void credit(double amount) { balance += amount; }
    }

    /**
     * UNSAFE: Thread 1 locks Alice then waits for Bob.
     *         Thread 2 locks Bob then waits for Alice.
     *         Classic deadlock.
     */
    public static void unsafeTransfer(BankAccount from, BankAccount to, double amount)
            throws InterruptedException {
        from.getLock().lock();
        try {
            Thread.sleep(50); // Simulates real work; makes deadlock near-certain
            to.getLock().lock(); // May block forever if 'to' is locked by another thread
            try {
                from.debit(amount);
                to.credit(amount);
                System.out.println("UNSAFE Transfer: " + from.getOwner()
                        + " -> " + to.getOwner() + " £" + amount);
            } finally {
                to.getLock().unlock();
            }
        } finally {
            from.getLock().unlock();
        }
    }

    /**
     * SAFE: Always acquire locks in identity hash code order.
     * Both threads agree on which lock comes first — circular wait is impossible.
     * System.identityHashCode gives a stable ordering that doesn't depend on
     * business logic. Caveat: identity hash codes can (rarely) collide for
     * distinct objects; production code should fall back to a single global
     * tie-breaking lock when the hashes are equal.
     */
    public static void safeTransfer(BankAccount accountA, BankAccount accountB, double amount) {
        // Determine lock acquisition order by a stable, consistent key
        int hashA = System.identityHashCode(accountA);
        int hashB = System.identityHashCode(accountB);

        ReentrantLock firstLock  = (hashA <= hashB) ? accountA.getLock() : accountB.getLock();
        ReentrantLock secondLock = (hashA <= hashB) ? accountB.getLock() : accountA.getLock();

        firstLock.lock();
        try {
            secondLock.lock();
            try {
                accountA.debit(amount);
                accountB.credit(amount);
                System.out.println("SAFE Transfer: " + accountA.getOwner()
                        + " -> " + accountB.getOwner() + " £" + amount);
            } finally {
                secondLock.unlock();
            }
        } finally {
            firstLock.unlock();
        }
    }

    public static void main(String[] args) {
        BankAccount alice = new BankAccount("Alice", 1000.0);
        BankAccount bob   = new BankAccount("Bob",   1000.0);

        System.out.println("--- Running SAFE transfers (no deadlock) ---");
        // Thread 1: Alice -> Bob
        Thread t1 = new Thread(() -> safeTransfer(alice, bob, 100.0), "TransferThread-1");
        // Thread 2: Bob -> Alice (reverse direction — safe version handles this)
        Thread t2 = new Thread(() -> safeTransfer(bob, alice, 50.0),  "TransferThread-2");

        t1.start();
        t2.start();

        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }

        System.out.println("Alice final balance: £" + alice.getBalance());
        System.out.println("Bob final balance  : £" + bob.getBalance());
    }
}
▶ Output
--- Running SAFE transfers (no deadlock) ---
SAFE Transfer: Alice -> Bob £100.0
SAFE Transfer: Bob -> Alice £50.0
Alice final balance: £950.0
Bob final balance : £1050.0
⚠️
Production Reality: When you suspect a deadlock in production, run jstack and look for the 'Found one Java-level deadlock' section. It prints the exact threads, the lock each holds, and the lock each is waiting for. Keep jstack (or JFR flight recordings) in your incident runbook — a thread dump taken within the first 60 seconds of a hang contains everything you need to diagnose it.
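The tryLock(timeout) back-off strategy mentioned earlier as a second line of defence can be sketched like this (the transfer signature, timeouts, and retry count are illustrative, not a definitive implementation):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class BackoffTransfer {

    // Try to take both locks; if either is unavailable within the timeout,
    // release everything and retry. This breaks the hold-and-wait condition,
    // so deadlock becomes impossible — at the cost of possible retries.
    static boolean transfer(ReentrantLock from, ReentrantLock to,
                            Runnable moveMoney) throws InterruptedException {
        for (int attempt = 0; attempt < 10; attempt++) {
            if (from.tryLock(50, TimeUnit.MILLISECONDS)) {
                try {
                    if (to.tryLock(50, TimeUnit.MILLISECONDS)) {
                        try {
                            moveMoney.run();
                            return true;
                        } finally {
                            to.unlock();
                        }
                    }
                } finally {
                    from.unlock(); // release the first lock before retrying
                }
            }
            // Randomised back-off so two contenders don't retry in lockstep
            TimeUnit.MILLISECONDS.sleep(ThreadLocalRandom.current().nextInt(10));
        }
        return false; // bounded attempts exhausted — caller decides what to do
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock a = new ReentrantLock();
        ReentrantLock b = new ReentrantLock();
        boolean ok = transfer(a, b, () -> System.out.println("moved £100"));
        System.out.println("Transfer succeeded: " + ok); // true — no contention here
    }
}
```

Unlike lock ordering, this approach also degrades gracefully under livelock risk: a transfer that repeatedly fails returns false instead of hanging, so you can log it and alert.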
| Feature / Aspect              | synchronized                                  | ReentrantLock                                     |
| ----------------------------- | --------------------------------------------- | ------------------------------------------------- |
| Syntax                        | Keyword — built into language                 | Explicit API — lock() / unlock()                  |
| Lock acquisition timeout      | Not supported — blocks indefinitely           | tryLock(timeout, unit) — returns false on timeout |
| Interruptible waiting         | Not supported                                 | lockInterruptibly() — cancellable                 |
| Fairness policy               | No guarantee (JVM barges)                     | new ReentrantLock(true) enforces FIFO             |
| Multiple conditions           | One monitor per object (wait/notifyAll)       | Multiple Condition objects per lock               |
| Reentrancy                    | Yes — same thread re-enters safely            | Yes — hold count tracked explicitly               |
| Forgetting to unlock          | Impossible — JVM auto-releases on block exit  | Bug risk — must use try/finally                   |
| Performance (uncontended)     | Very fast — biased locking                    | Slightly slower — object overhead                 |
| Performance (high contention) | Inflates to fat lock — context switches       | More tunable — tryLock backs off gracefully       |
| Readability                   | Concise — good for simple cases               | Verbose — necessary for complex lock logic        |
| Java version                  | All versions                                  | Java 5+ (java.util.concurrent.locks)              |

🎯 Key Takeaways

  • volatile guarantees visibility and ordering — it does NOT guarantee atomicity. Any compound action (read-modify-write, check-then-act) still needs synchronized or an AtomicXxx class.
  • The JVM escalates locks through biased → thin (CAS) → fat (OS mutex). Uncontended locks are nearly free; high-contention synchronized blocks cause context switches that can tank throughput by 10-100x.
  • ReentrantLock's tryLock(timeout) is your deadlock escape hatch — it lets threads back off instead of waiting forever, which is impossible with the synchronized keyword.
  • Lock ordering is the most reliable deadlock prevention strategy: define a global consistent order for acquiring multiple locks and enforce it everywhere. Use System.identityHashCode() for a tie-breaking ordering key that works without business logic assumptions.

⚠ Common Mistakes to Avoid

  • Mistake 1: Synchronizing on a non-final field — if the reference your lock object lives in can be reassigned, different threads may synchronize on different objects and get no mutual exclusion at all. Symptom: data corruption despite synchronized keyword. Fix: always declare lock objects as private final Object lockGuard = new Object(); and never use a String literal or Integer as a lock object — they're interned and shared across the entire JVM.
  • Mistake 2: Calling wait() outside a while loop — using if (queue.isEmpty()) wait() instead of while (queue.isEmpty()) wait() leaves you vulnerable to spurious wakeups, which the JVM spec explicitly permits. Symptom: NullPointerException or IndexOutOfBoundsException in code that appears logically guarded, only under load. Fix: always use while for the condition check in any wait/await pattern.
  • Mistake 3: Holding a lock while doing I/O or making a network call — a thread that holds a lock while waiting on a database or HTTP call blocks every other thread that needs that lock, turning a 200ms latency spike into a full application stall. Symptom: cascading thread-pool exhaustion in production under any downstream latency. Fix: fetch the data outside the synchronized block, then enter the synchronized block only to update shared state with the already-retrieved result.
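Mistake 3's fix — fetch outside the lock, update inside it — can be sketched like this (the slow-fetch method is a hypothetical stand-in for a real database or HTTP call):

```java
public class FetchOutsideLock {

    private final Object cacheLock = new Object(); // private final lock object (Mistake 1's fix)
    private String cachedValue;

    // Hypothetical stand-in for a database query or HTTP call
    private String slowRemoteFetch() throws InterruptedException {
        Thread.sleep(200); // simulated network latency
        return "fresh-value";
    }

    // WRONG shape: holds the lock for the full 200ms round-trip,
    // stalling every other thread that needs cacheLock.
    public void refreshBad() throws InterruptedException {
        synchronized (cacheLock) {
            cachedValue = slowRemoteFetch();
        }
    }

    // RIGHT shape: do the slow work with no lock held, then take the
    // lock only for the cheap in-memory update.
    public void refreshGood() throws InterruptedException {
        String fresh = slowRemoteFetch(); // no lock held during I/O
        synchronized (cacheLock) {
            cachedValue = fresh;          // lock held only for an assignment
        }
    }

    public static void main(String[] args) throws InterruptedException {
        FetchOutsideLock demo = new FetchOutsideLock();
        demo.refreshGood();
        synchronized (demo.cacheLock) {
            System.out.println("cached: " + demo.cachedValue);
        }
    }
}
```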

Interview Questions on This Topic

  • Q: What is the difference between synchronized and volatile, and can you give a scenario where using volatile instead of synchronized would introduce a bug?
  • Q: Explain how the JVM implements locking internally — what are biased locks, thin locks, and fat locks, and what triggers the transitions between them?
  • Q: If two threads call synchronized methods on the same object, they contend on the same monitor. But what happens if one thread calls a synchronized method and another calls a non-synchronized method on the same object simultaneously — is there any protection?

Frequently Asked Questions

Does synchronized guarantee visibility as well as mutual exclusion in Java?

Yes — and this is often missed. The Java Memory Model specifies that entering a synchronized block causes a thread to re-read all variables from main memory, and exiting it flushes all writes. So synchronized provides both mutual exclusion and full memory visibility, whereas volatile provides only visibility without mutual exclusion.
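A small sketch of this guarantee in action — no volatile anywhere, yet the flag update becomes visible because both access paths go through the same monitor (class name illustrative; Thread.onSpinWait requires Java 9+):

```java
public class SynchronizedVisibility {
    private boolean ready; // NOT volatile — guarded by synchronized instead

    public synchronized void markReady() { ready = true; }  // monitor exit flushes writes
    public synchronized boolean isReady() { return ready; } // monitor entry refreshes reads

    public static void main(String[] args) throws InterruptedException {
        SynchronizedVisibility s = new SynchronizedVisibility();
        Thread waiter = new Thread(() -> {
            // Each isReady() call re-reads under the monitor, so the update
            // is guaranteed to become visible.
            while (!s.isReady()) {
                Thread.onSpinWait();
            }
            System.out.println("Saw the update");
        });
        waiter.start();
        Thread.sleep(50);
        s.markReady();
        waiter.join(1000);
        System.out.println("waiter alive? " + waiter.isAlive()); // false
    }
}
```

If isReady() were a plain unsynchronized getter on a plain field, the waiter could legally spin forever — there would be no happens-before edge between the writer and the reader.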

When should I use ReentrantLock instead of synchronized?

Use ReentrantLock when you need any of: a timed tryLock to avoid indefinite blocking, the ability to interrupt a waiting thread via lockInterruptibly(), a fairness policy to prevent thread starvation, or multiple distinct Condition objects to signal different waiting thread groups independently. For simple critical sections, synchronized is cleaner and less error-prone.

Can a thread deadlock with itself using synchronized?

No. Java's synchronized keyword is reentrant by design — if a thread already holds a lock and re-enters a synchronized block or method guarded by the same lock, it succeeds immediately. The JVM tracks a hold count and only releases the lock when the hold count reaches zero. This is why synchronized on recursive method calls doesn't deadlock.
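A quick sketch demonstrating reentrancy (names illustrative) — a synchronized method recursing into itself re-acquires the same monitor without blocking:

```java
public class ReentrancyDemo {
    private int depth = 0;

    // The same thread re-acquires 'this' object's monitor on each recursive call.
    // The JVM increments a hold count instead of deadlocking.
    public synchronized void descend(int levels) {
        depth++;
        if (levels > 0) {
            descend(levels - 1); // re-entering a lock we already hold — succeeds immediately
        }
    }

    public static void main(String[] args) {
        ReentrancyDemo demo = new ReentrancyDemo();
        demo.descend(5);
        System.out.println("Recursion depth reached: " + demo.depth); // prints 6
    }
}
```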

TheCodeForge Editorial Team Verified Author

Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.
