
Database Locking Mechanisms Explained — Internals, Deadlocks & Production Gotchas

In Plain English 🔥
Imagine a library with one copy of a popular book. If you're reading it, others can read over your shoulder (shared), but nobody can take it away to write notes in it. If you need to write notes, you grab the book exclusively — nobody else can even look until you're done. A database lock works exactly the same way: it controls who can read or change a piece of data at any given moment, so two people don't accidentally overwrite each other's work.

Every production database under real traffic is a battlefield of competing transactions — hundreds of reads and writes colliding every second. Without a disciplined locking strategy, two transactions can read the same account balance, both decide to deduct money, and you end up with funds that vanished into thin air. This is not a hypothetical. It's the kind of bug that costs companies millions and surfaces only under high concurrency — exactly the conditions your monitoring never catches in staging.

Database locking mechanisms are the referee in that battlefield. They define the rules for who gets to read data, who gets to modify it, in what order, and what happens when two transactions want mutually incompatible things at the same time. Understanding them at the internals level — not just 'shared lock good, exclusive lock bad' — is what separates engineers who write correct concurrent code from those who ship subtle data-corruption bugs disguised as race conditions.

By the end of this article you'll understand exactly how shared, exclusive, intent, and row-level locks work internally, how MVCC sidesteps many locking problems, how to read a deadlock graph and resolve it, and the specific isolation level choices that make or break performance in high-throughput systems. You'll also see runnable examples that demonstrate real lock contention and how to diagnose it live in PostgreSQL and MySQL.

Lock Types Demystified — Shared, Exclusive, and the Intent Hierarchy

At the lowest level, every database lock falls into one of two fundamental modes: Shared (S) and Exclusive (X). A Shared lock says 'I'm reading this — others can read too, but nobody rewrites it while I'm looking.' An Exclusive lock says 'I'm writing this — nobody reads or writes until I'm done.' These two modes form the compatibility matrix that every lock manager checks before granting a request.

But modern databases don't just lock individual rows — they lock at multiple granularities: table, page, and row. To avoid scanning every row lock to decide whether a table lock is safe, databases use Intent locks. An Intent Shared (IS) lock on a table means 'this transaction holds, or intends to take, Shared locks on some rows inside.' An Intent Exclusive (IX) lock means the same for Exclusive row locks. A Shared Intent Exclusive (SIX) lock combines the two: the transaction holds a Shared lock on the whole table while also intending to take Exclusive locks on individual rows — a table-wide read with scattered modifications.

This hierarchy is why a full-table ALTER TABLE can block every single concurrent INSERT — it needs an AccessExclusive lock that's incompatible with every other mode, including IS. Knowing this stops you from running schema migrations on a busy table without precautions like lock_timeout and retries.
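The compatibility rules behind this hierarchy fit in a small lookup table. The sketch below is a simplified model of the classic multigranularity matrix (IS/IX/S/SIX/X), not any particular engine's lock manager; the names `COMPATIBLE` and `can_grant` are illustrative only:

```python
# Simplified multigranularity lock compatibility matrix.
# True means a new request in that mode can be granted alongside an
# already-held lock in the other mode on the same object.
COMPATIBLE = {
    "IS":  {"IS": True,  "IX": True,  "S": True,  "SIX": True,  "X": False},
    "IX":  {"IS": True,  "IX": True,  "S": False, "SIX": False, "X": False},
    "S":   {"IS": True,  "IX": False, "S": True,  "SIX": False, "X": False},
    "SIX": {"IS": True,  "IX": False, "S": False, "SIX": False, "X": False},
    "X":   {"IS": False, "IX": False, "S": False, "SIX": False, "X": False},
}

def can_grant(requested: str, held: list) -> bool:
    """A request is grantable only if compatible with EVERY lock already held."""
    return all(COMPATIBLE[requested][h] for h in held)

if __name__ == "__main__":
    # Row readers and row writers coexist at the table level
    print(can_grant("IS", ["IX"]))   # True
    # A full-table exclusive lock (think ALTER TABLE) is blocked by
    # even a single IS lock held by a reader
    print(can_grant("X", ["IS"]))    # False
```

This is exactly why the intent hierarchy exists: the table-level check is a constant-time matrix lookup instead of a scan over thousands of row locks.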

lock_compatibility_demo.sql · SQL
-- ============================================================
-- DEMO: Observe lock acquisition and compatibility in PostgreSQL
-- Run each block in a separate psql session as indicated
-- ============================================================

-- Setup: create a simple accounts table
CREATE TABLE accounts (
    account_id   SERIAL PRIMARY KEY,
    holder_name  TEXT        NOT NULL,
    balance      NUMERIC(12,2) NOT NULL CHECK (balance >= 0)
);

INSERT INTO accounts (holder_name, balance) VALUES
    ('Alice Nguyen',  5000.00),
    ('Bob Tremblay',  3200.00),
    ('Carol Okafor', 12000.00);

-- ============================================================
-- SESSION 1: Acquire a row-level Exclusive lock via FOR UPDATE
-- ============================================================
BEGIN;

-- FOR UPDATE takes a RowShareLock on the table and an exclusive
-- row-level lock on the matching tuple. No other transaction can
-- UPDATE, DELETE, or SELECT ... FOR UPDATE this row until we COMMIT.
SELECT account_id, holder_name, balance
FROM   accounts
WHERE  account_id = 1
FOR UPDATE;  -- <-- this is the exclusive row lock

-- Simulate a slow business operation (do NOT commit yet)
-- Leave this transaction open and switch to Session 2

-- ============================================================
-- SESSION 2 (run while Session 1 is still open): Try to read
-- ============================================================

-- Plain SELECT — succeeds immediately because PostgreSQL MVCC
-- serves this from a consistent snapshot; no row lock is needed
SELECT account_id, holder_name, balance
FROM   accounts
WHERE  account_id = 1;  -- returns immediately with old value

-- FOR UPDATE on the same row — this BLOCKS because Session 1
-- already holds the exclusive row lock
SELECT account_id, holder_name, balance
FROM   accounts
WHERE  account_id = 1
FOR UPDATE;  -- <-- will hang until Session 1 commits or rolls back

-- ============================================================
-- SESSION 3: Inspect active locks while Session 2 is blocked
-- ============================================================
SELECT
    l.pid,                        -- process ID of the backend
    l.locktype,                   -- relation, tuple, transactionid, ...
    l.relation::regclass AS locked_object,
    l.mode,                       -- lock mode requested or held
    l.granted,                    -- FALSE means it's waiting
    a.wait_event_type,
    a.wait_event,
    a.query
FROM  pg_locks l
JOIN  pg_stat_activity a ON a.pid = l.pid
WHERE l.pid <> pg_backend_pid()   -- hide this monitoring session itself
  AND l.locktype IN ('relation', 'tuple', 'transactionid')
  AND (l.relation IS NULL OR l.relation = 'accounts'::regclass)
ORDER BY l.granted DESC, l.pid;

-- ============================================================
-- SESSION 1: Now commit to release the lock
-- ============================================================
UPDATE accounts
SET    balance = balance - 200.00
WHERE  account_id = 1;  -- deduct from Alice's account

COMMIT;  -- Session 2's blocked FOR UPDATE will now proceed
▶ Output
-- Session 2 plain SELECT (returns immediately):
account_id | holder_name | balance
-----------+--------------+---------
1 | Alice Nguyen | 5000.00

-- Session 3 pg_locks query (while Session 2 is blocked), abridged:
  pid  | locktype      | locked_object | mode         | granted | wait_event_type | wait_event    | query
-------+---------------+---------------+--------------+---------+-----------------+---------------+-----------------------------------
 18432 | relation      | accounts      | RowShareLock | t       | Client          | ClientRead    | SELECT ... FOR UPDATE  [Session 1]
 18433 | relation      | accounts      | RowShareLock | t       | Lock            | transactionid | SELECT ... FOR UPDATE  [Session 2]
 18433 | transactionid | (null)        | ShareLock    | f       | Lock            | transactionid | SELECT ... FOR UPDATE  [Session 2]

-- Note: Session 2 waits for a ShareLock on Session 1's transaction ID.
-- Row-lock waits surface as transactionid locks, not relation locks.

-- After Session 1 COMMITs, Session 2 unblocks and returns:
account_id | holder_name | balance
-----------+--------------+---------
1 | Alice Nguyen | 4800.00 <-- sees the updated value
⚠️
Pro Tip: Use lock_timeout in Production
Never let a transaction block indefinitely in production. Set `SET lock_timeout = '2s';` at the start of any session that uses `FOR UPDATE` or runs DDL. If the lock isn't granted in 2 seconds, you get a clean error instead of a silent pile-up of blocked queries that eventually exhausts your connection pool.

MVCC — How PostgreSQL and MySQL Serve Reads Without Locking

Multi-Version Concurrency Control (MVCC) is the single most important reason modern databases can handle thousands of concurrent users without readers and writers constantly blocking each other. Instead of one authoritative copy of a row, the database keeps multiple versions — each stamped with the transaction ID that created it and (optionally) the one that deleted it.

When you start a transaction in PostgreSQL under READ COMMITTED, the database hands you a snapshot of the world as it existed at that instant. Your queries read from that snapshot. Meanwhile, writers are creating new row versions in the heap. Your snapshot never sees those new versions — you're reading stale-but-consistent data from the past. This is why a plain SELECT never blocks waiting for a writer to finish.

MySQL InnoDB implements MVCC differently: it stores old row versions in a separate undo log rather than in the heap alongside new versions. Consistent reads at READ COMMITTED and REPEATABLE READ reconstruct old versions by applying undo records backward. This means long-running read transactions don't just hold a snapshot — they pin undo log chains that InnoDB's purge thread cannot reclaim, causing the infamous unbounded undo log (history list) growth problem under heavy write load.

The key insight: MVCC eliminates most read-write lock conflicts, but it does NOT eliminate write-write conflicts. Two transactions trying to UPDATE the same row still need to serialize — MVCC just handles the read side.

mvcc_isolation_levels.sql · SQL
-- ============================================================
-- DEMO: MVCC snapshot behavior across isolation levels
-- Demonstrates READ COMMITTED vs REPEATABLE READ in PostgreSQL
-- ============================================================

-- Setup: a product inventory table
CREATE TABLE product_inventory (
    product_id   INT          PRIMARY KEY,
    product_name TEXT         NOT NULL,
    stock_count  INT          NOT NULL,
    last_updated TIMESTAMPTZ  DEFAULT now()
);

INSERT INTO product_inventory VALUES
    (101, 'Wireless Keyboard', 50, now()),
    (102, 'USB-C Hub',         30, now()),
    (103, 'Laptop Stand',      75, now());

-- ============================================================
-- SCENARIO A: READ COMMITTED — snapshot refreshes each statement
-- ============================================================

-- SESSION 1: Start a READ COMMITTED transaction (PostgreSQL default)
BEGIN ISOLATION LEVEL READ COMMITTED;

-- First read: sees stock_count = 50 for keyboard
SELECT product_id, product_name, stock_count
FROM   product_inventory
WHERE  product_id = 101;

-- ---- SESSION 2: Commits an update while Session 1 is still open ----
-- BEGIN;
-- UPDATE product_inventory SET stock_count = 48 WHERE product_id = 101;
-- COMMIT;
-- --------------------------------------------------------------------

-- SESSION 1: Second read in SAME transaction — sees the NEW value 48
-- because READ COMMITTED re-snaps at each statement boundary.
-- This is a non-repeatable read — same query, different result.
SELECT product_id, product_name, stock_count
FROM   product_inventory
WHERE  product_id = 101;

COMMIT;

-- ============================================================
-- SCENARIO B: REPEATABLE READ — snapshot frozen at BEGIN
-- ============================================================

-- SESSION 1: Start a REPEATABLE READ transaction
BEGIN ISOLATION LEVEL REPEATABLE READ;

-- First read: sees stock_count = 48 (current committed value)
SELECT product_id, product_name, stock_count
FROM   product_inventory
WHERE  product_id = 101;

-- ---- SESSION 2: Commits another update ----
-- BEGIN;
-- UPDATE product_inventory SET stock_count = 45 WHERE product_id = 101;
-- COMMIT;
-- -------------------------------------------

-- SESSION 1: Second read — still sees 48, NOT 45
-- The snapshot was frozen at transaction start.
-- This guarantees repeatable reads within the transaction.
SELECT product_id, product_name, stock_count
FROM   product_inventory
WHERE  product_id = 101;

-- Attempting to UPDATE the row that Session 2 already changed:
-- PostgreSQL will detect the write-write conflict and raise:
-- ERROR: could not serialize access due to concurrent update
UPDATE product_inventory
SET    stock_count = stock_count - 1  -- trying to sell one unit
WHERE  product_id = 101;

ROLLBACK;  -- must retry the entire transaction from scratch

-- ============================================================
-- Check the xmin/xmax system columns to see MVCC versions
-- ============================================================
SELECT
    product_id,
    product_name,
    stock_count,
    xmin,   -- transaction ID that INSERTed/last UPDATEd this version
    xmax    -- transaction ID that DELETEd/superseded this version (0 = live)
FROM product_inventory;
▶ Output
-- SCENARIO A: READ COMMITTED
-- First read (before Session 2 commits):
product_id | product_name | stock_count
-----------+-------------------+-------------
101 | Wireless Keyboard | 50

-- Second read (after Session 2 commits):
product_id | product_name | stock_count
-----------+-------------------+-------------
101 | Wireless Keyboard | 48 <-- non-repeatable read!

-- SCENARIO B: REPEATABLE READ
-- First read:
product_id | product_name | stock_count
-----------+-------------------+-------------
101 | Wireless Keyboard | 48

-- Second read (Session 2 committed 45, but our snapshot ignores it):
product_id | product_name | stock_count
-----------+-------------------+-------------
101 | Wireless Keyboard | 48 <-- still 48, snapshot frozen

-- UPDATE attempt:
ERROR: could not serialize access due to concurrent update

-- xmin/xmax inspection:
product_id | product_name | stock_count | xmin | xmax
-----------+-------------------+-------------+-------+------
101 | Wireless Keyboard | 45 | 50312 | 0
102 | USB-C Hub | 30 | 50290 | 0
103 | Laptop Stand | 75 | 50290 | 0
⚠️
Watch Out: Long Transactions Kill MVCC Performance
A transaction open for 30 minutes under REPEATABLE READ or SERIALIZABLE holds a snapshot that prevents PostgreSQL's VACUUM from reclaiming dead row versions. The result is table bloat and index bloat that silently degrades query performance. Always close transactions as quickly as possible, and monitor `pg_stat_activity` for idle transactions with `state = 'idle in transaction'`.

Deadlocks — How They Form, How to Detect Them, and How to Prevent Them

A deadlock is two (or more) transactions caught in a circular wait — Transaction A holds a lock that B needs, while B holds a lock that A needs. Neither can proceed. Without intervention, they'd wait forever. Every serious database engine has a deadlock detector that periodically scans the wait-for graph, finds cycles, picks a victim (usually the transaction that's done the least work), rolls it back, and lets the winner proceed.

The most common production deadlock pattern isn't exotic — it's two transactions updating the same two rows in opposite orders. Transaction A locks row 1 then tries to lock row 2. Transaction B locks row 2 then tries to lock row 1. The fix is almost always enforcing a consistent lock ordering: always acquire locks in the same sequence (e.g., always lock the lower account_id first in a money-transfer operation).

A subtler deadlock source is the gap between a SELECT and a subsequent UPDATE. Transaction A reads a row, does application logic for 200ms, then comes back to update it. Transaction B has done the same thing in parallel. When both arrive at the UPDATE, one wins the lock and the other either deadlocks or serializes. Using SELECT ... FOR UPDATE at read time eliminates this gap by acquiring the exclusive lock immediately.

In PostgreSQL, deadlock details land in the server log and in pg_stat_activity. In MySQL InnoDB, SHOW ENGINE INNODB STATUS gives you the full deadlock report with the exact queries and lock wait chains.

deadlock_prevention.sql · SQL
-- ============================================================
-- DEMO: Classic deadlock scenario and its prevention
-- Run on PostgreSQL. Uses two accounts for a funds transfer.
-- ============================================================

-- Setup
CREATE TABLE bank_accounts (
    account_id     INT          PRIMARY KEY,
    owner_name     TEXT         NOT NULL,
    balance        NUMERIC(12,2) NOT NULL CHECK (balance >= 0)
);

INSERT INTO bank_accounts VALUES
    (1, 'Alice Nguyen',  10000.00),
    (2, 'Bob Tremblay',   5000.00);

-- ============================================================
-- THE DEADLOCK-PRONE PATTERN
-- Transaction 1: Transfer $100 from Alice (id=1) to Bob (id=2)
-- Transaction 2: Transfer $50  from Bob (id=2) to Alice (id=1)
-- Both start simultaneously — classic deadlock setup
-- ============================================================

-- Transaction 1 (Session 1)
BEGIN;
UPDATE bank_accounts SET balance = balance - 100 WHERE account_id = 1;  -- locks row 1
-- (now pause — Session 2 runs its first UPDATE below)
UPDATE bank_accounts SET balance = balance + 100 WHERE account_id = 2;  -- BLOCKS: waits for row 2
COMMIT;

-- Transaction 2 (Session 2 — run the first line WHILE Session 1 is paused)
BEGIN;
UPDATE bank_accounts SET balance = balance - 50  WHERE account_id = 2;  -- locks row 2
UPDATE bank_accounts SET balance = balance + 50  WHERE account_id = 1;  -- DEADLOCK: waits for row 1
COMMIT;
-- PostgreSQL detects the cycle and raises:
-- ERROR:  deadlock detected
-- DETAIL: Process 18433 waits for ShareLock on transaction 50401;
--         blocked by process 18432.
--         Process 18432 waits for ShareLock on transaction 50402;
--         blocked by process 18433.
-- HINT:   See server log for query details.

-- ============================================================
-- THE DEADLOCK-SAFE PATTERN
-- Always lock rows in ascending account_id order regardless of
-- which direction the transfer flows
-- ============================================================

CREATE OR REPLACE FUNCTION transfer_funds(
    p_from_account_id  INT,
    p_to_account_id    INT,
    p_amount           NUMERIC
) RETURNS VOID AS $$
DECLARE
    v_low_id   INT := LEAST(p_from_account_id, p_to_account_id);
    v_high_id  INT := GREATEST(p_from_account_id, p_to_account_id);
BEGIN
    -- Always lock the LOWER id first — consistent ordering prevents deadlock
    -- regardless of which direction the transfer flows
    PERFORM account_id, balance
    FROM    bank_accounts
    WHERE   account_id IN (v_low_id, v_high_id)
    ORDER BY account_id ASC   -- <-- the critical ordering
    FOR UPDATE;               -- acquires exclusive locks in a deterministic order

    -- Validate sufficient funds
    IF (SELECT balance FROM bank_accounts WHERE account_id = p_from_account_id) < p_amount THEN
        RAISE EXCEPTION 'Insufficient funds in account %', p_from_account_id;
    END IF;

    -- Perform the actual transfer
    UPDATE bank_accounts
    SET    balance = balance - p_amount
    WHERE  account_id = p_from_account_id;

    UPDATE bank_accounts
    SET    balance = balance + p_amount
    WHERE  account_id = p_to_account_id;
END;
$$ LANGUAGE plpgsql;

-- Both transfers now run safely in parallel — no deadlock
SELECT transfer_funds(1, 2, 100);  -- Alice -> Bob
SELECT transfer_funds(2, 1,  50);  -- Bob -> Alice (different session, same time)

-- Verify final balances
SELECT account_id, owner_name, balance FROM bank_accounts ORDER BY account_id;
▶ Output
-- Deadlock-prone pattern: PostgreSQL error in Session 2:
ERROR: deadlock detected
DETAIL: Process 18433 waits for ShareLock on transaction 50401; blocked by process 18432.
Process 18432 waits for ShareLock on transaction 50402; blocked by process 18433.
HINT: See server log for query details.

-- Deadlock-safe pattern: both transfers complete cleanly.
-- Final balances after Alice->Bob $100 and Bob->Alice $50:
account_id | owner_name | balance
-----------+--------------+----------
1 | Alice Nguyen | 9950.00 (started 10000 - 100 + 50)
2 | Bob Tremblay | 5050.00 (started 5000 + 100 - 50)
🔥
Interview Gold: How Does a DB Choose the Deadlock Victim?
PostgreSQL doesn't weigh the transactions at all: the backend whose deadlock_timeout expires first runs the cycle check, and if it finds itself in a cycle, it aborts its own transaction, so the victim is hard to predict. MySQL InnoDB uses a cost heuristic based on transaction weight (roughly the number of undo log records and locks held) — the transaction that has modified fewer rows is rolled back. You can't pick InnoDB's victim directly, but you can influence the odds by structuring transactions to do heavy writes last. Always write application code that expects `ERROR: deadlock detected` and retries the entire transaction from scratch — not just the last statement.
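That retry discipline can be sketched as a small wrapper. This is a simplified model: `DeadlockDetected` here is a stand-in exception class, and with a real driver you would catch its own deadlock error (psycopg2, for example, exposes one in its errors module) around the whole transaction function:

```python
import random
import time

class DeadlockDetected(Exception):
    """Stand-in for the driver's deadlock error."""

def run_with_retry(txn_fn, max_attempts=5, base_delay=0.05):
    """Run a transaction function, retrying the WHOLE transaction on deadlock.

    Retrying only the failed statement is wrong: the victim was rolled
    back entirely, so all of its work must be redone from the start.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return txn_fn()
        except DeadlockDetected:
            if attempt == max_attempts:
                raise
            # Exponential backoff with jitter so two retrying victims
            # don't immediately collide again in lockstep
            time.sleep(base_delay * (2 ** attempt) * random.random())

# Usage sketch: a transfer that deadlocks twice, then succeeds
attempts = {"n": 0}

def flaky_transfer():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise DeadlockDetected("deadlock detected")
    return "committed"

print(run_with_retry(flaky_transfer))  # committed
```

The key design choice is that `txn_fn` must contain the entire transaction, including the initial reads, so a retry recomputes everything against fresh data.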

Isolation Levels, Lock Escalation, and Choosing the Right Strategy in Production

Isolation levels are your contract with the database about what anomalies you're willing to tolerate in exchange for concurrency. READ UNCOMMITTED (dirty reads allowed) is almost never correct. READ COMMITTED is the PostgreSQL default and the right choice for most OLTP workloads — each statement gets a fresh snapshot, so writes from other committed transactions are visible immediately. REPEATABLE READ prevents non-repeatable reads by freezing the snapshot at transaction start. SERIALIZABLE gives you full correctness at the cost of more aborts under contention.

Lock escalation is a real performance trap in SQL Server and older MySQL configurations. When a single transaction acquires too many row locks (often >5,000 in SQL Server), the engine escalates to a table lock to save memory. This silently destroys concurrency for all other sessions touching that table. The fix is batching large bulk operations into smaller transactions or using table-level hints explicitly.
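The batching fix can be sketched as follows. This is an illustrative pattern only, shown with Python's built-in sqlite3 purely as a stand-in engine (SQLite has no lock escalation); the structure that matters is committing each chunk, so no single transaction accumulates enough row locks to trip the escalation threshold:

```python
import sqlite3

def batched_update(conn, ids, batch_size=1000):
    """Apply a bulk price update in small committed batches.

    Keeping each transaction well under the engine's escalation
    threshold (around 5,000 row locks in SQL Server) prevents the
    silent upgrade to a table lock.
    """
    for start in range(0, len(ids), batch_size):
        chunk = ids[start:start + batch_size]
        conn.executemany(
            "UPDATE items SET price = price * 1.1 WHERE item_id = ?",
            [(item_id,) for item_id in chunk],
        )
        conn.commit()  # release this batch's locks before the next one

# Demo on an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (item_id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO items VALUES (?, ?)", [(i, 100.0) for i in range(10)])
conn.commit()

batched_update(conn, list(range(10)), batch_size=4)  # runs as 3 batches: 4 + 4 + 2
print(conn.execute("SELECT COUNT(*) FROM items WHERE price > 100").fetchone()[0])  # 10
```

The trade-off: other sessions can observe the half-updated state between batches, so this only works when the bulk operation doesn't need to be atomic as a whole.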

For high-throughput write-heavy systems, consider Optimistic Locking — don't lock at read time, just check at write time that nobody else modified the row since you read it. Implement this with a version column or a last_modified timestamp. This works brilliantly when conflicts are rare. When conflicts are frequent, optimistic locking creates a storm of retries and is worse than pessimistic locking.

optimistic_locking_pattern.sql · SQL
-- ============================================================
-- DEMO: Optimistic Locking with a version column
-- No explicit locks acquired at read time.
-- Conflict detection happens at UPDATE using the version stamp.
-- ============================================================

CREATE TABLE product_catalog (
    product_id     SERIAL        PRIMARY KEY,
    product_name   TEXT          NOT NULL,
    unit_price     NUMERIC(10,2) NOT NULL,
    row_version    INT           NOT NULL DEFAULT 1,  -- optimistic lock sentinel
    last_modified  TIMESTAMPTZ   DEFAULT now()
);

INSERT INTO product_catalog (product_name, unit_price) VALUES
    ('Mechanical Keyboard', 129.99),
    ('27-inch Monitor',     349.99),
    ('Ergonomic Mouse',      59.99);

-- ============================================================
-- APPLICATION LAYER PATTERN
-- Step 1: Read the row AND capture its row_version
-- Step 2: Do business logic (could take several seconds)
-- Step 3: UPDATE only if row_version is still the same value
--         we read — if not, someone else changed it first
-- ============================================================

-- Step 1: Read the keyboard price (application caches row_version = 1)
SELECT product_id, product_name, unit_price, row_version
FROM   product_catalog
WHERE  product_id = 1;
-- Returns: product_id=1, unit_price=129.99, row_version=1

-- (Imagine a concurrent session updates the price to 119.99 and
--  increments row_version to 2 while our app is thinking)

-- Step 3: Our application tries to set price to 124.99
-- We include WHERE row_version = 1 (the version we read)
UPDATE product_catalog
SET
    unit_price    = 124.99,
    row_version   = row_version + 1,  -- increment the version
    last_modified = now()
WHERE product_id   = 1
  AND row_version  = 1;              -- <-- the optimistic check

-- Check how many rows were actually updated. In psql, the command tag
-- tells you: "UPDATE 1" on success, "UPDATE 0" on conflict. In PL/pgSQL
-- use GET DIAGNOSTICS (see the function below); in application code,
-- check the driver's rowcount.

-- If rowcount = 0, the optimistic lock failed — someone else
-- modified the row between our read and our write.
-- The application must re-read, re-apply its logic, and retry.

-- ============================================================
-- Stored procedure with retry logic for optimistic locking
-- ============================================================
CREATE OR REPLACE FUNCTION update_product_price(
    p_product_id   INT,
    p_new_price    NUMERIC,
    p_read_version INT       -- version the caller observed at read time
) RETURNS TEXT AS $$
DECLARE
    rows_affected INT;
BEGIN
    UPDATE product_catalog
    SET
        unit_price    = p_new_price,
        row_version   = row_version + 1,
        last_modified = now()
    WHERE product_id  = p_product_id
      AND row_version = p_read_version;  -- optimistic conflict check

    GET DIAGNOSTICS rows_affected = ROW_COUNT;

    IF rows_affected = 0 THEN
        -- Return a signal the caller can detect and retry
        RETURN 'CONFLICT: row was modified by another transaction — please re-read and retry';
    END IF;

    RETURN 'OK: price updated successfully';
END;
$$ LANGUAGE plpgsql;

-- Simulate a clean update (correct version)
SELECT update_product_price(2, 329.99, 1);  -- Monitor price drop, correct version

-- Simulate a stale update (wrong version — someone else already updated)
SELECT update_product_price(2, 319.99, 1);  -- version 1 is stale now, it's 2

-- Verify final state
SELECT product_id, product_name, unit_price, row_version FROM product_catalog;
▶ Output
-- Clean update (version matched):
update_product_price
--------------------------------------
OK: price updated successfully

-- Stale update (version mismatch):
update_product_price
------------------------------------------------------
CONFLICT: row was modified by another transaction — please re-read and retry

-- Final state:
product_id | product_name | unit_price | row_version
-----------+---------------------+------------+-------------
1 | Mechanical Keyboard | 124.99 | 2
2 | 27-inch Monitor | 329.99 | 2
3 | Ergonomic Mouse | 59.99 | 1
⚠️
Watch Out: Serializable Isolation Is Not Free
PostgreSQL's SSI (Serializable Snapshot Isolation) uses predicate locks to detect read-write conflicts that could produce non-serializable outcomes. Under heavy contention it generates serialization failures such as `ERROR: could not serialize access due to read/write dependencies among transactions` that your application MUST handle with retry logic. Don't enable SERIALIZABLE hoping it silently 'fixes' your concurrency bugs — it just surfaces them as errors you now have to handle explicitly.
Aspect                    | Pessimistic Locking (SELECT FOR UPDATE)        | Optimistic Locking (Version Column)
--------------------------+------------------------------------------------+---------------------------------------------------
Lock acquired at          | Read time — immediately on SELECT              | Write time — only on UPDATE
Best for                  | High-conflict workloads (financial transfers)  | Low-conflict workloads (catalog updates, CMS)
Failure mode              | Blocking wait / timeout / deadlock             | Silent success with 0 rows affected — needs retry
Throughput impact         | Serializes concurrent readers of same row      | No blocking reads — higher read throughput
Implementation complexity | Low — just add FOR UPDATE to existing SELECT   | Moderate — need version column, retry logic in app
Works across app servers  | Yes — lock lives in the DB                     | Yes — version column is DB-side, stateless
Risk of starvation        | Yes — long transactions block everyone behind  | Low — no blocking, just retries on conflict
MVCC interaction          | Bypasses MVCC for writes, forces serialization | Fully compatible with MVCC — reads never blocked

🎯 Key Takeaways

  • MVCC means plain SELECTs almost never block in PostgreSQL or InnoDB — but write-write conflicts still serialize, and long read transactions bloat the undo log or table heap.
  • Deadlocks are almost always caused by inconsistent lock ordering — always acquire locks on multiple rows in a deterministic sequence (e.g., ascending primary key) and they become nearly impossible.
  • Optimistic locking is strictly better for low-conflict scenarios because it never blocks readers — but it requires application-level retry logic on every write path, which most teams forget to implement.
  • Lock escalation from row-level to table-level is a silent performance killer in SQL Server and legacy MySQL configs — batch large bulk operations and verify lock granularity with your DB's lock diagnostics before touching large tables in production.

⚠ Common Mistakes to Avoid

  • Mistake 1: Holding a transaction open across network calls or user input — Symptom: idle connections pile up in pg_stat_activity with state='idle in transaction', connection pool exhausted, application errors under moderate load — Fix: Restructure so all DB reads happen before the transaction opens, or use application-level caching. Begin the transaction only immediately before the writes and commit immediately after. Never wait for an HTTP response or user click inside an open transaction.
  • Mistake 2: Using SELECT FOR UPDATE on a large range without an index — Symptom: the query escalates to a near-table-lock because it must examine thousands of rows and acquires row locks on all of them, grinding concurrent writes to a halt — Fix: Ensure the WHERE clause uses an indexed column so the lock scope is minimal. Run EXPLAIN (ANALYZE, BUFFERS) to verify an index scan is used, not a sequential scan, before deploying any FOR UPDATE query to production.
  • Mistake 3: Assuming READ COMMITTED prevents phantom reads in MySQL InnoDB — Symptom: a transaction reads a range, another transaction inserts a row into that range and commits, the first transaction re-reads the range and sees the new row (phantom) — leading to incorrect aggregate calculations or double-processing — Fix: Upgrade to REPEATABLE READ (MySQL InnoDB default) which uses gap locks to prevent phantoms, or restructure so range queries are only executed once per transaction. Understand that PostgreSQL's REPEATABLE READ also prevents phantoms via MVCC snapshots but doesn't use gap locks.

Interview Questions on This Topic

  • Q: Explain the difference between a shared lock and an exclusive lock, and describe a real scenario where using SELECT FOR UPDATE instead of a plain SELECT was the correct choice and why.
  • Q: Walk me through exactly how a deadlock forms between two transactions doing money transfers, and what specific change to the code would prevent it from ever happening — not just detecting and retrying it.
  • Q: A senior engineer tells you 'just use SERIALIZABLE isolation and all your concurrency bugs go away.' What's wrong with that statement, and what would you actually need to implement in the application layer to use SERIALIZABLE correctly in production?

Frequently Asked Questions

What is the difference between optimistic and pessimistic locking in a database?

Pessimistic locking acquires an exclusive lock at read time (SELECT FOR UPDATE) so no other transaction can modify the row until you commit. Optimistic locking skips locking at read time and instead checks at write time — using a version column or timestamp — that nobody else changed the row since you read it. Pessimistic is safer under high conflict; optimistic has higher throughput when conflicts are rare.

How does PostgreSQL automatically resolve deadlocks?

PostgreSQL's lock manager runs a cycle-detection algorithm on the wait-for graph once a backend has waited longer than deadlock_timeout (1 second by default). When the check finds a cycle, the backend that ran it aborts its own transaction with ERROR: deadlock detected, and the other transaction proceeds normally — which transaction becomes the victim is therefore hard to predict. Your application must catch this error and retry the entire transaction from the beginning.

Does MVCC mean I never need locks in PostgreSQL?

No. MVCC eliminates lock contention between readers and writers — a SELECT never blocks waiting for an UPDATE to finish. But write-write conflicts (two transactions trying to UPDATE the same row) still require serialization via row-level locks. You still need explicit locking (SELECT FOR UPDATE, advisory locks, or SERIALIZABLE isolation) anywhere two concurrent writes must not interleave.

Written and reviewed by the TheCodeForge Editorial Team.
