Database Locking Mechanisms Explained — Internals, Deadlocks & Production Gotchas
Every production database under real traffic is a battlefield of competing transactions — hundreds of reads and writes colliding every second. Without a disciplined locking strategy, two transactions can read the same account balance, both decide to deduct money, and you end up with funds that vanished into thin air. This is not a hypothetical. It's the kind of bug that costs companies millions and surfaces only under high concurrency — exactly the conditions your staging environment almost never reproduces.
Database locking mechanisms are the referee in that battlefield. They define the rules for who gets to read data, who gets to modify it, in what order, and what happens when two transactions want mutually incompatible things at the same time. Understanding them at the internals level — not just 'shared lock good, exclusive lock bad' — is what separates engineers who write correct concurrent code from those who ship subtle data-corruption bugs disguised as race conditions.
By the end of this article you'll understand exactly how shared, exclusive, intent, and row-level locks work internally, how MVCC sidesteps many locking problems, how to read a deadlock graph and resolve it, and the specific isolation level choices that make or break performance in high-throughput systems. You'll also see runnable examples that demonstrate real lock contention and how to diagnose it live in PostgreSQL and MySQL.
Lock Types Demystified — Shared, Exclusive, and the Intent Hierarchy
At the lowest level, every database lock falls into one of two fundamental modes: Shared (S) and Exclusive (X). A Shared lock says 'I'm reading this — others can read too, but nobody rewrites it while I'm looking.' An Exclusive lock says 'I'm writing this — nobody reads or writes until I'm done.' These two modes form the compatibility matrix that every lock manager checks before granting a request.
But modern databases don't just lock individual rows — they lock at multiple granularities: table, page, and row. To avoid scanning every row lock to decide whether a table lock is safe, databases use Intent locks. An Intent Shared (IS) lock on a table means 'some rows in here have Shared locks.' An Intent Exclusive (IX) lock means 'some rows in here have Exclusive locks.' A Shared Intent Exclusive (SIX) lock combines both: the holder is reading the whole table (S) while also modifying some of its rows (IX) — think of a scan that updates a fraction of the rows it touches.
This hierarchy is why a full-table ALTER TABLE can block every single concurrent INSERT — it needs an AccessExclusive lock that's incompatible with every other mode, including IS. Knowing this stops you from running schema migrations on a busy table without precautions like lock_timeout and retries.
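The compatibility rules above can be sketched as a small lookup table. This is the textbook multigranularity matrix (from Gray's hierarchical locking scheme), deliberately simplified — real engines track more modes (PostgreSQL, for instance, has eight table-lock modes):

```python
# Multigranularity lock compatibility — a simplified sketch of the matrix
# a lock manager consults before granting a request. Real engines add
# more modes; this is the classic five-mode textbook version.
COMPATIBLE = {
    "IS":  {"IS", "IX", "S", "SIX"},
    "IX":  {"IS", "IX"},
    "S":   {"IS", "S"},
    "SIX": {"IS"},
    "X":   set(),  # exclusive: compatible with nothing
}

def can_grant(requested: str, held_modes: set) -> bool:
    """Grant `requested` only if it is compatible with every held mode."""
    return all(requested in COMPATIBLE[held] for held in held_modes)

# An ALTER TABLE-style X request is blocked even by a lone IS lock:
print(can_grant("X", {"IS"}))       # False
# Readers coexist: S alongside IS and another S is fine:
print(can_grant("S", {"IS", "S"}))  # True
```

This is exactly why the IS lock exists: the lock manager can refuse the table-level X request by checking one table-level entry instead of walking every row lock.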
-- ============================================================
-- DEMO: Observe lock acquisition and compatibility in PostgreSQL
-- Run each block in a separate psql session as indicated
-- ============================================================

-- Setup: create a simple accounts table
CREATE TABLE accounts (
    account_id  SERIAL PRIMARY KEY,
    holder_name TEXT NOT NULL,
    balance     NUMERIC(12,2) NOT NULL CHECK (balance >= 0)
);

INSERT INTO accounts (holder_name, balance) VALUES
    ('Alice Nguyen',  5000.00),
    ('Bob Tremblay',  3200.00),
    ('Carol Okafor', 12000.00);

-- ============================================================
-- SESSION 1: Acquire a row-level lock via FOR UPDATE
-- ============================================================
BEGIN;

-- FOR UPDATE takes a ROW SHARE lock on the table and a FOR UPDATE
-- tuple lock on the matching row. No other transaction can UPDATE,
-- DELETE, or SELECT ... FOR UPDATE this row until we COMMIT.
SELECT account_id, holder_name, balance
FROM accounts
WHERE account_id = 1
FOR UPDATE;  -- <-- this is the exclusive row lock

-- Simulate a slow business operation (do NOT commit yet)
-- Leave this transaction open and switch to Session 2

-- ============================================================
-- SESSION 2 (run while Session 1 is still open): Try to read
-- ============================================================
-- Plain SELECT — succeeds immediately because PostgreSQL MVCC
-- serves this from a consistent snapshot, no row lock needed
SELECT account_id, holder_name, balance
FROM accounts
WHERE account_id = 1;  -- returns immediately with the old value

-- FOR UPDATE on the same row — this BLOCKS because Session 1
-- already holds the row lock
SELECT account_id, holder_name, balance
FROM accounts
WHERE account_id = 1
FOR UPDATE;  -- <-- will hang until Session 1 commits or rolls back

-- ============================================================
-- SESSION 3: Inspect active locks while Session 2 is blocked
-- ============================================================
SELECT pid,                               -- process ID of the backend
       locktype,
       relation::regclass AS locked_object,
       mode,                              -- lock mode requested or held
       granted,                           -- FALSE means it's waiting
       wait_event_type,
       wait_event,
       query
FROM pg_locks
JOIN pg_stat_activity USING (pid)
WHERE relation = 'accounts'::regclass
   OR NOT granted                         -- also surface the ungranted wait
ORDER BY pid, granted DESC;

-- ============================================================
-- SESSION 1: Now commit to release the lock
-- ============================================================
UPDATE accounts
SET balance = balance - 200.00
WHERE account_id = 1;  -- deduct from Alice's account

COMMIT;
-- Session 2's blocked FOR UPDATE will now proceed
account_id | holder_name | balance
-----------+--------------+---------
1 | Alice Nguyen | 5000.00
-- Session 3 pg_locks query (while Session 2 is blocked):
  pid  |   locktype    | locked_object |     mode     | granted | wait_event_type |  wait_event   | query
-------+---------------+---------------+--------------+---------+-----------------+---------------+----------------------------------
 18432 | relation      | accounts      | RowShareLock | t       | Client          | ClientRead    | SELECT ... FOR UPDATE [Session 1]
 18433 | relation      | accounts      | RowShareLock | t       | Lock            | transactionid | SELECT ... FOR UPDATE [Session 2]
 18433 | transactionid | (null)        | ShareLock    | f       | Lock            | transactionid | SELECT ... FOR UPDATE [Session 2]
-- Note: Session 2 isn't waiting on a table lock — its own RowShareLock is
-- granted. It waits for a ShareLock on Session 1's transaction ID, which
-- PostgreSQL releases only when that transaction commits or aborts.
-- After Session 1 COMMITs, Session 2 unblocks and returns:
account_id | holder_name | balance
-----------+--------------+---------
1 | Alice Nguyen | 4800.00 <-- sees the updated value
MVCC — How PostgreSQL and MySQL Serve Reads Without Locking
Multi-Version Concurrency Control (MVCC) is the single most important reason modern databases can handle thousands of concurrent users without readers and writers constantly blocking each other. Instead of one authoritative copy of a row, the database keeps multiple versions — each stamped with the transaction ID that created it and (optionally) the one that deleted it.
When you start a transaction in PostgreSQL under READ COMMITTED, the database hands you a snapshot of the world as it existed at that instant. Your queries read from that snapshot. Meanwhile, writers are creating new row versions in the heap. Your snapshot never sees those new versions — you're reading stale-but-consistent data from the past. This is why a plain SELECT never blocks waiting for a writer to finish.
MySQL InnoDB implements MVCC differently: it stores old row versions in a separate undo log rather than in the heap alongside new versions. Reads at REPEATABLE READ and below reconstruct old versions by applying undo records backward. This means long-running read transactions don't just hold a snapshot — they hold a reference to undo log chains that InnoDB cannot purge, causing the infamous problem of unbounded undo log (history list) growth under heavy write load.
The key insight: MVCC eliminates most read-write lock conflicts, but it does NOT eliminate write-write conflicts. Two transactions trying to UPDATE the same row still need to serialize — MVCC just handles the read side.
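The snapshot-visibility check at the heart of this can be made concrete with a toy model. This is a deliberately simplified Python sketch — real PostgreSQL visibility checks also consult commit status (clog), hint bits, and the set of in-progress transactions:

```python
from dataclasses import dataclass

# A toy model of PostgreSQL-style tuple visibility. Simplifying assumption:
# every transaction ID below the snapshot's horizon has already committed.

@dataclass
class RowVersion:
    value: int
    xmin: int   # txid that created this version
    xmax: int   # txid that superseded/deleted it (0 = still live)

def visible(version: RowVersion, snapshot_xid: int) -> bool:
    """Visible if created before the snapshot and not superseded before it."""
    created_before = version.xmin < snapshot_xid
    deleted_before = version.xmax != 0 and version.xmax < snapshot_xid
    return created_before and not deleted_before

# Version chain for one row: stock 50 superseded by stock 48 (txid 50312),
# mirroring the inventory demo below.
chain = [
    RowVersion(value=50, xmin=50290, xmax=50312),
    RowVersion(value=48, xmin=50312, xmax=0),
]

old_snapshot = 50300  # taken before txid 50312 committed
new_snapshot = 50400  # taken after

print([v.value for v in chain if visible(v, old_snapshot)])  # [50]
print([v.value for v in chain if visible(v, new_snapshot)])  # [48]
```

The two snapshots read different versions of the same row without any lock — which is precisely why the plain SELECT in the earlier demo never blocked.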
-- ============================================================
-- DEMO: MVCC snapshot behavior across isolation levels
-- Demonstrates READ COMMITTED vs REPEATABLE READ in PostgreSQL
-- ============================================================

-- Setup: a product inventory table
CREATE TABLE product_inventory (
    product_id   INT PRIMARY KEY,
    product_name TEXT NOT NULL,
    stock_count  INT NOT NULL,
    last_updated TIMESTAMPTZ DEFAULT now()
);

INSERT INTO product_inventory VALUES
    (101, 'Wireless Keyboard', 50, now()),
    (102, 'USB-C Hub',         30, now()),
    (103, 'Laptop Stand',      75, now());

-- ============================================================
-- SCENARIO A: READ COMMITTED — snapshot refreshes each statement
-- ============================================================
-- SESSION 1: Start a READ COMMITTED transaction (PostgreSQL default)
BEGIN ISOLATION LEVEL READ COMMITTED;

-- First read: sees stock_count = 50 for the keyboard
SELECT product_id, product_name, stock_count
FROM product_inventory
WHERE product_id = 101;

-- ---- SESSION 2: Commits an update while Session 1 is still open ----
-- BEGIN;
-- UPDATE product_inventory SET stock_count = 48 WHERE product_id = 101;
-- COMMIT;
-- --------------------------------------------------------------------

-- SESSION 1: Second read in the SAME transaction — sees the NEW value 48
-- because READ COMMITTED takes a fresh snapshot at each statement boundary.
-- This is a non-repeatable read — same query, different result.
SELECT product_id, product_name, stock_count
FROM product_inventory
WHERE product_id = 101;

COMMIT;

-- ============================================================
-- SCENARIO B: REPEATABLE READ — snapshot frozen for the transaction
-- ============================================================
-- SESSION 1: Start a REPEATABLE READ transaction
BEGIN ISOLATION LEVEL REPEATABLE READ;

-- First read: sees stock_count = 48 (current committed value)
SELECT product_id, product_name, stock_count
FROM product_inventory
WHERE product_id = 101;

-- ---- SESSION 2: Commits another update ----
-- BEGIN;
-- UPDATE product_inventory SET stock_count = 45 WHERE product_id = 101;
-- COMMIT;
-- -------------------------------------------

-- SESSION 1: Second read — still sees 48, NOT 45.
-- The snapshot was frozen at the transaction's first query.
-- This guarantees repeatable reads within the transaction.
SELECT product_id, product_name, stock_count
FROM product_inventory
WHERE product_id = 101;

-- Attempting to UPDATE the row that Session 2 already changed:
-- PostgreSQL detects the write-write conflict and raises:
--   ERROR: could not serialize access due to concurrent update
UPDATE product_inventory
SET stock_count = stock_count - 1  -- trying to sell one unit
WHERE product_id = 101;

ROLLBACK;  -- must retry the entire transaction from scratch

-- ============================================================
-- Check the xmin/xmax system columns to see MVCC versions
-- ============================================================
SELECT product_id, product_name, stock_count,
       xmin,  -- transaction ID that INSERTed/last UPDATEd this version
       xmax   -- transaction ID that DELETEd/superseded it (0 = live)
FROM product_inventory;
-- First read (before Session 2 commits):
product_id | product_name | stock_count
-----------+-------------------+-------------
101 | Wireless Keyboard | 50
-- Second read (after Session 2 commits):
product_id | product_name | stock_count
-----------+-------------------+-------------
101 | Wireless Keyboard | 48 <-- non-repeatable read!
-- SCENARIO B: REPEATABLE READ
-- First read:
product_id | product_name | stock_count
-----------+-------------------+-------------
101 | Wireless Keyboard | 48
-- Second read (Session 2 committed 45, but our snapshot ignores it):
product_id | product_name | stock_count
-----------+-------------------+-------------
101 | Wireless Keyboard | 48 <-- still 48, snapshot frozen
-- UPDATE attempt:
ERROR: could not serialize access due to concurrent update
-- xmin/xmax inspection:
product_id | product_name | stock_count | xmin | xmax
-----------+-------------------+-------------+-------+------
101 | Wireless Keyboard | 45 | 50312 | 0
102 | USB-C Hub | 30 | 50290 | 0
103 | Laptop Stand | 75 | 50290 | 0
Deadlocks — How They Form, How to Detect Them, and How to Prevent Them
A deadlock is two (or more) transactions caught in a circular wait — Transaction A holds a lock that B needs, while B holds a lock that A needs. Neither can proceed. Without intervention, they'd wait forever. Every serious database engine therefore detects deadlocks by finding cycles in the wait-for graph, but victim selection differs: InnoDB rolls back the transaction that has modified the fewest rows, while PostgreSQL aborts the transaction that ran the detection check after waiting deadlock_timeout (1 second by default). The victim gets an error and is rolled back; the survivor proceeds.
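Whichever transaction is chosen as the victim, the application's job is the same: catch the error and rerun the whole transaction from the top. A minimal sketch of that retry loop — `DeadlockError` here is a stand-in for your driver's real exception type (PostgreSQL signals deadlocks with SQLSTATE 40P01):

```python
import random
import time

class DeadlockError(Exception):
    """Placeholder for the driver's deadlock/serialization exception."""

def run_with_retry(txn_fn, max_attempts=3, base_delay=0.05):
    """Run txn_fn (which begins, executes, and commits ONE transaction),
    retrying with jittered exponential backoff if it loses a deadlock."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn_fn()
        except DeadlockError:
            if attempt == max_attempts:
                raise
            # Jitter keeps the two colliding transactions from
            # re-colliding in lockstep on the retry.
            time.sleep(base_delay * (2 ** attempt) * random.random())

# Demo: a transaction that loses the deadlock once, then succeeds.
attempts = {"n": 0}
def flaky_transfer():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise DeadlockError("deadlock detected")
    return "committed"

print(run_with_retry(flaky_transfer))  # committed
```

Note that the retry must re-execute the reads as well as the writes — the rolled-back transaction's snapshot is gone, so any value it read may have changed.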
The most common production deadlock pattern isn't exotic — it's two transactions updating the same two rows in opposite orders. Transaction A locks row 1 then tries to lock row 2. Transaction B locks row 2 then tries to lock row 1. The fix is almost always enforcing a consistent lock ordering: always acquire locks in the same sequence (e.g., always lock the lower account_id first in a money-transfer operation).
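The ordering rule is easy to demonstrate outside the database, too. The sketch below uses two in-process `threading.Lock` objects as stand-ins for row locks; because both "transfers" sort the account ids before locking, a circular wait cannot form no matter which direction the money flows:

```python
import threading

# Two in-process locks stand in for the row locks on accounts 1 and 2.
account_locks = {1: threading.Lock(), 2: threading.Lock()}
balances = {1: 10000, 2: 5000}

def transfer(from_id: int, to_id: int, amount: int) -> None:
    low, high = sorted((from_id, to_id))   # deterministic lock order
    # Always acquire the lower id first — same rule as the SQL version.
    with account_locks[low], account_locks[high]:
        balances[from_id] -= amount
        balances[to_id] += amount

# Opposite-direction transfers running concurrently — no deadlock possible.
t1 = threading.Thread(target=transfer, args=(1, 2, 100))
t2 = threading.Thread(target=transfer, args=(2, 1, 50))
t1.start(); t2.start(); t1.join(); t2.join()

print(balances)  # {1: 9950, 2: 5050}
```

With unsorted acquisition (each thread locking its own "from" account first), the same two threads can interleave into exactly the circular wait described above.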
A subtler deadlock source is the gap between a SELECT and a subsequent UPDATE. Transaction A reads a row, does application logic for 200ms, then comes back to update it. Transaction B has done the same thing in parallel. When both arrive at the UPDATE, one wins the lock and the other either deadlocks or serializes. Using SELECT ... FOR UPDATE at read time eliminates this gap by acquiring the exclusive lock immediately.
In PostgreSQL, deadlock details land in the server log and in pg_stat_activity. In MySQL InnoDB, SHOW ENGINE INNODB STATUS gives you the full deadlock report with the exact queries and lock wait chains.
-- ============================================================
-- DEMO: Classic deadlock scenario and its prevention
-- Run on PostgreSQL. Uses two accounts for a funds transfer.
-- ============================================================

-- Setup
CREATE TABLE bank_accounts (
    account_id INT PRIMARY KEY,
    owner_name TEXT NOT NULL,
    balance    NUMERIC(12,2) NOT NULL CHECK (balance >= 0)
);

INSERT INTO bank_accounts VALUES
    (1, 'Alice Nguyen', 10000.00),
    (2, 'Bob Tremblay',  5000.00);

-- ============================================================
-- THE DEADLOCK-PRONE PATTERN
-- Transaction 1: Transfer $100 from Alice (id=1) to Bob (id=2)
-- Transaction 2: Transfer $50 from Bob (id=2) to Alice (id=1)
-- Both start simultaneously — classic deadlock setup
-- ============================================================

-- Transaction 1 (Session 1)
BEGIN;
UPDATE bank_accounts SET balance = balance - 100
WHERE account_id = 1;  -- locks row 1
-- (now pause — Session 2 runs its first UPDATE below)
UPDATE bank_accounts SET balance = balance + 100
WHERE account_id = 2;  -- BLOCKS: waits for row 2
COMMIT;

-- Transaction 2 (Session 2 — run the first UPDATE WHILE Session 1 is paused)
BEGIN;
UPDATE bank_accounts SET balance = balance - 50
WHERE account_id = 2;  -- locks row 2
UPDATE bank_accounts SET balance = balance + 50
WHERE account_id = 1;  -- DEADLOCK: waits for row 1
COMMIT;

-- PostgreSQL detects the cycle and raises:
--   ERROR:  deadlock detected
--   DETAIL: Process 18433 waits for ShareLock on transaction 50401;
--           blocked by process 18432.
--           Process 18432 waits for ShareLock on transaction 50402;
--           blocked by process 18433.
--   HINT:   See server log for query details.
-- ============================================================
-- THE DEADLOCK-SAFE PATTERN
-- Always lock rows in ascending account_id order regardless of
-- which direction the transfer flows
-- ============================================================
CREATE OR REPLACE FUNCTION transfer_funds(
    p_from_account_id INT,
    p_to_account_id   INT,
    p_amount          NUMERIC
) RETURNS VOID AS $$
DECLARE
    v_low_id  INT := LEAST(p_from_account_id, p_to_account_id);
    v_high_id INT := GREATEST(p_from_account_id, p_to_account_id);
BEGIN
    -- Always lock the LOWER id first — consistent ordering prevents
    -- deadlock regardless of which direction the transfer flows
    PERFORM account_id, balance
    FROM bank_accounts
    WHERE account_id IN (v_low_id, v_high_id)
    ORDER BY account_id ASC  -- <-- the critical ordering
    FOR UPDATE;              -- acquires row locks in a deterministic order

    -- Validate sufficient funds
    IF (SELECT balance FROM bank_accounts
        WHERE account_id = p_from_account_id) < p_amount THEN
        RAISE EXCEPTION 'Insufficient funds in account %', p_from_account_id;
    END IF;

    -- Perform the actual transfer
    UPDATE bank_accounts SET balance = balance - p_amount
    WHERE account_id = p_from_account_id;

    UPDATE bank_accounts SET balance = balance + p_amount
    WHERE account_id = p_to_account_id;
END;
$$ LANGUAGE plpgsql;

-- Both transfers now run safely in parallel — no deadlock
SELECT transfer_funds(1, 2, 100);  -- Alice -> Bob
SELECT transfer_funds(2, 1, 50);   -- Bob -> Alice (different session, same time)

-- Verify final balances
SELECT account_id, owner_name, balance
FROM bank_accounts
ORDER BY account_id;
ERROR: deadlock detected
DETAIL: Process 18433 waits for ShareLock on transaction 50401; blocked by process 18432.
Process 18432 waits for ShareLock on transaction 50402; blocked by process 18433.
HINT: See server log for query details.
-- Deadlock-safe pattern: both transfers complete cleanly.
-- Final balances after Alice->Bob $100 and Bob->Alice $50:
account_id | owner_name | balance
-----------+--------------+----------
1 | Alice Nguyen | 9950.00 (started 10000 - 100 + 50)
2 | Bob Tremblay | 5050.00 (started 5000 + 100 - 50)
Isolation Levels, Lock Escalation, and Choosing the Right Strategy in Production
Isolation levels are your contract with the database about what anomalies you're willing to tolerate in exchange for concurrency. READ UNCOMMITTED (dirty reads allowed) is almost never correct. READ COMMITTED is the PostgreSQL default and the right choice for most OLTP workloads — each statement gets a fresh snapshot, so writes from other committed transactions are visible immediately. REPEATABLE READ prevents non-repeatable reads by freezing the snapshot at the transaction's first query. SERIALIZABLE gives you full correctness at the cost of more aborts under contention.
Lock escalation is a real performance trap in SQL Server and older MySQL configurations. When a single transaction acquires too many row locks (often >5,000 in SQL Server), the engine escalates to a table lock to save memory. This silently destroys concurrency for all other sessions touching that table. The fix is batching large bulk operations into smaller transactions or using table-level hints explicitly.
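The batching fix is mechanical: split the key set into chunks and run one short transaction per chunk, keeping each transaction's lock count well below the escalation threshold. A minimal sketch — `archive_orders` and its per-chunk SQL are hypothetical placeholders for your real bulk job:

```python
def chunks(ids, size):
    """Yield successive slices of `ids` with at most `size` elements."""
    for start in range(0, len(ids), size):
        yield ids[start:start + size]

def archive_orders(order_ids, chunk_size=1000):
    """Hypothetical bulk job: one short transaction per chunk, so no single
    transaction holds enough row locks to trigger escalation (~5,000 in
    SQL Server). Returns the number of transactions issued."""
    batches = 0
    for batch in chunks(order_ids, chunk_size):
        # Each batch would run as its own transaction, e.g.:
        #   BEGIN; UPDATE orders SET archived = true
        #          WHERE order_id = ANY(%s); COMMIT;
        batches += 1
    return batches

print(archive_orders(list(range(4500))))  # 5 transactions of <= 1000 rows
```

The trade-off is that the bulk operation is no longer atomic — a failure mid-way leaves earlier chunks committed, so the job must be idempotent and resumable.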
For high-throughput write-heavy systems, consider Optimistic Locking — don't lock at read time; instead, check at write time that nobody else modified the row since you read it. Implement this with a version column or a last_modified timestamp. This works brilliantly when conflicts are rare. When conflicts are frequent, optimistic locking creates a storm of retries and performs worse than pessimistic locking.
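The application-side shape of optimistic locking is a read-compute-compare-and-swap loop. Here is a sketch with an in-memory dict standing in for the table — in a real system the compare-and-update step is the UPDATE guarded by `row_version` shown in the SQL that follows:

```python
# In-memory stand-in for product_catalog; one row, keyed by product_id.
table = {1: {"unit_price": 129.99, "row_version": 1}}

def compare_and_update(product_id, new_price, read_version):
    """Mimics UPDATE ... WHERE row_version = read_version; returns rowcount."""
    row = table[product_id]
    if row["row_version"] != read_version:
        return 0                      # someone else won the race
    row["unit_price"] = new_price
    row["row_version"] += 1
    return 1

def set_price_with_retry(product_id, new_price, max_attempts=3):
    for _ in range(max_attempts):
        version = table[product_id]["row_version"]   # Step 1: read + version
        # Step 2: business logic runs here — no locks held, can take seconds
        if compare_and_update(product_id, new_price, version):  # Step 3
            return True
        # Conflict: loop re-reads the fresh version and retries
    return False

# Simulate the race: we read version 1, a concurrent session commits first,
# so our stale write fails — then the retry loop re-reads and succeeds.
stale_version = table[1]["row_version"]              # we observed version 1
compare_and_update(1, 119.99, stale_version)         # concurrent session wins
print(compare_and_update(1, 124.99, stale_version))  # 0 — optimistic check failed
print(set_price_with_retry(1, 124.99))               # True — retried with fresh version
print(table[1]["row_version"])                       # 3
```

The retry loop is the part teams forget: a zero rowcount is not an error from the database's point of view, so the application must treat it as one.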
-- ============================================================
-- DEMO: Optimistic Locking with a version column
-- No explicit locks acquired at read time.
-- Conflict detection happens at UPDATE using the version stamp.
-- ============================================================
CREATE TABLE product_catalog (
    product_id    SERIAL PRIMARY KEY,
    product_name  TEXT NOT NULL,
    unit_price    NUMERIC(10,2) NOT NULL,
    row_version   INT NOT NULL DEFAULT 1,  -- optimistic lock sentinel
    last_modified TIMESTAMPTZ DEFAULT now()
);

INSERT INTO product_catalog (product_name, unit_price) VALUES
    ('Mechanical Keyboard', 129.99),
    ('27-inch Monitor',     349.99),
    ('Ergonomic Mouse',      59.99);

-- ============================================================
-- APPLICATION LAYER PATTERN
-- Step 1: Read the row AND capture its row_version
-- Step 2: Do business logic (could take several seconds)
-- Step 3: UPDATE only if row_version is still the same value
--         we read — if not, someone else changed it first
-- ============================================================

-- Step 1: Read the keyboard price (application caches row_version = 1)
SELECT product_id, product_name, unit_price, row_version
FROM product_catalog
WHERE product_id = 1;
-- Returns: product_id=1, unit_price=129.99, row_version=1

-- (Imagine a concurrent session updates the price to 119.99 and
--  increments row_version to 2 while our app is thinking)

-- Step 3: Our application tries to set the price to 124.99.
-- We include WHERE row_version = 1 (the version we read)
UPDATE product_catalog
SET unit_price    = 124.99,
    row_version   = row_version + 1,  -- increment the version
    last_modified = now()
WHERE product_id = 1
  AND row_version = 1;  -- <-- the optimistic check

-- Then check how many rows were actually updated — via
-- GET DIAGNOSTICS ... = ROW_COUNT in PL/pgSQL, or the statement's
-- rowcount in the application layer. If the rowcount is 0, the
-- optimistic lock failed: someone else modified the row between
-- our read and our write.
-- The application must re-read, re-apply its logic, and retry.

-- ============================================================
-- Stored function with conflict signalling for optimistic locking
-- ============================================================
CREATE OR REPLACE FUNCTION update_product_price(
    p_product_id   INT,
    p_new_price    NUMERIC,
    p_read_version INT  -- version the caller observed at read time
) RETURNS TEXT AS $$
DECLARE
    rows_affected INT;
BEGIN
    UPDATE product_catalog
    SET unit_price    = p_new_price,
        row_version   = row_version + 1,
        last_modified = now()
    WHERE product_id = p_product_id
      AND row_version = p_read_version;  -- optimistic conflict check

    GET DIAGNOSTICS rows_affected = ROW_COUNT;

    IF rows_affected = 0 THEN
        -- Return a signal the caller can detect and retry on
        RETURN 'CONFLICT: row was modified by another transaction — please re-read and retry';
    END IF;

    RETURN 'OK: price updated successfully';
END;
$$ LANGUAGE plpgsql;

-- Simulate a clean update (correct version — the monitor's price drop)
SELECT update_product_price(2, 329.99, 1);

-- Simulate a stale update (wrong version — someone else already updated)
SELECT update_product_price(2, 319.99, 1);  -- version 1 is stale; it's now 2

-- Verify final state
SELECT product_id, product_name, unit_price, row_version
FROM product_catalog;
update_product_price
--------------------------------------
OK: price updated successfully
-- Stale update (version mismatch):
update_product_price
------------------------------------------------------
CONFLICT: row was modified by another transaction — please re-read and retry
-- Final state:
product_id | product_name | unit_price | row_version
-----------+---------------------+------------+-------------
1 | Mechanical Keyboard | 124.99 | 2
2 | 27-inch Monitor | 329.99 | 2
3 | Ergonomic Mouse | 59.99 | 1
| Aspect | Pessimistic Locking (SELECT FOR UPDATE) | Optimistic Locking (Version Column) |
|---|---|---|
| Lock acquired at | Read time — immediately on SELECT | Write time — only on UPDATE |
| Best for | High-conflict workloads (financial transfers) | Low-conflict workloads (catalog updates, CMS) |
| Failure mode | Blocking wait / timeout / deadlock | Silent success with 0 rows affected — needs retry |
| Throughput impact | Serializes concurrent readers of same row | No blocking reads — higher read throughput |
| Implementation complexity | Low — just add FOR UPDATE to existing SELECT | Moderate — need version column, retry logic in app |
| Works across app servers | Yes — lock lives in the DB | Yes — version column is DB-side, stateless |
| Risk of starvation | Yes — long transactions block everyone behind them | Low — no blocking, just retries on conflict |
| MVCC interaction | Bypasses MVCC for writes, forces serialization | Fully compatible with MVCC — reads are never blocked |
🎯 Key Takeaways
- MVCC means plain SELECTs almost never block in PostgreSQL or InnoDB — but write-write conflicts still serialize, and long read transactions bloat the undo log or table heap.
- Deadlocks are almost always caused by inconsistent lock ordering — acquire locks on multiple rows in a deterministic sequence (e.g., ascending primary key) and deadlocks become nearly impossible.
- Optimistic locking is usually the better choice for low-conflict scenarios because it never blocks readers — but it requires application-level retry logic on every write path, which teams often forget to implement.
- Lock escalation from row-level to table-level is a silent performance killer in SQL Server and legacy MySQL configs — batch large bulk operations and verify lock granularity with your DB's lock diagnostics before touching large tables in production.
⚠ Common Mistakes to Avoid
- ✕ Mistake 1: Holding a transaction open across network calls or user input — Symptom: idle connections pile up in pg_stat_activity with state = 'idle in transaction', the connection pool is exhausted, and the application errors under moderate load — Fix: Restructure so all DB reads happen before the transaction opens, or use application-level caching. Begin the transaction only immediately before the writes and commit immediately after. Never wait for an HTTP response or a user click inside an open transaction.
- ✕ Mistake 2: Using SELECT FOR UPDATE on a large range without an index — Symptom: the query behaves like a near-table-lock because it must examine thousands of rows and acquires row locks on all of them, grinding concurrent writes to a halt — Fix: Ensure the WHERE clause uses an indexed column so the lock scope stays minimal. Run EXPLAIN (ANALYZE, BUFFERS) to verify an index scan is used, not a sequential scan, before deploying any FOR UPDATE query to production.
- ✕ Mistake 3: Assuming READ COMMITTED prevents phantom reads in MySQL InnoDB — Symptom: a transaction reads a range, another transaction inserts a row into that range and commits, and the first transaction's re-read of the range sees the new row (a phantom) — leading to incorrect aggregate calculations or double-processing — Fix: Upgrade to REPEATABLE READ (the InnoDB default), which uses gap locks to prevent phantoms, or restructure so range queries execute only once per transaction. Note that PostgreSQL's REPEATABLE READ also prevents phantoms, via MVCC snapshots rather than gap locks.
Interview Questions on This Topic
- Q: Explain the difference between a shared lock and an exclusive lock, and describe a real scenario where using SELECT FOR UPDATE instead of a plain SELECT was the correct choice and why.
- Q: Walk me through exactly how a deadlock forms between two transactions doing money transfers, and what specific change to the code would prevent it from ever happening — not just detecting and retrying it.
- Q: A senior engineer tells you 'just use SERIALIZABLE isolation and all your concurrency bugs go away.' What's wrong with that statement, and what would you actually need to implement in the application layer to use SERIALIZABLE correctly in production?
Frequently Asked Questions
What is the difference between optimistic and pessimistic locking in a database?
Pessimistic locking acquires an exclusive lock at read time (SELECT FOR UPDATE) so no other transaction can modify the row until you commit. Optimistic locking skips locking at read time and instead checks at write time — using a version column or timestamp — that nobody else changed the row since you read it. Pessimistic is safer under high conflict; optimistic has higher throughput when conflicts are rare.
How does PostgreSQL automatically resolve deadlocks?
After a backend has been blocked for deadlock_timeout (1 second by default), PostgreSQL's lock manager runs a cycle-detection pass over the wait-for graph. If it finds a cycle, the backend that performed the check aborts its own transaction with ERROR: deadlock detected, which breaks the cycle and lets the other transactions proceed. Nothing is retried automatically — your application must catch this error and rerun the entire transaction from the beginning.
Does MVCC mean I never need locks in PostgreSQL?
No. MVCC eliminates lock contention between readers and writers — a SELECT never blocks waiting for an UPDATE to finish. But write-write conflicts (two transactions trying to UPDATE the same row) still require serialization via row-level locks. You still need explicit locking (SELECT FOR UPDATE, advisory locks, or SERIALIZABLE isolation) anywhere two concurrent writes must not interleave.
Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.