PostgreSQL Triggers Explained — Internals, Edge Cases & Production Gotchas
Most applications treat the database as a dumb storage bucket — data goes in, data comes out. But production systems have real cross-cutting concerns: audit trails that must never be skipped, denormalized caches that must stay in sync, business rules that must fire regardless of which application layer touches the data. Triggers are PostgreSQL's answer to this: executable logic that lives inside the database engine itself, tightly coupled to data events, invisible to application code.
How PostgreSQL Executes Triggers — The Internal Firing Order
Before you write a single trigger, you need to understand what the engine actually does when you fire one. PostgreSQL separates trigger execution into four dimensions: timing (BEFORE, AFTER, INSTEAD OF), scope (FOR EACH ROW vs FOR EACH STATEMENT), event (INSERT, UPDATE, DELETE, TRUNCATE), and transition visibility.
When a row-level BEFORE trigger fires, the NEW record hasn't been written to the heap yet. That means you can modify NEW before it lands — this is how validation and auto-stamping work. A row-level AFTER trigger fires after the row change has been applied, at the end of the triggering statement, inside the same still-open transaction — it sees the final row state but can no longer change it. Statement-level triggers fire once per SQL statement regardless of how many rows were affected — even zero rows.
The firing order when multiple triggers exist on the same table and event is alphabetical by trigger name. This isn't a quirk — it's documented and guaranteed. In practice, embedding a numeric component in the name, like orders_10_audit and orders_20_cache_invalidate, lets you control execution order explicitly without relying on creation order.
```sql
-- ===========================================================
-- Demonstrate PostgreSQL trigger firing order and NEW visibility
-- ===========================================================

-- 1. Create a simple orders table
CREATE TABLE orders (
    order_id      SERIAL PRIMARY KEY,
    customer_name TEXT NOT NULL,
    total_amount  NUMERIC(10,2) NOT NULL,
    status        TEXT NOT NULL DEFAULT 'pending',
    created_at    TIMESTAMPTZ NOT NULL DEFAULT now(),
    updated_at    TIMESTAMPTZ
);

-- 2. Create an audit log table to capture every change
CREATE TABLE order_audit_log (
    log_id     SERIAL PRIMARY KEY,
    order_id   INT,
    event_type TEXT,                               -- INSERT / UPDATE / DELETE
    old_status TEXT,                               -- NULL on INSERT
    new_status TEXT,                               -- NULL on DELETE
    changed_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    changed_by TEXT NOT NULL                       -- the DB role that made the change
);

-- ===========================================================
-- TRIGGER FUNCTION 1: Stamp updated_at before the row is written
-- This is a BEFORE trigger — we mutate NEW directly
-- ===========================================================
CREATE OR REPLACE FUNCTION stamp_updated_at()
RETURNS TRIGGER LANGUAGE plpgsql AS $$
BEGIN
    -- NEW is the incoming row record. Mutating it here changes
    -- what actually gets written to the heap.
    NEW.updated_at := now();

    -- Returning NEW tells PostgreSQL: "use this (possibly modified)
    -- row as the actual row to insert/update".
    -- Returning NULL would cancel the operation entirely.
    RETURN NEW;
END;
$$;

-- ===========================================================
-- TRIGGER FUNCTION 2: Write an audit log entry AFTER the row lands
-- This is an AFTER trigger. OLD exists for UPDATE/DELETE and NEW
-- for INSERT/UPDATE; the one that doesn't apply is unassigned and
-- referencing it raises an error, so guard every use with TG_OP.
-- ===========================================================
CREATE OR REPLACE FUNCTION log_order_status_change()
RETURNS TRIGGER LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO order_audit_log (order_id, event_type, old_status, new_status, changed_by)
    VALUES (
        CASE WHEN TG_OP = 'DELETE' THEN OLD.order_id ELSE NEW.order_id END,
        TG_OP,                                             -- built-in variable: 'INSERT','UPDATE','DELETE'
        CASE WHEN TG_OP = 'INSERT' THEN NULL ELSE OLD.status END,
        CASE WHEN TG_OP = 'DELETE' THEN NULL ELSE NEW.status END,
        current_user                                       -- the authenticated PostgreSQL role
    );

    -- The return value of an AFTER row trigger is always ignored;
    -- RETURN NULL is the conventional way to make that explicit.
    RETURN NULL;
END;
$$;

-- ===========================================================
-- Attach triggers — note the naming convention: the numeric
-- component controls alphabetical (therefore execution) order
-- ===========================================================

-- Fires BEFORE insert or update — stamps the timestamp
CREATE TRIGGER orders_10_stamp_updated_at
    BEFORE INSERT OR UPDATE ON orders
    FOR EACH ROW EXECUTE FUNCTION stamp_updated_at();

-- Fires AFTER insert, update, or delete — writes the audit log
CREATE TRIGGER orders_20_audit_status_change
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW EXECUTE FUNCTION log_order_status_change();

-- ===========================================================
-- TEST IT
-- ===========================================================

-- Insert a new order
INSERT INTO orders (customer_name, total_amount)
VALUES ('Alice Johnson', 149.99);

-- Update the status
UPDATE orders SET status = 'shipped'
WHERE customer_name = 'Alice Johnson';

-- Review the audit trail
SELECT log_id, order_id, event_type, old_status, new_status, changed_at, changed_by
FROM order_audit_log
ORDER BY log_id;
```
```
 log_id | order_id | event_type | old_status | new_status |         changed_at          | changed_by
--------+----------+------------+------------+------------+-----------------------------+------------
      1 |        1 | INSERT     | NULL       | pending    | 2024-03-15 10:22:14.381+00  | postgres
      2 |        1 | UPDATE     | pending    | shipped    | 2024-03-15 10:22:14.392+00  | postgres
(2 rows)
```
INSTEAD OF Triggers on Views — Making Read-Only Views Writable
PostgreSQL views are non-updatable by default the moment they contain a JOIN, aggregation, DISTINCT, or subquery. INSTEAD OF triggers solve this by intercepting the write operation and redirecting it to the underlying base tables manually. The trigger fires in place of the attempted DML — the original operation never touches the view's definition at all.
This pattern is extremely common in reporting databases and data-access layers where you want to expose a clean, denormalized surface to application code while keeping the normalized schema underneath. The application writes to the view as if it were a table; the trigger handles the fan-out.
One subtle difference from regular triggers: INSTEAD OF triggers are always FOR EACH ROW. A view can still carry ordinary BEFORE/AFTER statement-level triggers, but the INSTEAD OF mechanism is inherently per-row — each row the DML would have touched is handed to the trigger individually. Also, INSTEAD OF DELETE must handle the row using OLD, not NEW — NEW doesn't exist for a deletion.
```sql
-- ===========================================================
-- Writable view pattern using INSTEAD OF triggers
-- Use case: e-commerce product catalog with a separate
-- inventory table — expose a unified view to the API layer
-- ===========================================================
CREATE TABLE products (
    product_id   SERIAL PRIMARY KEY,
    sku          TEXT NOT NULL UNIQUE,
    product_name TEXT NOT NULL,
    unit_price   NUMERIC(10,2) NOT NULL
);

CREATE TABLE inventory (
    inventory_id SERIAL PRIMARY KEY,
    product_id   INT NOT NULL REFERENCES products(product_id),
    warehouse    TEXT NOT NULL DEFAULT 'main',
    stock_qty    INT NOT NULL DEFAULT 0,
    UNIQUE (product_id, warehouse)
);

-- The view the application sees — a clean, flat product+stock surface
CREATE VIEW product_catalog AS
SELECT p.product_id, p.sku, p.product_name, p.unit_price,
       i.stock_qty, i.warehouse
FROM products p
JOIN inventory i ON i.product_id = p.product_id;

-- ===========================================================
-- INSTEAD OF INSERT: fan the write out to both base tables
-- ===========================================================
CREATE OR REPLACE FUNCTION insert_into_product_catalog()
RETURNS TRIGGER LANGUAGE plpgsql AS $$
DECLARE
    new_product_id INT;
BEGIN
    -- Step 1: insert the core product record and capture its generated PK
    INSERT INTO products (sku, product_name, unit_price)
    VALUES (NEW.sku, NEW.product_name, NEW.unit_price)
    RETURNING product_id INTO new_product_id;

    -- Step 2: create the corresponding inventory row
    INSERT INTO inventory (product_id, warehouse, stock_qty)
    VALUES (
        new_product_id,
        COALESCE(NEW.warehouse, 'main'),   -- default to main warehouse
        COALESCE(NEW.stock_qty, 0)
    );

    -- Return NEW so the row is counted in the INSERT's command tag
    RETURN NEW;
END;
$$;

-- ===========================================================
-- INSTEAD OF UPDATE: update each base table independently
-- ===========================================================
CREATE OR REPLACE FUNCTION update_product_catalog()
RETURNS TRIGGER LANGUAGE plpgsql AS $$
BEGIN
    -- This updates both base tables unconditionally; comparing
    -- OLD vs NEW first would let you skip an unchanged table
    UPDATE products
    SET product_name = NEW.product_name,
        unit_price   = NEW.unit_price
    WHERE product_id = OLD.product_id;

    UPDATE inventory
    SET stock_qty = NEW.stock_qty
    WHERE product_id = OLD.product_id
      AND warehouse  = OLD.warehouse;

    RETURN NEW;
END;
$$;

-- Attach both INSTEAD OF triggers to the view
CREATE TRIGGER product_catalog_insert
    INSTEAD OF INSERT ON product_catalog
    FOR EACH ROW EXECUTE FUNCTION insert_into_product_catalog();

CREATE TRIGGER product_catalog_update
    INSTEAD OF UPDATE ON product_catalog
    FOR EACH ROW EXECUTE FUNCTION update_product_catalog();

-- ===========================================================
-- TEST: application writes to the view as if it's a table
-- ===========================================================
INSERT INTO product_catalog (sku, product_name, unit_price, stock_qty, warehouse)
VALUES ('SKU-9001', 'Wireless Keyboard', 79.99, 250, 'main');

UPDATE product_catalog
SET unit_price = 69.99, stock_qty = 230
WHERE sku = 'SKU-9001';

-- Verify data landed in the correct base tables
SELECT p.sku, p.unit_price, i.stock_qty
FROM products p
JOIN inventory i ON i.product_id = p.product_id
WHERE p.sku = 'SKU-9001';
```
```
    sku     | unit_price | stock_qty
------------+------------+-----------
 SKU-9001   |      69.99 |       230
(1 row)
```
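The example above covers INSERT and UPDATE; a DELETE path follows the same pattern, except that only OLD is available. A minimal sketch — the cleanup policy (dropping the product once its last inventory row is gone) is an illustrative choice, not part of the original example:

```sql
-- Sketch: INSTEAD OF DELETE handler for the same view.
-- NEW does not exist for deletions — everything comes from OLD.
CREATE OR REPLACE FUNCTION delete_from_product_catalog()
RETURNS TRIGGER LANGUAGE plpgsql AS $$
BEGIN
    -- Remove the inventory row this view row represents
    DELETE FROM inventory
    WHERE product_id = OLD.product_id
      AND warehouse  = OLD.warehouse;

    -- Illustrative policy: drop the product itself once no
    -- warehouse holds an inventory row for it
    DELETE FROM products p
    WHERE p.product_id = OLD.product_id
      AND NOT EXISTS (
          SELECT 1 FROM inventory i WHERE i.product_id = p.product_id
      );

    RETURN OLD;   -- for INSTEAD OF DELETE, return OLD on success
END;
$$;

CREATE TRIGGER product_catalog_delete
    INSTEAD OF DELETE ON product_catalog
    FOR EACH ROW EXECUTE FUNCTION delete_from_product_catalog();
```

With this in place, `DELETE FROM product_catalog WHERE sku = 'SKU-9001'` fans out to both base tables just like the INSERT did.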
Transition Tables — Statement-Level Triggers That Actually Scale
Row-level triggers have a dirty secret: they don't scale. If you UPDATE 50,000 rows, a FOR EACH ROW trigger fires 50,000 times — 50,000 context switches between the executor and the trigger runtime. For audit logging or cache invalidation, this is catastrophic.
PostgreSQL 10 introduced transition tables (REFERENCING OLD TABLE AS / REFERENCING NEW TABLE AS) for statement-level triggers. These are ephemeral, in-memory result sets containing all the affected rows, queryable like a regular table inside the trigger function. You process the entire batch in a single trigger invocation — one context switch, one INSERT into your audit table, done.
The performance difference is orders of magnitude. In one internal benchmark, moving from row-level audit logging to a statement-level trigger with a transition table reduced audit INSERT time from 4.2 seconds to 0.09 seconds for a 10,000-row bulk update. Transition tables are available only for AFTER triggers — AFTER INSERT, AFTER UPDATE, and AFTER DELETE — not BEFORE triggers, and not INSTEAD OF. (They can also be referenced from row-level AFTER triggers, though doing so forfeits the batching win.)
```sql
-- ===========================================================
-- High-performance bulk audit using transition tables
-- Scenario: price update job runs nightly, touching thousands
-- of product rows — we need a full audit trail without
-- 10,000 individual trigger invocations
-- ===========================================================
CREATE TABLE product_prices (
    product_id    INT PRIMARY KEY,
    sku           TEXT NOT NULL,
    current_price NUMERIC(10,2) NOT NULL,
    updated_at    TIMESTAMPTZ DEFAULT now()
);

CREATE TABLE price_change_audit (
    audit_id     SERIAL PRIMARY KEY,
    product_id   INT,
    sku          TEXT,
    old_price    NUMERIC(10,2),
    new_price    NUMERIC(10,2),
    price_delta  NUMERIC(10,2),                     -- computed: new - old
    changed_at   TIMESTAMPTZ NOT NULL DEFAULT now(),
    batch_job_id TEXT                               -- passed via SET LOCAL
);

-- ===========================================================
-- Statement-level trigger function using transition tables
-- OLD TABLE contains all rows BEFORE the UPDATE
-- NEW TABLE contains all rows AFTER the UPDATE
-- Both are queryable with standard SQL inside this function
-- ===========================================================
CREATE OR REPLACE FUNCTION audit_price_changes_bulk()
RETURNS TRIGGER LANGUAGE plpgsql AS $$
BEGIN
    -- Single INSERT joining transition tables — processes ALL
    -- modified rows in one statement regardless of batch size.
    -- current_setting() reads a transaction-scoped variable set
    -- by the calling application for traceability.
    INSERT INTO price_change_audit
        (product_id, sku, old_price, new_price, price_delta, batch_job_id)
    SELECT
        old_rows.product_id,
        new_rows.sku,
        old_rows.current_price AS old_price,
        new_rows.current_price AS new_price,
        new_rows.current_price - old_rows.current_price AS price_delta,
        current_setting('app.batch_job_id', true)   -- true = return NULL if unset
    FROM old_table AS old_rows                      -- transition table: pre-update snapshot
    JOIN new_table AS new_rows                      -- transition table: post-update snapshot
      ON old_rows.product_id = new_rows.product_id
    WHERE old_rows.current_price <> new_rows.current_price;  -- only log actual changes

    -- Statement-level triggers: return value is always NULL / ignored
    RETURN NULL;
END;
$$;

-- ===========================================================
-- Attach as a statement-level AFTER UPDATE trigger
-- REFERENCING declares the transition table aliases
-- ===========================================================
CREATE TRIGGER product_prices_bulk_audit
    AFTER UPDATE ON product_prices
    REFERENCING OLD TABLE AS old_table NEW TABLE AS new_table
    FOR EACH STATEMENT
    EXECUTE FUNCTION audit_price_changes_bulk();

-- ===========================================================
-- Seed some test data
-- ===========================================================
INSERT INTO product_prices (product_id, sku, current_price) VALUES
    (1, 'SKU-001', 29.99),
    (2, 'SKU-002', 49.99),
    (3, 'SKU-003', 99.99);

-- ===========================================================
-- Simulate a nightly batch price update
-- The app sets a transaction-scoped variable for traceability.
-- SET LOCAL only takes effect inside an explicit transaction
-- block, so the update runs inside BEGIN ... COMMIT.
-- ===========================================================
BEGIN;
SET LOCAL app.batch_job_id = 'NIGHTLY-PRICE-JOB-2024-03-15';

UPDATE product_prices
SET current_price = current_price * 0.90,   -- 10% discount run
    updated_at    = now()
WHERE product_id IN (1, 2, 3);
COMMIT;

-- One trigger invocation processed all 3 rows
SELECT product_id, sku, old_price, new_price, price_delta, batch_job_id
FROM price_change_audit
ORDER BY product_id;
```
```
 product_id |   sku   | old_price | new_price | price_delta |          batch_job_id
------------+---------+-----------+-----------+-------------+---------------------------------
          1 | SKU-001 |     29.99 |     26.99 |       -3.00 | NIGHTLY-PRICE-JOB-2024-03-15
          2 | SKU-002 |     49.99 |     44.99 |       -5.00 | NIGHTLY-PRICE-JOB-2024-03-15
          3 | SKU-003 |     99.99 |     89.99 |      -10.00 | NIGHTLY-PRICE-JOB-2024-03-15
(3 rows)
```
Production Gotchas — Trigger Recursion, Deferred Triggers, and the pg_trigger Catalog
Triggers in production introduce failure modes you won't find in documentation tutorials. The most dangerous is mutual recursion: Trigger A updates Table B, which fires Trigger B, which updates Table A, which fires Trigger A again — until PostgreSQL aborts with ERROR: stack depth limit exceeded, rolls back the transaction, and leaves a very confused DBA at 2am.
PostgreSQL provides two safety valves. First, session_replication_role: setting it to 'replica' disables all triggers that aren't explicitly marked as ALWAYS or REPLICA — useful for bulk data loads. Second, pg_trigger_depth() returns the current nesting level of trigger calls. Checking this at the start of a trigger function lets you conditionally skip execution when you're already inside a trigger chain.
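The first safety valve looks like this in practice, reusing the audit trigger from the orders example earlier (the CSV path is illustrative):

```sql
-- Audit triggers that must survive bulk loads get ENABLE ALWAYS
ALTER TABLE orders ENABLE ALWAYS TRIGGER orders_20_audit_status_change;

BEGIN;
-- Disable all ordinary triggers for this transaction only;
-- triggers marked ENABLE ALWAYS or ENABLE REPLICA still fire
SET LOCAL session_replication_role = 'replica';

COPY orders (customer_name, total_amount)
FROM '/tmp/orders_bulk.csv' WITH (FORMAT csv);   -- illustrative path

COMMIT;   -- SET LOCAL reverts to 'origin' automatically at commit
```

Using SET LOCAL rather than SET means the dangerous setting cannot leak past the transaction, even if the load script crashes mid-way.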
Deferred constraint triggers (CONSTRAINT TRIGGER ... DEFERRABLE INITIALLY DEFERRED) are another production power tool. They fire at COMMIT time rather than at statement time — perfect for referential integrity checks that would fail mid-transaction but resolve before commit. Finally, pg_trigger in the system catalog is your observability layer: query it to see every trigger on every table, its function, its timing, and whether it's enabled.
```sql
-- ===========================================================
-- Pattern 1: Guard against trigger recursion using pg_trigger_depth()
-- Scenario: updating a 'categories' table that recalculates
-- a materialized path — which itself triggers the same function
-- ===========================================================
CREATE TABLE categories (
    category_id   SERIAL PRIMARY KEY,
    parent_id     INT REFERENCES categories(category_id),
    category_name TEXT NOT NULL,
    full_path     TEXT              -- e.g. 'Electronics > Laptops > Gaming'
);

CREATE OR REPLACE FUNCTION rebuild_category_path()
RETURNS TRIGGER LANGUAGE plpgsql AS $$
BEGIN
    -- Inside a trigger, pg_trigger_depth() returns 1 for the
    -- outermost call. A value > 1 means we're inside a nested
    -- trigger chain — skip execution to prevent infinite
    -- recursion without raising an error.
    IF pg_trigger_depth() > 1 THEN
        RETURN NEW;
    END IF;

    -- Build the full path by recursively walking parent_id
    WITH RECURSIVE ancestor_path AS (
        -- Base case: start from the current row's parent
        SELECT parent_id, category_name, 1 AS depth
        FROM categories
        WHERE category_id = NEW.parent_id

        UNION ALL

        -- Recursive case: walk up to the root
        SELECT c.parent_id, c.category_name, ap.depth + 1
        FROM categories c
        JOIN ancestor_path ap ON ap.parent_id = c.category_id
    )
    SELECT string_agg(category_name, ' > ' ORDER BY depth DESC)
           || ' > ' || NEW.category_name
    INTO NEW.full_path
    FROM ancestor_path;

    -- If no parent exists, the category IS the root
    NEW.full_path := COALESCE(NEW.full_path, NEW.category_name);
    RETURN NEW;
END;
$$;

CREATE TRIGGER categories_rebuild_path
    BEFORE INSERT OR UPDATE OF parent_id, category_name ON categories
    FOR EACH ROW EXECUTE FUNCTION rebuild_category_path();

-- ===========================================================
-- Pattern 2: Deferred constraint trigger for referential
-- integrity that needs to survive mid-transaction inconsistency
-- Scenario: batch import that inserts child rows before parents
-- ===========================================================
CREATE TABLE departments (
    dept_id   SERIAL PRIMARY KEY,
    dept_name TEXT NOT NULL
);

CREATE TABLE employees (
    employee_id SERIAL PRIMARY KEY,
    full_name   TEXT NOT NULL,
    dept_id     INT              -- intentionally no FK — trigger enforces it
);

CREATE OR REPLACE FUNCTION check_dept_exists()
RETURNS TRIGGER LANGUAGE plpgsql AS $$
BEGIN
    -- This check runs at COMMIT time, not statement time,
    -- so mid-transaction inconsistency is acceptable.
    IF NOT EXISTS (
        SELECT 1 FROM departments WHERE dept_id = NEW.dept_id
    ) THEN
        RAISE EXCEPTION 'dept_id % does not exist in departments table',
            NEW.dept_id;
    END IF;
    RETURN NEW;
END;
$$;

-- CONSTRAINT TRIGGER + DEFERRABLE: validation fires at end of transaction
CREATE CONSTRAINT TRIGGER employees_check_dept_deferred
    AFTER INSERT OR UPDATE OF dept_id ON employees
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE FUNCTION check_dept_exists();

-- ===========================================================
-- Pattern 3: Query pg_trigger to audit every trigger in the DB
-- ===========================================================
SELECT
    t.tgname  AS trigger_name,
    c.relname AS table_name,
    p.proname AS function_name,
    CASE
        WHEN t.tgtype & 64 = 64 THEN 'INSTEAD OF'   -- bit 6: INSTEAD
        WHEN t.tgtype & 2  = 2  THEN 'BEFORE'       -- bit 1: BEFORE
        ELSE 'AFTER'
    END AS timing,
    CASE t.tgtype & 1 WHEN 1 THEN 'ROW' ELSE 'STATEMENT' END AS scope,
    CASE t.tgenabled
        WHEN 'O' THEN 'ENABLED'
        WHEN 'D' THEN 'DISABLED'
        WHEN 'R' THEN 'REPLICA'
        WHEN 'A' THEN 'ALWAYS'
    END AS status
FROM pg_trigger t
JOIN pg_class c ON c.oid = t.tgrelid
JOIN pg_proc  p ON p.oid = t.tgfoid
WHERE NOT t.tgisinternal   -- exclude FK constraint triggers
ORDER BY c.relname, t.tgname;
```
```
           trigger_name            |    table_name    |          function_name          |  timing  |   scope   | status
-----------------------------------+------------------+---------------------------------+----------+-----------+---------
 categories_rebuild_path           | categories       | rebuild_category_path           | BEFORE   | ROW       | ENABLED
 employees_check_dept_deferred     | employees        | check_dept_exists               | AFTER    | ROW       | ENABLED
 orders_10_stamp_updated_at        | orders           | stamp_updated_at                | BEFORE   | ROW       | ENABLED
 orders_20_audit_status_change     | orders           | log_order_status_change         | AFTER    | ROW       | ENABLED
 product_prices_bulk_audit         | product_prices   | audit_price_changes_bulk        | AFTER    | STATEMENT | ENABLED
(5 rows)
```
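Pattern 2 can be exercised directly. Under the deferred trigger above, this transaction succeeds even though the child row arrives first (the names and the explicit dept_id are illustrative):

```sql
BEGIN;
-- Child before parent — a plain FK, or a non-deferred trigger,
-- would reject this statement immediately
INSERT INTO employees (full_name, dept_id)
VALUES ('Dana Cruz', 42);

-- The parent appears later in the same transaction
INSERT INTO departments (dept_id, dept_name)
VALUES (42, 'Quality Assurance');

COMMIT;   -- check_dept_exists() fires here, finds dept 42, and passes
```

Run the same two inserts in the opposite order inside separate transactions and the deferred version behaves identically, because the check always waits for COMMIT.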
| Dimension | FOR EACH ROW | FOR EACH STATEMENT |
|---|---|---|
| Invocation count per 10K-row UPDATE | 10,000 separate calls | 1 single call |
| Access to individual OLD/NEW values | Yes — OLD.col, NEW.col | No — use transition tables instead |
| Transition table support | Available in AFTER row triggers (PostgreSQL 10+) — each call sees the full affected set | Available (PostgreSQL 10+) |
| Can modify the row being written | Yes — mutate NEW in BEFORE trigger | No — statement already determined |
| Performance on bulk DML | Catastrophic at scale (context switches) | Excellent — single execution |
| Use case sweet spot | Validation, auto-stamping, per-row logic | Bulk audit, cache invalidation, aggregates |
| INSTEAD OF trigger support | Yes (views only) | No — not supported |
| Fires when 0 rows are affected | No | Yes — always fires once |
| Can suppress the DML (return NULL) | Yes in BEFORE triggers | No — return value is ignored |
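The last row of the table is easy to verify against the product_prices setup from the transition-table section — a statement-level trigger fires even when the statement matches nothing:

```sql
-- Matches no rows, yet the AFTER STATEMENT trigger still runs once.
-- Its transition tables are simply empty, so the audit function's
-- INSERT ... SELECT writes no audit rows.
UPDATE product_prices
SET current_price = current_price
WHERE false;
```

A FOR EACH ROW trigger on the same table would not fire at all here, which is exactly the asymmetry the table describes.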
🎯 Key Takeaways
- BEFORE row triggers modify NEW before it hits the heap — perfect for validation and auto-stamping. AFTER row triggers see the row state after the change has been applied — use them for side effects like audit logging.
- FOR EACH ROW triggers fire once per affected row: at 50,000 rows that's 50,000 context switches. Transition tables (REFERENCING OLD TABLE / NEW TABLE) let you process the entire batch in a single AFTER STATEMENT trigger invocation — orders of magnitude faster.
- INSTEAD OF triggers fire in place of DML aimed at a view — this is the production-grade pattern for keeping complex JOINed views writable without exposing the underlying normalized schema.
- Always query pg_trigger to audit your trigger landscape — check tgenabled values, because SET session_replication_role = 'replica' silently disables normal triggers, and ALTER TRIGGER ... ENABLE ALWAYS is the escape hatch for audit triggers that must never be skipped.
⚠ Common Mistakes to Avoid
- ✕ Mistake 1: Using FOR EACH ROW for bulk audit logging — Symptom: a batch UPDATE of 50,000 rows takes 45 seconds instead of 0.5 seconds; pg_stat_activity shows the session spinning in the trigger function — Fix: replace with a FOR EACH STATEMENT trigger using REFERENCING OLD TABLE / NEW TABLE transition tables and a single INSERT...SELECT to process all rows at once.
- ✕ Mistake 2: Forgetting that RETURN NULL in a BEFORE row trigger silently cancels the write — Symptom: your INSERT or UPDATE returns 'UPDATE 0' or 'INSERT 0 0' with no error, data never reaches the table, and the application has no idea anything went wrong — Fix: always end BEFORE row triggers with RETURN NEW for INSERT/UPDATE and RETURN OLD for DELETE; only return NULL when you deliberately want to veto the operation and you've documented that intent.
- ✕ Mistake 3: Creating a trigger that fires on its own table's UPDATE, then doing an UPDATE inside the trigger function without a recursion guard — Symptom: ERROR: stack depth limit exceeded after the trigger calls itself hundreds of times until PostgreSQL hits max_stack_depth and rolls back the transaction — Fix: add an IF pg_trigger_depth() > 1 THEN RETURN NEW; END IF; guard at the top of the function, or narrow the firing condition with the UPDATE OF column_name syntax so the trigger never fires on the columns it itself modifies.
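The deliberate-veto case from Mistake 2 looks like this sketch — the is_archived column and function name are hypothetical, chosen only to illustrate a documented, intentional RETURN NULL:

```sql
-- Sketch: intentionally vetoing writes in a BEFORE row trigger.
-- Assumes a hypothetical is_archived flag on the target table.
CREATE OR REPLACE FUNCTION veto_archived_updates()
RETURNS TRIGGER LANGUAGE plpgsql AS $$
BEGIN
    IF OLD.is_archived THEN
        -- Documented veto: this row is silently skipped and is
        -- not counted in the command tag (an UPDATE touching only
        -- archived rows reports 'UPDATE 0')
        RETURN NULL;
    END IF;
    RETURN NEW;   -- normal path: let the write proceed
END;
$$;
```

Because the skip is silent, the application must either check the affected-row count or be told about the policy — which is exactly why undocumented RETURN NULLs are so painful to debug.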
Interview Questions on This Topic
- Q: What's the difference between a BEFORE trigger returning NULL versus raising an exception — and when would you choose each approach?
- Q: You have a trigger that writes to an audit table. A junior dev suggests making it BEFORE INSERT instead of AFTER INSERT so the audit record is guaranteed to exist even if the main insert fails. What's wrong with this reasoning, and what's the correct approach?
- Q: Walk me through exactly what happens — step by step, including internal PostgreSQL machinery — when you execute UPDATE products SET price = price * 0.9 on a table that has both a BEFORE row trigger and an AFTER statement trigger with a transition table.
Frequently Asked Questions
Can a PostgreSQL trigger call another trigger?
Yes — and it happens automatically. If Trigger A on Table A runs an UPDATE on Table B, and Table B has its own trigger, that trigger fires immediately within the same transaction. This is called cascading or nested triggers. Use pg_trigger_depth() inside your function to detect the nesting level and add a guard if recursion is possible.
What language can I write PostgreSQL trigger functions in?
PL/pgSQL is the default and most common choice, but PostgreSQL supports any trusted procedural language installed as an extension: PL/Python (plpython3u), PL/Perl, PL/Tcl, and even PL/v8 (JavaScript). The trigger function must return TRIGGER as its return type regardless of the language used.
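As a sketch of a non-PL/pgSQL trigger, here is the earlier updated_at stamp rewritten in PL/Python. It assumes the plpython3u extension is installed; plpython3u is an untrusted language, so creating functions in it requires superuser privileges:

```sql
CREATE EXTENSION IF NOT EXISTS plpython3u;

CREATE OR REPLACE FUNCTION stamp_updated_at_py()
RETURNS TRIGGER LANGUAGE plpython3u AS $$
# PL/Python exposes trigger data through the TD dictionary:
# TD["new"] / TD["old"] are row dicts, TD["event"] is the operation.
ts = plpy.execute("SELECT now() AS ts")[0]["ts"]
TD["new"]["updated_at"] = ts
return "MODIFY"   # tells PostgreSQL to use the modified TD["new"]
$$;
```

It attaches with CREATE TRIGGER exactly like a PL/pgSQL function; in a BEFORE row trigger, returning "MODIFY" plays the role that RETURN NEW plays in PL/pgSQL, and returning "SKIP" plays the role of RETURN NULL.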
What's the difference between a PostgreSQL trigger and a rule?
Rules (CREATE RULE) rewrite the query before execution at the planner level — they're older, harder to debug, and have confusing semantics around statement-level visibility. Triggers fire at execution time and have access to the actual row data. The PostgreSQL documentation itself recommends preferring triggers over rules for almost all practical use cases, and rules are largely considered a legacy feature.