Debugging PL/SQL with DBMS_OUTPUT, Exceptions & SQL Developer
- DBMS_OUTPUT is a client-dependent, session-scoped buffer: without SET SERVEROUTPUT ON, every PUT_LINE is a silent no-op that still consumes PGA memory
- SQL Developer breakpoints require three things simultaneously: debug privileges, debug-compiled code, and the same database connection for execution; if any one is missing, breakpoints silently fail
- Structured logging with autonomous transactions is the only debugging tool that works for production, triggers, scheduler jobs, and external application sessions
- DBMS_OUTPUT is the simplest PL/SQL debugger, but it only works when the client explicitly enables buffer retrieval
- SQL Developer's built-in debugger supports breakpoints, variable inspection, and step-through execution against a live database session
- Exception-driven debugging uses strategic RAISE points to narrow failure scope without any client dependency
- The debugger attaches to the database session, not the code file, so connection context determines whether breakpoints fire
- Production insight: DBMS_OUTPUT buffers silently in PGA memory if the client never calls DBMS_OUTPUT.GET_LINES; the buffer grows until the session ends or ORA-20000 fires
- Biggest mistake: leaving DBMS_OUTPUT.PUT_LINE calls in production code; they consume PGA memory on every execution and provide zero value without a connected client reading the buffer
Production Debug Guide: from symptom to root cause using the right debugging tool
PL/SQL debugging requires tools that work inside the database execution context. Unlike application code where you attach a debugger to a running process on your machine, PL/SQL executes on the database server and returns results to the client. Every debugging tool must bridge this server-client boundary, and understanding how each tool bridges it determines whether your debugging session produces answers or wastes hours.
DBMS_OUTPUT is the most widely used PL/SQL debugging tool and the most misunderstood. Engineers add DBMS_OUTPUT.PUT_LINE calls, execute the procedure, see no output, and conclude the code is not executing. In reality, the output is buffered silently in the session's PGA memory until the client explicitly retrieves it. Without the client-side enablement step (SET SERVEROUTPUT ON in SQL*Plus, or the DBMS Output panel in SQL Developer), every DBMS_OUTPUT.PUT_LINE call is a silent no-op that allocates memory and produces nothing visible.
SQL Developer provides an interactive debugger that supports breakpoints, variable inspection, call stack navigation, watch expressions, and step-through execution. It requires a direct database connection, specific debug privileges, and a DEBUG-compiled package. The debugger attaches to the database session, not the code file; understanding this distinction is critical for debugging procedures that are called by other procedures, triggers, or scheduler jobs.
Structured logging and exception-driven debugging are the production-grade alternatives. They persist diagnostic information in database tables, require no client connection, survive transaction rollbacks via autonomous transactions, and can be enabled or disabled at runtime through configurable log levels. This guide covers all four approaches, when to use each, and the mistakes that cost the most time.
DBMS_OUTPUT: The Simplest Debugger
DBMS_OUTPUT is Oracle's built-in print debugging mechanism. PUT_LINE writes text to a session-level buffer in PGA memory. The client retrieves the buffer contents via GET_LINES after the PL/SQL block completes. The critical misunderstanding is that DBMS_OUTPUT.PUT_LINE does not write to the screen: it writes to a server-side buffer that the client must explicitly read.
The client enablement step is mandatory. In SQL*Plus, run SET SERVEROUTPUT ON SIZE UNLIMITED before executing the PL/SQL block. In SQL Developer, open the DBMS Output panel (View > DBMS Output), click the green plus icon, and select your connection. Without this step, every DBMS_OUTPUT.PUT_LINE call is a silent no-op: the code runs, PGA memory is allocated for the buffer, but nothing is displayed.
The default buffer size varies by Oracle version but is limited. Exceeding the limit raises ORA-20000: ORU-10027: buffer overflow, which terminates the PL/SQL block. In Oracle 10.2 and later, calling DBMS_OUTPUT.ENABLE(buffer_size => NULL) removes the limit, but the buffer still accumulates in PGA memory. For long-running procedures that produce extensive output, the buffer can grow to hundreds of megabytes; this is the mechanism behind the production incident described earlier.
DBMS_OUTPUT output is available only after the PL/SQL block completes. You cannot see output line-by-line during execution; the entire buffer is flushed to the client at the end. This means you cannot use DBMS_OUTPUT for real-time progress monitoring during a long-running batch. For real-time visibility, use DBMS_APPLICATION_INFO.SET_MODULE and SET_ACTION, which update V$SESSION immediately and are visible to other sessions.
The conditional debug pattern wraps all DBMS_OUTPUT calls behind a boolean flag. When the flag is FALSE (the production default), the IF check short-circuits and PUT_LINE is never called: no buffer allocation, no PGA consumption. When the flag is TRUE (during debugging), the output flows normally. This pattern is the minimum viable safety net if DBMS_OUTPUT must exist in the codebase at all.
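The real-time alternative mentioned above can be sketched as follows. This is a minimal illustration, not part of the guide's schema: the module and action names and the loop are hypothetical, but DBMS_APPLICATION_INFO.SET_MODULE and SET_ACTION are the standard Oracle calls.

```sql
-- Sketch: real-time progress reporting via DBMS_APPLICATION_INFO.
-- Unlike DBMS_OUTPUT, these updates are visible in V$SESSION immediately.
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'batch_demo',     -- illustrative name
    action_name => 'starting'
  );

  FOR i IN 1 .. 10000 LOOP
    -- ... per-row work here ...
    IF MOD(i, 1000) = 0 THEN
      -- Updates V$SESSION.ACTION immediately, mid-execution
      DBMS_APPLICATION_INFO.SET_ACTION('processed ' || i || ' rows');
    END IF;
  END LOOP;

  DBMS_APPLICATION_INFO.SET_MODULE(NULL, NULL);  -- clear when done
END;
/

-- Another session can watch progress live while the batch runs:
SELECT sid, module, action
FROM v$session
WHERE module = 'batch_demo';
```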
-- Step 1: Enable DBMS_OUTPUT in the client
--   SQL*Plus:      SET SERVEROUTPUT ON SIZE UNLIMITED
--   SQL Developer: View > DBMS Output > click green (+) > select connection

-- Step 2: Basic debugging with DBMS_OUTPUT
CREATE OR REPLACE PROCEDURE app_schema.test_procedure IS
  v_count NUMBER;
  v_name  VARCHAR2(100);
BEGIN
  DBMS_OUTPUT.PUT_LINE('=== test_procedure started ===');
  DBMS_OUTPUT.PUT_LINE('Timestamp: ' ||
    TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS.FF3'));

  SELECT COUNT(*) INTO v_count
  FROM data_owner.accounts
  WHERE status = 'ACTIVE';
  DBMS_OUTPUT.PUT_LINE('Active accounts: ' || v_count);

  SELECT customer_name INTO v_name
  FROM data_owner.customers
  WHERE customer_id = 1001;
  DBMS_OUTPUT.PUT_LINE('Customer name: ' || v_name);

  DBMS_OUTPUT.PUT_LINE('=== test_procedure completed ===');
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('ERROR: ' || SQLERRM);
    DBMS_OUTPUT.PUT_LINE('Backtrace: ' || DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);
    RAISE;
END test_procedure;
/

-- Step 3: Conditional debug output (avoids production overhead)
CREATE OR REPLACE PACKAGE app_schema.debug_pkg AS
  -- Session-level flag: FALSE in production, TRUE during debugging
  g_debug_enabled BOOLEAN := FALSE;

  PROCEDURE enable_debug;
  PROCEDURE disable_debug;
  PROCEDURE log(p_message IN VARCHAR2);
  PROCEDURE log(p_label IN VARCHAR2, p_value IN VARCHAR2);
  PROCEDURE log(p_label IN VARCHAR2, p_value IN NUMBER);
  PROCEDURE log(p_label IN VARCHAR2, p_value IN DATE);
END debug_pkg;
/

CREATE OR REPLACE PACKAGE BODY app_schema.debug_pkg AS
  PROCEDURE enable_debug IS
  BEGIN
    g_debug_enabled := TRUE;
    DBMS_OUTPUT.ENABLE(buffer_size => NULL);  -- Unlimited buffer
  END enable_debug;

  PROCEDURE disable_debug IS
  BEGIN
    g_debug_enabled := FALSE;
  END disable_debug;

  PROCEDURE log(p_message IN VARCHAR2) IS
  BEGIN
    IF g_debug_enabled THEN
      DBMS_OUTPUT.PUT_LINE(
        TO_CHAR(SYSTIMESTAMP, 'HH24:MI:SS.FF3') || ' | ' || p_message
      );
    END IF;
  END log;

  PROCEDURE log(p_label IN VARCHAR2, p_value IN VARCHAR2) IS
  BEGIN
    IF g_debug_enabled THEN
      DBMS_OUTPUT.PUT_LINE(
        TO_CHAR(SYSTIMESTAMP, 'HH24:MI:SS.FF3') || ' | ' ||
        p_label || ' = ' || NVL(p_value, '<NULL>')
      );
    END IF;
  END log;

  PROCEDURE log(p_label IN VARCHAR2, p_value IN NUMBER) IS
  BEGIN
    log(p_label, TO_CHAR(p_value));
  END log;

  PROCEDURE log(p_label IN VARCHAR2, p_value IN DATE) IS
  BEGIN
    log(p_label, TO_CHAR(p_value, 'YYYY-MM-DD HH24:MI:SS'));
  END log;
END debug_pkg;
/

-- Usage: enable debug for a session, run the procedure, disable
BEGIN
  app_schema.debug_pkg.enable_debug;
  app_schema.test_procedure;
  app_schema.debug_pkg.disable_debug;
END;
/
- PUT_LINE writes to a session-level PGA buffer; it does not write to the screen or any file
- The client must call GET_LINES (or enable SERVEROUTPUT) to retrieve the buffer contents after the PL/SQL block completes
- Without client enablement, the buffer fills silently; ORA-20000: ORU-10027 fires when it overflows
- Output is available only after the PL/SQL block completes; there is no real-time line-by-line visibility during execution
- Buffer limit defaults vary by version; use DBMS_OUTPUT.ENABLE(NULL) or SET SERVEROUTPUT ON SIZE UNLIMITED for unlimited
- DBMS_OUTPUT in production consumes PGA memory per session with zero debugging value; remove before deployment
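The retrieval step the bullets describe can be sketched programmatically. This is a minimal illustration of what SET SERVEROUTPUT ON does for you behind the scenes; DBMS_OUTPUT.CHARARR and GET_LINES are the documented Oracle types and calls, while the specific line text is illustrative.

```sql
-- Sketch: draining the DBMS_OUTPUT buffer explicitly with GET_LINES
DECLARE
  v_lines DBMS_OUTPUT.CHARARR;  -- PL/SQL table of VARCHAR2 lines
  v_count INTEGER := 1000;      -- IN: max lines to fetch; OUT: lines fetched
BEGIN
  DBMS_OUTPUT.ENABLE(buffer_size => NULL);
  DBMS_OUTPUT.PUT_LINE('first line');
  DBMS_OUTPUT.PUT_LINE('second line');

  -- Drain the buffer; v_count comes back as the number actually fetched (2)
  DBMS_OUTPUT.GET_LINES(v_lines, v_count);

  -- A real client would now write v_lines(1 .. v_count) to its UI or a file
END;
/
```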
SQL Developer Interactive Debugger
SQL Developer provides a full interactive debugger for PL/SQL that supports breakpoints, variable inspection, call stack navigation, watch expressions, and step-through execution. The debugger attaches to a live database session and intercepts execution when a breakpoint is hit; this is similar to debugging in any IDE, with the critical difference that execution happens on the database server, not locally.
The debugger requires two privileges granted to the debugging user: DEBUG CONNECT SESSION (to attach the debugger to the session) and DEBUG ANY PROCEDURE (to debug procedures owned by other schemas). Without these privileges, the debugger silently fails to attach and breakpoints are ignored without any error message. This is the single most common reason breakpoints do not fire. On Oracle 12.2 and later there is an additional requirement: the database's JDWP callback to the IDE needs a network ACL (granted via DBMS_NETWORK_ACL_ADMIN for the 'jdwp' privilege), and without it the debug session typically fails with ORA-24247.
The package or procedure must be compiled with debug symbols before breakpoints work. Run ALTER PACKAGE pkg_name COMPILE DEBUG BODY to include debug information in the compiled object. Without debug symbols, the debugger cannot map the execution point to a source line, and breakpoints are silently skipped. By default, packages are compiled without debug symbols; you must explicitly request them.
The debugger attaches to the database session, not the code file open in the editor. This means you must execute the code within the same connection that the debugger is monitoring. If you open the package in one SQL Developer worksheet and run the calling code in a different worksheet (which may be using a different database connection), the breakpoints will not fire because the execution is happening in a different session. Verify by checking the connection name in the worksheet tab.
For procedures invoked by triggers, scheduler jobs, or external applications, the SQL Developer debugger cannot attach directly because those executions happen in separate sessions that the IDE does not control. For those scenarios, use structured logging to a table or exception-driven debugging with checkpoint assertions.
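For the trigger case, the table-logging approach looks like the following sketch. It assumes a logger package shaped like the logger_pkg defined in the structured-logging section of this guide; the trigger name and the data_owner.orders table and columns are illustrative, not from a real schema.

```sql
-- Sketch: instrumenting a trigger with table-based logging, since the IDE
-- debugger cannot attach to the session that fires the trigger
CREATE OR REPLACE TRIGGER app_schema.trg_orders_status_log
AFTER UPDATE OF status ON data_owner.orders
FOR EACH ROW
BEGIN
  -- Autonomous-transaction logger: the entry survives even if the
  -- triggering transaction later rolls back
  app_schema.logger_pkg.debug(
    'trg_orders_status_log',
    'order_id=' || :NEW.order_id ||
    ' status ' || :OLD.status || ' -> ' || :NEW.status
  );
END;
/

-- After the external application fires the trigger, inspect the trail:
SELECT message, created_at
FROM app_schema.app_log
WHERE module = 'trg_orders_status_log'
ORDER BY created_at DESC;
```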
After debugging, always recompile the package without debug symbols: ALTER PACKAGE pkg_name COMPILE BODY. Debug-compiled code carries additional metadata that increases the compiled object size and can marginally affect execution performance. Production code should never carry debug symbols.
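A pre-release audit can catch debug-compiled units that were forgotten. The sketch below queries the standard ALL_PLSQL_OBJECT_SETTINGS dictionary view; the APP_SCHEMA name is illustrative.

```sql
-- Sketch: find every debug-compiled PL/SQL unit in a schema before release
SELECT owner, name, type
FROM all_plsql_object_settings
WHERE owner = 'APP_SCHEMA'
AND plsql_debug = 'TRUE'
ORDER BY name, type;
-- Any rows returned are candidates for ALTER ... COMPILE (without DEBUG)
```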
-- Step 1: Grant debug privileges to the debugging user
-- Run as SYS or a DBA user
GRANT DEBUG CONNECT SESSION TO app_user;
GRANT DEBUG ANY PROCEDURE TO app_user;

-- Step 2: Compile the package body with debug symbols
ALTER PACKAGE app_schema.billing_pkg COMPILE DEBUG BODY;
-- Note: COMPILE DEBUG BODY compiles only the body with debug info
--       Use COMPILE DEBUG to compile both spec and body

-- Step 3: Verify the package is VALID after debug compilation
SELECT object_name, object_type, status, last_ddl_time
FROM user_objects
WHERE object_name = 'BILLING_PKG'
ORDER BY object_type;
-- Both PACKAGE and PACKAGE BODY should show VALID

-- Step 4: Verify debug info is present (Oracle 12c+)
SELECT owner, name, type, plsql_debug
FROM all_plsql_object_settings
WHERE name = 'BILLING_PKG'
AND owner = 'APP_SCHEMA';
-- plsql_debug = TRUE indicates debug symbols are present

-- Step 5: SQL Developer debugger workflow
--  1. Open the package body in SQL Developer's editor
--  2. Click in the line number gutter to set a breakpoint (red dot appears)
--  3. Right-click the procedure name and select "Debug" (or Ctrl+Shift+F10)
--  4. Enter parameter values in the Debug PL/SQL dialog
--  5. Click OK; execution starts and pauses at the first breakpoint
--  6. Navigation controls:
--       Step Into  (F7)       : enter the called procedure
--       Step Over  (F8)       : execute the line without entering sub-calls
--       Step Out   (Shift+F7) : finish the current procedure, return to caller
--       Run to Cursor         : execute until the cursor position
--       Resume     (F9)       : continue to the next breakpoint
--       Terminate  (Ctrl+F2)  : stop debugging immediately
--  7. Inspect variables in the Smart Data panel (auto-populated)
--  8. Add watch expressions in the Watches panel for computed values
--  9. View the call stack in the Stack panel to see the calling chain

-- Step 6: Recompile WITHOUT debug symbols after debugging
ALTER PACKAGE app_schema.billing_pkg COMPILE BODY;
-- Production code should never carry debug symbols

-- Step 7: Verify debug info is removed
SELECT owner, name, type, plsql_debug
FROM all_plsql_object_settings
WHERE name = 'BILLING_PKG'
AND owner = 'APP_SCHEMA';
-- plsql_debug should now be FALSE
- Grant DEBUG CONNECT SESSION and DEBUG ANY PROCEDURE to the debugging user; without these, breakpoints silently fail
- Compile with ALTER PACKAGE pkg COMPILE DEBUG BODY before setting breakpoints; no debug symbols means no breakpoints
- Execute the code in the SAME connection that the debugger is attached to; a different worksheet may use a different session
- After debugging, recompile without DEBUG to remove debug overhead: ALTER PACKAGE pkg COMPILE BODY
- For trigger/job/external-app debugging, use structured logging to a table; the IDE debugger cannot attach to those sessions
Structured Logging: Production-Grade Debug Output
DBMS_OUTPUT is a development tool. It has no place in production code. Production debugging requires a structured logging package that writes to a persistent table with configurable log levels, timestamps, module identifiers, and session context.
The logging table captures everything DBMS_OUTPUT cannot: log level (DEBUG, INFO, WARN, ERROR), the module and procedure name, the message, the full error stack when applicable, the database session ID, the connected user, and a precise timestamp. Engineers query the table after the fact to trace execution flow: no client enablement required, no PGA memory consumed by an ever-growing buffer, no dependency on a connected client.
Log levels enable runtime filtering without code changes. In production, only WARN and ERROR are logged by default. During incident response, a DBA or on-call engineer can temporarily lower the threshold to DEBUG for a specific session by calling the set_level procedure. This captures detailed execution flow for the duration of the investigation without affecting other sessions and without redeploying code. When the investigation is complete, the level is restored to WARN.
The logging package must use an autonomous transaction for the INSERT. This is non-negotiable. If the calling transaction rolls back (which it will during error scenarios, the exact situations where you most need the log entry), a non-autonomous INSERT rolls back with it and the log entry is lost. PRAGMA AUTONOMOUS_TRANSACTION ensures the log entry is committed independently of the calling transaction.
Equally important: the logging package must handle its own exceptions internally. A failure inside the logger (a full tablespace, a constraint violation, any unexpected error) must never propagate to the calling procedure. The caller's business logic must succeed or fail on its own merits, not because the logging infrastructure had a problem. Wrap the INSERT in a BEGIN/EXCEPTION/END block that silently swallows any logging failure.
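The rollback-survival behavior is easy to verify in isolation. The sketch below is a standalone demonstration under assumed names (demo_log and demo_autonomous_log are hypothetical, not part of the guide's schema):

```sql
-- Sketch: an autonomous-transaction INSERT survives the caller's rollback
CREATE TABLE demo_log (msg VARCHAR2(100));

CREATE OR REPLACE PROCEDURE demo_autonomous_log(p_msg IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO demo_log (msg) VALUES (p_msg);
  COMMIT;  -- commits only this autonomous transaction
END demo_autonomous_log;
/

BEGIN
  demo_autonomous_log('logged before rollback');
  ROLLBACK;  -- rolls back the caller's work, NOT the committed log row
END;
/

SELECT msg FROM demo_log;
-- The row is still there: the log entry outlived the caller's rollback
```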
-- Logging table
CREATE TABLE app_schema.app_log (
  log_id      NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  log_level   VARCHAR2(10)  NOT NULL,
  module      VARCHAR2(128) NOT NULL,
  message     VARCHAR2(4000),
  error_stack VARCHAR2(4000),
  session_id  NUMBER        DEFAULT SYS_CONTEXT('USERENV', 'SESSIONID'),
  db_user     VARCHAR2(128) DEFAULT SYS_CONTEXT('USERENV', 'SESSION_USER'),
  os_user     VARCHAR2(128) DEFAULT SYS_CONTEXT('USERENV', 'OS_USER'),
  created_at  TIMESTAMP     DEFAULT SYSTIMESTAMP
) TABLESPACE app_data;

CREATE INDEX idx_app_log_module  ON app_schema.app_log(module, created_at);
CREATE INDEX idx_app_log_level   ON app_schema.app_log(log_level, created_at);
CREATE INDEX idx_app_log_created ON app_schema.app_log(created_at);

-- Partition by day for easy cleanup (online conversion requires Oracle 12.2+)
-- ALTER TABLE app_schema.app_log MODIFY PARTITION BY RANGE (created_at)
--   INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
--   (PARTITION p_initial VALUES LESS THAN (TIMESTAMP '2026-01-01 00:00:00'));

-- Logging package
CREATE OR REPLACE PACKAGE app_schema.logger_pkg AS
  -- Log level constants
  c_debug CONSTANT VARCHAR2(10) := 'DEBUG';
  c_info  CONSTANT VARCHAR2(10) := 'INFO';
  c_warn  CONSTANT VARCHAR2(10) := 'WARN';
  c_error CONSTANT VARCHAR2(10) := 'ERROR';

  -- Convenience procedures
  PROCEDURE debug(p_module IN VARCHAR2, p_message IN VARCHAR2);
  PROCEDURE info (p_module IN VARCHAR2, p_message IN VARCHAR2);
  PROCEDURE warn (p_module IN VARCHAR2, p_message IN VARCHAR2);
  PROCEDURE error(p_module IN VARCHAR2, p_message IN VARCHAR2);
  PROCEDURE error(p_module IN VARCHAR2, p_message IN VARCHAR2,
                  p_error_stack IN VARCHAR2);

  -- Runtime log level control (per session)
  PROCEDURE set_level(p_level IN VARCHAR2);
  FUNCTION  get_level RETURN VARCHAR2;
END logger_pkg;
/

CREATE OR REPLACE PACKAGE BODY app_schema.logger_pkg AS
  -- Session-level minimum log level; WARN by default in production
  g_min_level VARCHAR2(10) := c_warn;

  FUNCTION level_priority(p_level IN VARCHAR2) RETURN NUMBER IS
  BEGIN
    RETURN CASE UPPER(p_level)
             WHEN 'DEBUG' THEN 10
             WHEN 'INFO'  THEN 20
             WHEN 'WARN'  THEN 30
             WHEN 'ERROR' THEN 40
             ELSE 0
           END;
  END level_priority;

  -- Core writer: autonomous transaction, self-protecting
  PROCEDURE write_log(
    p_level       IN VARCHAR2,
    p_module      IN VARCHAR2,
    p_message     IN VARCHAR2,
    p_error_stack IN VARCHAR2 DEFAULT NULL
  ) IS
    PRAGMA AUTONOMOUS_TRANSACTION;
  BEGIN
    -- Skip if below session threshold
    IF level_priority(p_level) < level_priority(g_min_level) THEN
      RETURN;
    END IF;

    INSERT INTO app_schema.app_log (log_level, module, message, error_stack)
    VALUES (p_level,
            SUBSTR(p_module, 1, 128),
            SUBSTR(p_message, 1, 4000),
            SUBSTR(p_error_stack, 1, 4000));
    COMMIT;
  EXCEPTION
    WHEN OTHERS THEN
      -- Logging must NEVER crash the caller
      ROLLBACK;
  END write_log;

  PROCEDURE debug(p_module IN VARCHAR2, p_message IN VARCHAR2) IS
  BEGIN write_log(c_debug, p_module, p_message); END;

  PROCEDURE info(p_module IN VARCHAR2, p_message IN VARCHAR2) IS
  BEGIN write_log(c_info, p_module, p_message); END;

  PROCEDURE warn(p_module IN VARCHAR2, p_message IN VARCHAR2) IS
  BEGIN write_log(c_warn, p_module, p_message); END;

  PROCEDURE error(p_module IN VARCHAR2, p_message IN VARCHAR2) IS
  BEGIN write_log(c_error, p_module, p_message); END;

  PROCEDURE error(p_module IN VARCHAR2, p_message IN VARCHAR2,
                  p_error_stack IN VARCHAR2) IS
  BEGIN write_log(c_error, p_module, p_message, p_error_stack); END;

  PROCEDURE set_level(p_level IN VARCHAR2) IS
  BEGIN g_min_level := UPPER(p_level); END set_level;

  FUNCTION get_level RETURN VARCHAR2 IS
  BEGIN RETURN g_min_level; END get_level;
END logger_pkg;
/

-- Usage in application code
CREATE OR REPLACE PROCEDURE app_schema.process_batch(
  p_batch_id IN NUMBER
) IS
  v_count NUMBER;
BEGIN
  app_schema.logger_pkg.info('process_batch', 'Starting batch ' || p_batch_id);

  SELECT COUNT(*) INTO v_count
  FROM data_owner.orders
  WHERE batch_id = p_batch_id
  AND status = 'PENDING';

  app_schema.logger_pkg.debug('process_batch', 'Pending orders: ' || v_count);

  -- ... batch processing logic ...

  app_schema.logger_pkg.info('process_batch',
    'Completed batch ' || p_batch_id || ', processed ' || v_count || ' orders');
EXCEPTION
  WHEN OTHERS THEN
    app_schema.logger_pkg.error(
      'process_batch',
      'Failed batch ' || p_batch_id,
      DBMS_UTILITY.FORMAT_ERROR_STACK || CHR(10) ||
      DBMS_UTILITY.FORMAT_ERROR_BACKTRACE
    );
    RAISE;
END process_batch;
/

-- Enable DEBUG logging for incident investigation
BEGIN
  app_schema.logger_pkg.set_level('DEBUG');
END;
/

-- Query the log table during or after investigation
SELECT log_level, module, message, created_at
FROM app_schema.app_log
WHERE module = 'process_batch'
AND created_at > SYSTIMESTAMP - INTERVAL '1' HOUR
ORDER BY created_at;

-- Restore WARN level after investigation
BEGIN
  app_schema.logger_pkg.set_level('WARN');
END;
/

-- Cleanup: purge old log entries (run daily via scheduler)
DELETE FROM app_schema.app_log
WHERE created_at < SYSTIMESTAMP - INTERVAL '30' DAY;
COMMIT;
- DBMS_OUTPUT: client-dependent, session-scoped, PGA-resident, lost on disconnect, invisible to other sessions
- Structured logging: client-independent, persistent in a table, committed via autonomous transaction, queryable by anyone with SELECT
- DBMS_OUTPUT disappears when the session ends; structured logging persists until explicitly purged
- Autonomous transactions ensure log entries survive caller rollback; critical for error logging in failed transactions
- Log levels avoid production overhead; DEBUG is off by default, enabled per-session only during incident response
Exception-Driven Debugging: Strategic RAISE Points
Exception-driven debugging uses strategic RAISE_APPLICATION_ERROR calls at key checkpoints to narrow the scope of a failure. Instead of scattering DBMS_OUTPUT calls everywhere and hoping you see the output, you embed assertions that fail loudly with diagnostic context whenever the execution state diverges from what the code expects.
The pattern is simple: at each major checkpoint in the procedure, validate the expected state. If the state is unexpected (a NULL parameter, a wrong status, a row count that should be 1 but is 0 or 5), raise a descriptive error that includes the checkpoint identifier, the expected value, and the actual value. When the error propagates through the PL/SQL call stack, each ORA-06512 entry in the stack trace tells you exactly which procedure and line the error passed through, and the RAISE_APPLICATION_ERROR message tells you exactly what was wrong.
This approach has three advantages over DBMS_OUTPUT debugging. First, the error message is always captured: in the application's error log, in the database alert log, in the client's error handler. No client enablement is required. Second, the checkpoint errors integrate naturally with the ORA-06512 stack trace, providing a breadcrumb trail through the call chain. Third, checkpoints serve double duty: they are debugging aids during development and input validation guards in production. A well-placed checkpoint catches bad data before it corrupts downstream state.
The checkpoints should be placed at four boundaries: input validation (are the parameters non-null and within valid ranges?), pre-query state (does the prerequisite data exist in the expected state?), post-query state (did the query return the expected number and type of results?), and post-action verification (did the DML affect the expected number of rows?).
Use RAISE_APPLICATION_ERROR with the TRUE flag as the third parameter to preserve the original error stack when re-raising inside an exception handler. Without TRUE, the original stack is discarded and replaced with the location of the RAISE_APPLICATION_ERROR call, which destroys diagnostic information.
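The effect of that third parameter (keeperrorstack in the Oracle documentation) can be shown in a self-contained sketch; the error number -20005 and the message text are illustrative.

```sql
-- Sketch: re-raising with RAISE_APPLICATION_ERROR(..., TRUE) preserves
-- the original error underneath the new one
DECLARE
  v_dummy NUMBER;
BEGIN
  BEGIN
    SELECT 1 INTO v_dummy FROM dual WHERE 1 = 0;  -- forces NO_DATA_FOUND
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      -- TRUE: ORA-20005 is stacked on top of the original error
      -- instead of replacing it
      RAISE_APPLICATION_ERROR(-20005,
        'CP failed: lookup returned no row', TRUE);
  END;
EXCEPTION
  WHEN OTHERS THEN
    -- The stack now contains both ORA-20005 and the original error;
    -- with FALSE (the default), only ORA-20005 would remain
    DBMS_OUTPUT.PUT_LINE(DBMS_UTILITY.FORMAT_ERROR_STACK);
END;
/
```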
-- Exception-driven debugging with strategic checkpoints
CREATE OR REPLACE PROCEDURE app_schema.process_payment(
  p_account_id IN NUMBER,
  p_amount     IN NUMBER
) IS
  v_balance  NUMBER;
  v_status   VARCHAR2(20);
  v_rowcount NUMBER;
BEGIN
  -- Checkpoint 1: Input validation
  IF p_account_id IS NULL THEN
    RAISE_APPLICATION_ERROR(-20001, 'CP1 FAILED: p_account_id is NULL');
  END IF;
  IF p_amount IS NULL OR p_amount <= 0 THEN
    RAISE_APPLICATION_ERROR(-20001,
      'CP1 FAILED: p_amount=' || NVL(TO_CHAR(p_amount), 'NULL') ||
      ' (expected positive number)');
  END IF;

  -- Checkpoint 2: Account exists and is in the expected state
  BEGIN
    SELECT balance, status INTO v_balance, v_status
    FROM data_owner.accounts
    WHERE account_id = p_account_id;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      RAISE_APPLICATION_ERROR(-20002,
        'CP2 FAILED: account_id=' || p_account_id ||
        ' not found in accounts table');
  END;

  IF v_status != 'ACTIVE' THEN
    RAISE_APPLICATION_ERROR(-20002,
      'CP2 FAILED: account ' || p_account_id ||
      ' status=' || v_status || ' (expected ACTIVE)');
  END IF;

  -- Checkpoint 3: Sufficient balance for the operation
  IF v_balance < p_amount THEN
    RAISE_APPLICATION_ERROR(-20003,
      'CP3 FAILED: account ' || p_account_id ||
      ' balance=' || v_balance || ' < requested amount=' || p_amount);
  END IF;

  -- Checkpoint 4: DML affected exactly one row
  UPDATE data_owner.accounts
  SET balance = balance - p_amount,
      last_updated = SYSTIMESTAMP
  WHERE account_id = p_account_id
  AND status = 'ACTIVE';

  v_rowcount := SQL%ROWCOUNT;
  IF v_rowcount != 1 THEN
    RAISE_APPLICATION_ERROR(-20004,
      'CP4 FAILED: expected 1 row updated, got ' || v_rowcount ||
      ' for account ' || p_account_id ||
      ' (possible concurrent modification or status change between SELECT and UPDATE)');
  END IF;

  -- Log success
  app_schema.logger_pkg.info(
    'process_payment',
    'Payment processed: account=' || p_account_id ||
    ', amount=' || p_amount ||
    ', new_balance=' || (v_balance - p_amount)
  );
EXCEPTION
  WHEN OTHERS THEN
    -- Log the error with full stack, then re-raise
    app_schema.logger_pkg.error(
      'process_payment',
      'Failed: account=' || p_account_id ||
      ', amount=' || p_amount || ' | ' || SQLERRM,
      DBMS_UTILITY.FORMAT_ERROR_STACK || CHR(10) ||
      DBMS_UTILITY.FORMAT_ERROR_BACKTRACE
    );
    RAISE;  -- Preserve the original error for the caller
END process_payment;
/
- CP1 (input): validate parameters are non-null and within valid ranges before any database operation
- CP2 (pre-query): verify prerequisite data exists and is in the expected state before acting on it
- CP3 (post-query): verify query results match expectations before using them in downstream logic
- CP4 (post-action): verify DML affected the expected number of rows; catches concurrent modifications and race conditions
- Each checkpoint error includes expected value, actual value, and full context; the ORA-06512 stack becomes a diagnostic breadcrumb trail
Tracing Execution with DBMS_TRACE and DBMS_HPROF
DBMS_TRACE is Oracle's built-in execution tracing facility for PL/SQL. It records every line executed, every exception raised, and every call/return event. The trace output is written to the PLSQL_TRACE_EVENTS table (after running the tracetab.sql setup script) or to a server-side trace file, depending on the Oracle version and configuration. No code changes are required β you enable tracing, execute the code, and analyze the results.
DBMS_TRACE is more powerful than DBMS_OUTPUT because it captures execution flow automatically. Enable tracing before executing the PL/SQL block, and every line executed is recorded. This is invaluable for understanding execution flow in complex procedures with many conditional branches where you are not sure which path the code takes.
DBMS_TRACE has two primary modes: trace_all_lines records every line executed (complete visibility but large output and significant overhead), and trace_all_exceptions records only when exceptions are raised (smaller output, lower overhead, focused on error paths). For most debugging scenarios, trace_all_exceptions is sufficient and far less expensive.
DBMS_HPROF (Hierarchical Profiler), available since Oracle 11g, provides execution time profiling. It records the time spent in each PL/SQL function and procedure, identifies hotspots, and generates a call tree. This is the right tool when the problem is performance β a procedure that runs but takes too long β rather than correctness β a procedure that raises an error.
The overhead of DBMS_TRACE with trace_all_lines is significant: typically a 2x to 10x slowdown depending on the procedure's complexity. DBMS_HPROF has lower overhead because it samples at the subprogram level, not the line level. Neither should be enabled in production outside a controlled maintenance window or on a targeted individual session.
-- Prerequisites: run the trace table setup script (once per database)
-- @$ORACLE_HOME/rdbms/admin/tracetab.sql

-- Step 1: Compile the target package with debug symbols
--         (required for line-level trace)
ALTER PACKAGE app_schema.billing_pkg COMPILE DEBUG BODY;

-- Step 2: Enable trace: all lines executed
BEGIN
  DBMS_TRACE.SET_PLSQL_TRACE(DBMS_TRACE.trace_all_lines);
END;
/

-- Step 3: Execute the procedure to trace
BEGIN
  app_schema.billing_pkg.process_invoice(1001);
END;
/

-- Step 4: Disable tracing
BEGIN
  DBMS_TRACE.CLEAR_PLSQL_TRACE;
END;
/

-- Step 5: Query the trace results
SELECT event_seq, event_unit_owner, event_unit, event_line,
       event_kind, event_time
FROM sys.plsql_trace_events
WHERE runid = (SELECT MAX(runid) FROM sys.plsql_trace_runs)
ORDER BY event_seq;
-- event_kind: 38 = line executed, 54 = exception raised,
--             51 = call, 52 = return

-- Step 6: Trace exceptions only (lighter weight)
BEGIN
  DBMS_TRACE.SET_PLSQL_TRACE(DBMS_TRACE.trace_all_exceptions);
END;
/
BEGIN
  app_schema.billing_pkg.process_invoice(1001);
END;
/
BEGIN
  DBMS_TRACE.CLEAR_PLSQL_TRACE;
END;
/

-- Query only exception events
SELECT event_seq,
       event_unit_owner || '.' || event_unit AS unit,
       event_line, event_kind
FROM sys.plsql_trace_events
WHERE runid = (SELECT MAX(runid) FROM sys.plsql_trace_runs)
AND event_kind = 54  -- Exception raised
ORDER BY event_seq;

-- Step 7: DBMS_HPROF, the Hierarchical Profiler (Oracle 11g+)
-- Profile a procedure to find performance hotspots
DECLARE
  v_runid NUMBER;
BEGIN
  DBMS_HPROF.START_PROFILING('PLSHPROF_DIR', 'billing_profile.trc');
  app_schema.billing_pkg.process_invoice(1001);
  DBMS_HPROF.STOP_PROFILING;

  -- Analyze the profile data into reporting tables
  v_runid := DBMS_HPROF.ANALYZE(
    location => 'PLSHPROF_DIR',
    filename => 'billing_profile.trc'
  );
  DBMS_OUTPUT.PUT_LINE('Profile run ID: ' || v_runid);
END;
/

-- Query the profiler results
SELECT owner, module, function AS subprogram, line#,
       subtree_elapsed_time AS total_us,
       function_elapsed_time AS self_us,
       calls
FROM dbmshp_function_info
WHERE runid = (SELECT MAX(runid) FROM dbmshp_runs)
ORDER BY function_elapsed_time DESC;
-- Shows which subprograms consume the most time

-- Step 8: Recompile without debug symbols after tracing
ALTER PACKAGE app_schema.billing_pkg COMPILE BODY;

-- Step 9: Clean up old trace data
DELETE FROM sys.plsql_trace_events
WHERE runid < (SELECT MAX(runid) - 5 FROM sys.plsql_trace_runs);
DELETE FROM sys.plsql_trace_runs
WHERE runid < (SELECT MAX(runid) - 5 FROM sys.plsql_trace_runs);
COMMIT;
Debugging Workflow: Choosing the Right Tool
Each PL/SQL debugging tool has a specific use case. Using the wrong tool wastes time, either because the tool cannot surface the information you need, or because the overhead is disproportionate to the problem.
DBMS_OUTPUT is appropriate for quick, interactive debugging in a development environment where you are connected via SQL*Plus or SQL Developer and can see the output immediately after execution. It is inappropriate for production, for batch processes that run without a connected client, and for any procedure that processes more than a handful of transactions.
The SQL Developer interactive debugger is appropriate for step-through debugging of complex logic where you need to inspect variable values at specific points in execution. It requires a direct connection, debug privileges, and debug-compiled code. It is inappropriate for debugging triggers, scheduler jobs, or procedures called by external applications that use their own sessions.
Structured logging is appropriate for production debugging, batch process monitoring, and any scenario where the debugging session must survive session termination, transaction rollback, or client disconnection. It is the only tool that works for triggers, scheduler jobs, and external application calls. The overhead is minimal: a single INSERT per log entry via autonomous transaction.
Exception-driven debugging with checkpoints is appropriate for input validation, state verification, and narrowing failure scope in procedures that process data through multiple stages. Checkpoints serve double duty as debugging aids and production guards. They are always captured in the error stack and require no client enablement.
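A checkpoint assertion can be as simple as an explicit state test that raises with a descriptive message. A sketch of the pattern, where the procedure, table, and error numbers are all illustrative:

```sql
CREATE OR REPLACE PROCEDURE process_invoice(p_invoice_id IN NUMBER) AS
  v_line_count NUMBER;
BEGIN
  -- Checkpoint 1: validate input before doing any work
  IF p_invoice_id IS NULL OR p_invoice_id <= 0 THEN
    RAISE_APPLICATION_ERROR(-20101,
      'process_invoice: invalid invoice id ' ||
      NVL(TO_CHAR(p_invoice_id), 'NULL'));
  END IF;

  -- Checkpoint 2: verify expected state between stages
  SELECT COUNT(*) INTO v_line_count
    FROM invoice_lines
   WHERE invoice_id = p_invoice_id;
  IF v_line_count = 0 THEN
    RAISE_APPLICATION_ERROR(-20102,
      'process_invoice: invoice ' || p_invoice_id || ' has no lines');
  END IF;

  -- ... processing stages follow, each guarded by its own checkpoint
END;
/
```

Because each checkpoint raises a distinct error number with the failing values in the message, the error stack alone tells you which stage failed and why, with no client enablement required.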
DBMS_TRACE and DBMS_HPROF are appropriate for detailed execution flow analysis and performance profiling in development or controlled investigation environments. They are inappropriate for production due to overhead.
The decision tree: Can you reproduce the issue interactively? Use the SQL Developer debugger. Cannot reproduce but can run in development? Use DBMS_TRACE. Issue is in production and you cannot reproduce it? Use structured logging. Need to verify data state at specific points? Use checkpoint assertions. Quick check during development? DBMS_OUTPUT is fine.
-- Decision Tree: Which debugging tool to use
--
-- Q1: Is the issue reproducible interactively in SQL Developer?
--     YES -> Use SQL Developer interactive debugger (breakpoints + variable inspection)
--     NO  -> Go to Q2
--
-- Q2: Is the issue in production or triggered by a trigger/job/external app?
--     YES -> Use structured logging (logger_pkg) + checkpoint assertions
--     NO  -> Go to Q3
--
-- Q3: Do you need to understand the execution path through branches?
--     YES -> Use DBMS_TRACE (trace_all_lines in dev, trace_all_exceptions in staging)
--     NO  -> Go to Q4
--
-- Q4: Is the problem performance (slow) rather than correctness (wrong/error)?
--     YES -> Use DBMS_HPROF for subprogram-level time profiling
--     NO  -> Go to Q5
--
-- Q5: Quick check during development with a connected client?
--     YES -> DBMS_OUTPUT with conditional debug flag
--     NO  -> Use structured logging (it works everywhere)

-- Example: combining tools for a complex debugging session

-- 1. Enable structured logging at DEBUG level for the session
BEGIN
  app_schema.logger_pkg.set_level('DEBUG');
END;
/

-- 2. Enable DBMS_OUTPUT for interactive visibility
SET SERVEROUTPUT ON SIZE UNLIMITED

-- 3. Run the failing procedure
BEGIN
  app_schema.debug_pkg.enable_debug;  -- Conditional DBMS_OUTPUT
  app_schema.process_payment(1001, 500);
  app_schema.debug_pkg.disable_debug;
END;
/

-- 4. If the error is not clear, check the structured log
SELECT log_level, module, message, error_stack, created_at
  FROM app_schema.app_log
 WHERE module LIKE 'process_payment%'
   AND created_at > SYSTIMESTAMP - INTERVAL '5' MINUTE
 ORDER BY created_at;

-- 5. If the execution path is unclear, enable DBMS_TRACE
BEGIN
  DBMS_TRACE.SET_PLSQL_TRACE(DBMS_TRACE.trace_all_exceptions);
END;
/
BEGIN
  app_schema.process_payment(1001, 500);
END;
/
BEGIN
  DBMS_TRACE.CLEAR_PLSQL_TRACE;
END;
/

-- 6. Query trace results for exception events
SELECT event_seq, event_unit, event_line, event_kind
  FROM sys.plsql_trace_events
 WHERE runid = (SELECT MAX(runid) FROM sys.plsql_trace_runs)
   AND event_kind = 54
 ORDER BY event_seq;

-- 7. Restore production log level
BEGIN
  app_schema.logger_pkg.set_level('WARN');
END;
/
- DBMS_OUTPUT: quick interactive checks during development; requires a connected client, not for production
- SQL Developer debugger: step-through complex logic with variable inspection; requires debug privileges and same-connection execution
- Structured logging: production debugging, batch monitoring, trigger/job debugging; works everywhere and persists across sessions
- Checkpoint assertions: input validation and state verification; always captured in the error stack, no client dependency
- DBMS_TRACE: execution path analysis in development; 2-10x overhead, not for production
- DBMS_HPROF: performance profiling; identifies time hotspots at the subprogram level
| Tool | Best For | Overhead | Production Safe | Client Required | Limitations |
|---|---|---|---|---|---|
| DBMS_OUTPUT | Quick interactive debugging during development | Low (but the PGA buffer grows silently) | No (memory leak risk; no client to read the buffer) | Yes (SET SERVEROUTPUT ON required) | Output only after the block completes; 1 MB default buffer; invisible without client enablement |
| SQL Developer Debugger | Step-through debugging with variable inspection and breakpoints | Moderate (debug symbols increase compiled object size) | No (requires debug privileges and debug-compiled code) | Yes (same SQL Developer connection required) | Cannot debug triggers, jobs, or external app sessions; breakpoints silently fail without privileges or debug compilation |
| Structured Logging (table) | Production debugging, batch monitoring, trigger/job debugging | Minimal (one INSERT per log entry via autonomous transaction) | Yes (designed for production use) | No (persists in a queryable table) | Requires log table setup and cleanup; autonomous transaction adds minor overhead per entry |
| Checkpoint Assertions | Input validation, state verification, narrowing failure scope | None (raises only when state is invalid) | Yes (serves as both debug aid and production guard) | No (error propagates through the standard error stack) | Only surfaces problems at checkpoint boundaries; cannot trace line-by-line flow |
| DBMS_TRACE | Execution path analysis (which lines and branches executed) | High (2-10x slowdown for trace_all_lines) | No (development and controlled investigation only) | No (writes to server-side trace tables) | Requires debug-compiled code; large output for complex procedures; needs SYS table access |
| DBMS_HPROF | Performance profiling (identifying time hotspots) | Low-Moderate (subprogram-level instrumentation) | Limited (controlled investigation only) | No (writes to a server-side directory and profiler tables) | Requires directory object setup; shows time, not logic errors; Oracle 11g+ only |
🎯 Key Takeaways
- DBMS_OUTPUT is a client-dependent, session-scoped buffer: without SET SERVEROUTPUT ON, every PUT_LINE is a silent no-op that still consumes PGA memory
- SQL Developer breakpoints require three things simultaneously: debug privileges, debug-compiled code, and the same database connection for execution; if any one is missing, breakpoints silently fail
- Structured logging with autonomous transactions is the only debugging tool that works for production, triggers, scheduler jobs, and external application sessions
- Exception-driven debugging with checkpoint assertions narrows failure scope without any client dependency and serves double duty as production input validation
- DBMS_TRACE captures execution flow automatically but has 2-10x overhead; use trace_all_exceptions for lighter-weight error path analysis
- DBMS_HPROF profiles execution time at the subprogram level; use it when the problem is performance, not correctness
- The SQL History panel (F8) in SQL Developer shows the actual SQL executed by EXECUTE IMMEDIATE, which is critical for diagnosing dynamic SQL errors like ORA-00942
Common Mistakes to Avoid
- Leaving DBMS_OUTPUT.PUT_LINE calls in production code: they consume PGA memory on every execution and provide zero value without a connected client reading the buffer
Interview Questions on This Topic
- Q: Why does DBMS_OUTPUT.PUT_LINE sometimes show no output, and how do you fix it? (Junior)
- Q: What are the prerequisites for SQL Developer breakpoints to work in PL/SQL debugging? (Mid-level)
- Q: Why should structured logging use autonomous transactions, and what is the risk if it does not? (Mid-level)
- Q: How do you debug a PL/SQL procedure that is called by a database trigger? (Senior)
- Q: What is the difference between DBMS_TRACE and DBMS_HPROF, and when would you use each? (Senior)
Frequently Asked Questions
Why does DBMS_OUTPUT.PUT_LINE show no output?
The client has not enabled DBMS_OUTPUT retrieval. In SQL*Plus, run SET SERVEROUTPUT ON SIZE UNLIMITED before executing the PL/SQL block. In SQL Developer, open View > DBMS Output, click the green plus icon, and select your connection. Without this step, PUT_LINE writes to a session-level PGA buffer that is never read; the buffer fills silently until it overflows with ORA-20000: ORU-10027. Additionally, output is flushed only after the entire PL/SQL block completes, so you cannot see line-by-line output during execution.
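What "enabling retrieval" means under the hood is that the client periodically drains the buffer with DBMS_OUTPUT.GET_LINES. A sketch of doing the same thing manually from a second PL/SQL block (the variable names and line limit are illustrative):

```sql
DECLARE
  v_lines    DBMSOUTPUT_LINESARRAY;
  v_numlines INTEGER := 1000;  -- maximum number of lines to fetch
BEGIN
  DBMS_OUTPUT.ENABLE(NULL);  -- NULL = unlimited buffer size
  DBMS_OUTPUT.PUT_LINE('buffered message');

  -- Drain the session buffer, as SET SERVEROUTPUT ON does behind the scenes
  DBMS_OUTPUT.GET_LINES(v_lines, v_numlines);
  -- v_numlines now holds the count of lines actually retrieved;
  -- a client would display v_lines(1) .. v_lines(v_numlines)
END;
/
```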
Can I use DBMS_OUTPUT in production?
No. DBMS_OUTPUT consumes PGA memory per session and requires a connected client to retrieve the output. In production, batch sessions and scheduler jobs run without a connected client, so the buffer accumulates silently and can grow to gigabytes for high-volume processing. This can cause ORA-04030 (out of process memory) errors that affect all sessions on the instance. Use a structured logging package that writes to a table with configurable log levels instead.
How do I find the trace file generated by DBMS_TRACE?
DBMS_TRACE writes to the SYS.PLSQL_TRACE_EVENTS and SYS.PLSQL_TRACE_RUNS tables (after running $ORACLE_HOME/rdbms/admin/tracetab.sql to create them). Query these tables directly: SELECT event_seq, event_unit, event_line, event_kind FROM sys.plsql_trace_events WHERE runid = (SELECT MAX(runid) FROM sys.plsql_trace_runs) ORDER BY event_seq. For SQL trace files (enabled via ALTER SESSION SET SQL_TRACE = TRUE or DBMS_SESSION.SET_SQL_TRACE), the trace file location is: SELECT value FROM v$diag_info WHERE name = 'Diag Trace'. Use tkprof to format SQL trace files: tkprof <trace_file> <output_file> sys=no.
How do I debug a procedure that is called by another procedure?
Set a breakpoint in the inner procedure using SQL Developer's interactive debugger. Start debugging from the outer procedure (right-click > Debug). When the outer procedure calls the inner one, the debugger stops at the breakpoint in the inner procedure. Use Step Into (F7) to enter called procedures, Step Over (F8) to execute them without entering, and Step Out (Shift+F7) to finish the current procedure and return to the caller. Both procedures must be compiled with DEBUG symbols, and execution must happen in the same database connection the debugger is attached to.
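Both units must carry debug symbols before breakpoints can bind. A typical preparation and cleanup step, where the schema and procedure names are illustrative:

```sql
-- Recompile both caller and callee with debug symbols
ALTER PROCEDURE app_schema.outer_proc COMPILE DEBUG;
ALTER PROCEDURE app_schema.inner_proc COMPILE DEBUG;

-- Verify which units are debug-compiled
SELECT object_name, object_type, plsql_debug
  FROM all_plsql_object_settings
 WHERE owner = 'APP_SCHEMA'
   AND object_name IN ('OUTER_PROC', 'INNER_PROC');

-- After the debugging session: recompile without debug symbols
ALTER PROCEDURE app_schema.outer_proc COMPILE;
ALTER PROCEDURE app_schema.inner_proc COMPILE;
```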
What is the difference between trace_all_lines and trace_all_exceptions in DBMS_TRACE?
trace_all_lines records every PL/SQL line executed β complete execution visibility but large output and 2-10x performance overhead. trace_all_exceptions records only when exceptions are raised β much smaller output, significantly lower overhead, and focused exclusively on error paths. Use trace_all_lines in development when you need to understand which branches the code takes. Use trace_all_exceptions in staging or controlled investigation when you need to see which exceptions are raised without the full line-level overhead.
How do I make the structured logging package safe so it never crashes the calling procedure?
Two mechanisms are required. First, use PRAGMA AUTONOMOUS_TRANSACTION in the write_log procedure so the log INSERT is committed independently of the calling transaction β this also means a logging failure does not roll back the caller's work. Second, wrap the INSERT in a BEGIN/EXCEPTION block with a WHEN OTHERS handler that does ROLLBACK and returns silently. The logging package must never propagate an exception to the caller β a full tablespace, a constraint violation, or any other logging failure must be swallowed silently. The caller's business logic must succeed or fail on its own merits, not because the logging infrastructure had a problem.
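Put together, a minimal write_log procedure following these two rules might look like this sketch (the app_log table and column names are assumptions matching the logging examples earlier in the article):

```sql
CREATE OR REPLACE PROCEDURE write_log(
  p_level   IN VARCHAR2,
  p_module  IN VARCHAR2,
  p_message IN VARCHAR2
) AS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- mechanism 1: commit independently of the caller
BEGIN
  INSERT INTO app_log (log_level, module, message, error_stack, created_at)
  VALUES (p_level, p_module, p_message,
          DBMS_UTILITY.FORMAT_ERROR_STACK || DBMS_UTILITY.FORMAT_ERROR_BACKTRACE,
          SYSTIMESTAMP);
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    -- Mechanism 2: never let a logging failure reach the caller
    ROLLBACK;
END write_log;
/
```

The WHEN OTHERS handler here is one of the rare cases where silently swallowing an exception is correct by design: the alternative is business transactions failing because the log tablespace filled up.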