
Debugging PL/SQL with DBMS_OUTPUT, Exceptions & SQL Developer

πŸ“ Part of: PL/SQL β†’ Topic 24 of 27
Practical debugging guide that stops ORA-06512 errors before they happen.
βš™οΈ Intermediate β€” basic Database knowledge assumed
In this tutorial, you'll learn
  • DBMS_OUTPUT is a client-dependent, session-scoped buffer: without SET SERVEROUTPUT ON, every PUT_LINE is a silent no-op that still consumes PGA memory
  • SQL Developer breakpoints require three things simultaneously: debug privileges, debug-compiled code, and the same database connection for execution. If any one is missing, breakpoints silently fail
  • Structured logging with autonomous transactions is the only debugging approach that works for production, triggers, scheduler jobs, and external application sessions
⚡ Quick Answer
  • DBMS_OUTPUT is the simplest PL/SQL debugger, but it only works when the client explicitly enables buffer retrieval
  • SQL Developer's built-in debugger supports breakpoints, variable inspection, and step-through execution against a live database session
  • Exception-driven debugging uses strategic RAISE points to narrow failure scope without any client dependency
  • The debugger attaches to the database session, not the code file, so connection context determines whether breakpoints fire
  • Production insight: DBMS_OUTPUT buffers silently in PGA memory if the client never calls DBMS_OUTPUT.GET_LINES; the buffer grows until the session ends or ORA-20000 fires
  • Biggest mistake: leaving DBMS_OUTPUT.PUT_LINE calls in production code. They consume PGA memory on every execution and provide zero value without a connected client reading the buffer
Production Incident: DBMS_OUTPUT Debugging Left in Production Caused 40% Memory Increase on Shared Server
A developer left 47 DBMS_OUTPUT.PUT_LINE calls in a batch processing package. The output buffer consumed 2.3 GB of PGA memory per session during the nightly batch, causing ORA-04030 errors across the shared server pool.
Symptom: At 01:15 AM during the nightly batch window, the shared server pool began reporting ORA-04030: out of process memory errors. PGA memory usage per session spiked from a baseline of 50 MB to 2.3 GB. The batch process that normally completed in 45 minutes was still running at the 3-hour mark when the DBA team intervened. Application users connecting during early morning hours received ORA-04030 errors on unrelated queries because the PGA pool was exhausted.
Assumption: The on-call team assumed a memory leak in the Oracle kernel or a misconfigured SGA/PGA parameter. They spent 6 hours analyzing AWR reports, adjusting PGA_AGGREGATE_TARGET, reviewing recent Oracle patches, and restarting the database instance twice. None of these actions resolved the issue because the root cause was in the application code, not the database configuration.
Root cause: A developer had added 47 DBMS_OUTPUT.PUT_LINE calls during a debugging session two weeks earlier and forgot to remove them before merging to the release branch. Each PUT_LINE call buffered approximately 5 KB of debug text (variable dumps, row counts, and intermediate calculation results) in the session's PGA. The batch process executed the instrumented procedure once per transaction, and the nightly batch processed 10,000 transactions. With 47 calls per transaction, each session accumulated roughly 2.3 GB of DBMS_OUTPUT buffer. The shared server process held this buffer for the duration of the session. Multiple concurrent batch sessions exhausted the PGA pool, causing ORA-04030 for every session on the instance, including unrelated application users.
Fix: Immediate fix: removed all 47 DBMS_OUTPUT.PUT_LINE calls from the batch package and redeployed. PGA usage returned to baseline within one batch cycle. Permanent changes:
1. Added a pre-deployment CI check that greps all PL/SQL source files for DBMS_OUTPUT.PUT_LINE. Any occurrence outside the dedicated debug utility package blocks the deployment pipeline with a descriptive error message.
2. Replaced all ad-hoc DBMS_OUTPUT debugging with a structured logging package that writes to a log table using autonomous transactions. The logging package supports configurable log levels (DEBUG, INFO, WARN, ERROR) so debug output can be enabled per-session during incident response without code changes.
3. Added a code review checklist item: "Verify no DBMS_OUTPUT.PUT_LINE calls exist outside debug utilities."
Key Lesson
  • DBMS_OUTPUT.PUT_LINE buffers in PGA memory: 47 calls per transaction across 10,000 transactions accumulated 2.3 GB per session
  • DBMS_OUTPUT in production is a memory leak waiting to happen; the buffer grows silently because no client retrieves it during batch execution
  • Use a structured logging package with configurable log levels instead of ad-hoc DBMS_OUTPUT calls: it writes to a table, consumes negligible memory, and persists across sessions
  • Pre-deployment checks should grep for DBMS_OUTPUT.PUT_LINE and block deployment if found outside dedicated debug utility packages
Production Debug Guide: From symptom to root cause using the right debugging tool
DBMS_OUTPUT.PUT_LINE shows no output in SQL*Plus or SQL Developer→Enable DBMS_OUTPUT retrieval in the client. In SQL*Plus: SET SERVEROUTPUT ON SIZE UNLIMITED. In SQL Developer: open View > DBMS Output, click the green plus icon to add your connection, and ensure the panel is active. The output is buffered silently on the server until the client retrieves it — without enablement, every PUT_LINE is a silent no-op.
Procedure fails but the error stack does not identify the exact failing line→Add strategic DBMS_OUTPUT.PUT_LINE calls before and after each major operation inside the procedure. Re-run with SERVEROUTPUT ON and observe which message appears last — the failure occurs between the last visible message and the next one. For narrower isolation, add checkpoint RAISE_APPLICATION_ERROR calls with descriptive messages at each major boundary.
SQL Developer debugger does not stop at breakpoints→Verify three things: (1) DEBUG CONNECT SESSION and DEBUG ANY PROCEDURE privileges are granted to the debugging user. (2) The package is compiled with debug symbols: ALTER PACKAGE pkg_name COMPILE DEBUG. (3) You are executing the code in the same connection that the debugger is attached to — a different SQL Developer worksheet may use a different database connection.
Cannot reproduce the error in SQL Developer but it occurs consistently in the application→The application session likely uses different session settings. Check NLS parameters (NLS_DATE_FORMAT, NLS_NUMERIC_CHARACTERS), optimizer mode, current_schema, and time zone. Match the application context in SQL Developer: ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD'; ALTER SESSION SET CURRENT_SCHEMA = app_owner; etc. Also verify that the application connects as a different database user with different grants and synonyms.
Need to debug a procedure called by a trigger, scheduler job, or another session — cannot run it manually→Triggers and scheduler jobs have no connected client to retrieve DBMS_OUTPUT. Use a structured logging package that writes to a table via autonomous transaction instead. Insert diagnostic messages with timestamps and context, then query the log table after the trigger or job completes. For scheduler jobs, also check DBA_SCHEDULER_JOB_RUN_DETAILS for the job's error output.
DBMS_OUTPUT buffer overflows with ORA-20000: ORU-10027 during a long-running procedure→The buffer is limited by default (20,000 bytes when DBMS_OUTPUT.ENABLE is called without a size argument). Remove the limit: DBMS_OUTPUT.ENABLE(buffer_size => NULL) for an unlimited buffer in Oracle 10.2+, or SET SERVEROUTPUT ON SIZE UNLIMITED in SQL*Plus. However, increasing the buffer only delays the problem — for long-running procedures, switch to structured logging that writes to a table and does not accumulate in memory.

PL/SQL debugging requires tools that work inside the database execution context. Unlike application code where you attach a debugger to a running process on your machine, PL/SQL executes on the database server and returns results to the client. Every debugging tool must bridge this server-client boundary, and understanding how each tool bridges it determines whether your debugging session produces answers or wastes hours.

DBMS_OUTPUT is the most widely used PL/SQL debugging tool and the most misunderstood. Engineers add DBMS_OUTPUT.PUT_LINE calls, execute the procedure, see no output, and conclude the code is not executing. In reality, the output is buffered silently in the session's PGA memory until the client explicitly retrieves it. Without the client-side enablement step (SET SERVEROUTPUT ON in SQL*Plus or the DBMS Output panel in SQL Developer), every DBMS_OUTPUT.PUT_LINE call is a silent no-op that allocates memory and produces nothing visible.

SQL Developer provides an interactive debugger that supports breakpoints, variable inspection, call stack navigation, watch expressions, and step-through execution. It requires a direct database connection, specific debug privileges, and a DEBUG-compiled package. The debugger attaches to the database session, not the code file; understanding this distinction is critical for debugging procedures that are called by other procedures, triggers, or scheduler jobs.

Structured logging and exception-driven debugging are the production-grade alternatives. They persist diagnostic information in database tables, require no client connection, survive transaction rollbacks via autonomous transactions, and can be enabled or disabled at runtime through configurable log levels. This guide covers all four approaches, when to use each, and the mistakes that cost the most time.

DBMS_OUTPUT: The Simplest Debugger

DBMS_OUTPUT is Oracle's built-in print debugging mechanism. PUT_LINE writes text to a session-level buffer in PGA memory. The client retrieves the buffer contents via GET_LINES after the PL/SQL block completes. The critical misunderstanding is that DBMS_OUTPUT.PUT_LINE does not write to the screen; it writes to a server-side buffer that the client must explicitly read.
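To make the buffer model concrete, a PL/SQL block can drain the buffer itself with DBMS_OUTPUT.GET_LINE, exactly the way a client does behind the scenes. The following is a minimal sketch; the message text is illustrative:

```sql
-- Sketch: retrieving the DBMS_OUTPUT buffer manually, the way a client does.
-- GET_LINE returns status 0 while buffered lines remain and 1 when empty.
DECLARE
  v_line   VARCHAR2(32767);
  v_status INTEGER;
BEGIN
  DBMS_OUTPUT.ENABLE(buffer_size => NULL);  -- turn the buffer on for this session
  DBMS_OUTPUT.PUT_LINE('first message');
  DBMS_OUTPUT.PUT_LINE('second message');

  LOOP
    DBMS_OUTPUT.GET_LINE(line => v_line, status => v_status);
    EXIT WHEN v_status <> 0;  -- status 1 means the buffer is drained
    -- A real client would print v_line or write it to a log file here.
    NULL;
  END LOOP;
END;
/
```

Note that once GET_LINE has been called, the retrieved lines are gone from the buffer; this is why SET SERVEROUTPUT ON displays each line only once.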

The client enablement step is mandatory. In SQL*Plus, run SET SERVEROUTPUT ON SIZE UNLIMITED before executing the PL/SQL block. In SQL Developer, open the DBMS Output panel (View > DBMS Output), click the green plus icon, and select your connection. Without this step, every DBMS_OUTPUT.PUT_LINE call is a silent no-op: the code runs, PGA memory is allocated for the buffer, but nothing is displayed.

The default buffer size varies by Oracle version but is limited. Exceeding the limit raises ORA-20000: ORU-10027: buffer overflow, which terminates the PL/SQL block. In Oracle 10.2 and later, calling DBMS_OUTPUT.ENABLE(buffer_size => NULL) removes the limit, but the buffer still accumulates in PGA memory. For long-running procedures that produce extensive output, the buffer can grow to hundreds of megabytes; this is the mechanism behind the production incident described earlier.

DBMS_OUTPUT output is available only after the PL/SQL block completes. You cannot see output line by line during execution; the entire buffer is flushed to the client at the end. This means you cannot use DBMS_OUTPUT for real-time progress monitoring during a long-running batch. For real-time visibility, use DBMS_APPLICATION_INFO.SET_MODULE and SET_ACTION, which update V$SESSION immediately and are visible to other sessions.
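The real-time alternative can be sketched as follows; the module name, loop bounds, and action text are illustrative assumptions, not part of any package shown in this guide:

```sql
-- Sketch: real-time progress reporting via V$SESSION. Unlike DBMS_OUTPUT,
-- SET_MODULE/SET_ACTION are visible to other sessions immediately.
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'nightly_batch',
    action_name => 'starting'
  );

  FOR i IN 1 .. 10000 LOOP
    IF MOD(i, 1000) = 0 THEN
      -- Updates V$SESSION.ACTION mid-execution; a DBA can watch progress live
      DBMS_APPLICATION_INFO.SET_ACTION('processed ' || i || ' of 10000');
    END IF;
    -- ... per-row work ...
    NULL;
  END LOOP;

  -- Clear the module/action when done
  DBMS_APPLICATION_INFO.SET_MODULE(module_name => NULL, action_name => NULL);
END;
/

-- From another session, monitor progress while the batch runs:
SELECT sid, module, action
FROM   v$session
WHERE  module = 'nightly_batch';
```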

The conditional debug pattern wraps all DBMS_OUTPUT calls behind a boolean flag. When the flag is FALSE (the production default), the IF check short-circuits and PUT_LINE is never called: no buffer allocation, no PGA consumption. When the flag is TRUE (during debugging), the output flows normally. This pattern is the minimum viable safety net if DBMS_OUTPUT must exist in the codebase at all.

io/thecodeforge/debug/dbms_output_basics.sql · SQL
-- Step 1: Enable DBMS_OUTPUT in the client
-- SQL*Plus:
SET SERVEROUTPUT ON SIZE UNLIMITED

-- SQL Developer: View > DBMS Output > click green (+) > select connection

-- Step 2: Basic debugging with DBMS_OUTPUT
CREATE OR REPLACE PROCEDURE app_schema.test_procedure IS
  v_count NUMBER;
  v_name  VARCHAR2(100);
BEGIN
  DBMS_OUTPUT.PUT_LINE('=== test_procedure started ===');
  DBMS_OUTPUT.PUT_LINE('Timestamp: ' || TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS.FF3'));

  SELECT COUNT(*) INTO v_count
  FROM data_owner.accounts
  WHERE status = 'ACTIVE';
  DBMS_OUTPUT.PUT_LINE('Active accounts: ' || v_count);

  SELECT customer_name INTO v_name
  FROM data_owner.customers
  WHERE customer_id = 1001;
  DBMS_OUTPUT.PUT_LINE('Customer name: ' || v_name);

  DBMS_OUTPUT.PUT_LINE('=== test_procedure completed ===');
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('ERROR: ' || SQLERRM);
    DBMS_OUTPUT.PUT_LINE('Backtrace: ' || DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);
    RAISE;
END test_procedure;
/

-- Step 3: Conditional debug output (avoids production overhead)
CREATE OR REPLACE PACKAGE app_schema.debug_pkg AS
  -- Session-level flag: FALSE in production, TRUE during debugging
  g_debug_enabled BOOLEAN := FALSE;

  PROCEDURE enable_debug;
  PROCEDURE disable_debug;
  PROCEDURE log(p_message IN VARCHAR2);
  PROCEDURE log(p_label IN VARCHAR2, p_value IN VARCHAR2);
  PROCEDURE log(p_label IN VARCHAR2, p_value IN NUMBER);
  PROCEDURE log(p_label IN VARCHAR2, p_value IN DATE);
END debug_pkg;
/

CREATE OR REPLACE PACKAGE BODY app_schema.debug_pkg AS

  PROCEDURE enable_debug IS
  BEGIN
    g_debug_enabled := TRUE;
    DBMS_OUTPUT.ENABLE(buffer_size => NULL);  -- Unlimited buffer
  END enable_debug;

  PROCEDURE disable_debug IS
  BEGIN
    g_debug_enabled := FALSE;
  END disable_debug;

  PROCEDURE log(p_message IN VARCHAR2) IS
  BEGIN
    IF g_debug_enabled THEN
      DBMS_OUTPUT.PUT_LINE(
        TO_CHAR(SYSTIMESTAMP, 'HH24:MI:SS.FF3') || ' | ' || p_message
      );
    END IF;
  END log;

  PROCEDURE log(p_label IN VARCHAR2, p_value IN VARCHAR2) IS
  BEGIN
    IF g_debug_enabled THEN
      DBMS_OUTPUT.PUT_LINE(
        TO_CHAR(SYSTIMESTAMP, 'HH24:MI:SS.FF3')
        || ' | ' || p_label || ' = ' || NVL(p_value, '<NULL>')
      );
    END IF;
  END log;

  PROCEDURE log(p_label IN VARCHAR2, p_value IN NUMBER) IS
  BEGIN
    log(p_label, TO_CHAR(p_value));
  END log;

  PROCEDURE log(p_label IN VARCHAR2, p_value IN DATE) IS
  BEGIN
    log(p_label, TO_CHAR(p_value, 'YYYY-MM-DD HH24:MI:SS'));
  END log;

END debug_pkg;
/

-- Usage: enable debug for a session, run the procedure, disable
BEGIN
  app_schema.debug_pkg.enable_debug;
  app_schema.test_procedure;
  app_schema.debug_pkg.disable_debug;
END;
/
Mental Model
DBMS_OUTPUT Buffer Model
DBMS_OUTPUT is a write-only buffer on the server that the client must explicitly read; it is not a print statement.
  • PUT_LINE writes to a session-level PGA buffer; it does not write to the screen or any file
  • The client must call GET_LINES (or enable SERVEROUTPUT) to retrieve the buffer contents after the PL/SQL block completes
  • Without client enablement, the buffer fills silently until ORA-20000: ORU-10027 fires on overflow
  • Output is available only after the PL/SQL block completes; there is no real-time line-by-line visibility during execution
  • Buffer limit defaults vary by version; use DBMS_OUTPUT.ENABLE(NULL) or SET SERVEROUTPUT ON SIZE UNLIMITED for unlimited
  • DBMS_OUTPUT in production consumes PGA memory per session with zero debugging value; remove it before deployment
📊 Production Insight
DBMS_OUTPUT.PUT_LINE buffers in PGA memory; it is not free and not harmless.
In a batch that processes thousands of transactions, the buffer can grow to gigabytes per session.
Rule: remove all DBMS_OUTPUT calls from production code and use a structured logging package that writes to a table instead.
🎯 Key Takeaway
DBMS_OUTPUT is a client-side debugging tool that consumes server-side PGA memory.
Without client enablement (SET SERVEROUTPUT ON), every PUT_LINE is a silent no-op that still allocates buffer space.
Bottom line: if your DBMS_OUTPUT shows no output, the client is not reading the buffer. Enable it first, and remove it before shipping to production.

SQL Developer Interactive Debugger

SQL Developer provides a full interactive debugger for PL/SQL that supports breakpoints, variable inspection, call stack navigation, watch expressions, and step-through execution. The debugger attaches to a live database session and intercepts execution when a breakpoint is hit, similar to debugging in any IDE, but with the critical difference that execution happens on the database server, not locally.

The debugger requires two privileges granted to the debugging user: DEBUG CONNECT SESSION (to attach the debugger to the session) and DEBUG ANY PROCEDURE (to debug procedures owned by other schemas). Without these privileges, the debugger silently fails to attach and breakpoints are ignored without any error message. This is the single most common reason breakpoints do not fire.

The package or procedure must be compiled with debug symbols before breakpoints work. Run ALTER PACKAGE pkg_name COMPILE DEBUG BODY to include debug information in the compiled object. Without debug symbols, the debugger cannot map the execution point to a source line, and breakpoints are silently skipped. By default, packages are compiled without debug symbols; you must explicitly request them.

The debugger attaches to the database session, not the code file open in the editor. This means you must execute the code within the same connection that the debugger is monitoring. If you open the package in one SQL Developer worksheet and run the calling code in a different worksheet (which may be using a different database connection), the breakpoints will not fire because the execution is happening in a different session. Verify by checking the connection name in the worksheet tab.
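One quick way to confirm whether two worksheets share a session is to compare session identifiers. The following query is a small sketch using standard USERENV attributes:

```sql
-- Run this in both worksheets. If the SID values differ, the worksheets are on
-- different database sessions, and breakpoints armed for one session will
-- never fire for code executed in the other.
SELECT SYS_CONTEXT('USERENV', 'SID')       AS sid,
       SYS_CONTEXT('USERENV', 'SESSIONID') AS audsid,
       SYS_CONTEXT('USERENV', 'MODULE')    AS module
FROM   dual;
```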

For procedures invoked by triggers, scheduler jobs, or external applications, the SQL Developer debugger cannot attach directly because those executions happen in separate sessions that the IDE does not control. For those scenarios, use structured logging to a table or exception-driven debugging with checkpoint assertions.

After debugging, always recompile the package without debug symbols: ALTER PACKAGE pkg_name COMPILE BODY. Debug-compiled code carries additional metadata that increases the compiled object size and can marginally affect execution performance. Production code should never carry debug symbols.

io/thecodeforge/debug/sql_developer_setup.sql · SQL
-- Step 1: Grant debug privileges to the debugging user
-- Run as SYS or a DBA user
GRANT DEBUG CONNECT SESSION TO app_user;
GRANT DEBUG ANY PROCEDURE TO app_user;

-- Step 2: Compile the package body with debug symbols
ALTER PACKAGE app_schema.billing_pkg COMPILE DEBUG BODY;
-- Note: COMPILE DEBUG BODY compiles only the body with debug info
-- Use COMPILE DEBUG to compile both spec and body

-- Step 3: Verify the package is VALID after debug compilation
SELECT
  object_name,
  object_type,
  status,
  last_ddl_time
FROM user_objects
WHERE object_name = 'BILLING_PKG'
ORDER BY object_type;
-- Both PACKAGE and PACKAGE BODY should show VALID

-- Step 4: Verify debug info is present (Oracle 12c+)
SELECT
  owner,
  name,
  type,
  plsql_debug
FROM all_plsql_object_settings
WHERE name = 'BILLING_PKG'
  AND owner = 'APP_SCHEMA';
-- plsql_debug = TRUE indicates debug symbols are present

-- Step 5: SQL Developer debugger workflow
-- 1. Open the package body in SQL Developer's editor
-- 2. Click in the line number gutter to set a breakpoint (red dot appears)
-- 3. Right-click the procedure name and select "Debug" (or Ctrl+Shift+F10)
-- 4. Enter parameter values in the Debug PL/SQL dialog
-- 5. Click OK β€” execution starts and pauses at the first breakpoint
-- 6. Navigation controls:
--    Step Into  (F7) : enter the called procedure
--    Step Over  (F8) : execute the line without entering sub-calls
--    Step Out   (Shift+F7) : finish the current procedure and return to caller
--    Run to Cursor   : execute until the cursor position
--    Resume    (F9)  : continue to the next breakpoint
--    Terminate (Ctrl+F2) : stop debugging immediately
-- 7. Inspect variables in the Smart Data panel (auto-populated)
-- 8. Add watch expressions in the Watches panel for computed values
-- 9. View the call stack in the Stack panel to see the calling chain

-- Step 6: Recompile WITHOUT debug symbols after debugging
ALTER PACKAGE app_schema.billing_pkg COMPILE BODY;
-- Production code should never carry debug symbols

-- Step 7: Verify debug info is removed
SELECT
  owner,
  name,
  type,
  plsql_debug
FROM all_plsql_object_settings
WHERE name = 'BILLING_PKG'
  AND owner = 'APP_SCHEMA';
-- plsql_debug should now be FALSE
💡 SQL Developer Debugger Checklist
  • Grant DEBUG CONNECT SESSION and DEBUG ANY PROCEDURE to the debugging user; without these, breakpoints silently fail
  • Compile with ALTER PACKAGE pkg COMPILE DEBUG BODY before setting breakpoints; no debug symbols means no breakpoints
  • Execute the code in the SAME connection that the debugger is attached to; a different worksheet may use a different session
  • After debugging, recompile without DEBUG to remove debug overhead: ALTER PACKAGE pkg COMPILE BODY
  • For trigger/job/external-app debugging, use structured logging to a table; the IDE debugger cannot attach to those sessions
📊 Production Insight
SQL Developer breakpoints silently fail if any one of three conditions is not met: privileges, debug compilation, or connection context.
There is no error message when breakpoints are skipped; the code simply runs to completion without stopping.
Rule: before filing a bug report about broken breakpoints, verify all three conditions. The answer is almost always one of them.
🎯 Key Takeaway
SQL Developer's debugger requires three things simultaneously: debug privileges, DEBUG-compiled code, and the same connection for execution.
If any one of the three is missing, breakpoints are silently ignored: the code runs without stopping and you get no indication why.
Bottom line: if your breakpoints do not fire, check privileges, then debug compilation, then connection context, in that order.

Structured Logging: Production-Grade Debug Output

DBMS_OUTPUT is a development tool. It has no place in production code. Production debugging requires a structured logging package that writes to a persistent table with configurable log levels, timestamps, module identifiers, and session context.

The logging table captures everything DBMS_OUTPUT cannot: log level (DEBUG, INFO, WARN, ERROR), the module and procedure name, the message, the full error stack when applicable, the database session ID, the connected user, and a precise timestamp. Engineers query the table after the fact to trace execution flow: no client enablement required, no PGA memory consumed by an ever-growing buffer, no dependency on a connected client.

Log levels enable runtime filtering without code changes. In production, only WARN and ERROR are logged by default. During incident response, a DBA or on-call engineer can temporarily lower the threshold to DEBUG for a specific session by calling the set_level procedure. This captures detailed execution flow for the duration of the investigation without affecting other sessions and without redeploying code. When the investigation is complete, the level is restored to WARN.

The logging package must use an autonomous transaction for the INSERT. This is non-negotiable. If the calling transaction rolls back (which it will during error scenarios, the exact situations where you most need the log entry), a non-autonomous INSERT rolls back with it and the log entry is lost. PRAGMA AUTONOMOUS_TRANSACTION ensures the log entry is committed independently of the calling transaction.

Equally important: the logging package must handle its own exceptions internally. A failure inside the logger (a full tablespace, a constraint violation, any unexpected error) must never propagate to the calling procedure. The caller's business logic must succeed or fail on its own merits, not because the logging infrastructure had a problem. Wrap the INSERT in a BEGIN/EXCEPTION/END block that silently swallows any logging failure.

io/thecodeforge/logging/structured_logger.sql · SQL
-- Logging table
CREATE TABLE app_schema.app_log (
  log_id        NUMBER         GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  log_level     VARCHAR2(10)   NOT NULL,
  module        VARCHAR2(128)  NOT NULL,
  message       VARCHAR2(4000),
  error_stack   VARCHAR2(4000),
  session_id    NUMBER         DEFAULT SYS_CONTEXT('USERENV', 'SESSIONID'),
  db_user       VARCHAR2(128)  DEFAULT SYS_CONTEXT('USERENV', 'SESSION_USER'),
  os_user       VARCHAR2(128)  DEFAULT SYS_CONTEXT('USERENV', 'OS_USER'),
  created_at    TIMESTAMP      DEFAULT SYSTIMESTAMP
) TABLESPACE app_data;

CREATE INDEX idx_app_log_module  ON app_schema.app_log(module, created_at);
CREATE INDEX idx_app_log_level   ON app_schema.app_log(log_level, created_at);
CREATE INDEX idx_app_log_created ON app_schema.app_log(created_at);

-- Partition by day for easy cleanup (Oracle 12c+)
-- ALTER TABLE app_schema.app_log MODIFY PARTITION BY RANGE (created_at)
--   INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
--   (PARTITION p_initial VALUES LESS THAN (TIMESTAMP '2026-01-01 00:00:00'));

-- Logging package
CREATE OR REPLACE PACKAGE app_schema.logger_pkg AS
  -- Log level constants
  c_debug CONSTANT VARCHAR2(10) := 'DEBUG';
  c_info  CONSTANT VARCHAR2(10) := 'INFO';
  c_warn  CONSTANT VARCHAR2(10) := 'WARN';
  c_error CONSTANT VARCHAR2(10) := 'ERROR';

  -- Convenience procedures
  PROCEDURE debug(p_module IN VARCHAR2, p_message IN VARCHAR2);
  PROCEDURE info (p_module IN VARCHAR2, p_message IN VARCHAR2);
  PROCEDURE warn (p_module IN VARCHAR2, p_message IN VARCHAR2);
  PROCEDURE error(p_module IN VARCHAR2, p_message IN VARCHAR2);
  PROCEDURE error(p_module IN VARCHAR2, p_message IN VARCHAR2, p_error_stack IN VARCHAR2);

  -- Runtime log level control (per session)
  PROCEDURE set_level(p_level IN VARCHAR2);
  FUNCTION  get_level RETURN VARCHAR2;
END logger_pkg;
/

CREATE OR REPLACE PACKAGE BODY app_schema.logger_pkg AS

  -- Session-level minimum log level; WARN by default in production
  g_min_level VARCHAR2(10) := c_warn;

  FUNCTION level_priority(p_level IN VARCHAR2) RETURN NUMBER IS
  BEGIN
    RETURN CASE UPPER(p_level)
      WHEN 'DEBUG' THEN 10
      WHEN 'INFO'  THEN 20
      WHEN 'WARN'  THEN 30
      WHEN 'ERROR' THEN 40
      ELSE 0
    END;
  END level_priority;

  -- Core writer: autonomous transaction, self-protecting
  PROCEDURE write_log(
    p_level       IN VARCHAR2,
    p_module      IN VARCHAR2,
    p_message     IN VARCHAR2,
    p_error_stack IN VARCHAR2 DEFAULT NULL
  ) IS
    PRAGMA AUTONOMOUS_TRANSACTION;
  BEGIN
    -- Skip if below session threshold
    IF level_priority(p_level) < level_priority(g_min_level) THEN
      RETURN;
    END IF;

    INSERT INTO app_schema.app_log (
      log_level, module, message, error_stack
    ) VALUES (
      p_level,
      SUBSTR(p_module, 1, 128),
      SUBSTR(p_message, 1, 4000),
      SUBSTR(p_error_stack, 1, 4000)
    );
    COMMIT;
  EXCEPTION
    WHEN OTHERS THEN
      -- Logging must NEVER crash the caller
      ROLLBACK;
  END write_log;

  PROCEDURE debug(p_module IN VARCHAR2, p_message IN VARCHAR2) IS
  BEGIN write_log(c_debug, p_module, p_message); END;

  PROCEDURE info(p_module IN VARCHAR2, p_message IN VARCHAR2) IS
  BEGIN write_log(c_info, p_module, p_message); END;

  PROCEDURE warn(p_module IN VARCHAR2, p_message IN VARCHAR2) IS
  BEGIN write_log(c_warn, p_module, p_message); END;

  PROCEDURE error(p_module IN VARCHAR2, p_message IN VARCHAR2) IS
  BEGIN write_log(c_error, p_module, p_message); END;

  PROCEDURE error(p_module IN VARCHAR2, p_message IN VARCHAR2, p_error_stack IN VARCHAR2) IS
  BEGIN write_log(c_error, p_module, p_message, p_error_stack); END;

  PROCEDURE set_level(p_level IN VARCHAR2) IS
  BEGIN
    g_min_level := UPPER(p_level);
  END set_level;

  FUNCTION get_level RETURN VARCHAR2 IS
  BEGIN
    RETURN g_min_level;
  END get_level;

END logger_pkg;
/

-- Usage in application code
CREATE OR REPLACE PROCEDURE app_schema.process_batch(
  p_batch_id IN NUMBER
) IS
  v_count NUMBER;
BEGIN
  app_schema.logger_pkg.info('process_batch', 'Starting batch ' || p_batch_id);

  SELECT COUNT(*) INTO v_count
  FROM data_owner.orders
  WHERE batch_id = p_batch_id AND status = 'PENDING';

  app_schema.logger_pkg.debug('process_batch', 'Pending orders: ' || v_count);

  -- ... batch processing logic ...

  app_schema.logger_pkg.info('process_batch', 'Completed batch ' || p_batch_id || ', processed ' || v_count || ' orders');
EXCEPTION
  WHEN OTHERS THEN
    app_schema.logger_pkg.error(
      'process_batch',
      'Failed batch ' || p_batch_id,
      DBMS_UTILITY.FORMAT_ERROR_STACK || CHR(10) || DBMS_UTILITY.FORMAT_ERROR_BACKTRACE
    );
    RAISE;
END process_batch;
/

-- Enable DEBUG logging for incident investigation
BEGIN
  app_schema.logger_pkg.set_level('DEBUG');
END;
/

-- Query the log table during or after investigation
SELECT log_level, module, message, created_at
FROM app_schema.app_log
WHERE module = 'process_batch'
  AND created_at > SYSTIMESTAMP - INTERVAL '1' HOUR
ORDER BY created_at;

-- Restore WARN level after investigation
BEGIN
  app_schema.logger_pkg.set_level('WARN');
END;
/

-- Cleanup: purge old log entries (run daily via scheduler)
DELETE FROM app_schema.app_log
WHERE created_at < SYSTIMESTAMP - INTERVAL '30' DAY;
COMMIT;
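The daily purge above can be automated with the database scheduler. A sketch follows; the job name, schedule, and 30-day retention window are assumptions to adjust to your own retention policy:

```sql
-- Sketch: daily purge job for the log table. Job name, BYHOUR=3 schedule, and
-- the 30-day retention window are illustrative assumptions.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'APP_SCHEMA.PURGE_APP_LOG',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[
      BEGIN
        DELETE FROM app_schema.app_log
        WHERE created_at < SYSTIMESTAMP - INTERVAL '30' DAY;
        COMMIT;
      END;]',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=3',
    enabled         => TRUE,
    comments        => 'Daily purge of app_log entries older than 30 days'
  );
END;
/
```

Running the purge as a scheduler job also means its outcome is recorded in DBA_SCHEDULER_JOB_RUN_DETAILS, so a failed purge is itself diagnosable.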
Mental Model
DBMS_OUTPUT vs. Structured Logging
DBMS_OUTPUT is a development convenience. Structured logging is production infrastructure. They solve different problems.
  • DBMS_OUTPUT: client-dependent, session-scoped, PGA-resident, lost on disconnect, invisible to other sessions
  • Structured logging: client-independent, persistent in a table, committed via autonomous transaction, queryable by anyone with SELECT
  • DBMS_OUTPUT disappears when the session ends; structured logging persists until explicitly purged
  • Autonomous transactions ensure log entries survive caller rollback, which is critical for error logging in failed transactions
  • Log levels avoid production overhead: DEBUG is off by default, enabled per-session only during incident response
📊 Production Insight
DBMS_OUTPUT disappears when the session ends; if the session crashes, you lose every trace message.
Structured logging persists in a queryable table and survives both session crashes and transaction rollbacks via autonomous transactions.
Rule: replace all DBMS_OUTPUT with structured logging before deployment. DBMS_OUTPUT is a development tool, not a production diagnostic mechanism.
🎯 Key Takeaway
Structured logging replaces DBMS_OUTPUT for production debugging: persistent, queryable, filterable, and independent of client connections.
Log levels avoid performance overhead in production; DEBUG is off by default, enabled only during active incident investigation.
Bottom line: if your production debugging strategy relies on DBMS_OUTPUT, you are using a development tool in a production environment and losing diagnostic data every time a session ends.

Exception-Driven Debugging: Strategic RAISE Points

Exception-driven debugging uses strategic RAISE_APPLICATION_ERROR calls at key checkpoints to narrow the scope of a failure. Instead of scattering DBMS_OUTPUT calls everywhere and hoping you see the output, you embed assertions that fail loudly with diagnostic context whenever the execution state diverges from what the code expects.

The pattern is simple: at each major checkpoint in the procedure, validate the expected state. If the state is unexpected β€” a NULL parameter, a wrong status, a row count that should be 1 but is 0 or 5 β€” raise a descriptive error that includes the checkpoint identifier, the expected value, and the actual value. When the error propagates through the PL/SQL call stack, each ORA-06512 entry in the stack trace tells you exactly which procedure and line the error passed through, and the RAISE_APPLICATION_ERROR message tells you exactly what was wrong.

This approach has three advantages over DBMS_OUTPUT debugging. First, the error message is always captured β€” in the application's error log, in the database alert log, in the client's error handler. No client enablement is required. Second, the checkpoint errors integrate naturally with the ORA-06512 stack trace, providing a breadcrumb trail through the call chain. Third, checkpoints serve double duty: they are debugging aids during development and input validation guards in production. A well-placed checkpoint catches bad data before it corrupts downstream state.

The checkpoints should be placed at four boundaries: input validation (are the parameters non-null and within valid ranges?), pre-query state (does the prerequisite data exist in the expected state?), post-query state (did the query return the expected number and type of results?), and post-action verification (did the DML affect the expected number of rows?).

Use RAISE_APPLICATION_ERROR with the TRUE flag as the third parameter to preserve the original error stack when re-raising inside an exception handler. Without TRUE, the original stack is discarded and replaced with the location of the RAISE_APPLICATION_ERROR call, which destroys diagnostic information.
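A minimal sketch of the difference, assuming a hypothetical `inner_proc` (a placeholder name, not part of this tutorial's schema) that raises some ORA- error:

```sql
-- Hypothetical wrapper around inner_proc, showing the effect of the third parameter
BEGIN
  inner_proc;
EXCEPTION
  WHEN OTHERS THEN
    -- Third argument TRUE: the original error stays on the error stack, so the
    -- client sees both ORA-20010 and the underlying ORA- code, each with its
    -- own ORA-06512 line references.
    RAISE_APPLICATION_ERROR(-20010, 'wrapper context: inner_proc failed', TRUE);
    -- With FALSE (the default), only ORA-20010 is reported and the original
    -- error code and its location are discarded.
END;
/
```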

io/thecodeforge/debug/exception_driven_debug.sql Β· SQL
-- Exception-driven debugging with strategic checkpoints
CREATE OR REPLACE PROCEDURE app_schema.process_payment(
  p_account_id IN NUMBER,
  p_amount     IN NUMBER
) IS
  v_balance   NUMBER;
  v_status    VARCHAR2(20);
  v_rowcount  NUMBER;
BEGIN
  -- Checkpoint 1: Input validation
  IF p_account_id IS NULL THEN
    RAISE_APPLICATION_ERROR(
      -20001,
      'CP1 FAILED: p_account_id is NULL'
    );
  END IF;

  IF p_amount IS NULL OR p_amount <= 0 THEN
    RAISE_APPLICATION_ERROR(
      -20001,
      'CP1 FAILED: p_amount=' || NVL(TO_CHAR(p_amount), 'NULL')
      || ' (expected positive number)'
    );
  END IF;

  -- Checkpoint 2: Account exists and is in the expected state
  BEGIN
    SELECT balance, status
    INTO v_balance, v_status
    FROM data_owner.accounts
    WHERE account_id = p_account_id;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      RAISE_APPLICATION_ERROR(
        -20002,
        'CP2 FAILED: account_id=' || p_account_id || ' not found in accounts table'
      );
  END;

  IF v_status != 'ACTIVE' THEN
    RAISE_APPLICATION_ERROR(
      -20002,
      'CP2 FAILED: account ' || p_account_id
      || ' status=' || v_status || ' (expected ACTIVE)'
    );
  END IF;

  -- Checkpoint 3: Sufficient balance for the operation
  IF v_balance < p_amount THEN
    RAISE_APPLICATION_ERROR(
      -20003,
      'CP3 FAILED: account ' || p_account_id
      || ' balance=' || v_balance
      || ' < requested amount=' || p_amount
    );
  END IF;

  -- Checkpoint 4: DML affected exactly one row
  UPDATE data_owner.accounts
  SET balance = balance - p_amount,
      last_updated = SYSTIMESTAMP
  WHERE account_id = p_account_id
    AND status = 'ACTIVE';

  v_rowcount := SQL%ROWCOUNT;

  IF v_rowcount != 1 THEN
    RAISE_APPLICATION_ERROR(
      -20004,
      'CP4 FAILED: expected 1 row updated, got ' || v_rowcount
      || ' for account ' || p_account_id
      || ' (possible concurrent modification or status change between SELECT and UPDATE)'
    );
  END IF;

  -- Log success
  app_schema.logger_pkg.info(
    'process_payment',
    'Payment processed: account=' || p_account_id
    || ', amount=' || p_amount
    || ', new_balance=' || (v_balance - p_amount)
  );

EXCEPTION
  WHEN OTHERS THEN
    -- Log the error with full stack, then re-raise
    app_schema.logger_pkg.error(
      'process_payment',
      'Failed: account=' || p_account_id || ', amount=' || p_amount
      || ' | ' || SQLERRM,
      DBMS_UTILITY.FORMAT_ERROR_STACK || CHR(10) || DBMS_UTILITY.FORMAT_ERROR_BACKTRACE
    );
    RAISE;  -- Preserve the original error for the caller
END process_payment;
/
Mental Model
Checkpoint Debugging Model
Checkpoints are assertions that fail loudly with diagnostic context β€” they tell you exactly where the execution state diverged from expectations.
  • CP1 (input): validate parameters are non-null and within valid ranges before any database operation
  • CP2 (pre-query): verify prerequisite data exists and is in the expected state before acting on it
  • CP3 (post-query): verify query results match expectations before using them in downstream logic
  • CP4 (post-action): verify DML affected the expected number of rows β€” catches concurrent modifications and race conditions
  • Each checkpoint error includes expected value, actual value, and full context β€” the ORA-06512 stack becomes a diagnostic breadcrumb trail
πŸ“Š Production Insight
Exception-driven debugging is strictly more reliable than DBMS_OUTPUT because the error is always captured β€” in the application log, the database alert log, and the ORA-06512 stack trace.
Checkpoint errors propagate through the call stack with full context β€” no client enablement required, no buffer to read, no output that disappears when the session ends.
Rule: add checkpoints at input, pre-query, post-query, and post-action boundaries β€” each checkpoint that passes rules out everything before it, so a failure is localized to a single stage of the procedure.
🎯 Key Takeaway
Exception-driven debugging uses strategic RAISE_APPLICATION_ERROR calls at checkpoints to narrow failure scope.
Each checkpoint includes expected vs. actual values and full context β€” the ORA-06512 stack becomes a diagnostic trail.
Bottom line: if your debugging strategy is DBMS_OUTPUT everywhere, switch to checkpoint assertions β€” they are always captured, never silent, and serve as input validation guards in production.

Tracing Execution with DBMS_TRACE and DBMS_HPROF

DBMS_TRACE is Oracle's built-in execution tracing facility for PL/SQL. It records every line executed, every exception raised, and every call/return event. The trace output is written to the PLSQL_TRACE_EVENTS table (after running the tracetab.sql setup script) or to a server-side trace file, depending on the Oracle version and configuration. No code changes are required β€” you enable tracing, execute the code, and analyze the results.

DBMS_TRACE is more powerful than DBMS_OUTPUT because it captures execution flow automatically. Enable tracing before executing the PL/SQL block, and every line executed is recorded. This is invaluable for understanding execution flow in complex procedures with many conditional branches where you are not sure which path the code takes.

DBMS_TRACE has two primary modes: trace_all_lines records every line executed (complete visibility but large output and significant overhead), and trace_all_exceptions records only when exceptions are raised (smaller output, lower overhead, focused on error paths). For most debugging scenarios, trace_all_exceptions is sufficient and far less expensive.

DBMS_HPROF (Hierarchical Profiler), available since Oracle 11g, provides execution time profiling. It records the time spent in each PL/SQL function and procedure, identifies hotspots, and generates a call tree. This is the right tool when the problem is performance β€” a procedure that runs but takes too long β€” rather than correctness β€” a procedure that raises an error.

The overhead of DBMS_TRACE with trace_all_lines is significant β€” typically 2x to 10x slowdown depending on the procedure's complexity. DBMS_HPROF has lower overhead because it samples at the subprogram level, not the line level. Neither should be enabled in production outside a controlled maintenance window or on a targeted individual session.

io/thecodeforge/debug/dbms_trace_setup.sql Β· SQL
-- Prerequisites: run the trace table setup script (once per database)
-- @$ORACLE_HOME/rdbms/admin/tracetab.sql

-- Step 1: Compile the target package with debug symbols (required for line-level trace)
ALTER PACKAGE app_schema.billing_pkg COMPILE DEBUG BODY;

-- Step 2: Enable trace β€” all lines executed
BEGIN
  DBMS_TRACE.SET_PLSQL_TRACE(DBMS_TRACE.trace_all_lines);
END;
/

-- Step 3: Execute the procedure to trace
BEGIN
  app_schema.billing_pkg.process_invoice(1001);
END;
/

-- Step 4: Disable tracing
BEGIN
  DBMS_TRACE.CLEAR_PLSQL_TRACE;
END;
/

-- Step 5: Query the trace results
SELECT
  event_seq,
  event_unit_owner,
  event_unit,
  event_line,
  event_kind,
  event_time
FROM sys.plsql_trace_events
WHERE runid = (SELECT MAX(runid) FROM sys.plsql_trace_runs)
ORDER BY event_seq;
-- event_kind: 38 = line executed, 54 = exception raised, 51 = call, 52 = return

-- Step 6: Trace exceptions only (lighter weight)
BEGIN
  DBMS_TRACE.SET_PLSQL_TRACE(DBMS_TRACE.trace_all_exceptions);
END;
/

BEGIN
  app_schema.billing_pkg.process_invoice(1001);
END;
/

BEGIN
  DBMS_TRACE.CLEAR_PLSQL_TRACE;
END;
/

-- Query only exception events
SELECT
  event_seq,
  event_unit_owner || '.' || event_unit AS unit,
  event_line,
  event_kind
FROM sys.plsql_trace_events
WHERE runid = (SELECT MAX(runid) FROM sys.plsql_trace_runs)
  AND event_kind = 54  -- Exception raised
ORDER BY event_seq;

-- Step 7: DBMS_HPROF β€” Hierarchical Profiler (Oracle 11g+)
-- Prerequisites: profiler reporting tables created once via
--   @$ORACLE_HOME/rdbms/admin/dbmshptab.sql, plus a directory object
--   PLSHPROF_DIR that the profiling user can write to
-- Profile a procedure to find performance hotspots
DECLARE
  v_runid NUMBER;
BEGIN
  DBMS_HPROF.START_PROFILING('PLSHPROF_DIR', 'billing_profile.trc');
  app_schema.billing_pkg.process_invoice(1001);
  DBMS_HPROF.STOP_PROFILING;

  -- Analyze the profile data into reporting tables
  v_runid := DBMS_HPROF.ANALYZE(
    location => 'PLSHPROF_DIR',
    filename => 'billing_profile.trc'
  );

  DBMS_OUTPUT.PUT_LINE('Profile run ID: ' || v_runid);
END;
/

-- Query the profiler results
SELECT
  owner,
  module,
  function AS subprogram,
  line#,
  subtree_elapsed_time AS total_us,
  function_elapsed_time AS self_us,
  calls
FROM dbmshp_function_info
WHERE runid = (SELECT MAX(runid) FROM dbmshp_runs)
ORDER BY function_elapsed_time DESC;
-- Shows which subprograms consume the most time

-- Step 8: Recompile without debug symbols after tracing
ALTER PACKAGE app_schema.billing_pkg COMPILE BODY;

-- Step 9: Clean up old trace data
DELETE FROM sys.plsql_trace_events WHERE runid < (SELECT MAX(runid) - 5 FROM sys.plsql_trace_runs);
DELETE FROM sys.plsql_trace_runs WHERE runid < (SELECT MAX(runid) - 5 FROM sys.plsql_trace_runs);
COMMIT;
⚠ DBMS_TRACE and DBMS_HPROF Overhead Warning
πŸ“Š Production Insight
DBMS_TRACE captures every line executed without code changes β€” but the overhead is 2x to 10x and the output can be massive.
For production debugging, use trace_all_exceptions to capture only error paths with minimal overhead.
Rule: use DBMS_TRACE and DBMS_HPROF for development and controlled investigation only β€” use structured logging with checkpoints for production.
🎯 Key Takeaway
DBMS_TRACE captures execution flow automatically without code changes β€” trace_all_exceptions is the practical mode for most debugging.
DBMS_HPROF profiles execution time at the subprogram level β€” use it when the problem is performance, not correctness.
Bottom line: if you need execution flow visibility in production, use structured logging with checkpoints β€” not DBMS_TRACE, which carries too much overhead for sustained use.

Debugging Workflow: Choosing the Right Tool

Each PL/SQL debugging tool has a specific use case. Using the wrong tool wastes time β€” either because the tool cannot surface the information you need, or because the overhead is disproportionate to the problem.

DBMS_OUTPUT is appropriate for quick, interactive debugging in a development environment where you are connected via SQL*Plus or SQL Developer and can see the output immediately after execution. It is inappropriate for production, for batch processes that run without a connected client, and for any procedure that processes more than a handful of transactions.
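The conditional debug flag pattern used later in this section (app_schema.debug_pkg) can be sketched as a small package. The procedure names match the later usage, but the timestamp format and message layout here are illustrative choices, not a fixed API:

```sql
-- Sketch: DBMS_OUTPUT tracing that is a no-op unless explicitly enabled
CREATE OR REPLACE PACKAGE debug_pkg IS
  PROCEDURE enable_debug;
  PROCEDURE disable_debug;
  PROCEDURE trace(p_msg IN VARCHAR2);
END debug_pkg;
/

CREATE OR REPLACE PACKAGE BODY debug_pkg IS
  g_enabled BOOLEAN := FALSE;  -- session-scoped package state

  PROCEDURE enable_debug  IS BEGIN g_enabled := TRUE;  END;
  PROCEDURE disable_debug IS BEGIN g_enabled := FALSE; END;

  PROCEDURE trace(p_msg IN VARCHAR2) IS
  BEGIN
    -- When the flag is off, the call returns immediately: nothing is buffered,
    -- so forgotten trace calls cost almost nothing in production.
    IF g_enabled THEN
      DBMS_OUTPUT.PUT_LINE(TO_CHAR(SYSTIMESTAMP, 'HH24:MI:SS.FF3') || ' ' || p_msg);
    END IF;
  END trace;
END debug_pkg;
/
```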

The SQL Developer interactive debugger is appropriate for step-through debugging of complex logic where you need to inspect variable values at specific points in execution. It requires a direct connection, debug privileges, and debug-compiled code. It is inappropriate for debugging triggers, scheduler jobs, or procedures called by external applications that use their own sessions.

Structured logging is appropriate for production debugging, batch process monitoring, and any scenario where the debugging session must survive session termination, transaction rollback, or client disconnection. It is the only tool that works for triggers, scheduler jobs, and external application calls. The overhead is minimal β€” a single INSERT per log entry via autonomous transaction.

Exception-driven debugging with checkpoints is appropriate for input validation, state verification, and narrowing failure scope in procedures that process data through multiple stages. Checkpoints serve double duty as debugging aids and production guards. They are always captured in the error stack and require no client enablement.

DBMS_TRACE and DBMS_HPROF are appropriate for detailed execution flow analysis and performance profiling in development or controlled investigation environments. They are inappropriate for production due to overhead.

The decision tree: Can you reproduce the issue interactively? Use the SQL Developer debugger. Cannot reproduce but can run in development? Use DBMS_TRACE. Issue is in production and you cannot reproduce it? Use structured logging. Need to verify data state at specific points? Use checkpoint assertions. Quick check during development? DBMS_OUTPUT is fine.

io/thecodeforge/debug/debug_decision_tree.sql Β· SQL
-- Decision Tree: Which debugging tool to use
--
-- Q1: Is the issue reproducible interactively in SQL Developer?
--   YES -> Use SQL Developer interactive debugger (breakpoints + variable inspection)
--   NO  -> Go to Q2
--
-- Q2: Is the issue in production or triggered by a trigger/job/external app?
--   YES -> Use structured logging (logger_pkg) + checkpoint assertions
--   NO  -> Go to Q3
--
-- Q3: Do you need to understand the execution path through branches?
--   YES -> Use DBMS_TRACE (trace_all_lines in dev, trace_all_exceptions in staging)
--   NO  -> Go to Q4
--
-- Q4: Is the problem performance (slow) rather than correctness (wrong/error)?
--   YES -> Use DBMS_HPROF for subprogram-level time profiling
--   NO  -> Go to Q5
--
-- Q5: Quick check during development with a connected client?
--   YES -> DBMS_OUTPUT with conditional debug flag
--   NO  -> Use structured logging (it works everywhere)

-- Example: combining tools for a complex debugging session

-- 1. Enable structured logging at DEBUG level for the session
BEGIN
  app_schema.logger_pkg.set_level('DEBUG');
END;
/

-- 2. Enable DBMS_OUTPUT for interactive visibility
SET SERVEROUTPUT ON SIZE UNLIMITED

-- 3. Run the failing procedure
BEGIN
  app_schema.debug_pkg.enable_debug;  -- Conditional DBMS_OUTPUT
  app_schema.process_payment(1001, 500);
  app_schema.debug_pkg.disable_debug;
END;
/

-- 4. If the error is not clear, check the structured log
SELECT log_level, module, message, error_stack, created_at
FROM app_schema.app_log
WHERE module LIKE 'process_payment%'
  AND created_at > SYSTIMESTAMP - INTERVAL '5' MINUTE
ORDER BY created_at;

-- 5. If execution path is unclear, enable DBMS_TRACE
BEGIN
  DBMS_TRACE.SET_PLSQL_TRACE(DBMS_TRACE.trace_all_exceptions);
END;
/

BEGIN
  app_schema.process_payment(1001, 500);
END;
/

BEGIN
  DBMS_TRACE.CLEAR_PLSQL_TRACE;
END;
/

-- 6. Query trace results for exception events
SELECT event_seq, event_unit, event_line, event_kind
FROM sys.plsql_trace_events
WHERE runid = (SELECT MAX(runid) FROM sys.plsql_trace_runs)
  AND event_kind = 54
ORDER BY event_seq;

-- 7. Restore production log level
BEGIN
  app_schema.logger_pkg.set_level('WARN');
END;
/
Mental Model
Debugging Tool Selection Matrix
Each tool has a specific use case β€” using the wrong one wastes time or produces no useful information.
  • DBMS_OUTPUT: quick interactive checks during development β€” requires connected client, not for production
  • SQL Developer debugger: step-through complex logic with variable inspection β€” requires debug privileges and same-connection execution
  • Structured logging: production debugging, batch monitoring, trigger/job debugging β€” works everywhere, persists across sessions
  • Checkpoint assertions: input validation and state verification β€” always captured in error stack, no client dependency
  • DBMS_TRACE: execution path analysis in development β€” 2-10x overhead, not for production
  • DBMS_HPROF: performance profiling β€” identifies time hotspots at the subprogram level
πŸ“Š Production Insight
The most common debugging mistake is using the wrong tool for the context.
DBMS_OUTPUT in production wastes memory. The SQL Developer debugger on a trigger wastes time. DBMS_TRACE in production degrades performance.
Rule: match the tool to the context β€” structured logging and checkpoint assertions cover 90% of production debugging needs.
🎯 Key Takeaway
Each debugging tool has a specific context where it excels and contexts where it fails or wastes time.
Structured logging and checkpoint assertions cover 90% of production debugging needs without overhead or client dependency.
Bottom line: if you are reaching for DBMS_OUTPUT or the SQL Developer debugger for a production issue, stop and use structured logging instead β€” it works everywhere, persists across sessions, and requires no client connection.
πŸ—‚ PL/SQL Debugging Tools Comparison
When to use each tool and the trade-offs involved
  • DBMS_OUTPUT β€” Best for: quick interactive debugging during development. Overhead: low (but the PGA buffer grows silently). Production safe: no β€” memory leak risk, no client to read the buffer. Client required: yes β€” SET SERVEROUTPUT ON. Limitations: output appears only after the block completes; 1 MB default buffer; invisible without client enablement.
  • SQL Developer Debugger β€” Best for: step-through debugging with variable inspection and breakpoints. Overhead: moderate (debug symbols increase compiled object size). Production safe: no β€” requires debug privileges and debug-compiled code. Client required: yes β€” same SQL Developer connection. Limitations: cannot debug triggers, jobs, or external app sessions; breakpoints silently fail without privileges or debug compilation.
  • Structured Logging (table) β€” Best for: production debugging, batch monitoring, trigger/job debugging. Overhead: minimal (one INSERT per log entry via autonomous transaction). Production safe: yes β€” designed for production use. Client required: no β€” persists in a queryable table. Limitations: requires log table setup and cleanup; minor autonomous-transaction overhead per entry.
  • Checkpoint Assertions β€” Best for: input validation, state verification, narrowing failure scope. Overhead: none (raises only when state is invalid). Production safe: yes β€” serves as both debug aid and production guard. Client required: no β€” errors propagate through the standard error stack. Limitations: surfaces problems only at checkpoint boundaries; cannot trace line-by-line flow.
  • DBMS_TRACE β€” Best for: execution path analysis (which lines and branches executed). Overhead: high (2-10x slowdown for trace_all_lines). Production safe: no β€” development and controlled investigation only. Client required: no β€” writes to server-side trace tables. Limitations: requires debug-compiled code; large output for complex procedures; needs SYS table access.
  • DBMS_HPROF β€” Best for: performance profiling (identifying time hotspots). Overhead: low to moderate (subprogram-level sampling). Production safe: limited β€” controlled investigation only. Client required: no β€” writes to a server-side directory and profiler tables. Limitations: requires directory object setup; shows time, not logic errors; Oracle 11g+ only.

🎯 Key Takeaways

  • DBMS_OUTPUT is a client-dependent, session-scoped buffer β€” without SET SERVEROUTPUT ON, every PUT_LINE is a silent no-op that still consumes PGA memory
  • SQL Developer breakpoints require three things simultaneously: debug privileges, debug-compiled code, and the same database connection for execution β€” if any one is missing, breakpoints silently fail
  • Structured logging with autonomous transactions is the only debugging tool that works for production, triggers, scheduler jobs, and external application sessions
  • Exception-driven debugging with checkpoint assertions narrows failure scope without any client dependency and serves double duty as production input validation
  • DBMS_TRACE captures execution flow automatically but has 2-10x overhead β€” use trace_all_exceptions for lighter-weight error path analysis
  • DBMS_HPROF profiles execution time at the subprogram level β€” use it when the problem is performance, not correctness
  • The SQL History panel (F8) in SQL Developer shows the actual SQL executed by EXECUTE IMMEDIATE β€” critical for diagnosing dynamic SQL errors like ORA-00942

⚠ Common Mistakes to Avoid

    βœ• Leaving DBMS_OUTPUT.PUT_LINE calls in production code
    Symptom

    PGA memory usage per session grows steadily during batch processing. In extreme cases, ORA-04030 (out of process memory) errors appear for all sessions on the instance. The DBMS_OUTPUT buffer fills silently because no client is connected to retrieve it during automated batch execution.

    Fix

    Remove all DBMS_OUTPUT.PUT_LINE calls from production code. Add a pre-deployment CI check that greps for DBMS_OUTPUT.PUT_LINE and blocks the pipeline if found outside dedicated debug utility packages. Replace ad-hoc output with a structured logging package that writes to a table.

    βœ• Assuming DBMS_OUTPUT is enabled by default
    Symptom

    Engineer adds PUT_LINE calls, runs the procedure, sees no output, and concludes the code path is not being executed. Hours are spent investigating a "dead code" path that is actually executing correctly β€” the output is buffered but never retrieved.

    Fix

    Always enable DBMS_OUTPUT retrieval before running debug code: SET SERVEROUTPUT ON SIZE UNLIMITED in SQL*Plus, or enable the DBMS Output panel in SQL Developer. If no output appears after enablement, check that the PL/SQL block completes without error β€” output is flushed only after the block finishes.

    βœ• Setting breakpoints in SQL Developer without compiling the package with DEBUG
    Symptom

    Breakpoints appear as red dots in the IDE but execution never stops at them. The procedure runs to completion without pausing. No error message indicates why the breakpoints were skipped.

    Fix

    Compile the package body with debug symbols: ALTER PACKAGE pkg_name COMPILE DEBUG BODY. Verify debug compilation with SELECT plsql_debug FROM all_plsql_object_settings WHERE name = 'PKG_NAME'. After debugging, recompile without DEBUG: ALTER PACKAGE pkg_name COMPILE BODY.

    βœ• Using DBMS_OUTPUT for debugging triggers and scheduler jobs
    Symptom

    DBMS_OUTPUT calls inside a trigger or scheduler job produce no visible output anywhere. The engineer concludes the trigger is not firing or the job is not running. In reality, both are executing β€” but there is no connected client to retrieve the DBMS_OUTPUT buffer.

    Fix

    Triggers and scheduler jobs execute in autonomous sessions with no connected client. Use a structured logging package that writes to a table via autonomous transaction. Query the log table after the trigger fires or the job completes. For scheduler jobs, also check DBA_SCHEDULER_JOB_RUN_DETAILS for job-level error output.

    βœ• Catching exceptions in WHEN OTHERS without logging or re-raising
    Symptom

    A procedure silently swallows errors. The calling code receives no error indication. The problem is invisible until downstream data corruption or missing data is discovered hours or days later.

    Fix

    Every WHEN OTHERS handler must either log the error and re-raise it, or log the error and raise a new descriptive error. Never swallow exceptions silently. Use the structured logging package to capture SQLERRM, FORMAT_ERROR_STACK, and FORMAT_ERROR_BACKTRACE before re-raising.

    βœ• Forgetting to remove debug compilation from production packages
    Symptom

    Packages remain compiled with DEBUG symbols in production. The compiled objects are larger than necessary, and in some Oracle versions, debug-compiled code may exhibit marginally different performance characteristics due to additional metadata.

    Fix

    After every debugging session, recompile without DEBUG: ALTER PACKAGE pkg_name COMPILE BODY. Add a monitoring query to the daily health check: SELECT name, plsql_debug FROM all_plsql_object_settings WHERE plsql_debug = 'TRUE' AND owner IN (application schemas). Alert if any production package is debug-compiled.

Interview Questions on This Topic

  • Q (Junior): Why does DBMS_OUTPUT.PUT_LINE sometimes show no output, and how do you fix it?
    DBMS_OUTPUT.PUT_LINE writes to a server-side PGA buffer, not to the screen. The client must explicitly enable buffer retrieval: SET SERVEROUTPUT ON SIZE UNLIMITED in SQL*Plus, or the DBMS Output panel in SQL Developer. Without enablement, every PUT_LINE is a silent no-op β€” the buffer fills but is never read. Additionally, output is flushed only after the PL/SQL block completes β€” if the block raises an unhandled exception and the client does not retrieve partial output, nothing is displayed. The fix is always to enable output retrieval before execution and check for unhandled exceptions that terminate the block.
  • Q (Mid-level): What are the prerequisites for SQL Developer breakpoints to work in PL/SQL debugging?
    Three prerequisites must all be met simultaneously. First, the debugging user must have DEBUG CONNECT SESSION and DEBUG ANY PROCEDURE privileges granted directly (not through a role). Second, the package body must be compiled with debug symbols: ALTER PACKAGE pkg COMPILE DEBUG BODY. Third, the code must be executed in the same database connection that the debugger is attached to β€” a different SQL Developer worksheet may use a different connection and the breakpoints will not fire. If any one of these is missing, breakpoints are silently ignored with no error message.
  • Q (Mid-level): Why should structured logging use autonomous transactions, and what is the risk if it does not?
    Structured logging uses autonomous transactions (PRAGMA AUTONOMOUS_TRANSACTION) so that log entries are committed independently of the calling transaction. Without an autonomous transaction, if the calling procedure raises an error and the transaction rolls back, the log entry rolls back with it β€” you lose the diagnostic information at the exact moment you need it most. With an autonomous transaction, the INSERT into the log table is committed in its own transaction scope, so the log entry persists even when the caller's transaction rolls back. The trade-off is minor: each autonomous transaction has a small overhead for the context switch, but this is negligible compared to the value of preserved error logs.
  • Q (Senior): How do you debug a PL/SQL procedure that is called by a database trigger?
    The SQL Developer interactive debugger cannot attach to trigger execution because triggers fire in the session that caused the triggering DML, and you cannot set breakpoints in a trigger's session from the IDE. Instead, use a structured logging package that writes to a table via autonomous transaction. Insert diagnostic messages with timestamps, the triggering operation (:OLD and :NEW values), and context information inside the trigger body. After the trigger fires, query the log table to trace the execution flow. Alternatively, use exception-driven debugging with checkpoint assertions that raise descriptive errors when the trigger encounters an unexpected state β€” these errors propagate to the session that caused the trigger to fire and appear in the client's error output.
  • Q (Senior): What is the difference between DBMS_TRACE and DBMS_HPROF, and when would you use each?
    DBMS_TRACE captures execution flow β€” which lines were executed, which exceptions were raised, which calls and returns occurred. It answers the question 'what path did the code take?' and is used for correctness debugging. DBMS_HPROF (Hierarchical Profiler) captures execution time β€” how long each subprogram took, how many times it was called, and where the time hotspots are. It answers the question 'why is the code slow?' and is used for performance debugging. DBMS_TRACE with trace_all_lines has 2-10x overhead; DBMS_HPROF has lower overhead because it samples at the subprogram level. Neither is suitable for sustained production use. Use DBMS_TRACE when the problem is a wrong result or unexpected exception path. Use DBMS_HPROF when the problem is a procedure that runs correctly but too slowly.

Frequently Asked Questions

Why does DBMS_OUTPUT.PUT_LINE show no output?

The client has not enabled DBMS_OUTPUT retrieval. In SQL*Plus, run SET SERVEROUTPUT ON SIZE UNLIMITED before executing the PL/SQL block. In SQL Developer, open View > DBMS Output, click the green plus icon, and select your connection. Without this step, PUT_LINE writes to a session-level PGA buffer that is never read β€” the buffer fills silently until it overflows with ORA-20000: ORU-10027. Additionally, output is flushed only after the entire PL/SQL block completes β€” you cannot see line-by-line output during execution.
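What SET SERVEROUTPUT ON automates can be done by hand with DBMS_OUTPUT.GET_LINES. This sketch shows the retrieval step the client normally performs for you, and why skipping it leaves messages stranded in the buffer:

```sql
-- Manual buffer retrieval: PUT_LINE only buffers; GET_LINES drains the buffer
DECLARE
  v_lines DBMS_OUTPUT.CHARARR;  -- PL/SQL table of VARCHAR2 lines
  v_count INTEGER := 10;        -- IN OUT: max lines requested, actual count returned
BEGIN
  DBMS_OUTPUT.ENABLE(buffer_size => NULL);  -- NULL = unlimited buffer
  DBMS_OUTPUT.PUT_LINE('message one');
  DBMS_OUTPUT.PUT_LINE('message two');

  DBMS_OUTPUT.GET_LINES(v_lines, v_count);
  -- v_count is now 2; v_lines(1) and v_lines(2) hold the two messages.
  -- Nothing was ever "printed": without this retrieval step the messages
  -- simply sit in the session's PGA buffer until the session ends.
END;
/
```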

Can I use DBMS_OUTPUT in production?

No. DBMS_OUTPUT consumes PGA memory per session and requires a connected client to retrieve the output. In production, batch sessions and scheduler jobs run without a connected client β€” the buffer accumulates silently and can grow to gigabytes for high-volume processing. This can cause ORA-04030 (out of process memory) errors that affect all sessions on the instance. Use a structured logging package that writes to a table with configurable log levels instead.

How do I find the trace file generated by DBMS_TRACE?

DBMS_TRACE writes to the SYS.PLSQL_TRACE_EVENTS and SYS.PLSQL_TRACE_RUNS tables (after running $ORACLE_HOME/rdbms/admin/tracetab.sql to create them). Query these tables directly: SELECT event_seq, event_unit, event_line, event_kind FROM sys.plsql_trace_events WHERE runid = (SELECT MAX(runid) FROM sys.plsql_trace_runs) ORDER BY event_seq. For SQL trace files (enabled via ALTER SESSION SET SQL_TRACE = TRUE or DBMS_SESSION.SET_SQL_TRACE), the trace file location is: SELECT value FROM v$diag_info WHERE name = 'Diag Trace'. Use tkprof to format SQL trace files: tkprof <trace_file> <output_file> sys=no.

How do I debug a procedure that is called by another procedure?

Set a breakpoint in the inner procedure using SQL Developer's interactive debugger. Start debugging from the outer procedure (right-click > Debug). When the outer procedure calls the inner one, the debugger stops at the breakpoint in the inner procedure. Use Step Into (F7) to enter called procedures, Step Over (F8) to execute them without entering, and Step Out (Shift+F7) to finish the current procedure and return to the caller. Both procedures must be compiled with DEBUG symbols, and execution must happen in the same database connection the debugger is attached to.

What is the difference between trace_all_lines and trace_all_exceptions in DBMS_TRACE?

trace_all_lines records every PL/SQL line executed β€” complete execution visibility but large output and 2-10x performance overhead. trace_all_exceptions records only when exceptions are raised β€” much smaller output, significantly lower overhead, and focused exclusively on error paths. Use trace_all_lines in development when you need to understand which branches the code takes. Use trace_all_exceptions in staging or controlled investigation when you need to see which exceptions are raised without the full line-level overhead.

How do I make the structured logging package safe so it never crashes the calling procedure?

Two mechanisms are required. First, use PRAGMA AUTONOMOUS_TRANSACTION in the write_log procedure so the log INSERT is committed independently of the calling transaction β€” this also means a logging failure does not roll back the caller's work. Second, wrap the INSERT in a BEGIN/EXCEPTION block with a WHEN OTHERS handler that does ROLLBACK and returns silently. The logging package must never propagate an exception to the caller β€” a full tablespace, a constraint violation, or any other logging failure must be swallowed silently. The caller's business logic must succeed or fail on its own merits, not because the logging infrastructure had a problem.
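A minimal sketch combining both mechanisms. The table and column names follow the app_log queries shown earlier; the real logger_pkg in this tutorial has a richer signature (log level filtering, an error_stack column), so treat this as a reduced illustration:

```sql
-- Crash-proof logging: autonomous commit plus a silent failure path
CREATE OR REPLACE PROCEDURE app_schema.write_log(
  p_level   IN VARCHAR2,
  p_module  IN VARCHAR2,
  p_message IN VARCHAR2
) IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- commits independently of the caller's transaction
BEGIN
  INSERT INTO app_schema.app_log (log_level, module, message, created_at)
  VALUES (p_level, p_module, p_message, SYSTIMESTAMP);
  COMMIT;  -- an autonomous transaction must COMMIT or ROLLBACK before returning
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;  -- swallow any logging failure; the caller must never see it
END write_log;
/
```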

Naren, Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.
