ORA-01422: Exact Fetch Returned More Than Requested Number of Rows
- ORA-01422 is always a bug: unlike ORA-01403, there is no valid business state in which SELECT INTO should encounter unexpected duplicates
- When ORA-01422 appears across multiple procedures on the same table simultaneously, investigate the constraint status and duplicate count before reading any code
- The WHEN TOO_MANY_ROWS handler has one correct behavior: raise a diagnostic error with the duplicate key value. It should never return a default.
- ORA-01422 fires when SELECT INTO returns two or more rows: your WHERE clause matched more data than the one-row contract allows
- Unlike ORA-01403, ORA-01422 is always a bug: there is no business scenario where unexpected duplicates are a valid state
- The root cause is always one of three things: a dropped or disabled unique constraint, an insufficiently filtered WHERE clause, or duplicate data inserted without constraint enforcement
- Fix the data model first by adding the missing unique constraint; fix the code second by switching to BULK COLLECT if multiple rows are genuinely expected
- Never add ROWNUM = 1 without ORDER BY as a quick fix: it returns an arbitrary row and hides the underlying data problem
Need to find which values are duplicated in the failing table

SELECT employee_code, COUNT(*) AS dup_count
  FROM employees
 GROUP BY employee_code
HAVING COUNT(*) > 1
 ORDER BY COUNT(*) DESC
 FETCH FIRST 20 ROWS ONLY;

SELECT *
  FROM employees
 WHERE employee_code = 'EMP-0042'
 ORDER BY created_date DESC;

Need to check if a unique constraint is active and enforcing

SELECT constraint_name, constraint_type, status, validated, last_change
  FROM user_constraints
 WHERE table_name = 'EMPLOYEES'
   AND constraint_type IN ('P', 'U')
 ORDER BY constraint_type, constraint_name;

SELECT cols.column_name, cons.status, cons.validated
  FROM user_constraints cons
  JOIN user_cons_columns cols
    ON cols.constraint_name = cons.constraint_name
 WHERE cons.table_name = 'EMPLOYEES'
   AND cons.constraint_type = 'U'
 ORDER BY cols.position;

Need to find recent DDL against the table to understand when the constraint was disabled

SELECT owner, obj_name, action_name, timestamp
  FROM dba_audit_trail
 WHERE obj_name = 'EMPLOYEES'
   AND action_name IN ('ALTER TABLE', 'DROP CONSTRAINT')
 ORDER BY timestamp DESC
 FETCH FIRST 10 ROWS ONLY;

SELECT object_name, object_type, last_ddl_time
  FROM all_objects
 WHERE object_name = 'EMPLOYEES'
 ORDER BY last_ddl_time DESC;

Need to check all disabled unique constraints across the schema

SELECT table_name, constraint_name, status, validated
  FROM user_constraints
 WHERE constraint_type = 'U'
   AND (status != 'ENABLED' OR validated != 'VALIDATED')
 ORDER BY table_name;

SELECT table_name, COUNT(*) AS disabled_constraint_count
  FROM user_constraints
 WHERE constraint_type = 'U'
   AND (status != 'ENABLED' OR validated != 'VALIDATED')
 GROUP BY table_name
 ORDER BY COUNT(*) DESC;

Production Incident
Deduplicated with ROW_NUMBER() OVER (PARTITION BY employee_code ORDER BY created_date DESC) to keep the most recent record per code. Re-enabled the constraint with ENABLE VALIDATE, not NOVALIDATE, which forced Oracle to verify all existing rows before marking the constraint active. Structural: added a post-deployment verification step that queries USER_CONSTRAINTS to confirm status = ENABLED and validated = VALIDATED for every critical constraint after any bulk operation. Added a scheduled monitoring job that alerts within five minutes if any unique constraint on a core table transitions to a non-ENABLED or non-VALIDATED state.

Production Debug Guide
From error message to root cause resolution
ORA-01422: exact fetch returned more than requested number of rows occurs when a SELECT INTO statement returns two or more rows. The SELECT INTO construct enforces a strict one-row contract: zero rows raises ORA-01403, multiple rows raises ORA-01422. Both are runtime exceptions that crash the executing block unless handled.
This error is categorically different from ORA-01403. ORA-01403 can be a valid business state: a new user with no saved preferences, a report with no data for a date range. ORA-01422 is never valid. If your SELECT INTO returns multiple rows, your data model has a problem: a uniqueness constraint that is missing, disabled, or bypassed.
In production Oracle systems, ORA-01422 tends to appear suddenly on code that worked for months. The trigger is almost always a data model event: a constraint disabled for a bulk load and never re-enabled, a new feature that inserts data without deduplication, or a migration that bypassed normal application logic. The code did not change. The data did.
The error propagates through the PL/SQL call stack as ORA-06512 entries, just like any other exception. Engineers reading the stack top-down see ORA-06512 and miss the underlying ORA-01422. Reading bottom-up identifies the specific SELECT INTO that failed and the table where duplicates accumulated.
All examples in this guide are runnable on Oracle 19c, 21c, and 23ai.
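Bottom-up stack reading lends itself to automation in log tooling. Below is a minimal Python sketch, assuming a hypothetical stack printed with the root error first and with the frame nearest the root error pointing at the failing SELECT INTO; the package names and line numbers are invented for illustration.

```python
import re

# Hypothetical error stack text: root ORA-01422 first, then one
# ORA-06512 frame per PL/SQL call level (names/lines are made up).
STACK = """ORA-01422: exact fetch returned more than requested number of rows
ORA-06512: at "HR.EMPLOYEE_PKG", line 48
ORA-06512: at "HR.PAYROLL_JOB", line 102"""

FRAME_RE = re.compile(r'ORA-06512: at "([^"]+)", line (\d+)')

def parse_frames(stack):
    """Extract (object_name, line_number) from every ORA-06512 entry."""
    return [(name, int(line)) for name, line in FRAME_RE.findall(stack)]

frames = parse_frames(STACK)
# The frame closest to the root error names the failing SELECT INTO;
# the remaining frames are its callers.
print(frames[0])   # ('HR.EMPLOYEE_PKG', 48)
```

The extracted object name and line number feed directly into the ALL_SOURCE lookup described later in this guide.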
Why SELECT INTO Raises ORA-01422
ORA-01422 fires when a SELECT INTO statement returns two or more rows. The SQL engine enforces the one-row contract at execution time: it executes the query, counts the results, and raises ORA-01422 before returning control to PL/SQL. There is no opportunity to inspect the row count before the exception fires.
The one-row contract is absolute. If the WHERE clause matches zero rows, ORA-01403 fires. If it matches exactly one row, execution continues. If it matches two or more rows, ORA-01422 fires. The PL/SQL variable is never populated in the error cases; execution jumps directly to the exception handler.
The distinction between ORA-01403 and ORA-01422 matters for how you respond. ORA-01403 can be a valid state: a new user may have no preferences yet, a report may have no data for the selected period. You can sometimes handle it with a sensible default. ORA-01422 has no valid state equivalent. If your SELECT INTO can return multiple rows, your query does not uniquely identify the entity it is trying to fetch. That is always a bug.
Common production triggers: a bulk load that disables a unique constraint for performance and fails to re-enable it correctly; a data migration that bypasses application-layer deduplication logic; a new feature that inserts records without the deduplication the legacy code assumed; a soft-delete pattern where records are marked inactive but not excluded from queries that use the wrong WHERE clause.
-- ============================================================
-- Setup: demonstrate the ORA-01422 failure condition
-- Runnable on Oracle 19c, 21c, 23ai
-- ============================================================
CREATE TABLE demo_employees (
  employee_id   NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  employee_code VARCHAR2(20) NOT NULL,
  full_name     VARCHAR2(100) NOT NULL,
  department    VARCHAR2(50),
  created_date  DATE DEFAULT SYSDATE
);

-- Insert one clean record and one duplicate
INSERT INTO demo_employees (employee_code, full_name, department)
VALUES ('EMP-001', 'Alice Chen', 'Engineering');

INSERT INTO demo_employees (employee_code, full_name, department)
VALUES ('EMP-001', 'Alice Chen (duplicate)', 'Engineering'); -- Intentional duplicate

INSERT INTO demo_employees (employee_code, full_name, department)
VALUES ('EMP-002', 'Bob Ortiz', 'Finance'); -- Unique

COMMIT;

-- ============================================================
-- Demo 1: ORA-01422 fires on the duplicate employee_code
-- ============================================================
DECLARE
  v_name VARCHAR2(100);
BEGIN
  SELECT full_name
    INTO v_name
    FROM demo_employees
   WHERE employee_code = 'EMP-001'; -- Two rows match

  DBMS_OUTPUT.PUT_LINE('Name: ' || v_name);
EXCEPTION
  WHEN TOO_MANY_ROWS THEN
    DBMS_OUTPUT.PUT_LINE('ORA-01422: EMP-001 has duplicates');
    DBMS_OUTPUT.PUT_LINE('This is a data model bug, not a code bug');
END;
/

-- ============================================================
-- Demo 2: Same table, unique code: SELECT INTO works
-- ============================================================
DECLARE
  v_name VARCHAR2(100);
BEGIN
  SELECT full_name
    INTO v_name
    FROM demo_employees
   WHERE employee_code = 'EMP-002'; -- Exactly one row

  DBMS_OUTPUT.PUT_LINE('Name: ' || v_name);
END;
/

-- ============================================================
-- Demo 3: Find duplicates before they cause ORA-01422
-- Run this during data quality checks
-- ============================================================
SELECT employee_code,
       COUNT(*) AS row_count,
       LISTAGG(employee_id, ', ')
         WITHIN GROUP (ORDER BY employee_id) AS employee_ids
  FROM demo_employees
 GROUP BY employee_code
HAVING COUNT(*) > 1
 ORDER BY COUNT(*) DESC;

-- Expected output:
-- EMPLOYEE_CODE  ROW_COUNT  EMPLOYEE_IDS
-- EMP-001        2          1, 2

-- Cleanup
-- DROP TABLE demo_employees PURGE;
- ORA-01403 (zero rows): the data does not exist. May be valid (new user, empty report) or a bug (missing config). Handle with a default or propagate, depending on business semantics.
- ORA-01422 (multiple rows): more data exists than your uniqueness model allows. Always a bug. Never handle with a default; fix the data model.
- Neither exception populates the PL/SQL variable; it keeps whatever value it held before the statement ran.
- Both propagate through the call stack as ORA-06512 entries; read the stack bottom-up to find the failing SELECT INTO.
- The question for ORA-01403 is: is missing data valid here? The question for ORA-01422 is: why does duplicate data exist at all?
Diagnosing the Root Cause: Constraints and Duplicate Data
Diagnosing ORA-01422 follows a specific sequence. Start with the error stack, work down to the table, then check constraints, then count duplicates, then trace when the duplicates were created.
Step 1: Read the error stack bottom-up. The deepest ORA-06512 entry identifies the object name and line number of the failing SELECT INTO. Map that line number to source code using ALL_SOURCE.
Step 2: Extract the table and WHERE clause from the failing SELECT INTO. Run the SELECT manually without the INTO clause to see all matching rows for the failing input value.
Step 3: Check USER_CONSTRAINTS for the table. Look specifically at constraint_type = U (unique) and verify both columns: status must be ENABLED and validated must be VALIDATED. A constraint showing ENABLED with NOT VALIDATED is not enforcing uniqueness against existing data.
Step 4: Count the duplicates. Run a GROUP BY with HAVING COUNT(*) > 1 on the WHERE clause columns. This tells you how many values are duplicated and how many copies each has.
Step 5: Trace when the duplicates were created. Compare the created_date of the duplicate rows against the timeline of deployments, bulk loads, and migrations. The root cause is almost always a specific event.
In Oracle 23ai, SQL Domains provide an additional, reusable layer of constraint-like enforcement at the column level. If you are running 23ai, evaluate whether a SQL Domain's check constraints would have rejected the offending inserts before they reached the table-constraint layer.
-- ============================================================
-- Complete ORA-01422 diagnostic sequence
-- Run these queries in order after an ORA-01422 incident
-- ============================================================

-- Step 1: Map the ORA-06512 line number to source code
-- Replace PKG_NAME and the line range with values from your error stack
SELECT line, text
  FROM all_source
 WHERE name = 'EMPLOYEE_PKG'
   AND type = 'PACKAGE BODY'
   AND line BETWEEN 44 AND 52
 ORDER BY line;

-- Step 2: Run the failing query without INTO to see all matching rows
-- Replace the employee_code value with the input that triggered the error
SELECT employee_id, employee_code, full_name, created_date
  FROM employees
 WHERE employee_code = 'EMP-0042'
 ORDER BY created_date DESC;

-- Step 3: Check constraint status: both columns matter
SELECT c.constraint_name, c.constraint_type, c.status, c.validated,
       c.rely, c.last_change, col.column_name
  FROM user_constraints c
  JOIN user_cons_columns col
    ON col.constraint_name = c.constraint_name
 WHERE c.table_name = 'EMPLOYEES'
   AND c.constraint_type IN ('P', 'U')
 ORDER BY c.constraint_type, col.position;
-- If validated = NOT VALIDATED:
--   The constraint is marked ENABLED but is not enforcing existing data
--   Existing duplicates are possible even though status shows ENABLED

-- Step 4: Count all duplicates across the table
SELECT employee_code,
       COUNT(*) AS row_count,
       MIN(created_date) AS oldest_row,
       MAX(created_date) AS newest_row,
       LISTAGG(employee_id, ', ')
         WITHIN GROUP (ORDER BY created_date DESC) AS ids_newest_first
  FROM employees
 GROUP BY employee_code
HAVING COUNT(*) > 1
 ORDER BY COUNT(*) DESC
 FETCH FIRST 50 ROWS ONLY;

-- Step 5: Identify when duplicates were inserted
-- Group by date to identify the event that caused accumulation
SELECT TRUNC(created_date) AS insert_date,
       COUNT(*) AS rows_inserted
  FROM (
        -- Only rows that are duplicates
        SELECT e.employee_id, e.created_date
          FROM employees e
         WHERE e.employee_code IN (
                 SELECT employee_code
                   FROM employees
                  GROUP BY employee_code
                 HAVING COUNT(*) > 1
               )
       )
 GROUP BY TRUNC(created_date)
 ORDER BY insert_date DESC;
-- If a single date shows a spike in inserts:
--   That date is your event; check deployment logs and bulk load history

-- Step 6: Verify the object compilation timestamp
-- If the package was recompiled after the error, line numbers in the stack may be stale
SELECT object_name, object_type, status, last_ddl_time
  FROM all_objects
 WHERE object_name = 'EMPLOYEE_PKG'
   AND object_type = 'PACKAGE BODY';
Handling TOO_MANY_ROWS: What the Handler Should and Should Not Do
WHEN TOO_MANY_ROWS is the specific exception handler for ORA-01422 in PL/SQL. TOO_MANY_ROWS is a predefined named exception, which means you can and should catch it specifically rather than relying on WHEN OTHERS.
The handler has one correct purpose: provide diagnostic context and raise a custom error that includes the duplicate key value. It should never return a default value. Returning a default when multiple rows match hides a data model problem, and the caller has no idea that incorrect data was used.
The handler should always include the input parameter value that triggered the exception. During incident response, the on-call engineer needs to know immediately which value has duplicates. An error message that says 'too many rows' is useless. An error message that says 'ORA-01422 on employees.employee_code = EMP-0042: 3 duplicate rows found' is actionable in seconds.
Handle WHEN TOO_MANY_ROWS separately from WHEN NO_DATA_FOUND. Both are SELECT INTO failures, but they indicate different root causes and require different fixes. Merging them into WHEN OTHERS obscures the distinction.
-- ============================================================
-- WHEN TOO_MANY_ROWS: correct handler pattern
-- ============================================================
CREATE OR REPLACE FUNCTION get_employee_by_code(
  p_employee_code IN VARCHAR2
) RETURN employees%ROWTYPE IS
  v_result employees%ROWTYPE;
BEGIN
  SELECT *
    INTO v_result
    FROM employees
   WHERE employee_code = p_employee_code;

  RETURN v_result;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    -- Zero rows: employee does not exist
    -- Decide here based on business semantics
    RAISE_APPLICATION_ERROR(
      -20002,
      'Employee not found: ' || p_employee_code
    );
  WHEN TOO_MANY_ROWS THEN
    -- Multiple rows: data model bug; unique constraint missing or disabled
    -- Include the duplicate key in the message; required for immediate diagnosis
    RAISE_APPLICATION_ERROR(
      -20003,
      'ORA-01422: duplicate employee_code detected: ' || p_employee_code ||
      ' | Unique constraint on employees.employee_code is missing or disabled.' ||
      ' | Query: SELECT COUNT(*) FROM employees WHERE employee_code = ''' ||
      p_employee_code || ''' to confirm.'
    );
    -- DO NOT return a default value here
    -- The caller would receive incorrect data with no indication of the problem
END get_employee_by_code;
/

-- ============================================================
-- Deduplication pattern: remove duplicates before adding constraint
-- Keeps the most recent row per employee_code
-- ============================================================

-- Step 1: Preview what will be deleted; review before executing
SELECT employee_id, employee_code, full_name, created_date,
       ROW_NUMBER() OVER (
         PARTITION BY employee_code
         ORDER BY created_date DESC, employee_id DESC
       ) AS rn
  FROM employees
 WHERE employee_code IN (
         SELECT employee_code
           FROM employees
          GROUP BY employee_code
         HAVING COUNT(*) > 1
       )
 ORDER BY employee_code, rn;

-- Step 2: Delete all but the most recent row per duplicate code
-- rn = 1 is the keeper; rn > 1 are the duplicates
DELETE FROM employees
 WHERE employee_id IN (
         SELECT employee_id
           FROM (
                 SELECT employee_id,
                        ROW_NUMBER() OVER (
                          PARTITION BY employee_code
                          ORDER BY created_date DESC, employee_id DESC
                        ) AS rn
                   FROM employees
                )
          WHERE rn > 1
       );

-- Step 3: Verify no duplicates remain before adding the constraint
SELECT COUNT(*) AS remaining_duplicates
  FROM (
        SELECT employee_code
          FROM employees
         GROUP BY employee_code
        HAVING COUNT(*) > 1
       );
-- Must return 0 before proceeding

-- Step 4: Add and validate the unique constraint
ALTER TABLE employees
  ADD CONSTRAINT uk_employees_employee_code
  UNIQUE (employee_code)
  ENABLE VALIDATE;
-- VALIDATE explicitly; never NOVALIDATE for new constraints

-- Step 5: Confirm constraint is active and enforcing
SELECT constraint_name, status, validated
  FROM user_constraints
 WHERE constraint_name = 'UK_EMPLOYEES_EMPLOYEE_CODE';
-- Expected: status = ENABLED, validated = VALIDATED

COMMIT;
BULK COLLECT: The Right Pattern When Multiple Rows Are Genuinely Expected
Not every query that returns multiple rows is a bug. Some business operations genuinely retrieve a set of rows β all active price tiers for a product, all addresses for a customer, all approval history for a document. For these cases, SELECT INTO is the wrong tool. BULK COLLECT is the right one.
BULK COLLECT returns a collection β an array-like type that holds zero, one, or many rows. No exception is raised regardless of how many rows the query returns. The caller inspects the collection count and applies business logic to decide what to do.
The critical distinction is intent. If you expect exactly one row and the business rule should guarantee it, use SELECT INTO and add the unique constraint. If you expect a variable number of rows and your code processes them all, use BULK COLLECT. The choice reflects the data model design, not a workaround for ORA-01422.
For cases where you want the single most recent or highest-priority row from a potentially multi-row result set, use BULK COLLECT with ORDER BY. Fetch all matching rows ordered by the priority criterion and take the first element. This is deterministic: ORDER BY guarantees which row comes first. It is the correct replacement for SELECT INTO with ROWNUM = 1, which is non-deterministic without ORDER BY.
In Oracle 12c and later, the SQL row-limiting clause (FETCH FIRST n ROWS ONLY) provides an alternative for simple cases. For complex selection logic, BULK COLLECT gives more control.
-- ============================================================
-- BULK COLLECT patterns for multi-row queries
-- Runnable on Oracle 19c, 21c, 23ai
-- ============================================================

-- ============================================================
-- Pattern 1: REF CURSOR for a genuinely multi-row result
-- Use when the caller needs all matching rows
-- ============================================================
CREATE OR REPLACE FUNCTION get_active_addresses(
  p_customer_id IN NUMBER
) RETURN SYS_REFCURSOR IS
  v_cursor SYS_REFCURSOR;
BEGIN
  OPEN v_cursor FOR
    SELECT address_id, address_line1, city, postal_code, address_type
      FROM customer_addresses
     WHERE customer_id = p_customer_id
       AND active_flag = 'Y'
     ORDER BY address_type, created_date DESC;

  RETURN v_cursor;
  -- REF CURSOR: caller iterates all rows
  -- No ORA-01422 possible
END get_active_addresses;
/

-- ============================================================
-- Pattern 2: BULK COLLECT with ORDER BY: deterministic single row
-- Use when you want the most recent or highest-priority match
-- Replaces the SELECT INTO + ROWNUM = 1 anti-pattern
-- ============================================================
CREATE OR REPLACE FUNCTION get_current_price(
  p_product_id IN NUMBER,
  p_price_date IN DATE DEFAULT SYSDATE
) RETURN NUMBER IS
  TYPE price_tab IS TABLE OF NUMBER;
  v_prices price_tab;
BEGIN
  SELECT price
    BULK COLLECT INTO v_prices
    FROM product_prices
   WHERE product_id = p_product_id
     AND effective_date <= p_price_date
     AND (expiry_date IS NULL OR expiry_date > p_price_date)
   ORDER BY effective_date DESC; -- Most recent first
  -- No FETCH FIRST needed; we take element 1 below

  IF v_prices.COUNT = 0 THEN
    -- No price configured: this is a data error, not a default case
    RAISE_APPLICATION_ERROR(
      -20010,
      'No price configured for product_id = ' || p_product_id ||
      ' as of ' || TO_CHAR(p_price_date, 'YYYY-MM-DD')
    );
  END IF;

  -- v_prices(1) is the most recent effective price: deterministic
  RETURN v_prices(1);
END get_current_price;
/

-- ============================================================
-- Pattern 3: Oracle 12c+ FETCH FIRST for simple deterministic lookups
-- Cleaner syntax for single-column fetches
-- ============================================================
DECLARE
  v_price NUMBER;
BEGIN
  SELECT price
    INTO v_price
    FROM product_prices
   WHERE product_id = 42
     AND effective_date <= SYSDATE
     AND (expiry_date IS NULL OR expiry_date > SYSDATE)
   ORDER BY effective_date DESC
   FETCH FIRST 1 ROWS ONLY;
  -- FETCH FIRST 1 ROWS ONLY: returns at most one row
  -- Zero rows still raises ORA-01403; handle WHEN NO_DATA_FOUND separately

  DBMS_OUTPUT.PUT_LINE('Current price: ' || v_price);
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    DBMS_OUTPUT.PUT_LINE('No price configured for this date');
END;
/

-- ============================================================
-- Pattern 4: NVL + aggregate for scalar results
-- Never raises ORA-01403 or ORA-01422
-- Use when you need one computed value from many rows
-- ============================================================
CREATE OR REPLACE FUNCTION get_customer_total_balance(
  p_customer_id IN NUMBER
) RETURN NUMBER IS
  v_total NUMBER;
BEGIN
  SELECT NVL(SUM(balance), 0)
    INTO v_total
    FROM accounts
   WHERE customer_id = p_customer_id
     AND status = 'ACTIVE';
  -- SUM on zero rows returns NULL; NVL converts to 0
  -- SUM on multiple rows returns the aggregate: exactly one row always

  RETURN v_total;
END get_customer_total_balance;
/
- SELECT INTO: I expect exactly one row and the data model should guarantee it. If it does not, I want an exception.
- BULK COLLECT: I expect zero or more rows and my code will process however many come back.
- FETCH FIRST 1 ROWS ONLY (Oracle 12c+): I want at most one row and I will handle zero rows separately. ORDER BY determines which row I get.
- NVL + aggregate: I want one computed value from many rows. I never want an exception from this query.
- Do not choose BULK COLLECT just to silence ORA-01422 when the business rule requires uniqueness; fix the uniqueness rule instead.
The ROWNUM = 1 Anti-Pattern: Why It Is Worse Than the Error It Fixes
When ORA-01422 appears, the fastest apparent fix is adding AND ROWNUM = 1 to the WHERE clause. The error disappears. Tests pass. The on-call alert clears. Three weeks later, a financial reconciliation finds that payments were processed against the wrong account balance.
ROWNUM = 1 without ORDER BY is non-deterministic. Oracle assigns ROWNUM as rows are retrieved, before any ordering in the same query block, so the predicate returns whichever row the execution plan retrieves first. In a development database with sequential inserts, that is usually the oldest row. In production, with concurrent DML, buffer cache effects, and parallel query, it can be any row.
The behavior is consistent enough to pass testing and inconsistent enough to cause production failures. This is the worst possible property for a code change.
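The determinism argument is language-independent and can be sketched outside the database. The following is a minimal Python illustration, with the two-row setup mirroring the EMP-003 data used in this section; the shuffle is a stand-in for execution-plan variation, not a model of Oracle internals.

```python
import random

# Two rows sharing one business key; the ISO date strings sort
# lexically in chronological order.
rows = [
    {"name": "Carol Davis",               "created": "2024-01-15"},
    {"name": "Carol Davis (transferred)", "created": "2025-06-01"},
]

def first_row_no_order(rs):
    # Models ROWNUM = 1 without ORDER BY: "first" is whatever order
    # happens to be produced; the shuffle stands in for plan changes.
    rs = list(rs)
    random.shuffle(rs)
    return rs[0]

def most_recent(rs):
    # Models ORDER BY created_date DESC FETCH FIRST 1 ROWS ONLY
    return max(rs, key=lambda r: r["created"])

# Deterministic regardless of input order:
assert most_recent(rows)["name"] == "Carol Davis (transferred)"
assert most_recent(reversed(rows))["name"] == "Carol Davis (transferred)"

# Non-deterministic: over many "executions" both rows show up as "first"
seen = {first_row_no_order(rows)["name"] for _ in range(200)}
print(sorted(seen))
```

Over 200 trials both names appear in `seen` with overwhelming probability, which is exactly the property that lets the anti-pattern pass a handful of tests and still misbehave in production.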
The intent of ROWNUM = 1 is almost always to get the most recent record or the canonical record. Expressing that intent correctly requires ORDER BY. FETCH FIRST 1 ROWS ONLY with ORDER BY is the Oracle 12c+ syntax that states it accurately. ROWNUM = 1 applied to an ordered subquery also works but is less readable.
The underlying data model problem, the duplicates and the missing constraint, is untouched by ROWNUM = 1. The duplicates continue to accumulate. Other queries against the same table that are not protected by ROWNUM = 1 continue to fail. The constraint monitoring will still fire. The fix created a local workaround while leaving the system-level problem intact.
-- ============================================================
-- The ROWNUM = 1 anti-pattern: non-deterministic results
-- ============================================================

-- Setup: two rows for the same employee_code
INSERT INTO demo_employees (employee_code, full_name, department, created_date)
VALUES ('EMP-003', 'Carol Davis', 'Operations', DATE '2024-01-15');

INSERT INTO demo_employees (employee_code, full_name, department, created_date)
VALUES ('EMP-003', 'Carol Davis (transferred)', 'Finance', DATE '2025-06-01');

COMMIT;

-- ============================================================
-- Anti-pattern: ROWNUM = 1 without ORDER BY
-- Which row comes back? Depends on the execution plan.
-- The answer changes with statistics updates, parallel query,
-- buffer cache state, and insert order.
-- ============================================================
DECLARE
  v_name VARCHAR2(100);
BEGIN
  SELECT full_name
    INTO v_name
    FROM demo_employees
   WHERE employee_code = 'EMP-003'
     AND ROWNUM = 1;
  -- Non-deterministic: returns whichever row the plan finds first

  DBMS_OUTPUT.PUT_LINE('Got: ' || v_name);
  -- In dev: probably 'Carol Davis' (older, inserted first)
  -- In prod: could be either, depending on concurrent activity
END;
/

-- ============================================================
-- Correct alternative 1: FETCH FIRST with ORDER BY (Oracle 12c+)
-- Deterministic: always returns the most recent record
-- ============================================================
DECLARE
  v_name VARCHAR2(100);
BEGIN
  SELECT full_name
    INTO v_name
    FROM demo_employees
   WHERE employee_code = 'EMP-003'
   ORDER BY created_date DESC
   FETCH FIRST 1 ROWS ONLY;
  -- Deterministic: most recent row

  DBMS_OUTPUT.PUT_LINE('Most recent: ' || v_name);
  -- Always returns 'Carol Davis (transferred)': the June 2025 record
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    DBMS_OUTPUT.PUT_LINE('Employee not found');
    -- FETCH FIRST 1 ROWS ONLY still raises ORA-01403 on zero rows
    -- Handle WHEN NO_DATA_FOUND as you would for any SELECT INTO
END;
/

-- ============================================================
-- Correct alternative 2: ROWNUM applied to an ordered subquery
-- Use when FETCH FIRST is not available (pre-12c)
-- ============================================================
DECLARE
  v_name VARCHAR2(100);
BEGIN
  SELECT full_name
    INTO v_name
    FROM (
          SELECT full_name
            FROM demo_employees
           WHERE employee_code = 'EMP-003'
           ORDER BY created_date DESC -- ORDER BY inside the subquery
         )
   WHERE ROWNUM = 1; -- Applied after ordering: deterministic

  DBMS_OUTPUT.PUT_LINE('Most recent: ' || v_name);
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    DBMS_OUTPUT.PUT_LINE('Employee not found');
END;
/

-- ============================================================
-- The only real fix: add the unique constraint
-- ROWNUM = 1 is a workaround. The constraint is the fix.
-- ============================================================

-- First: remove the duplicate we inserted
DELETE FROM demo_employees
 WHERE employee_id IN (
         SELECT employee_id
           FROM (
                 SELECT employee_id,
                        ROW_NUMBER() OVER (
                          PARTITION BY employee_code
                          ORDER BY created_date DESC, employee_id DESC
                        ) AS rn
                   FROM demo_employees
                  WHERE employee_code = 'EMP-003'
                )
          WHERE rn > 1
       );
COMMIT;

-- Then: add the constraint so the duplicate cannot return
ALTER TABLE demo_employees
  ADD CONSTRAINT uk_demo_employees_code
  UNIQUE (employee_code)
  ENABLE VALIDATE;
Unique Constraints: The Permanent Prevention Layer
The permanent fix for ORA-01422 is a unique constraint on the columns that the SELECT INTO queries. A constraint that says one row per employee_code enforced at the database level means the application never has to check, never has to handle, and never has to recover from duplicates in that column.
Unique constraints serve two purposes. Proactively, they prevent duplicate inserts: any INSERT or UPDATE that would create a duplicate raises ORA-00001 immediately. Reactively, they guarantee that SELECT INTO will never raise ORA-01422: the uniqueness is structural, not dependent on application code quality.
Adding a unique constraint to a table with existing duplicates fails. Oracle validates all existing rows when creating a unique constraint with ENABLE VALIDATE (the default and correct behavior). Deduplication must happen first.
After deduplication and constraint creation, monitor the constraint status in production. Check both status and validated. A constraint that transitions to ENABLED / NOT VALIDATED after a bulk operation provides false security: new inserts are prevented, but duplicates left behind by the bulk operation go undetected.
Oracle 23ai introduced SQL Domains: schema-level objects that define reusable constraints and properties for columns. If you are running 23ai, a SQL Domain carrying NOT NULL and format-check constraints can be applied consistently across all columns that share the same data semantics; uniqueness itself still requires a table-level unique constraint.
-- ============================================================
-- Complete unique constraint management workflow
-- ============================================================

-- Step 1: Audit existing duplicates
-- Always run this before attempting to add a unique constraint
SELECT employee_code,
       COUNT(*) AS duplicate_count,
       LISTAGG(employee_id, ', ')
         WITHIN GROUP (ORDER BY created_date DESC) AS ids_newest_first,
       MAX(created_date) AS most_recent_insert
  FROM employees
 GROUP BY employee_code
HAVING COUNT(*) > 1
 ORDER BY COUNT(*) DESC;
-- If this returns any rows, proceed to Step 2
-- If no rows, jump to Step 3

-- Step 2: Deduplicate: keep the most recent row per code
-- Preview first: verify the rows being deleted before executing
SELECT employee_id, employee_code, full_name, created_date,
       ROW_NUMBER() OVER (
         PARTITION BY employee_code
         ORDER BY created_date DESC, employee_id DESC
       ) AS rn -- rn = 1 is the keeper; rn > 1 are duplicates
  FROM employees
 WHERE employee_code IN (
         SELECT employee_code
           FROM employees
          GROUP BY employee_code
         HAVING COUNT(*) > 1
       )
 ORDER BY employee_code, rn;

-- Execute deduplication:
DELETE FROM employees
 WHERE employee_id IN (
         SELECT employee_id
           FROM (
                 SELECT employee_id,
                        ROW_NUMBER() OVER (
                          PARTITION BY employee_code
                          ORDER BY created_date DESC, employee_id DESC
                        ) AS rn
                   FROM employees
                )
          WHERE rn > 1
       );

-- Verify: should return 0 before adding the constraint
SELECT COUNT(*) AS remaining_duplicates
  FROM (SELECT employee_code
          FROM employees
         GROUP BY employee_code
        HAVING COUNT(*) > 1);

COMMIT;

-- Step 3: Add the unique constraint with ENABLE VALIDATE
-- ENABLE VALIDATE (default): verifies all existing rows
-- ENABLE NOVALIDATE: skips existing rows; never use for new constraints
ALTER TABLE employees
  ADD CONSTRAINT uk_employees_employee_code
  UNIQUE (employee_code)
  ENABLE VALIDATE; -- Explicit for clarity; do not rely on the default

-- Step 4: Verify constraint is ENABLED and VALIDATED
SELECT constraint_name, status, validated, last_change
  FROM user_constraints
 WHERE constraint_name = 'UK_EMPLOYEES_EMPLOYEE_CODE';
-- Required output: status = ENABLED, validated = VALIDATED

-- Step 5: Re-enable a disabled constraint after a bulk load
-- (The correct way: not ENABLE NOVALIDATE)
-- If duplicates were introduced during the bulk load:
DELETE FROM employees -- Deduplicate first (Step 2 above)
 WHERE employee_id IN (...);
COMMIT;

-- Then re-enable with full validation:
ALTER TABLE employees
  ENABLE VALIDATE CONSTRAINT uk_employees_employee_code;

-- Step 6: Monitor constraint status: run daily via scheduled job
-- Alert if any unique constraint is not ENABLED + VALIDATED
SELECT table_name, constraint_name, status, validated, last_change
  FROM user_constraints
 WHERE constraint_type = 'U'
   AND (status != 'ENABLED' OR validated != 'VALIDATED')
 ORDER BY table_name, constraint_name;
-- Zero rows expected: any result is an alert condition

-- ============================================================
-- Oracle 23ai: SQL Domains for reusable column constraints
-- Provides an additional enforcement layer above the table constraint
-- ============================================================

-- Create a domain that enforces format semantics at the type level
-- (Oracle 23ai feature: requires 23ai or later)
CREATE DOMAIN employee_code_domain AS VARCHAR2(20)
  NOT NULL
  CONSTRAINT chk_employee_code_format
    CHECK (REGEXP_LIKE(VALUE, '^EMP-[0-9]{3,}$'));

-- Apply the domain to the column
ALTER TABLE employees
  MODIFY employee_code employee_code_domain;
-- The CHECK constraint from the domain is now enforced at insert time
-- The unique constraint (Step 3 above) is still required for uniqueness
-- Domains add format validation; constraints add uniqueness enforcement
- Every SELECT INTO that queries by a business key (not just primary key) must have a unique constraint on that key
- Deduplicate before adding the constraint – ALTER TABLE ADD CONSTRAINT ENABLE VALIDATE fails if duplicates exist
- Always use ENABLE VALIDATE when adding or re-enabling a constraint – ENABLE NOVALIDATE gives a false sense of security
- Monitor both status and validated daily – a constraint with status = ENABLED and validated = NOT VALIDATED is not protecting existing data
Monitoring: Catching Constraint Violations Before They Cause ORA-01422
Prevention operates at three levels: constraint enforcement stops new duplicates at insert time, duplicate detection scans catch duplicates that bypassed enforcement, and error log analysis identifies patterns across procedures and tables.
Constraint monitoring queries USER_CONSTRAINTS every five minutes and alerts when any unique constraint on a critical table is not ENABLED and VALIDATED. This is the first detection layer – it catches the constraint-disable event before duplicates accumulate.
Duplicate detection runs daily as a data quality check. It queries critical tables for existing duplicates and raises an alert if any are found. This catches the edge cases that bypass constraint enforcement: direct database access by DBAs, external system integrations that use service accounts with elevated privileges, and bulk load workflows where the constraint was disabled for performance and never fully re-validated.
Error log analysis queries the application error log for ORA-01422 occurrences, grouped by module and table. This identifies patterns – if the same table appears in ORA-01422 errors across different procedures, the constraint is insufficient or missing. If ORA-01422 errors suddenly spike after a deployment, the deployment introduced a data change that created duplicates.
-- ============================================================
-- Layer 1: Constraint status monitoring
-- Schedule every 5 minutes via DBMS_SCHEDULER
-- ============================================================
CREATE OR REPLACE PROCEDURE monitor_constraint_status IS
    v_disabled_count NUMBER;
    v_detail         VARCHAR2(4000);
BEGIN
    -- Count disabled or unvalidated unique constraints
    SELECT COUNT(*)
    INTO v_disabled_count
    FROM user_constraints
    WHERE constraint_type = 'U'
      AND (status != 'ENABLED' OR validated != 'VALIDATED');

    IF v_disabled_count > 0 THEN
        -- Build detail list for the alert
        SELECT LISTAGG(
                   table_name || '.' || constraint_name ||
                   ' [' || status || '/' || validated || ']', ', '
               ) WITHIN GROUP (ORDER BY table_name)
        INTO v_detail
        FROM user_constraints
        WHERE constraint_type = 'U'
          AND (status != 'ENABLED' OR validated != 'VALIDATED');

        -- Log and raise – will trigger your alerting integration
        RAISE_APPLICATION_ERROR(
            -20050,
            v_disabled_count || ' unique constraint(s) are not fully active: ' || v_detail
        );
    END IF;
END monitor_constraint_status;
/

-- Schedule via DBMS_SCHEDULER
BEGIN
    DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'JOB_MONITOR_CONSTRAINTS',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'MONITOR_CONSTRAINT_STATUS',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=MINUTELY; INTERVAL=5',
        enabled         => TRUE,
        comments        => 'Alerts when unique constraints are disabled or unvalidated'
    );
END;
/

-- ============================================================
-- Layer 2: Daily duplicate detection
-- Catches duplicates that bypassed constraint enforcement
-- Schedule once daily via DBMS_SCHEDULER
-- ============================================================
CREATE OR REPLACE PROCEDURE detect_critical_duplicates IS
    v_sql       VARCHAR2(4000);
    v_dup_count NUMBER;

    -- Define critical tables and columns to check
    -- Update this list when new business-key columns are added
    TYPE check_rec IS RECORD (
        tbl_name VARCHAR2(128),
        col_name VARCHAR2(128)
    );
    TYPE check_tab IS TABLE OF check_rec;
    v_checks check_tab;
BEGIN
    v_checks := check_tab(
        check_rec('EMPLOYEES', 'EMPLOYEE_CODE'),
        check_rec('ACCOUNTS',  'ACCOUNT_NUMBER'),
        check_rec('PRODUCTS',  'PRODUCT_SKU'),
        check_rec('CUSTOMERS', 'EMAIL'),
        check_rec('ORDERS',    'ORDER_REFERENCE')
    );

    FOR i IN 1 .. v_checks.COUNT LOOP
        -- Validate names against the data dictionary before concatenation
        v_sql := 'SELECT COUNT(*) FROM (' ||
                 ' SELECT '   || DBMS_ASSERT.SIMPLE_SQL_NAME(v_checks(i).col_name) ||
                 ' FROM '     || DBMS_ASSERT.SQL_OBJECT_NAME(v_checks(i).tbl_name) ||
                 ' GROUP BY ' || DBMS_ASSERT.SIMPLE_SQL_NAME(v_checks(i).col_name) ||
                 ' HAVING COUNT(*) > 1' ||
                 ')';

        EXECUTE IMMEDIATE v_sql INTO v_dup_count;

        IF v_dup_count > 0 THEN
            RAISE_APPLICATION_ERROR(
                -20051,
                'Duplicate values detected: ' || v_dup_count ||
                ' duplicate value(s) in ' ||
                v_checks(i).tbl_name || '.' || v_checks(i).col_name ||
                ' | These will cause ORA-01422 on SELECT INTO queries against this column.'
            );
        END IF;
    END LOOP;

    DBMS_OUTPUT.PUT_LINE('Duplicate detection: all checks passed');
END detect_critical_duplicates;
/

-- ============================================================
-- Layer 3: Error log analysis
-- Run weekly to identify recurring ORA-01422 patterns
-- ============================================================

-- Recurring ORA-01422 by module – identifies affected procedures
SELECT module,
       COUNT(*) AS error_count,
       MIN(created_at) AS first_occurrence,
       MAX(created_at) AS last_occurrence,
       ROUND(
           COUNT(*) / NULLIF(
               EXTRACT(DAY FROM (MAX(created_at) - MIN(created_at))), 0
           ), 1
       ) AS errors_per_day
FROM app_error_log
WHERE error_stack LIKE '%ORA-01422%'
  AND created_at > SYSTIMESTAMP - INTERVAL '30' DAY
GROUP BY module
ORDER BY COUNT(*) DESC;

-- ORA-01422 spike detection – correlates with deployment events
SELECT TRUNC(created_at, 'HH') AS hour_bucket,
       COUNT(*) AS error_count
FROM app_error_log
WHERE error_stack LIKE '%ORA-01422%'
  AND created_at > SYSTIMESTAMP - INTERVAL '7' DAY
GROUP BY TRUNC(created_at, 'HH')
ORDER BY hour_bucket DESC;
- Layer 1 – Constraint monitoring every 5 minutes: catches the moment a constraint is disabled. Three days of duplicate accumulation becomes five minutes.
- Layer 2 – Daily duplicate detection: catches duplicates from direct database access, data imports, and external systems that bypass application constraints.
- Layer 3 – Weekly error log analysis: reveals patterns. Multiple procedures failing on the same table means the uniqueness model is incomplete.
- All three layers cost a few scheduled queries. Runtime ORA-01422 failures cost hours of incident response and potentially weeks of data correction.
ORA-01422 Through Application Drivers
ORA-01422 appears through JDBC, python-oracledb, and node-oracledb with the same error code (1422) preserved in the driver exception. The reading approach is identical regardless of the language layer: check the numeric error code, read the full message for context, and apply the same bottom-up stack analysis.
Application-level code should distinguish ORA-01422 from other Oracle errors because it indicates a data model problem – not a transient failure that should be retried, and not a user input validation error that maps to HTTP 400. In a REST API context, ORA-01422 is typically an HTTP 500 – it represents a server-side data integrity issue that the client cannot resolve. Log it with full context and alert the team.
For microservices that call Oracle through a data access layer, make sure the Oracle error code is preserved through every abstraction layer. Custom exception types that wrap Oracle errors should carry the original ORA code as a field, not just the message string.
============================================================
JAVA / JDBC – ORA-01422 handling
============================================================

try {
    String name = employeeRepo.getByCode("EMP-0042");
} catch (SQLException e) {
    switch (e.getErrorCode()) {
        case 1422:
            // ORA-01422: TOO_MANY_ROWS
            // Data model bug – unique constraint missing or disabled
            // This is a 500, not a 400 or 404
            logger.error("Data integrity violation: ORA-01422", Map.of(
                "oraCode", e.getErrorCode(),
                "message", e.getMessage(),  // Includes employee_code if the handler adds it
                "procedure", "getByCode"
            ));
            // Alert the on-call team
            alertingService.critical("ORA-01422: duplicate data in employees table");
            throw new DataIntegrityException("Duplicate records found", e);
        case 1403:
            // ORA-01403: NO_DATA_FOUND – separate case, different handling
            throw new NotFoundException("Employee not found", e);
        default:
            logger.error("Unexpected Oracle error",
                Map.of("oraCode", e.getErrorCode(), "message", e.getMessage()));
            throw e;
    }
}

============================================================
PYTHON / python-oracledb – ORA-01422 handling
============================================================

import oracledb

try:
    employee = employee_repo.get_by_code('EMP-0042')
except oracledb.DatabaseError as e:
    error = e.args[0]
    if error.code == 1422:
        # ORA-01422: TOO_MANY_ROWS
        logger.error('Data integrity violation: ORA-01422', extra={
            'ora_code': error.code,
            'message': error.message,
            'procedure': 'get_by_code'
        })
        alert_team('critical', 'ORA-01422: duplicates in employees')
        raise DataIntegrityError('Duplicate records found') from e
    elif error.code == 1403:
        raise RecordNotFoundError('Employee not found') from e
    else:
        raise

============================================================
NODE.JS / node-oracledb – ORA-01422 handling
============================================================

const oracledb = require('oracledb');

try {
    const employee = await employeeRepo.getByCode('EMP-0042');
} catch (err) {
    if (err.errorNum === 1422) {
        // ORA-01422: TOO_MANY_ROWS
        console.error(JSON.stringify({
            level: 'critical',
            oraCode: err.errorNum,
            message: err.message,
            procedure: 'getByCode'
        }));
        await alertingService.critical('ORA-01422: duplicate employee records');
        throw new DataIntegrityError('Duplicate records found', { cause: err });
    } else if (err.errorNum === 1403) {
        throw new NotFoundError('Employee not found', { cause: err });
    } else {
        throw err;
    }
}

============================================================
Key points across all drivers:
============================================================

1. Error code 1422 = ORA-01422 (TOO_MANY_ROWS)
2. Error code 1403 = ORA-01403 (NO_DATA_FOUND)
3. Handle each specifically – they require different responses
4. ORA-01422 is always HTTP 500 – it is a server-side data integrity failure
5. ORA-01403 may be HTTP 404 or HTTP 500 depending on business semantics
6. Preserve the Oracle error code through every abstraction layer
7. Log the full Oracle error message – it contains the diagnostic context
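Key point 6 above deserves a concrete sketch: a custom exception that carries the numeric ORA code as a field, plus a policy function mapping codes to the HTTP statuses described in this section. This is a minimal illustration with hypothetical names (DataIntegrityError, classify_oracle_error), not part of any driver API.

```python
class DataIntegrityError(Exception):
    """Wraps a driver exception while preserving the numeric ORA code."""
    def __init__(self, message, ora_code, original=None):
        super().__init__(message)
        self.ora_code = ora_code    # e.g. 1422 -> ORA-01422
        self.original = original    # keep the driver exception for logging

def classify_oracle_error(ora_code):
    """Map an ORA code to (http_status, alerting policy) per the rules above."""
    if ora_code == 1422:
        # TOO_MANY_ROWS: server-side data integrity bug -> 500 and page someone
        return (500, "alert")
    if ora_code == 1403:
        # NO_DATA_FOUND: may be a legitimate absence -> 404, no alert
        return (404, "no_alert")
    # Anything else: surface as a server error, log for triage
    return (500, "log_only")

print(classify_oracle_error(1422))  # (500, 'alert')
```

Because every wrapper carries `ora_code`, outer layers can dispatch on the number instead of parsing message strings, which survives refactors of the message text.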
| Approach | Eliminates ORA-01422 | Fixes Root Cause | When to Use | Risk |
|---|---|---|---|---|
| Add unique constraint | Yes – prevents duplicates from existing | Yes – enforces uniqueness at the data level | When the business rule requires one row per key value; the correct fix in most cases | Requires deduplication first; bulk load workflows must re-enable with VALIDATE |
| Refine WHERE clause | Yes – query returns one row | Partial – stops this query from failing but does not prevent other queries from hitting duplicates | When the WHERE clause is genuinely missing a required filter condition | Other queries on the same table may still fail; does not remove existing duplicates |
| BULK COLLECT | Yes – no one-row contract | No – duplicates remain; caller must handle them | When multiple rows are a valid result and the caller applies business logic to select the right one | Caller must implement correct selection logic; duplicates continue to exist |
| FETCH FIRST 1 ROWS ONLY with ORDER BY | Yes – limits result to one row | No – duplicates remain in the table | When you need the most recent or highest-priority row and cannot add a unique constraint immediately | Must include ORDER BY or the result is non-deterministic; duplicates accumulate |
| ROWNUM = 1 without ORDER BY | Yes – limits result to one row | No – duplicates remain; result is arbitrary | Acceptable only for boolean EXISTS checks where the specific row returned does not matter | Non-deterministic result; hides the data model problem; fails silently in production |
| WHEN TOO_MANY_ROWS returning a default | No – the exception is caught but the problem persists | No – hides the problem entirely | Never appropriate | Application runs with arbitrarily selected or incorrect data indefinitely |
🎯 Key Takeaways
- ORA-01422 is always a bug – unlike ORA-01403, there is no valid business state where SELECT INTO should encounter unexpected duplicates
- When ORA-01422 appears across multiple procedures on the same table simultaneously, investigate the constraint status and duplicate count before reading any code
- The WHEN TOO_MANY_ROWS handler has one correct behavior: raise a diagnostic error with the duplicate key value. It should never return a default.
- ROWNUM = 1 without ORDER BY is non-deterministic – it returns an arbitrary row in production and appears to work consistently in testing, which is the worst possible property
- Check both status and validated in USER_CONSTRAINTS – a constraint with status = ENABLED and validated = NOT VALIDATED does not protect existing data
- Three-layer prevention: unique constraints block new duplicates, daily duplicate detection catches existing ones, constraint monitoring every five minutes catches the disable event before hours of accumulation
❌ Common Mistakes to Avoid
Interview Questions on This Topic
- Q: What causes ORA-01422 and how is it different from ORA-01403? (Junior)
- Q: How do you diagnose the root cause of a sudden ORA-01422 on code that worked for months? (Mid-level)
- Q: Why is ROWNUM = 1 a dangerous fix for ORA-01422 and what should you use instead? (Mid-level)
- Q: What is the difference between ENABLE VALIDATE and ENABLE NOVALIDATE when re-enabling a unique constraint? (Senior)
- Q: How would you design a monitoring strategy to prevent ORA-01422 in a production Oracle environment? (Senior)
Frequently Asked Questions
What is the difference between ORA-01422 and ORA-01403?
Both are SELECT INTO failures. ORA-01403 (no data found) fires when zero rows match the WHERE clause. ORA-01422 (too many rows) fires when two or more rows match. ORA-01403 can sometimes be a valid state – handle it with a default when the absence of data is expected. ORA-01422 is always a bug – it means duplicates exist where the data model requires uniqueness. Never handle ORA-01422 by returning a default.
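The asymmetry between the two errors can be expressed as a one-row lookup contract. A minimal, language-agnostic sketch in Python (`lookup_one` is a hypothetical helper, not part of any Oracle driver):

```python
def lookup_one(rows, default=None):
    """One-row lookup contract over an already-fetched result set."""
    # Zero rows: the ORA-01403 analogue - a default can be a valid response
    if len(rows) == 0:
        return default
    # Two or more rows: the ORA-01422 analogue - always raise, never default
    if len(rows) > 1:
        raise ValueError(f"expected at most 1 row, got {len(rows)}")
    return rows[0]

print(lookup_one([], default="UNKNOWN"))  # UNKNOWN
print(lookup_one(["EMP-0042"]))           # EMP-0042
```

Note that only the zero-row branch accepts a fallback; the many-row branch has no default parameter at all, which mirrors the rule above.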
How do I fix ORA-01422 permanently?
The permanent fix is a unique constraint on the column(s) used in the failing SELECT INTO WHERE clause. Deduplicate the existing data first – use ROW_NUMBER() OVER (PARTITION BY key_column ORDER BY created_date DESC) to identify and delete all rows except the most recent per key. Then add the constraint with ALTER TABLE table_name ADD CONSTRAINT constraint_name UNIQUE (column_name) ENABLE VALIDATE. Monitor constraint status going forward to ensure it is never disabled without immediate re-enablement.
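The keep-newest-per-key logic that ROW_NUMBER() implements can be checked outside the database. A small Python sketch under stated assumptions: created_date is an ISO-format string (so it sorts lexicographically) and a higher employee_id breaks ties, mirroring the ORDER BY above.

```python
def rows_to_delete(rows):
    """rows: (employee_id, employee_code, created_date) tuples.
    Keep the newest row per code; return the ids of every other row."""
    keep = {}
    for emp_id, code, created in rows:
        best = keep.get(code)
        # Newest created_date wins; higher employee_id breaks ties
        if best is None or (created, emp_id) > (best[1], best[0]):
            keep[code] = (emp_id, created)
    keepers = {emp_id for emp_id, _ in keep.values()}
    return sorted(emp_id for emp_id, _, _ in rows if emp_id not in keepers)

rows = [
    (1, "EMP-0042", "2024-01-01"),
    (2, "EMP-0042", "2024-03-15"),  # newest for EMP-0042: kept
    (3, "EMP-0099", "2024-02-01"),  # no duplicate: kept
]
print(rows_to_delete(rows))  # [1]
```

Running a dry run like this against exported data is a cheap way to preview exactly which ids the DELETE in Step 2 would remove.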
My code added ROWNUM = 1 and ORA-01422 stopped. Is that sufficient?
No. ROWNUM = 1 without ORDER BY is non-deterministic – it returns an arbitrary row depending on the execution plan, not the business logic. It stops the exception from appearing while allowing the application to process incorrect data silently. In development, it appears consistent because insert order is deterministic. In production, concurrent DML makes it unpredictable. The data model problem – the duplicates, the missing constraint – is untouched. Add ORDER BY + FETCH FIRST 1 ROWS ONLY for deterministic selection, and add the unique constraint to prevent duplicates from accumulating.
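The difference between arbitrary and deterministic selection is easy to see outside SQL. A Python sketch (the dict rows and field names are illustrative; the `max` call stands in for ORDER BY created_date DESC FETCH FIRST 1 ROWS ONLY):

```python
dups = [
    {"id": 7, "code": "EMP-0042", "created": "2024-03-15"},
    {"id": 3, "code": "EMP-0042", "created": "2024-01-01"},
]

# ROWNUM = 1 without ORDER BY is like dups[0]: whichever row the
# execution plan happens to produce first - the order is not guaranteed,
# so this value can silently change between environments.
arbitrary = dups[0]

# ORDER BY created DESC + FETCH FIRST 1 ROWS ONLY: always the newest row,
# with id as a tie-breaker so the result is fully deterministic.
deterministic = max(dups, key=lambda r: (r["created"], r["id"]))

print(deterministic["id"])  # 7
```

In the sketch the two happen to agree only because of the literal list order; shuffle `dups` and `arbitrary` changes while `deterministic` does not.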
How do I check if a unique constraint is actually enforcing uniqueness?
Query USER_CONSTRAINTS and check both columns: SELECT constraint_name, status, validated FROM user_constraints WHERE table_name = 'TABLE_NAME' AND constraint_type = 'U'. You need status = ENABLED and validated = VALIDATED. Status ENABLED with validated NOT VALIDATED means the constraint protects against new inserts but does not guarantee existing rows are unique. This state can exist after ENABLE NOVALIDATE or after a constraint was re-enabled without checking existing data. If you see NOT VALIDATED, run a duplicate detection query to confirm whether duplicates exist.
Why does ORA-01422 appear on code that was working for months?
The code almost certainly did not change. The data changed. The most common triggers: a bulk data load that disabled the unique constraint for performance and re-enabled it with NOVALIDATE instead of VALIDATE, allowing duplicates to persist; a new application feature or integration that inserts records without the deduplication the existing code assumed; a data migration that bypassed application-layer logic; or a direct database insert by a DBA or administrator account that skipped the normal insert path. Check constraint status and duplicate counts before reading code – the root cause is at the data level.
Can I use BULK COLLECT to avoid ORA-01422?
Yes – BULK COLLECT returns a collection that can hold any number of rows without raising an exception. Use it when multiple rows are a genuinely valid result and your code needs to process all of them or select the correct one using business logic. Use BULK COLLECT with ORDER BY and take the first element when you need the most recent or highest-priority match. Do not use BULK COLLECT purely to avoid ORA-01422 when the business rule actually requires uniqueness – in that case, fix the constraint and deduplicate the data. BULK COLLECT hides the problem; the constraint fixes it.
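When multiple rows are legitimate, the caller owns the selection logic. A hedged Python sketch of the pattern (the rows stand in for a collection filled by BULK COLLECT or fetchall(); the prefer-ACTIVE-then-newest rule is a made-up example of "business logic picks the right row"):

```python
def resolve_employee(matches):
    """matches: row dicts fetched in bulk. Apply explicit business
    rules instead of relying on a one-row contract."""
    if not matches:
        return None
    # Prefer ACTIVE rows, then the most recent created date
    ranked = sorted(
        matches,
        key=lambda m: (m["status"] == "ACTIVE", m["created"]),
        reverse=True,
    )
    return ranked[0]

matches = [
    {"id": 1, "status": "INACTIVE", "created": "2024-03-01"},
    {"id": 2, "status": "ACTIVE",   "created": "2024-01-01"},
    {"id": 3, "status": "ACTIVE",   "created": "2024-02-01"},
]
print(resolve_employee(matches)["id"])  # 3
```

The point is that the ranking rule is written down and reviewable, unlike the arbitrary pick that ROWNUM = 1 makes for you.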
Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.