
Permutations Using Backtracking — Deep Dive with Java Examples

In this tutorial, you'll learn
Permutations using backtracking explained deeply — recursive tree, pruning, time complexity, swap-based vs visited-array, and real interview gotchas in Java.
  • Backtracking for permutations is always choose → explore → unchoose — remove the unchoose step and you silently corrupt state across branches with no compiler warning.
  • The time complexity is $O(N \times N!)$ not $O(N!)$ — the extra N factor comes from the $O(N)$ work to copy each completed permutation at the leaf nodes.
  • For duplicate inputs, sort first and skip a candidate when it equals its predecessor AND that predecessor is NOT currently in use — the NOT is critical and the most common point of confusion in interviews.
Quick Answer

Imagine you're arranging 3 friends — Alice, Bob, and Carol — in a line for a photo. You try every possible order: ABC, ACB, BAC, BCA, CAB, CBA. Backtracking is exactly this trial-and-error process made systematic: you place one person, then try all options for the next spot, and when you hit a dead end (everyone's placed), you step back and swap in someone else. The 'backtrack' is literally the moment you say 'nope, let's undo that and try the next option.' That's the whole idea.

Permutations show up everywhere in computing — from scheduling jobs on a processor, to cracking password combinations, to solving the Travelling Salesman Problem where you're testing every route. Any time you need to explore every possible ordering of a set of items, you're in permutation territory. The naive approach of nested loops collapses instantly once your input grows beyond 3 or 4 items, which is exactly why a systematic, recursive strategy is non-negotiable.

Backtracking solves this elegantly by building each permutation one element at a time, making a choice, recursing into it, and then undoing that choice to explore the next branch. It's a depth-first traversal of a decision tree where each level represents filling one position and each branch represents which element goes there. The magic is the 'undo' step — without it, you'd corrupt state between branches and get garbage results. Backtracking keeps the state clean by design.

By the end of this article you'll be able to implement two distinct backtracking strategies for permutations (visited-array and in-place swap), understand exactly what happens on the call stack at each recursive depth, identify the performance cliffs and memory costs, avoid the three mistakes that trip up even experienced engineers in interviews, and answer the follow-up questions interviewers use to separate candidates who memorised the solution from those who truly understand it.

The Recursive Decision Tree — How Backtracking Builds Every Permutation

Before touching code, you need a mental model of what the algorithm is actually doing. Think of it as a tree. At the root, you have an empty arrangement and all N elements available. At depth 1 you pick the first element — N choices. At depth 2 you pick the second from the remaining N-1 — and so on. The leaves of this tree, all at depth N, are the complete permutations. There are exactly N! of them.

The critical insight: you don't build this tree in memory all at once. You walk it depth-first, keeping only the current path (the permutation being built) in memory at any moment. When you reach a leaf, you record that permutation. When you backtrack up to a node, you restore the path to what it was before you descended — that restoration is the 'undo' step.

For N=3 with elements [1, 2, 3], the tree has 3 choices at level 0, 2 at level 1, and 1 at level 2, yielding 3×2×1 = 6 leaves. Counting the root, every internal node, and every leaf, the total number of nodes is $\sum_{k=0}^{N} N!/(N-k)!$, which for N=3 is 1 + 3 + 6 + 6 = 16; in general this sum is roughly $e \times N!$. That's why the time complexity is O(N × N!): you visit O(N!) nodes and do O(N) work at each leaf to copy the result.
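If you want to sanity-check that node count, you can count recursive calls directly. This small sketch (class and method names are ours, purely illustrative) mirrors the branching of the backtracking tree without building any permutations:

```java
// Counts every recursive call ("node") made while enumerating
// permutations of N distinct elements, without building anything.
public class NodeCounter {

    static long countNodes(int n, int depth) {
        long nodes = 1; // this call itself is one node of the tree
        // (n - depth) elements remain unplaced; each one is a child branch
        for (int i = 0; i < n - depth; i++) {
            nodes += countNodes(n, depth + 1);
        }
        return nodes;
    }

    public static void main(String[] args) {
        System.out.println(countNodes(3, 0)); // 16 (= 1 + 3 + 6 + 6)
        System.out.println(countNodes(4, 0)); // 65
    }
}
```

Note that 16 ≈ 2.7 × 3! and 65 ≈ 2.7 × 4!, matching the rough $e \times N!$ estimate.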

io/thecodeforge/algorithms/PermutationDecisionTree.java · JAVA
package io.thecodeforge.algorithms;

import java.util.ArrayList;
import java.util.List;

/**
 * Approach 1: Visited-Array Backtracking
 *
 * Uses a boolean[] to track which elements are already placed
 * in the current partial permutation. At each recursive call,
 * we iterate over all elements, skip used ones, place the next
 * available element, recurse, then UNDO the placement.
 *
 * Time:  O(N * N!) — N! permutations, O(N) to copy each one
 * Space: O(N)      — recursion depth + visited array + current path
 */
public class PermutationDecisionTree {

    public static List<List<Integer>> generatePermutations(int[] inputElements) {
        List<List<Integer>> allPermutations = new ArrayList<>();
        boolean[] isElementUsed = new boolean[inputElements.length]; // tracks which slots are occupied
        List<Integer> currentPermutation = new ArrayList<>();

        backtrack(inputElements, isElementUsed, currentPermutation, allPermutations);
        return allPermutations;
    }

    private static void backtrack(
            int[] inputElements,
            boolean[] isElementUsed,
            List<Integer> currentPermutation,
            List<List<Integer>> allPermutations) {

        // BASE CASE: current permutation has all N elements — it's complete
        if (currentPermutation.size() == inputElements.length) {
            // IMPORTANT: copy the list, not the reference!
            // If you add currentPermutation directly, all stored results
            // will point to the same list and reflect its final (empty) state.
            allPermutations.add(new ArrayList<>(currentPermutation));
            return;
        }

        // RECURSIVE CASE: try placing each unused element at the next position
        for (int index = 0; index < inputElements.length; index++) {

            if (isElementUsed[index]) {
                continue; // skip elements already in the current path
            }

            // --- CHOOSE ---
            isElementUsed[index] = true;                    // mark as used
            currentPermutation.add(inputElements[index]);   // place element

            // --- EXPLORE ---
            backtrack(inputElements, isElementUsed, currentPermutation, allPermutations);

            // --- UNCHOOSE (backtrack) ---
            // Remove the last element we added — restoring state for the next branch
            currentPermutation.remove(currentPermutation.size() - 1);
            isElementUsed[index] = false; // mark as available again
        }
    }

    public static void main(String[] args) {
        int[] elements = {1, 2, 3};
        List<List<Integer>> result = generatePermutations(elements);

        System.out.println("Total permutations: " + result.size()); // 3! = 6
        for (List<Integer> permutation : result) {
            System.out.println(permutation);
        }
    }
}
▶ Output
Total permutations: 6
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 1, 2]
[3, 2, 1]
⚠ Watch Out: The Reference Trap
Never do allPermutations.add(currentPermutation). You must do allPermutations.add(new ArrayList<>(currentPermutation)). The backtracking algorithm mutates currentPermutation in place, so if you store the reference, every entry in your result list will point to the same, eventually-empty list. This is the #1 mistake in coding interviews and produces an output of empty lists that stumps people for 20 minutes.

In-Place Swap Backtracking — Faster, Lower Memory, Trickier to Reason About

The visited-array approach is clean and easy to reason about, but it has overhead: a boolean array, an extra ArrayList for the current path, and an O(N) copy at every leaf. The in-place swap approach eliminates the boolean array entirely and works directly on the input array. It belongs to the same family of swap-based generators as Heap's algorithm (which goes further still, producing each permutation from the previous one with a single swap).

The idea: treat the array as two regions — a 'fixed' prefix (indices 0 to start-1) and a 'remaining' suffix (indices start to end). At each recursive call, you try swapping each element in the remaining region into position start, recurse with start+1, then swap back to restore the array for the next iteration.

When start equals the array length, you have a complete permutation sitting in the array itself — no copying of a separate list needed. You just snapshot the array at that moment.

The trade-off: the output order is different from the visited approach (it's not lexicographic), and the swap-back step is subtle. If you forget to swap back, you silently corrupt future branches and get duplicates or wrong values — no exception thrown, just wrong answers. This is a classic silent bug that's hard to catch without a solid test suite.

io/thecodeforge/algorithms/InPlaceSwapPermutation.java · JAVA
package io.thecodeforge.algorithms;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Approach 2: In-Place Swap Backtracking
 *
 * Works directly on the input array. No visited[] array needed.
 * The 'fixed' prefix grows by one element per recursive level.
 * At each level, swap each remaining element into the current
 * position, recurse, then swap back to restore array state.
 *
 * Time:  O(N * N!) — same asymptotic complexity
 * Space: O(N)      — only the recursion stack depth (no extra path list)
 */
public class InPlaceSwapPermutation {

    public static List<int[]> generatePermutations(int[] inputArray) {
        List<int[]> allPermutations = new ArrayList<>();
        // Work on a copy so we don't mutate the caller's array
        int[] workingArray = Arrays.copyOf(inputArray, inputArray.length);
        swapBacktrack(workingArray, 0, allPermutations);
        return allPermutations;
    }

    /**
     * @param workingArray  the array being rearranged in place
     * @param startIndex    the current position we're filling (left partition boundary)
     * @param allPermutations accumulator for completed permutations
     */
    private static void swapBacktrack(
            int[] workingArray,
            int startIndex,
            List<int[]> allPermutations) {

        // BASE CASE: startIndex has reached the end — the full array is one permutation
        if (startIndex == workingArray.length) {
            // Snapshot the current state of the array (arrays are mutable — copy it!)
            allPermutations.add(Arrays.copyOf(workingArray, workingArray.length));
            return;
        }

        // Try placing each element from [startIndex .. end] at position startIndex
        for (int swapCandidate = startIndex; swapCandidate < workingArray.length; swapCandidate++) {

            // --- CHOOSE: bring swapCandidate into the fixed prefix ---
            swap(workingArray, startIndex, swapCandidate);

            // --- EXPLORE: fix startIndex, recurse on the rest ---
            swapBacktrack(workingArray, startIndex + 1, allPermutations);

            // --- UNCHOOSE: restore the array to its pre-swap state ---
            // Without this, subsequent iterations in this loop see a corrupted array.
            swap(workingArray, startIndex, swapCandidate);
        }
    }

    private static void swap(int[] array, int firstIndex, int secondIndex) {
        int temp = array[firstIndex];
        array[firstIndex] = array[secondIndex];
        array[secondIndex] = temp;
    }

    public static void main(String[] args) {
        int[] elements = {1, 2, 3};
        List<int[]> result = generatePermutations(elements);

        System.out.println("Total permutations: " + result.size()); // 3! = 6
        for (int[] permutation : result) {
            System.out.println(Arrays.toString(permutation));
        }

        // Demonstrate that the ORIGINAL array is unmodified
        System.out.println("Original array after call: " + Arrays.toString(elements));
    }
}
▶ Output
Total permutations: 6
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 2, 1]
[3, 1, 2]
Original array after call: [1, 2, 3]
💡 Pro Tip: The Missing Swap-Back Is a Silent Bug
If you remove the second swap() call (the undo), your code still compiles and runs — it just produces subtly wrong results. Write a test that checks all 6 permutations of [1,2,3] against the known expected set. Silent mutation bugs like this are why pure functional approaches (returning new arrays) are easier to test, even if slightly less efficient.

Handling Duplicates — When Your Input Has Repeated Elements

Both approaches above assume all input elements are distinct. Feed [1, 1, 2] to either and you'll get 6 permutations with duplicates like [1, 1, 2] appearing twice. The correct answer is 3!/2! = 3 unique permutations: [1,1,2], [1,2,1], [2,1,1].

The fix for the visited-array approach is to sort the input first and then skip a candidate if it equals its predecessor AND that predecessor is not currently in use. This condition — isElementUsed[index-1] == false — is the key. It means: 'we already fully explored the branch where the previous identical element was chosen first, so don't start another branch in the same position.' Skipping when the predecessor IS used would be wrong — that's a different path in the tree.

For the swap approach, duplicates are trickier. The standard fix is to track which values have been placed at each recursive level using a HashSet per call frame. Before swapping a candidate into position, check whether a value equal to it has already been placed at this level — if so, skip it.

Understanding this distinction — same value, not same index — is what separates a deep understanding of backtracking from pattern-matching the template.

io/thecodeforge/algorithms/PermutationWithDuplicates.java · JAVA
package io.thecodeforge.algorithms;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Permutations with duplicate input elements.
 *
 * Key technique: sort the input first, then skip a candidate
 * if it equals the previous element AND the previous element
 * is NOT currently in the active path (isElementUsed[i-1] == false).
 */
public class PermutationWithDuplicates {

    public static List<List<Integer>> generateUniquePermutations(int[] inputElements) {
        // Sorting is REQUIRED — it groups duplicates together so our
        // adjacent-skip logic works correctly.
        Arrays.sort(inputElements);

        List<List<Integer>> uniquePermutations = new ArrayList<>();
        boolean[] isElementUsed = new boolean[inputElements.length];
        List<Integer> currentPermutation = new ArrayList<>();

        backtrackUnique(inputElements, isElementUsed, currentPermutation, uniquePermutations);
        return uniquePermutations;
    }

    private static void backtrackUnique(
            int[] inputElements,
            boolean[] isElementUsed,
            List<Integer> currentPermutation,
            List<List<Integer>> uniquePermutations) {

        if (currentPermutation.size() == inputElements.length) {
            uniquePermutations.add(new ArrayList<>(currentPermutation));
            return;
        }

        for (int index = 0; index < inputElements.length; index++) {

            // Skip elements already placed in the current path
            if (isElementUsed[index]) {
                continue;
            }

            // DUPLICATE PRUNING: skip this element if it's identical to the
            // previous one AND the previous one is not currently in use.
            if (index > 0
                    && inputElements[index] == inputElements[index - 1]
                    && !isElementUsed[index - 1]) {
                continue;
            }

            isElementUsed[index] = true;
            currentPermutation.add(inputElements[index]);

            backtrackUnique(inputElements, isElementUsed, currentPermutation, uniquePermutations);

            currentPermutation.remove(currentPermutation.size() - 1);
            isElementUsed[index] = false;
        }
    }

    public static void main(String[] args) {
        int[] elementsWithDuplicates = {1, 1, 2};
        List<List<Integer>> result = generateUniquePermutations(elementsWithDuplicates);

        System.out.println("Unique permutations: " + result.size());
        for (List<Integer> permutation : result) {
            System.out.println(permutation);
        }
    }
}
▶ Output
Unique permutations: 3
[1, 1, 2]
[1, 2, 1]
[2, 1, 1]
🔥 Interview Gold: The Pruning Condition Explained
Interviewers love asking WHY the condition is !isElementUsed[index-1] and not isElementUsed[index-1]. The answer: when the previous duplicate IS used, it's on a different branch — different prefix — so the resulting permutations are genuinely distinct. When it ISN'T used, we're choosing between two identical elements at the same tree level, which would produce duplicate subtrees. Sorting + this condition prunes those duplicate subtrees before they're ever explored.
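For the swap approach, here is one way the per-level HashSet fix described earlier can look. This is a sketch with illustrative names, not part of the tutorial's own code: the set lives in each call frame and records which values (not indices) have already been swapped into the current position.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: unique permutations via the in-place swap approach.
// A HashSet PER CALL FRAME records which values have already been
// swapped into position startIndex; trying the same value again
// would regrow an identical subtree.
public class SwapPermutationWithDuplicates {

    public static List<int[]> generateUniquePermutations(int[] inputArray) {
        List<int[]> allPermutations = new ArrayList<>();
        int[] workingArray = Arrays.copyOf(inputArray, inputArray.length);
        backtrack(workingArray, 0, allPermutations);
        return allPermutations;
    }

    private static void backtrack(int[] workingArray, int startIndex, List<int[]> allPermutations) {
        if (startIndex == workingArray.length) {
            allPermutations.add(Arrays.copyOf(workingArray, workingArray.length));
            return;
        }
        Set<Integer> triedAtThisLevel = new HashSet<>(); // values, not indices
        for (int candidate = startIndex; candidate < workingArray.length; candidate++) {
            // add() returns false if this VALUE was already placed here
            if (!triedAtThisLevel.add(workingArray[candidate])) {
                continue;
            }
            swap(workingArray, startIndex, candidate);
            backtrack(workingArray, startIndex + 1, allPermutations);
            swap(workingArray, startIndex, candidate); // undo
        }
    }

    private static void swap(int[] array, int i, int j) {
        int temp = array[i];
        array[i] = array[j];
        array[j] = temp;
    }

    public static void main(String[] args) {
        List<int[]> result = generateUniquePermutations(new int[]{1, 1, 2});
        System.out.println("Unique permutations: " + result.size()); // 3
        for (int[] permutation : result) {
            System.out.println(Arrays.toString(permutation));
        }
    }
}
```

Note this approach needs no sorting, unlike the visited-array fix, because the set compares values directly rather than relying on duplicates being adjacent.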

Performance Cliffs, Memory Costs, and Production Considerations

N! grows catastrophically fast. At N=10 you have 3,628,800 permutations. At N=12 it's 479,001,600. At N=13 you're over 6 billion. If you're storing all permutations in a List, the memory cost is O(N × N!): for N=12 that's 479 million permutations of 12 elements each, roughly 5.7 billion integers. On a 64-bit JVM with ArrayList and Integer-boxing overhead, you're looking at tens of gigabytes. This is not a production-safe pattern for large N.

In practice, if a downstream process only needs to consume one permutation at a time, prefer a callback (consumer) pattern or a lazy iterator/generator pattern. Pass a Consumer<int[]> into the backtracking function and call it at the leaf instead of building a result list. This keeps memory at O(N) regardless of N!.

For N up to about 8-9, the visited-array approach with ArrayList is fine. For N up to 12-13 in performance-critical code, the in-place swap approach with a consumer callback is your best bet — you avoid all list allocations except one O(N) copy per permutation leaf. For N beyond 13, you're almost certainly solving the wrong problem — reconsider whether you need all permutations or just a subset satisfying some constraint, and add early pruning.

io/thecodeforge/algorithms/PermutationWithConsumer.java · JAVA
package io.thecodeforge.algorithms;

import java.util.Arrays;
import java.util.function.Consumer;

/**
 * Memory-efficient permutation generator using a Consumer callback.
 *
 * Instead of accumulating all N! permutations in a list (which blows
 * up memory for N > 11), we call a callback at each leaf.
 */
public class PermutationWithConsumer {

    public static void streamPermutations(int[] inputArray, Consumer<int[]> onPermutationFound) {
        int[] workingArray = Arrays.copyOf(inputArray, inputArray.length);
        swapBacktrackWithConsumer(workingArray, 0, onPermutationFound);
    }

    private static void swapBacktrackWithConsumer(
            int[] workingArray,
            int startIndex,
            Consumer<int[]> onPermutationFound) {

        if (startIndex == workingArray.length) {
            onPermutationFound.accept(Arrays.copyOf(workingArray, workingArray.length));
            return;
        }

        for (int candidate = startIndex; candidate < workingArray.length; candidate++) {
            swap(workingArray, startIndex, candidate);
            swapBacktrackWithConsumer(workingArray, startIndex + 1, onPermutationFound);
            swap(workingArray, startIndex, candidate);
        }
    }

    private static void swap(int[] array, int i, int j) {
        int temp = array[i];
        array[i] = array[j];
        array[j] = temp;
    }

    public static void main(String[] args) {
        int[] elements = {1, 2, 3, 4};
        int[] count = {0};
        streamPermutations(elements, perm -> count[0]++);
        System.out.println("Total permutations processed: " + count[0]);
    }
}
▶ Output
Total permutations processed: 24
⚠ Watch Out: Early Exit From Backtracking
The consumer approach doesn't natively support early exit — the recursion runs to completion regardless. If you genuinely need to stop after finding the first valid permutation, thread a boolean 'stop' flag through the recursive calls and check it at the top of every recursive frame (a plain field suffices in single-threaded code; volatile is only needed if another thread sets the flag). Some engineers throw a custom RuntimeException for early exit (catching it in the public method) — it's a valid trick but ugly. For interview code, the flag approach is cleaner and easier to explain.
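As a sketch of that flag approach (all names here are illustrative, not from the code above), the recursion checks a found flag at the top of every frame and in the loop condition, so entire subtrees are skipped once a match is recorded:

```java
import java.util.Arrays;
import java.util.function.Predicate;

// Sketch: stop the search as soon as one permutation satisfies a
// predicate. A plain boolean field works in single-threaded code;
// 'volatile' only matters if another thread sets the flag.
public class FirstMatchingPermutation {

    private boolean found = false;
    private int[] answer = null;

    public static int[] findFirst(int[] input, Predicate<int[]> isAcceptable) {
        FirstMatchingPermutation search = new FirstMatchingPermutation();
        search.backtrack(Arrays.copyOf(input, input.length), 0, isAcceptable);
        return search.answer; // null if nothing matched
    }

    private void backtrack(int[] workingArray, int startIndex, Predicate<int[]> isAcceptable) {
        if (found) {
            return; // early exit: skip this entire subtree
        }
        if (startIndex == workingArray.length) {
            if (isAcceptable.test(workingArray)) {
                found = true;
                answer = Arrays.copyOf(workingArray, workingArray.length);
            }
            return;
        }
        for (int candidate = startIndex; candidate < workingArray.length && !found; candidate++) {
            swap(workingArray, startIndex, candidate);
            backtrack(workingArray, startIndex + 1, isAcceptable);
            swap(workingArray, startIndex, candidate); // still undo, so state stays sane
        }
    }

    private static void swap(int[] array, int i, int j) {
        int temp = array[i];
        array[i] = array[j];
        array[j] = temp;
    }

    public static void main(String[] args) {
        // First permutation (in swap-generation order) of {1,2,3,4} starting with 3
        int[] match = findFirst(new int[]{1, 2, 3, 4}, p -> p[0] == 3);
        System.out.println(Arrays.toString(match)); // [3, 2, 1, 4]
    }
}
```

An equivalent design returns boolean from the recursive method ("true means stop") instead of mutating a field; both prune the same subtrees.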
Aspect | Visited-Array Approach | In-Place Swap Approach
Extra data structures | boolean[] + ArrayList path | None — works on input array
Output order | Lexicographic (if input sorted) | Non-lexicographic
Auxiliary memory (beyond output) | O(N) path list + O(N) visited array | O(1) extra — swaps in place
Handles duplicates easily? | Yes — sort + skip-predecessor logic | Harder — needs per-level HashSet
Code readability | Very clear choose/unchoose idiom | Subtle — swap-back easy to forget
Best for N up to | ~10 with list storage | ~13 with consumer callback
Copying overhead | O(N) copy at each leaf | O(N) copy at each leaf (same)
Interviewers expect | This form — clearest to explain | As a follow-up optimisation

🎯 Key Takeaways

  • Backtracking for permutations is always choose → explore → unchoose — remove the unchoose step and you silently corrupt state across branches with no compiler warning.
  • The time complexity is $O(N \times N!)$ not $O(N!)$ — the extra N factor comes from the $O(N)$ work to copy each completed permutation at the leaf nodes.
  • For duplicate inputs, sort first and skip a candidate when it equals its predecessor AND that predecessor is NOT currently in use — the NOT is critical and the most common point of confusion in interviews.
  • For N > 12, never store all permutations in a list — use a consumer/callback pattern to keep memory at $O(N)$ and process permutations lazily one at a time.

⚠ Common Mistakes to Avoid

    Storing the list reference instead of a copy — allPermutations.add(currentPermutation) instead of allPermutations.add(new ArrayList<>(currentPermutation)) — all stored results end up pointing to the same empty list after backtracking completes. Fix: always snapshot with new ArrayList<>(currentPermutation) at the base case.

    Forgetting the swap-back (undo) step in the in-place approach — the code compiles and runs, silently producing duplicate or incorrect permutations because subsequent iterations in the for-loop see a corrupted array. Fix: every swap(start, candidate) before the recursive call must be mirrored by an identical swap(start, candidate) immediately after it.

    Applying the duplicate-pruning condition without sorting first — the skip-if-equal-to-predecessor logic only works because duplicates are adjacent after sorting; on an unsorted array, identical values at non-adjacent indices are never compared and duplicates slip through. Fix: always call Arrays.sort(inputElements) before the first backtrack call when duplicates are possible.

Interview Questions on This Topic

  • Q: What is the time and space complexity of generating all permutations via backtracking, and why is the time complexity $O(N \times N!)$ rather than just $O(N!)$?
  • Q: Given an array that may contain duplicate elements, how would you modify the standard backtracking permutation algorithm to return only unique permutations? Walk me through the pruning condition and explain why sorting is a prerequisite.
  • Q: Explain the 'choose, explore, unchoose' pattern in the context of the in-place swap method. Why is the second swap essential for correctness in depth-first exploration?
  • Q: How would you handle a situation where N=15 and you cannot store all permutations in memory? Describe the consumer/iterator pattern and how it affects the space complexity.
  • Q: What is the difference between backtracking for permutations and backtracking for combinations? How does the 'start index' constraint change the tree traversal?

Frequently Asked Questions

What is the difference between permutation and combination in backtracking?

Permutations care about order — [1,2,3] and [3,2,1] are different. Combinations don't — {1,2,3} and {3,2,1} are the same set. In backtracking terms, permutation algorithms try every element at every position, while combination algorithms only try elements that come after the last chosen index, preventing reordering of the same subset.
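A minimal combination generator makes that start-index constraint concrete (all names here are illustrative): the loop begins at startIndex rather than 0, and the recursion passes i + 1, so earlier elements are never revisited and each subset is built in exactly one order.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: combinations of size k. The startIndex parameter is the
// whole difference from permutations: each frame only considers
// elements from startIndex onward, so {1,2} can never also be
// produced as {2,1}.
public class CombinationSketch {

    public static List<List<Integer>> combinations(int[] elements, int k) {
        List<List<Integer>> allCombinations = new ArrayList<>();
        backtrack(elements, k, 0, new ArrayList<>(), allCombinations);
        return allCombinations;
    }

    private static void backtrack(int[] elements, int k, int startIndex,
                                  List<Integer> current, List<List<Integer>> allCombinations) {
        if (current.size() == k) {
            allCombinations.add(new ArrayList<>(current));
            return;
        }
        // Permutations loop from 0 and need visited[]; combinations loop
        // from startIndex and recurse with i + 1 instead.
        for (int i = startIndex; i < elements.length; i++) {
            current.add(elements[i]);                                // choose
            backtrack(elements, k, i + 1, current, allCombinations); // explore later indices only
            current.remove(current.size() - 1);                      // unchoose
        }
    }

    public static void main(String[] args) {
        List<List<Integer>> result = combinations(new int[]{1, 2, 3, 4}, 2);
        System.out.println("C(4,2) = " + result.size()); // 6
        for (List<Integer> combination : result) {
            System.out.println(combination);
        }
    }
}
```

The tree is correspondingly smaller: C(4,2) = 6 leaves instead of the 4!/(4-2)! = 12 ordered arrangements a permutation-style search would visit.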

Why do we sort the array before generating permutations with duplicates?

Sorting groups identical values adjacently, which makes the 'skip if equal to predecessor' pruning condition work correctly. Without sorting, two identical values could sit at non-adjacent indices and never trigger the comparison, allowing duplicate permutation subtrees to be explored. Sorting is a mandatory precondition for the pruning logic, not an optional optimisation.

Does backtracking actually build all N! permutations in memory at once?

No — and this is a crucial point. At any moment, backtracking only holds the current partial permutation ($O(N)$ space) and the call stack ($O(N)$ depth). The result list grows to $O(N \times N!)$ only if you explicitly accumulate all results. If you use a consumer callback pattern instead, total memory stays $O(N)$ throughout the entire execution.

What is the time complexity of the swap-based vs visited-array approach?

Both have a time complexity of $O(N \times N!)$. While the swap-based approach avoids the overhead of a boolean array and extra list maintenance, it still visits the same number of nodes in the recursion tree and requires an $O(N)$ copy of the array at each leaf node to store the final result.

Naren · Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.
