
Quick Sort Explained: How It Works, Why It's Fast, and When to Use It

In Plain English 🔥
Imagine you're sorting a pile of birthday cards by age. You pick one card at random and put every younger person's card to its left, every older person's card to its right — now that one card is in its perfect final spot. You repeat this trick on the left pile and the right pile separately. Each time you do it, more cards lock into place. That's Quick Sort: keep picking a card, keep splitting, until every card is exactly where it belongs.

Every time you search for a product on Amazon, sort a playlist by play count, or watch a leaderboard update in real time, a sorting algorithm is doing invisible heavy lifting. Quick Sort is one of the most widely deployed sorting algorithms in the world — it powers the default sort for primitives in many standard libraries (including Java's Arrays.sort for int[] and friends) and consistently outperforms its rivals on real hardware. Understanding it isn't just academic; it directly shapes how you write high-performance code.

The problem Quick Sort solves is deceptively simple: given an unordered collection, arrange its elements in order as fast as possible. What makes sorting hard isn't the goal — it's the cost. Naive approaches like Bubble Sort touch every pair of elements, which becomes catastrophically slow as data grows. Quick Sort instead uses a divide-and-conquer strategy: it shrinks the problem recursively so that each step does just enough work to guarantee permanent progress, achieving an average-case time complexity of O(n log n).

By the end of this article you'll understand exactly how Quick Sort's partitioning step works (not just what it does), why pivot choice is the difference between blazing speed and worst-case disaster, how to implement it cleanly in Java with a Lomuto partition scheme, and — critically — when you should reach for it versus Merge Sort or Tim Sort in production code.

The Partition Step: The One Idea That Makes Everything Click

Quick Sort has one core insight: if you can place a single element — the pivot — into its exact final sorted position in one pass, you've permanently reduced your problem. Everything to the left of the pivot is smaller; everything to the right is larger. Neither side needs to know about the other. That's the partition step, and it's the whole algorithm.

The Lomuto partition scheme (the cleanest version for learning) scans the sub-array with one pointer while a second pointer, storeIndex, marks the right edge of the "small" region. Whenever the scan finds an element less than or equal to the pivot, it swaps that element back across the boundary and advances storeIndex. After the scan, the pivot swaps into the slot just past storeIndex. The pivot is now home — forever.

Why does this work so well? Because it's in-place. No extra array needed. Every swap is local. Modern CPUs love this because the data stays in cache. Merge Sort is theoretically equivalent at O(n log n), but it needs O(n) extra memory and its cache access pattern is less friendly. Quick Sort wins in practice because the hardware rewards it.

The recursive calls on the left and right sub-arrays are then completely independent. They don't share state. That independence is also why Quick Sort parallelises beautifully — each partition can run on a separate thread.

QuickSortLomuto.java · JAVA
public class QuickSortLomuto {

    /**
     * Entry point — sorts the array in-place between indices 'low' and 'high'.
     * We pass indices rather than creating sub-arrays, so no extra memory is needed.
     */
    public static void quickSort(int[] numbers, int low, int high) {
        // Base case: a sub-array of 0 or 1 elements is already sorted — nothing to do.
        if (low >= high) return;

        // Partition the array and get back the pivot's final resting index.
        int pivotFinalIndex = partition(numbers, low, high);

        // Recursively sort everything to the LEFT of the pivot (pivot itself is already placed).
        quickSort(numbers, low, pivotFinalIndex - 1);

        // Recursively sort everything to the RIGHT of the pivot.
        quickSort(numbers, pivotFinalIndex + 1, high);
    }

    /**
     * Lomuto Partition Scheme.
     * Chooses the LAST element as the pivot, then rearranges the sub-array so that:
     *   - All elements <= pivot are to the left of storeIndex
     *   - All elements >  pivot are to the right of storeIndex
     * Returns storeIndex — the pivot's correct final position.
     */
    private static int partition(int[] numbers, int low, int high) {
        int pivot = numbers[high]; // We always pick the last element as pivot (simple but has pitfalls — see Gotchas)
        int storeIndex = low - 1; // storeIndex tracks the boundary between "small" and "large" elements

        for (int scanner = low; scanner < high; scanner++) {
            if (numbers[scanner] <= pivot) {
                storeIndex++;  // Expand the "small" region by one slot
                swap(numbers, storeIndex, scanner); // Pull this small element into the small region
            }
        }

        // Place the pivot right after all the small elements — this is its final position.
        swap(numbers, storeIndex + 1, high);

        return storeIndex + 1; // Return the pivot's final index so quickSort knows where to split
    }

    private static void swap(int[] numbers, int i, int j) {
        int temp = numbers[i];
        numbers[i] = numbers[j];
        numbers[j] = temp;
    }

    public static void main(String[] args) {
        int[] scores = {42, 17, 93, 5, 61, 38, 72, 29};

        System.out.println("Before sorting:");
        printArray(scores);

        quickSort(scores, 0, scores.length - 1);

        System.out.println("After sorting:");
        printArray(scores);
    }

    private static void printArray(int[] numbers) {
        System.out.print("[ ");
        for (int number : numbers) System.out.print(number + " ");
        System.out.println("]");
    }
}
▶ Output
Before sorting:
[ 42 17 93 5 61 38 72 29 ]
After sorting:
[ 5 17 29 38 42 61 72 93 ]
🔥 Why in-place matters: Quick Sort sorts directly inside the original array, using only recursion-stack space — O(log n) on average, though an unbalanced worst case can push it to O(n). Merge Sort needs an extra O(n) buffer array. For a 1-million-element array of 4-byte ints, that's 4 MB you don't have to allocate — and your GC will thank you.
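Because the recursive calls share no state, the sort also parallelises naturally, as noted above. Here is a minimal fork/join sketch — the class names and the threshold value are our own choices for illustration, not a tuned implementation:

```java
import java.util.Arrays;
import java.util.Random;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class ParallelQuickSortSketch {

    // Below this size, forking a task costs more than it saves (value is a guess, not tuned).
    static final int SEQUENTIAL_THRESHOLD = 1_000;

    static class SortTask extends RecursiveAction {
        final int[] numbers;
        final int low, high;

        SortTask(int[] numbers, int low, int high) {
            this.numbers = numbers;
            this.low = low;
            this.high = high;
        }

        @Override
        protected void compute() {
            if (low >= high) return;
            if (high - low < SEQUENTIAL_THRESHOLD) {
                Arrays.sort(numbers, low, high + 1); // small ranges: plain sequential sort
                return;
            }
            int p = partition(numbers, low, high);
            // The two sides touch disjoint index ranges, so they can run as independent tasks.
            invokeAll(new SortTask(numbers, low, p - 1),
                      new SortTask(numbers, p + 1, high));
        }
    }

    // Standard Lomuto partition, identical in spirit to the version above.
    static int partition(int[] a, int low, int high) {
        int pivot = a[high];
        int storeIndex = low - 1;
        for (int scanner = low; scanner < high; scanner++) {
            if (a[scanner] <= pivot) {
                storeIndex++;
                int t = a[storeIndex]; a[storeIndex] = a[scanner]; a[scanner] = t;
            }
        }
        int t = a[storeIndex + 1]; a[storeIndex + 1] = a[high]; a[high] = t;
        return storeIndex + 1;
    }

    public static void main(String[] args) {
        int[] data = new Random(42).ints(200_000, 0, 1_000_000).toArray();
        int[] expected = data.clone();
        Arrays.sort(expected);

        ForkJoinPool.commonPool().invoke(new SortTask(data, 0, data.length - 1));
        System.out.println("Sorted correctly: " + Arrays.equals(data, expected)); // prints true
    }
}
```

Production parallel sorts (e.g. Java's Arrays.parallelSort) are far more sophisticated, but the shape is the same: partition once, then hand each half to a separate worker.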

Pivot Choice: Why 'Last Element' Is a Trap on Sorted Data

The pivot choice is the single biggest lever you have over Quick Sort's performance. Pick well, and you split the array roughly in half every time — O(n log n). Pick badly, and every partition produces one empty side and one side with n-1 elements — that's O(n²), as slow as Bubble Sort.

The worst case for 'always pick last element' is a sorted or reverse-sorted array. Every partition selects the largest (or smallest) remaining element, the splits are maximally unbalanced, and you make n recursive calls instead of log n. This is a real production bug, not a theoretical one — pre-sorted input is extremely common in real data.
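You can watch the blow-up happen by counting comparisons. This small instrumented sketch (the counter and method names are our own) runs last-element-pivot Lomuto on sorted versus shuffled input of the same size:

```java
import java.util.Random;

public class WorstCaseDemo {

    static long comparisons = 0;

    // Plain Lomuto Quick Sort with last-element pivot, instrumented with a counter.
    static void quickSort(int[] a, int low, int high) {
        if (low >= high) return;
        int pivot = a[high];
        int storeIndex = low - 1;
        for (int scanner = low; scanner < high; scanner++) {
            comparisons++; // one comparison against the pivot per scanned element
            if (a[scanner] <= pivot) {
                storeIndex++;
                int t = a[storeIndex]; a[storeIndex] = a[scanner]; a[scanner] = t;
            }
        }
        int t = a[storeIndex + 1]; a[storeIndex + 1] = a[high]; a[high] = t;
        quickSort(a, low, storeIndex);       // left of pivot (pivot sits at storeIndex + 1)
        quickSort(a, storeIndex + 2, high);  // right of pivot
    }

    static long countComparisons(int[] input) {
        comparisons = 0;
        quickSort(input, 0, input.length - 1);
        return comparisons;
    }

    public static void main(String[] args) {
        int n = 5_000;
        int[] sorted = new int[n];
        for (int i = 0; i < n; i++) sorted[i] = i;

        int[] shuffled = sorted.clone();
        Random rng = new Random(7);
        for (int i = n - 1; i > 0; i--) { // Fisher-Yates shuffle
            int j = rng.nextInt(i + 1);
            int t = shuffled[i]; shuffled[i] = shuffled[j]; shuffled[j] = t;
        }

        // Sorted input: every partition is maximally unbalanced, so the count
        // is exactly 1 + 2 + ... + (n-1) = n(n-1)/2 = 12,497,500 comparisons.
        System.out.println("Sorted input:   " + countComparisons(sorted));
        // Shuffled input lands near n log2 n — tens of thousands, not millions.
        System.out.println("Shuffled input: " + countComparisons(shuffled));
    }
}
```

Same algorithm, same 5,000 elements — roughly a 150x difference in work, purely from pivot quality.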

The fix used in production is the median-of-three strategy: look at the first, middle, and last elements of the sub-array, pick the median value as your pivot, and swap it to the last position before running Lomuto as normal. This dramatically reduces the probability of worst-case behaviour on sorted, reverse-sorted, or nearly-sorted data.

Java's Arrays.sort for primitive types uses a Dual-Pivot Quick Sort (introduced in Java 7), which picks two pivots and creates three partitions. It's even harder to degrade to O(n²) and squeezes better cache performance out of modern CPUs. The lesson: the partition scheme is just a starting point. Production implementations have 30+ years of refinement baked in.

QuickSortMedianOfThree.java · JAVA
public class QuickSortMedianOfThree {

    public static void quickSort(int[] numbers, int low, int high) {
        if (low >= high) return;

        // Choose a smarter pivot BEFORE partitioning
        choosePivotMedianOfThree(numbers, low, high);

        int pivotFinalIndex = partition(numbers, low, high);
        quickSort(numbers, low, pivotFinalIndex - 1);
        quickSort(numbers, pivotFinalIndex + 1, high);
    }

    /**
     * Median-of-Three Pivot Selection.
     * Looks at the first, middle, and last elements.
     * Rearranges so the median value ends up at numbers[high] — ready for Lomuto.
     * This prevents O(n²) behaviour on already-sorted input.
     */
    private static void choosePivotMedianOfThree(int[] numbers, int low, int high) {
        int mid = low + (high - low) / 2; // Avoids integer overflow (important for large arrays)

        // Sort just these three candidates so numbers[mid] holds the median
        if (numbers[low] > numbers[mid])  swap(numbers, low, mid);
        if (numbers[low] > numbers[high]) swap(numbers, low, high);
        if (numbers[mid] > numbers[high]) swap(numbers, mid, high);

        // Now numbers[high] is the median of the three — perfect Lomuto pivot
    }

    private static int partition(int[] numbers, int low, int high) {
        int pivot = numbers[high];
        int storeIndex = low - 1;

        for (int scanner = low; scanner < high; scanner++) {
            if (numbers[scanner] <= pivot) {
                storeIndex++;
                swap(numbers, storeIndex, scanner);
            }
        }

        swap(numbers, storeIndex + 1, high);
        return storeIndex + 1;
    }

    private static void swap(int[] numbers, int i, int j) {
        int temp = numbers[i];
        numbers[i] = numbers[j];
        numbers[j] = temp;
    }

    public static void main(String[] args) {
        // This input would cause O(n²) with naive last-element pivot — median-of-three handles it fine
        int[] alreadySorted = {10, 20, 30, 40, 50, 60, 70, 80, 90, 100};

        System.out.println("Sorting already-sorted data (Quick Sort's classic nemesis):");
        System.out.print("Before: ");
        printArray(alreadySorted);

        quickSort(alreadySorted, 0, alreadySorted.length - 1);

        System.out.print("After:  ");
        printArray(alreadySorted);

        // Also demonstrate on random data
        int[] playerScores = {88, 34, 67, 12, 99, 45, 23, 78, 56, 11};
        System.out.println("\nSorting game leaderboard scores:");
        System.out.print("Before: ");
        printArray(playerScores);

        quickSort(playerScores, 0, playerScores.length - 1);

        System.out.print("After:  ");
        printArray(playerScores);
    }

    private static void printArray(int[] numbers) {
        System.out.print("[ ");
        for (int n : numbers) System.out.print(n + " ");
        System.out.println("]");
    }
}
▶ Output
Sorting already-sorted data (Quick Sort's classic nemesis):
Before: [ 10 20 30 40 50 60 70 80 90 100 ]
After: [ 10 20 30 40 50 60 70 80 90 100 ]

Sorting game leaderboard scores:
Before: [ 88 34 67 12 99 45 23 78 56 11 ]
After: [ 11 12 23 34 45 56 67 78 88 99 ]
⚠️ Watch Out: mid = (low + high) / 2 overflows. If low and high are both close to Integer.MAX_VALUE, adding them together wraps around to a negative number. Always compute the midpoint as low + (high - low) / 2. This is a real bug that has appeared in production binary search and Quick Sort implementations.
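The overflow is easy to reproduce with two large index values:

```java
public class MidpointOverflowDemo {
    public static void main(String[] args) {
        int low = 2_000_000_000;
        int high = 2_100_000_000;

        // low + high = 4,100,000,000 exceeds Integer.MAX_VALUE (2,147,483,647),
        // so the sum wraps around to a negative 32-bit value before the division.
        int broken = (low + high) / 2;

        // Computing the difference first keeps every intermediate value in range.
        int safe = low + (high - low) / 2;

        System.out.println("Broken midpoint: " + broken); // prints -97483648
        System.out.println("Safe midpoint:   " + safe);   // prints 2050000000
    }
}
```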

Quick Sort vs Merge Sort: Choosing the Right Tool for the Job

Quick Sort and Merge Sort both hit O(n log n) average case, so why does it matter which you pick? Because 'average case' hides a lot of nuance that matters enormously in production.

Quick Sort is faster in practice for most general-purpose in-memory sorting because its inner loop does very little work per comparison, it accesses memory sequentially (cache-friendly), and it needs no extra allocation. Its weak spot is its worst case: a bad pivot strategy on degenerate input can hit O(n²) — and unlike Merge Sort's worst case, this can actually happen with real data.

Merge Sort is the right choice when stability matters. A stable sort preserves the original relative order of equal elements. This is crucial when you're sorting objects by one field and the objects already have a meaningful order by another field — like sorting employees by department when they're already ordered by hire date. Merge Sort is always O(n log n), no exceptions.

Java's Collections.sort (and Arrays.sort for objects) uses Tim Sort — a hybrid of Merge Sort and Insertion Sort that is stable, adaptive to partially-sorted data, and O(n log n) worst case. Use it by default. Only reach for a custom Quick Sort when you're sorting primitives, profiling shows a bottleneck, and you understand your data's distribution.

SortingComparison.java · JAVA
import java.util.Arrays;
import java.util.Comparator;

public class SortingComparison {

    // A real-world object: an employee with two sortable fields
    static class Employee {
        String name;
        String department;
        int hireOrder; // Lower = hired earlier (already sorted by this field)

        Employee(String name, String department, int hireOrder) {
            this.name = name;
            this.department = department;
            this.hireOrder = hireOrder;
        }

        @Override
        public String toString() {
            return name + "(" + department + ", hire#" + hireOrder + ")";
        }
    }

    public static void main(String[] args) {
        // Employees are already ordered by hireOrder (1 through 6)
        Employee[] employees = {
            new Employee("Alice",   "Engineering", 1),
            new Employee("Bob",     "Marketing",   2),
            new Employee("Carol",   "Engineering", 3),
            new Employee("David",   "Marketing",   4),
            new Employee("Eve",     "Engineering", 5),
            new Employee("Frank",   "Marketing",   6)
        };

        System.out.println("Original order (sorted by hire date):");
        printEmployees(employees);

        // Sort by department using Arrays.sort — which uses TimSort (STABLE) for objects
        Arrays.sort(employees, Comparator.comparing(e -> e.department));

        System.out.println("\nAfter sorting by department (TimSort — STABLE):");
        printEmployees(employees);
        // Notice: within Engineering, Alice(1) still comes before Carol(3) before Eve(5)
        // Their hire order is PRESERVED. This is what stable sort guarantees.

        System.out.println("\nWithin each department, hire order is preserved — that's stability in action.");

        // For primitives, Arrays.sort uses Dual-Pivot QuickSort (NOT stable, but very fast)
        int[] responseTimes = {120, 45, 300, 89, 12, 456, 67, 23};
        System.out.println("\nAPI response times (ms) before sort: " + Arrays.toString(responseTimes));
        Arrays.sort(responseTimes); // Dual-Pivot QuickSort under the hood
        System.out.println("API response times (ms) after sort:  " + Arrays.toString(responseTimes));
    }

    private static void printEmployees(Employee[] employees) {
        for (Employee e : employees) System.out.print(e + "  ");
        System.out.println();
    }
}
▶ Output
Original order (sorted by hire date):
Alice(Engineering, hire#1) Bob(Marketing, hire#2) Carol(Engineering, hire#3) David(Marketing, hire#4) Eve(Engineering, hire#5) Frank(Marketing, hire#6)

After sorting by department (TimSort — STABLE):
Alice(Engineering, hire#1) Carol(Engineering, hire#3) Eve(Engineering, hire#5) Bob(Marketing, hire#2) David(Marketing, hire#4) Frank(Marketing, hire#6)

Within each department, hire order is preserved — that's stability in action.

API response times (ms) before sort: [120, 45, 300, 89, 12, 456, 67, 23]
API response times (ms) after sort: [12, 23, 45, 67, 89, 120, 300, 456]
⚠️
Interview Gold: When an interviewer asks 'why would you use Merge Sort over Quick Sort?' — the answer isn't just 'guaranteed O(n log n)'. The real answer is stability. If you need equal elements to retain their original relative order (multi-key sorts, stable UI renders, database operations), Merge Sort or Tim Sort is the correct choice, full stop.
| Aspect | Quick Sort | Merge Sort |
|---|---|---|
| Average Time Complexity | O(n log n) | O(n log n) |
| Worst Case Time Complexity | O(n²) — bad pivot choice | O(n log n) — always |
| Space Complexity | O(log n) — stack frames only | O(n) — needs extra buffer array |
| Stable Sort? | No — equal elements may reorder | Yes — equal elements keep original order |
| Cache Performance | Excellent — sequential access | Moderate — merging jumps between arrays |
| Best For | Large primitive arrays, in-memory sorts | Linked lists, external sorts, stable sorting needed |
| Used In Java | Arrays.sort(int[]) — Dual-Pivot variant | Collections.sort() / Arrays.sort(Object[]) — TimSort hybrid |
| Parallelisable? | Yes — partitions are independent | Yes — but merge step requires synchronisation |

🎯 Key Takeaways

  • The partition step is the heart of Quick Sort: it places one element — the pivot — into its permanent final position in a single O(n) pass, then recurses on two independent halves. Understand partition and you understand everything.
  • Pivot choice determines real-world performance. Always-last-element works on textbook examples but degrades to O(n²) on sorted input. Use median-of-three or random pivot selection in any code that touches production data.
  • Quick Sort is fast in practice because it's in-place (O(log n) space) and cache-friendly — not just because of its O(n log n) average complexity. Merge Sort matches it asymptotically but pays an O(n) memory tax and has worse cache behaviour.
  • Quick Sort is not stable. For sorting objects where equal elements must preserve their original order — multi-key database sorts, UI rendering, any scenario with secondary sort keys — use Tim Sort (Java's default for object arrays) or Merge Sort instead.
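The random-pivot strategy mentioned in the takeaways is only a few lines of change from the basic version. A sketch (class and method names are our own) that swaps a randomly chosen element into the pivot slot before the usual Lomuto pass:

```java
import java.util.Arrays;
import java.util.concurrent.ThreadLocalRandom;

public class RandomPivotQuickSort {

    public static void quickSort(int[] numbers, int low, int high) {
        if (low >= high) return;

        // Pick a pivot uniformly at random from the sub-array and move it to the
        // last slot — this defuses the sorted-input worst case with high probability.
        int randomIndex = ThreadLocalRandom.current().nextInt(low, high + 1);
        swap(numbers, randomIndex, high);

        int p = partition(numbers, low, high);
        quickSort(numbers, low, p - 1);
        quickSort(numbers, p + 1, high);
    }

    // Unchanged Lomuto partition with last-element pivot.
    private static int partition(int[] numbers, int low, int high) {
        int pivot = numbers[high];
        int storeIndex = low - 1;
        for (int scanner = low; scanner < high; scanner++) {
            if (numbers[scanner] <= pivot) swap(numbers, ++storeIndex, scanner);
        }
        swap(numbers, storeIndex + 1, high);
        return storeIndex + 1;
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        // Already-sorted input — the classic killer for a fixed last-element pivot.
        int[] sortedInput = new int[20];
        for (int i = 0; i < sortedInput.length; i++) sortedInput[i] = i * 10;

        quickSort(sortedInput, 0, sortedInput.length - 1);
        System.out.println(Arrays.toString(sortedInput));
    }
}
```

Random pivots make adversarial worst cases astronomically unlikely, at the cost of non-deterministic behaviour; median-of-three, shown earlier, is the deterministic alternative.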

⚠ Common Mistakes to Avoid

  • Mistake 1: Always using the last element as pivot on real-world data — If the input is already sorted (or nearly sorted), every partition produces a split of size 0 and size n-1, degrading to O(n²). You won't notice this in unit tests with random data. Fix: Use median-of-three pivot selection or randomly shuffle the array before sorting. Java's Arrays.sort avoids this by using a Dual-Pivot strategy with built-in fallback to Heap Sort when recursion depth gets too high.
  • Mistake 2: Forgetting the base case or writing it as low == high — If you use == instead of >=, you miss the case where low > high (which happens when a partition produces an empty side). The recursive call will proceed with invalid indices and either return garbage or throw an ArrayIndexOutOfBoundsException. Fix: Always write if (low >= high) return; — this handles both the single-element case and the empty sub-array case correctly.
  • Mistake 3: Applying Quick Sort when you need a stable sort — Quick Sort is not stable. Sorting a list of transactions by amount when they're already ordered by timestamp will scramble transactions with the same amount. This is a silent bug — no exception is thrown, but the output is wrong. Fix: Use Collections.sort() or Arrays.sort() on object arrays in Java (both use Tim Sort, which is stable), or explicitly choose Merge Sort when element order among equals matters.

Interview Questions on This Topic

  • Q: What is the worst-case time complexity of Quick Sort, and under what real input conditions does it occur? How do production implementations prevent it?
  • Q: Quick Sort is not a stable sort. What does that mean in practice, and can you give a concrete example where using Quick Sort instead of a stable sort would produce a silently incorrect result?
  • Q: If you were implementing Quick Sort on a linked list instead of an array, what changes would you make and why? (Hint: think about what makes Quick Sort fast on arrays and whether those properties still apply.)

Frequently Asked Questions

Is Quick Sort faster than Merge Sort?

In practice, yes — for most in-memory sorting on primitive types. Quick Sort has better cache locality and requires no extra memory allocation, so its constant factors are smaller even though both algorithms are O(n log n) average case. However, Merge Sort has a guaranteed O(n log n) worst case and is stable, making it the better choice when either of those properties matters.
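A rough (and deliberately unscientific) way to feel the difference is to time Arrays.sort on a primitive int[] (Dual-Pivot Quick Sort) versus the same values boxed as Integer[] (Tim Sort). Results vary by machine and JVM warm-up, so treat this as a sketch, not a benchmark:

```java
import java.util.Arrays;
import java.util.Random;

public class SortTimingSketch {
    public static void main(String[] args) {
        int n = 2_000_000;
        int[] primitives = new Random(123).ints(n).toArray();
        Integer[] boxed = Arrays.stream(primitives).boxed().toArray(Integer[]::new);

        long t0 = System.nanoTime();
        Arrays.sort(primitives);            // Dual-Pivot Quick Sort
        long quickSortMs = (System.nanoTime() - t0) / 1_000_000;

        long t1 = System.nanoTime();
        Arrays.sort(boxed);                 // Tim Sort (stable, works on objects)
        long timSortMs = (System.nanoTime() - t1) / 1_000_000;

        System.out.println("int[]     (Dual-Pivot Quick Sort): " + quickSortMs + " ms");
        System.out.println("Integer[] (Tim Sort, boxed):       " + timSortMs + " ms");
    }
}
```

Note that this conflates two effects — the algorithm and the boxed representation (pointer chasing, worse cache locality) — but that combination is exactly why primitive sorting is faster in real Java programs.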

Does Java use Quick Sort for Arrays.sort()?

Java uses Dual-Pivot Quick Sort for primitive arrays (int[], double[], etc.) — a highly optimised variant introduced in Java 7 that uses two pivots to create three partitions. For object arrays (Integer[], String[], etc.) and Collections.sort(), Java uses Tim Sort, a hybrid of Merge Sort and Insertion Sort that is stable and adaptive to partially-sorted data.

Why is Quick Sort's worst case O(n²) if it's supposed to be O(n log n)?

The O(n log n) figure is the average case, not the guaranteed case. Worst case happens when every pivot choice is either the minimum or maximum of the remaining sub-array — producing maximally unbalanced splits (sizes 0 and n-1) at every step. This means n recursive levels instead of log n, giving O(n) work at each level and O(n²) total. The fix is smarter pivot selection: median-of-three, random pivot, or the Dual-Pivot strategy used in Java's standard library.

TheCodeForge Editorial Team

Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.
