Senior · 3 min · March 30, 2026

Bubble Sort Time Complexity — Why 50K Records Crawls

Naive bubble sort runs 2.5 billion comparisons on 50,000 records.

Naren · Founder
Plain-English first. Then code. Then the interview question.
Quick Answer
  • Bubble sort: O(n²) average/worst, O(n) best with early exit
  • O(n²) comes from n passes × n/2 comparisons each
  • Optimisation: swapped flag lets it exit after one pass if already sorted
  • Practical limit: ~10,000 elements before slowdown becomes noticeable
  • Production reality: Python's sorted() beats it by 1000x at n=100,000
Plain-English First

Bubble sort compares adjacent pairs and swaps them if they're out of order, making multiple passes until nothing swaps. Heavy elements sink, light ones rise. The time complexity is O(n²) because n passes × n comparisons per pass. The smart optimisation: if a complete pass has no swaps, the array is sorted — stop. This makes the best case (already sorted) O(n) instead of O(n²).

Bubble sort appears in every algorithms course and almost no production codebase. Its value is pedagogical: it's the first algorithm that makes O(n²) complexity intuitive. The optimised variant with early exit is worth implementing cleanly because it demonstrates the most fundamental algorithm optimisation — detect the done condition early and stop.

Bubble Sort: Naive and Optimised Implementations

Bubble sort exists in two flavours. The naive version always does n² comparisons, even if the array is already sorted. That's wasteful. The optimised version adds a swapped flag — if a full pass completes without any swaps, the array is sorted, and you can stop early. This is the difference between O(n²) best case and O(n). The optimisation costs one boolean assignment per pass — negligible overhead for the potential gain. You'd be surprised how often this simple trick is omitted in student implementations. In production code, you'd never write this, but the lesson of detecting a stable state and exiting applies everywhere — from HTTP polling loops to data sync processes.

bubble_sort.py
from typing import List

def bubble_sort_naive(arr: List[int]) -> List[int]:
    """
    Naive bubble sort — always does n² comparisons.
    Time:  O(n²) all cases
    Space: O(1)
    """
    arr = arr.copy()
    n = len(arr)
    for i in range(n):
        for j in range(n - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr


def bubble_sort_optimised(arr: List[int]) -> List[int]:
    """
    Optimised bubble sort with two improvements:
    1. After pass i, the last i elements are in place — skip them.
    2. If no swap in a full pass, array is sorted — exit early.

    Time:  O(n) best case (already sorted)
           O(n²) average and worst case (random / reverse sorted)
    Space: O(1)
    """
    arr = arr.copy()
    n = len(arr)
    for i in range(n):
        swapped = False
        for j in range(0, n - i - 1):   # Each pass is shorter
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:   # No swaps → already sorted → done
            break
    return arr


# Already sorted — O(n): only 1 pass, exits immediately
print(bubble_sort_optimised([1, 2, 3, 4, 5]))
# [1, 2, 3, 4, 5]  — 1 pass

# Random — O(n²): multiple passes needed
print(bubble_sort_optimised([64, 34, 25, 12, 22, 11, 90]))
# [11, 12, 22, 25, 34, 64, 90]

# Reverse sorted — worst case O(n²): maximum swaps every pass
print(bubble_sort_optimised([5, 4, 3, 2, 1]))
# [1, 2, 3, 4, 5]  — all n passes required
Output
[1, 2, 3, 4, 5]
[11, 12, 22, 25, 34, 64, 90]
[1, 2, 3, 4, 5]
# Comparison of approaches:
# Input: [1,2,3,4,5] (already sorted, n=5)
# Naive: n*(n-1) = 20 comparisons
# Early-exit optimised: n-1 = 4 comparisons (stops after 1 pass)
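Those counts can be verified empirically. Here's a small sketch that instruments bubble sort with a comparison counter (the count_comparisons helper is mine, for illustration, not part of the article's code):

```python
def count_comparisons(arr, early_exit=True):
    """Bubble sort that returns how many comparisons it made.
    (Hypothetical instrumentation helper, for illustration only.)"""
    arr = list(arr)
    n = len(arr)
    comparisons = 0
    for i in range(n):
        swapped = False
        # Naive mode scans the full array every pass; optimised mode shrinks it
        for j in range(n - i - 1 if early_exit else n - 1):
            comparisons += 1
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if early_exit and not swapped:
            break
    return comparisons

print(count_comparisons([1, 2, 3, 4, 5], early_exit=False))  # 20, i.e. n*(n-1)
print(count_comparisons([1, 2, 3, 4, 5], early_exit=True))   # 4, i.e. n-1, one pass
```

On reverse-sorted input, the optimised version still does 4+3+2+1 = 10 comparisons: early exit only helps when the data is already (nearly) in order.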
Mental Model: The Early Exit Pattern
  • The swapped flag tells you whether the last pass did any useful work — without it, you always do the full n passes.
  • Early exit is a trade-off: you pay a tiny boolean check each pass to potentially skip all remaining passes.
  • In bubble sort, the payoff is enormous for nearly sorted data — which is common in real-world datasets (e.g., maintaining a sorted list with a few new insertions).
Production Insight
If you're sorting in production and see a custom sort, it's almost always a mistake.
Python's sorted() uses Timsort — O(n log n) worst-case, O(n) best-case on nearly sorted data.
The only valid reason to implement bubble sort is teaching or embedded systems with extreme memory constraints where no library sort exists.
Key Takeaway
Optimised bubble sort costs one boolean per pass.
Without it, best-case is O(n²). With it, O(n).
That's the cheapest performance gain you'll ever get.
When to Use Which Implementation
If: You're writing code for production use
Use: The language's built-in sort. Do not implement bubble sort.
If: You're teaching or learning algorithm fundamentals
Use: Implement both naive and optimised to see the difference in action.
If: You're on a microcontroller with no dynamic allocation and < 256 bytes of RAM
Use: Optimised bubble sort with in-place swap might work — but only if n < 50.

Why O(n²) Matters: The Performance Cliff

O(n²) isn't just a theoretical classification. It's a performance cliff that hits hard as n grows. For n=10, bubble sort does ~50 comparisons. For n=1,000, that's 500,000. For n=100,000, it's 5 billion. A comparison is fast — but 5 billion of them, even at 1 nanosecond each, takes 5 seconds. Real-world datasets often have millions of items. That's where O(n log n) saves you: n log₂ n for n=1 million is ~20 million comparisons, not 500 billion. The ratio is 25,000x. That's the difference between a job finishing in 0.1 seconds and taking 40 minutes. This is why bubble sort is a textbook example — and textbook examples stay out of production.
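The arithmetic above can be reproduced directly. A quick sketch (function names are mine, for illustration):

```python
import math

def quadratic_comparisons(n):
    """Approximate comparisons for bubble sort: n²/2."""
    return n * n // 2

def nlogn_comparisons(n):
    """Approximate comparisons for an O(n log n) sort."""
    return int(n * math.log2(n))

# Tabulate the gap at growing input sizes
for n in (10, 1_000, 100_000, 1_000_000):
    q, m = quadratic_comparisons(n), nlogn_comparisons(n)
    print(f"n={n:>9,}: n²/2 = {q:>15,}  n·log2(n) ≈ {m:>12,}  ratio ≈ {q // m:>6,}x")
```

At n=1,000,000 the printed ratio lands at roughly 25,000x, matching the figure in the text.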

Production Insight
Spotting O(n²) in production isn't hard: benchmark runtime against input size.
If doubling the input quadruples the runtime, you've found a quadratic algorithm.
Profile with tools like cProfile, pprof, or async-profiler to pinpoint the exact function.
Key Takeaway
O(n²) means work grows as the square of input size.
At n=10, it's fine. At n=10,000, it's a crisis.
Always benchmark at production-scale data, not toy examples.

Bubble Sort vs Insertion Sort: The Senior Engineer's Choice

Senior engineers don't just know that bubble sort is slow — they know which quadratic algorithm to use if they're forced into a small-n situation. Insertion sort, despite also being O(n²) worst-case, consistently beats bubble sort in practice. It does fewer swaps (shifting vs swapping), works better on nearly sorted data (O(n) best-case), and is stable. For small arrays (n < 50), insertion sort can even outperform quicksort due to lower overhead. That's why Python's Timsort uses insertion sort for small runs. Bubble sort has no such advantage. It's always the worst of the quadratic sorts. The only case where bubble sort can theoretically win is when swapping two elements is extremely cheap (e.g., a tiny struct that fits in a register) — but that's academic.
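For reference, a minimal insertion sort sketch showing the shift-instead-of-swap pattern described above:

```python
from typing import List

def insertion_sort(arr: List[int]) -> List[int]:
    """Insertion sort: shifts elements right instead of swapping pairs.
    Time:  O(n) best case (already sorted), O(n²) average and worst.
    Space: O(1). Stable."""
    arr = arr.copy()
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]   # shift right: one write, not a three-assignment swap
            j -= 1
        arr[j + 1] = key          # drop the key into its slot
    return arr

print(insertion_sort([64, 34, 25, 12, 22, 11, 90]))
# [11, 12, 22, 25, 34, 64, 90]
```

Each element is written once per position it moves, where bubble sort's swap costs three assignments — that's the source of the practical speed gap.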

Real Benchmark
On a modern CPU, sorting 10,000 random integers in Python: bubble sort (optimised) takes ~2.5 seconds, insertion sort takes ~0.8 seconds, Timsort takes ~0.001 seconds. The quadratic overhead dominates.
Production Insight
If you must hand-roll a sort for a small array (e.g., sorting 5 items in an embedded system), use insertion sort.
Never use bubble sort. It's measurably slower on real hardware due to its swap-heavy pattern.
Timsort is the gold standard — it's adaptive, stable, and handles real-world patterns (partially sorted data) efficiently.
Key Takeaway
Among O(n²) sorts, insertion sort is almost always better than bubble sort.
Bubble sort exists only as a teaching tool.
In production, Timsort beats everything else for general-purpose sorting.

The Stability Trade-off: When Bubble Sort Beats Quick Sort

Bubble sort is stable — equal elements maintain their relative order. That's a property you sometimes need: sorting by name, then by date, requires a stable second sort. Quick sort, in its classic form, is not stable. But here's the catch: you don't have to choose between stability and speed. Timsort and merge sort are both stable and O(n log n). Stability alone is not a reason to pick bubble sort. If you need a stable sort, use the standard library's stable sort (e.g., Python's sorted(), Java's Collections.sort() which uses Timsort, or C++'s std::stable_sort). Bubble sort's stability is a nice trivia fact, not a production advantage.
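A quick demonstration of what stability buys you, using Python's built-in stable sort (the sample records are made up for illustration):

```python
# Records already ordered by name; a stable sort by the second field
# preserves the name order within equal keys.
records = [("alice", 2), ("bob", 1), ("carol", 2), ("dave", 1)]

by_date = sorted(records, key=lambda r: r[1])  # sorted() is Timsort: stable
print(by_date)
# [('bob', 1), ('dave', 1), ('alice', 2), ('carol', 2)]
```

Within each key (1 and 2), the names stay in their original alphabetical order — the same guarantee bubble sort gives, at O(n log n) instead of O(n²).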

Production Insight
Stability matters in data pipelines where order of equal keys must be preserved after multiple sort passes.
But modern stable sorts (Timsort) handle this at O(n log n).
If you see bubble sort justified by stability, challenge it — there's almost always a better alternative.
Key Takeaway
Stability is a property, not a feature.
Bubble sort is stable, but so are Timsort and merge sort — and they're orders of magnitude faster.
Never sacrifice performance for stability when the fast alternative is also stable.

When You Might Actually See Bubble Sort in Production (and How to Fix It)

Bubble sort doesn't belong in production — yet it sneaks in. Common paths: a junior dev implementing a coding challenge solution directly into the codebase, a refactoring that missed a custom sort, or legacy code from before the language added a stable sort (think early Java versions). The fix is always the same: replace with the language's built-in sort. If you can't because of an unusual comparison rule, at least replace with insertion sort. Also add a linter rule: no nested loops with adjacent swaps. Some teams even run performance regression tests with a 'slow sort detector' that alerts if a sorted output takes more than 10x a known Timsort baseline on a fixed small dataset.

Production Insight
Code reviews catch most manual sort implementations — but not all.
Static analysis can detect patterns like for i in range(n): for j in range(n): if arr[j] > arr[j+1]: and flag them.
Set a CI job that times sorting a 10,000-element array with your code and the built-in sort — if it's > 2x slower, fail the build.
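That CI gate could be sketched like this — the helper name, thresholds, and noise floor are illustrative choices, not a real tool:

```python
import random
import time

def assert_sort_speed(custom_sort, n=10_000, factor=2.0):
    """Fail if custom_sort is more than `factor` times slower than the
    built-in sorted() on n random integers. Illustrative CI-gate sketch."""
    data = [random.randint(0, 1_000_000) for _ in range(n)]

    t0 = time.perf_counter()
    sorted(data)
    baseline = time.perf_counter() - t0

    t0 = time.perf_counter()
    custom_sort(data)
    elapsed = time.perf_counter() - t0

    # Floor the baseline so sub-millisecond timer noise can't fail the check
    limit = factor * max(baseline, 1e-3)
    assert elapsed <= limit, f"custom sort {elapsed:.3f}s vs built-in {baseline:.3f}s"

assert_sort_speed(sorted)  # the built-in trivially passes its own gate
```

An O(n²) sort at n=10,000 would blow past the limit by orders of magnitude and fail the build.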
Key Takeaway
Bubble sort in production is a known smell.
Catch it with linters, code reviews, and performance baselines.
Replace with built-in sorts: they're faster, safer, and already debugged.
Production incident · Post-mortem · Severity: high

The Naive Bubble Sort That Slowed a Data Pipeline by 30 Minutes

Symptom
A data pipeline that usually completed in under 2 minutes started taking 45+ minutes. CPU was pinned at 100% on a single core for the entire duration.
Assumption
The team assumed the bottleneck was network I/O to the database because the job involved sorting a moderate dataset after a join.
Root cause
A developer had written a custom bubble sort (without early exit) to sort 50,000 records. Naive bubble sort does O(n²) comparisons: 2.5 billion comparisons for n=50,000. Python's Timsort would take ~0.1 seconds.
Fix
Replaced the custom sort with sorted() (Timsort). Runtime dropped to 1.8 seconds. Added a linting rule to ban manual sorting implementations for production code.
Key lesson
  • Never write your own sort in production — built-in sorts are faster, more tested, and have O(n log n) worst-case.
  • If you must custom-sort (rare), always use the optimised version with early exit and a swap tracking flag.
  • Set performance tests with realistic data sizes to catch O(n²) algorithms before they hit production.
Production debug guide: How to identify that an O(n²) sort is ruining your performance (3 entries)
Symptom · 01
Single-core CPU at 100% for sustained period; job time scales quadratically with input size.
Fix
Profile the job with Python's cProfile or Java's async-profiler. Look for a custom sort function at the top of the CPU flame graph.
Symptom · 02
Pipeline slows dramatically as data volume grows (e.g., 1000 items → 1s, 10,000 → 100s).
Fix
Plot runtime vs. input size on a log-log scale. A slope of ~2 confirms O(n²). Then inspect the sorting code.
Symptom · 03
Database query is fast but overall job is slow; logs show heavy sorting in application layer.
Fix
Check for any sort() implementation that does not delegate to the standard library. Grep for "for i in range", "for j in range", and nested loops with conditional swaps.
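The log-log slope check from Symptom 02 needs only two measurements. A sketch (the helper name is mine, for illustration):

```python
import math

def growth_exponent(n1, t1, n2, t2):
    """Slope of runtime vs input size on a log-log plot.
    ~1 suggests linear, ~2 quadratic."""
    return math.log(t2 / t1) / math.log(n2 / n1)

# Using the numbers from Symptom 02: 1,000 items → 1s, 10,000 items → 100s
print(round(growth_exponent(1_000, 1.0, 10_000, 100.0), 2))  # 2.0 → quadratic
```

Two data points are enough for a first diagnosis; more sizes give a more trustworthy fit.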
Bubble Sort Complexity & Properties
Case                  | Time  | When It Occurs
Best case (optimised) | O(n)  | Array already sorted — early exit after 1 pass
Best case (naive)     | O(n²) | No early exit regardless of input
Average case          | O(n²) | Random order
Worst case            | O(n²) | Reverse sorted — maximum swaps every pass
Space complexity      | O(1)  | Sorts in-place — no auxiliary array
Stable?               | Yes   | Equal elements are never swapped

Key takeaways

1. Bubble sort: O(n²) worst and average, O(n) best case (optimised version only). O(1) space — sorts in-place.
2. The early-exit optimisation (swapped flag) is what enables the O(n) best case. Without it, best case is also O(n²).
3. Bubble sort is stable — equal elements maintain their relative order — but so are Timsort and merge sort, which are also O(n log n).
4. Never use bubble sort on more than a few hundred elements in production. Language built-ins are always faster.
5. Insertion sort consistently outperforms bubble sort among quadratic sorts — know the difference.
6. Spot O(n²) in production by checking runtime vs input size: double input → quadruple runtime.

Common mistakes to avoid

4 patterns
Mistake 1: Using the naive O(n²) best-case implementation when the optimised version is the same code complexity

Symptom
The algorithm always runs in O(n²), even on already sorted data — no performance gain from early exit.
Fix
Add a swapped boolean flag inside the outer loop, set it to False before each pass, and break out of the outer loop if it remains False after the inner loop completes.

Mistake 2: Applying bubble sort to large inputs (e.g., > 10,000 elements)

Symptom
Runtime grows quadratically — a 100,000-element sort takes billions of comparisons, typically minutes instead of milliseconds.
Fix
Replace with Python's sorted() (Timsort, O(n log n)), Java's Arrays.sort() (dual-pivot quicksort for primitives, Timsort for objects), or the appropriate built-in sort for your language.

Mistake 3: Thinking bubble sort's stability makes it a viable choice for production sorting

Symptom
Developers justify using bubble sort because it's stable, ignoring that Timsort and merge sort are also stable and O(n log n).
Fix
Use the language's stable sort (e.g., Python's sorted(), Java's Collections.sort(), C++'s std::stable_sort). Stability alone does not justify O(n²).

Mistake 4: Confusing bubble sort with insertion sort in terms of performance

Symptom
Assuming bubble sort is 'good enough' because insertion sort also has O(n²) worst case.
Fix
Benchmark both on realistic data. Insertion sort consistently outperforms bubble sort by 2-4x on average because shifting does fewer writes than swapping. Avoid bubble sort entirely.
INTERVIEW PREP · PRACTICE MODE

Interview Questions on This Topic

Q01 · JUNIOR
What is bubble sort's time complexity in best, average, and worst case?
Q02 · SENIOR
How does the early-exit optimisation change bubble sort's best-case comp...
Q03 · SENIOR
Compare bubble sort to insertion sort — when would you prefer one over t...
Q04 · SENIOR
Can bubble sort be used in production code? Explain.

Q01 of 04 · JUNIOR

What is bubble sort's time complexity in best, average, and worst case?

ANSWER
Best case: O(n) with the optimised version (early exit), O(n²) without. Average and worst case: O(n²) regardless. Space: O(1), in-place.
FAQ · 4 QUESTIONS

Frequently Asked Questions

01
What is the time complexity of bubble sort?
02
Is bubble sort stable?
03
Why is bubble sort considered bad for production?
04
Can I make bubble sort faster by parallelising it?
That's Complexity Analysis. Try the examples if you haven't.