
Jump Search Algorithm — Block-Based Searching in O(√n)

📍 Part of: Searching → Topic 5 of 8
Learn the Jump Search algorithm.
🧑‍💻 Beginner-friendly — no prior DSA experience needed
In this tutorial, you'll learn:
  • Why the optimal jump size is √n, derived by minimising n/k + k
  • Where O(√n) sits between O(n) linear search and O(log n) binary search
  • When jump search actually beats binary search (expensive backward seeks), and why for standard RAM arrays it usually doesn't
✦ Plain-English analogy ✦ Real code with output ✦ Interview questions
⚡ Quick Answer
Linear search checks every element — O(n). Binary search jumps to the middle of the remaining range each time — O(log n). Jump search is the middle ground: jump ahead in fixed blocks of √n elements, then linearly scan the block where the target must lie, if present. It's faster than linear search, simpler than binary search, and works well when seeking backwards is expensive.

Jump search is primarily of historical and theoretical interest today. In the era of magnetic tape and sequential storage devices, 'jumping forward' was cheap but 'seeking backward' was expensive — requiring physical rewinding. Jump search's forward-only movement was a genuine practical advantage.

For in-memory arrays with random access in 2026, binary search is almost always better. But the algorithm demonstrates an important principle: match the data structure's access cost model to your algorithm. O(√n) is the optimal solution when forward-jumps are O(1) but backward-seeks are O(n). Understanding the access cost model is a skill that transfers to B-tree design, external merge sort, and streaming algorithms.

The Optimal Jump Size

Why √n? With jump size k:
  • Number of jumps to find the block: n/k
  • Linear scan within the block: up to k
  • Total: n/k + k

Minimise f(k) = n/k + k: the derivative f′(k) = −n/k² + 1 equals zero at k = √n.

At k = √n: √n + √n = 2√n steps total → O(√n).
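The minimum can also be checked numerically. A quick sketch (the helper name `total_steps` is ours, not from the article):

```python
def total_steps(n: int, k: int) -> float:
    # Worst-case work for jump size k: n/k forward jumps + up to k scan steps
    return n / k + k

n = 10_000
best_k = min(range(1, n + 1), key=lambda k: total_steps(n, k))
print(best_k)                  # 100, i.e. √10000
print(total_steps(n, best_k))  # 200.0, i.e. 2√n
```

Trying every jump size from 1 to n confirms the calculus: the cost curve bottoms out exactly where the two phase costs are equal.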

jump_search.py (Python)
import math

def jump_search(arr: list, target) -> int:
    n = len(arr)
    if n == 0:  # guard: sqrt(0) would make the first index check crash
        return -1
    step = int(math.sqrt(n))
    prev = 0
    # Phase 1: jump forward in √n-sized blocks until the end of the
    # current block reaches or passes the target.
    while arr[min(step, n) - 1] < target:
        prev = step
        step += int(math.sqrt(n))
        if prev >= n:
            return -1
    # Phase 2: linear scan within the block [prev, min(step, n)).
    while arr[prev] < target:
        prev += 1
        if prev == min(step, n):
            return -1
    return prev if arr[prev] == target else -1

arr = [0,1,1,2,3,5,8,13,21,34,55,89,144,233,377]
print(jump_search(arr, 55))   # 10
print(jump_search(arr, 100))  # -1
▶ Output
10
-1

Comparison with Other Searches

🔥 When to use Jump Search
Jump search is useful when: (1) backward movement is expensive (sequential media), (2) the array is sorted — a hard requirement, and (3) you want something marginally simpler to implement than binary search. For random-access arrays, binary search is almost always better.

Complexity Analysis

Time: O(√n) — n/k forward jumps + up to k linear scan steps, minimised at k = √n
Space: O(1) — no extra space
Prerequisite: the array must be sorted

Worst case: Target is just before the last jump → √n jumps + √n linear steps = 2√n
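The 2√n bound can be observed directly by instrumenting the algorithm with a comparison counter. A sketch (the counter and the `jump_search_counted` name are our additions, not the article's code):

```python
import math

def jump_search_counted(arr, target):
    # Jump search that also reports how many element comparisons it made.
    n = len(arr)
    step = int(math.sqrt(n))
    prev = 0
    comparisons = 0
    while arr[min(step, n) - 1] < target:
        comparisons += 1
        prev = step
        step += int(math.sqrt(n))
        if prev >= n:
            return -1, comparisons
    comparisons += 1  # the comparison that ended the jump phase
    while arr[prev] < target:
        comparisons += 1
        prev += 1
        if prev == min(step, n):
            return -1, comparisons
    comparisons += 1  # the comparison that ended the scan phase
    idx = prev if arr[prev] == target else -1
    return idx, comparisons

n = 10_000  # √n = 100
arr = list(range(n))
# Near-worst case: target sits at the far end of the last full block,
# forcing ~√n jumps followed by ~√n scan steps.
idx, c = jump_search_counted(arr, n - 2)
print(idx, c)  # 9998 199 — about 2√n = 200 comparisons
```

For n = 10,000 the count lands within a whisker of 2√n = 200, matching the analysis above.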

Algorithm      | Time             | Space | Requires Sort | Notes
Linear Search  | O(n)             | O(1)  | No            | Works on any array
Jump Search    | O(√n)            | O(1)  | Yes           | Forward-only movement
Binary Search  | O(log n)         | O(1)  | Yes           | Optimal for sorted arrays
Interpolation  | O(log log n) avg | O(1)  | Yes           | Uniform distribution only

🎯 Key Takeaways

  • Optimal jump size is √n — minimises forward_jumps(n/k) + backward_scan(k). Setting derivative to zero gives k = √n, total 2√n steps.
  • O(√n) — between O(n) linear and O(log n) binary. Only preferable over binary search when backward movement is expensive (sequential storage, tape, certain streams).
  • For standard RAM arrays: always use binary search. Jump search's advantage is real only when your storage model penalises backward seeks.
  • The derivation of √n as optimal jump size is a useful template: many 'two-phase' algorithms have an optimal breakpoint found by equating the two phase costs.
  • Know this for interviews — it demonstrates understanding that algorithm choice depends on the data structure, not just asymptotic complexity.

Interview Questions on This Topic

  • Q: Why is √n the optimal jump size for jump search?
  • Q: When would you prefer jump search over binary search?
  • Q: What is the worst-case scenario for jump search?
  • Q: Derive the time complexity of jump search from first principles.

Frequently Asked Questions

Is jump search ever used in practice?

Rarely for in-memory arrays — binary search is faster and equally simple. Jump search has historical relevance for external storage (magnetic tape) where forward seeks are free but backward seeks are expensive.

Naren · Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.
