Jump Search Algorithm — Block-Based Searching in O(√n)
Jump search is primarily of historical and theoretical interest today. In the era of magnetic tape and sequential storage devices, 'jumping forward' was cheap but 'seeking backward' was expensive — requiring physical rewinding. Jump search's forward-only movement was a genuine practical advantage.
For in-memory arrays with random access in 2026, binary search is almost always better. But the algorithm demonstrates an important principle: match your algorithm to the data structure's access cost model. Jump search's O(√n) is optimal when a forward jump costs O(1) but a backward seek effectively costs O(n). Reasoning about access cost models is a skill that transfers to B-tree design, external merge sort, and streaming algorithms.
The Optimal Jump Size
Why √n? If we use jump size k:

- Number of jumps to find the target's block: n/k
- Linear scan within that block: at most k
- Total: n/k + k
Minimise n/k + k: the derivative with respect to k is -n/k² + 1; setting it to zero gives k = √n.

At k = √n the total is √n + √n = 2√n steps → O(√n).
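As a sanity check on the calculus, a brute-force minimisation over all integer jump sizes (not from the original article) lands on the same answer:

```python
import math

def total_steps(n: int, k: int) -> float:
    # Worst-case work for jump size k: n/k forward jumps + k-step scan
    return n / k + k

n = 10_000
best_k = min(range(1, n + 1), key=lambda k: total_steps(n, k))
print(best_k)                  # 100 == math.isqrt(10_000)
print(total_steps(n, best_k))  # 200.0 == 2 * sqrt(n)
```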
```python
import math

def jump_search(arr: list, target) -> int:
    n = len(arr)
    if n == 0:  # guard: the original crashed on an empty array
        return -1
    step = math.isqrt(n)  # optimal jump size: floor(sqrt(n))
    prev = 0
    # Jump forward in blocks until the block's last element >= target
    while arr[min(step, n) - 1] < target:
        prev = step
        step += math.isqrt(n)
        if prev >= n:
            return -1
    # Linear scan within the block [prev, min(step, n))
    while arr[prev] < target:
        prev += 1
        if prev == min(step, n):
            return -1
    return prev if arr[prev] == target else -1

arr = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377]
print(jump_search(arr, 55))   # 10
print(jump_search(arr, 100))  # -1
```
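For contrast, the binary search that this article recommends for in-memory arrays can be sketched in a few lines using the standard library's `bisect` module (this helper is my own, not from the article):

```python
import bisect

def binary_search(arr: list, target) -> int:
    # bisect_left finds the leftmost insertion point in O(log n)
    i = bisect.bisect_left(arr, target)
    return i if i < len(arr) and arr[i] == target else -1

arr = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377]
print(binary_search(arr, 55))   # 10
print(binary_search(arr, 100))  # -1
```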
Complexity Analysis

- Time: O(√n) — n/k forward jumps plus a k-step linear scan, minimised at k = √n
- Space: O(1) — no auxiliary storage
- Prerequisite: the array must be sorted

Worst case: the target sits just before the last jump boundary → √n jumps + √n linear steps = 2√n comparisons.

Comparison with Other Searches

| Algorithm | Time | Space | Requires Sort | Notes |
|---|---|---|---|---|
| Linear Search | O(n) | O(1) | No | Works on any array |
| Jump Search | O(√n) | O(1) | Yes | Forward-only movement |
| Binary Search | O(log n) | O(1) | Yes | Optimal for sorted arrays |
| Interpolation Search | O(log log n) avg | O(1) | Yes | Uniform distribution only |
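To make the 2√n worst case concrete, here is an instrumented copy of jump search (a sketch of my own, not part of the original article) that counts element comparisons on a 10,000-element array:

```python
import math

def jump_search_comparisons(arr: list, target) -> int:
    # Returns the number of element comparisons made, not the index
    n = len(arr)
    step, prev, comps = math.isqrt(n), 0, 0
    # Phase 1: forward jumps
    while True:
        comps += 1
        if arr[min(step, n) - 1] >= target:
            break
        prev = step
        step += math.isqrt(n)
        if prev >= n:
            return comps
    # Phase 2: linear scan inside the block
    while True:
        comps += 1
        if arr[prev] >= target:
            break
        prev += 1
        if prev == min(step, n):
            return comps
    return comps + 1  # final equality check

arr = list(range(10_000))
# Worst case: target just before the last jump boundary
print(jump_search_comparisons(arr, 9_998))  # 200 == 2 * sqrt(10_000)
```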
🎯 Key Takeaways
- Optimal jump size is √n — minimises forward_jumps(n/k) + backward_scan(k). Setting derivative to zero gives k = √n, total 2√n steps.
- O(√n) — between O(n) linear and O(log n) binary. Only preferable over binary search when backward movement is expensive (sequential storage, tape, certain streams).
- For standard RAM arrays: always use binary search. Jump search's advantage is real only when your storage model penalises backward seeks.
- The derivation of √n as optimal jump size is a useful template: many 'two-phase' algorithms have an optimal breakpoint found by equating the two phase costs.
- Know this for interviews — it demonstrates understanding that algorithm choice depends on the data structure, not just asymptotic complexity.
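The "equate the two phase costs" template in the takeaways generalises: for a total cost of a·n/k + b·k, the same derivative argument gives k = √(a·n/b). A small illustrative helper (the function and parameter names are my own):

```python
import math

def optimal_breakpoint(a: float, b: float, n: int) -> float:
    # Minimise a*n/k + b*k: derivative -a*n/k**2 + b = 0  =>  k = sqrt(a*n/b)
    return math.sqrt(a * n / b)

# Jump search: both phases cost 1 per step, so k = sqrt(n)
print(optimal_breakpoint(1, 1, 10_000))  # 100.0
```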
Interview Questions on This Topic
- Q: Why is √n the optimal jump size for jump search?
- Q: When would you prefer jump search over binary search?
- Q: What is the worst-case scenario for jump search?
- Q: Derive the time complexity of jump search from first principles.
Frequently Asked Questions
Is jump search ever used in practice?
Rarely for in-memory arrays — binary search is faster and equally simple. Jump search has historical relevance for external storage (magnetic tape) where forward seeks are free but backward seeks are expensive.
Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.