
Bisection Method — Midpoint Overflow at 1e308 Boundaries

Using (a+b)/2 overflows to Infinity when the bracket endpoints are near 1e308.

In this tutorial, you'll learn
  • Bisection: Intermediate Value Theorem guarantees a root when f(a)×f(b)<0.
  • Linear convergence: one correct binary digit per iteration — ~34 iterations for 10^-10 precision.
  • No derivative required — works for any continuous function.
Quick Answer
  • Bisection halves the search interval each step using sign change (IVT)
  • Requires f(a) * f(b) < 0 — no bracket, no guarantee
  • Linear convergence: 1 bit of precision per iteration (~34 steps for 10^-10)
  • Floating-point midpoint can overflow or leave the bracket — use a + (b-a)/2, not (a+b)/2
  • Production insight: scipy.optimize.brentq is the real workhorse
  • Biggest mistake: assuming it finds all roots (it finds only one per bracket)
Production Incident

Midpoint Overflow and Sign Error in Embedded Systems

A flight controller used bisection to compute inverse kinematics. An interval with endpoints near the double-precision limit caused midpoint overflow, flipping the sign of f(mid) and selecting the wrong half.

Symptom: The root converged to a value outside the expected range, causing unstable control outputs.
Assumption: The engineers assumed (a+b)/2 always lands inside [a,b] for any floating-point values.
Root cause: When a and b are both large in magnitude with the same sign (~1e308), a+b overflows to Infinity, so (a+b)/2 = Infinity, breaking the bracket logic.
Fix: Compute the midpoint as a + (b-a)/2 to avoid overflow. Also added a sign-strictness check before each iteration.
Key Lesson
  • Midpoint computation: always use a + (b-a)/2 when the endpoints may be large in magnitude.
  • Never assume integer arithmetic rules carry over to floating-point.
  • Validate the bracket at the start — if f(a)*f(b) > 0, no root is guaranteed.
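A minimal demonstration of the overflow, with values chosen near the float64 maximum (~1.8e308):

midpoint_overflow.py · PYTHON
a, b = 1e308, 1.5e308          # both near the float64 maximum (~1.8e308)
print((a + b) / 2)             # a+b overflows to Infinity before the division
print(a + (b - a) / 2)         # overflow-safe form
▶ Output
inf
1.25e+308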
Production Debug Guide

Symptom → Action for when bisection doesn't behave as expected

Symptom: Converges to a value where f(x) is not near zero.
Action: Check the bracket: f(a)*f(b) < 0 must hold. If the signs are opposite but the function is discontinuous, the IVT doesn't apply. Plot the function to verify continuity.

Symptom: Iterations exceed max_iter without converging.
Action: Compute the required iterations: n >= log2((b-a)/tol) — see the sketch after this list. If that matches max_iter, the tolerance may be too tight. If not, check for floating-point issues such as a repeated midpoint due to rounding.

Symptom: Bisection terminates early with a 'root' far from the true root.
Action: Check whether the f(mid) == 0 test is actually detecting a zero crossing. Exact floating-point equality is rare. Use abs(f(mid)) < epsilon instead.

Symptom: Performance is too slow for real-time systems.
Action: Use a hybrid approach: run bisection for ~20 iterations to narrow the interval, then switch to Newton-Raphson or the secant method. Or use Brent's method directly.
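A quick sanity check for the iteration count referenced above — a minimal sketch using only the standard library (the helper name is illustrative):

required_iters.py · PYTHON
import math

def required_iters(a: float, b: float, tol: float) -> int:
    """Iterations needed for bisection to shrink [a, b] below tol."""
    return math.ceil(math.log2((b - a) / tol))

print(required_iters(1.0, 2.0, 1e-10))  # 34 — matches the worked example below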

Bisection is the simplest application of the Intermediate Value Theorem: if f is continuous and f(a) < 0 and f(b) > 0, then somewhere in [a,b] the function crosses zero. The algorithm just applies this fact repeatedly, halving the interval each time.

The guaranteed convergence of bisection makes it invaluable as a safety net. Bisection cannot diverge, cannot oscillate, cannot fail as long as the initial bracket is valid. This is why Brent's method — the gold standard for numerical root finding, used in scipy.optimize.brentq — keeps bisection as a fallback while taking faster superlinear steps (secant and inverse quadratic interpolation): the speed of interpolation near the root, the reliability of bisection everywhere else.

Algorithm and Implementation

Invariant: f(a) and f(b) have opposite signs → root in [a,b]. Each step halves the interval.

There's a subtle trap in the midpoint calculation. The naive (a+b)/2 can overflow when a and b are both large in magnitude with the same sign. Production code uses a + (b-a)/2, which is safe.

The termination check uses abs(b-a)/2 < tol, not abs(f(mid)) < tol — that's a common source of early false convergence. The interval width is the true error bound.

bisection.py · PYTHON
def bisection(f, a: float, b: float, tol: float = 1e-10, max_iter: int = 100) -> float:
    """
    Find a root of f in [a,b] where f(a)*f(b) < 0.
    Guaranteed to converge for continuous functions.
    """
    fa = f(a)                          # cache f(a): halves evaluations per step
    if fa * f(b) > 0:
        raise ValueError('f(a) and f(b) must have opposite signs')
    for i in range(max_iter):
        mid = a + (b - a) / 2          # overflow-safe midpoint
        if (b - a) / 2 < tol:          # terminate on interval width, not f(mid)
            print(f'Converged in {i+1} iterations, root ≈ {mid}')
            return mid
        fmid = f(mid)
        if fa * fmid < 0:              # sign change in [a, mid]
            b = mid
        else:                          # sign change in [mid, b]
            a, fa = mid, fmid
    return a + (b - a) / 2

root = bisection(lambda x: x**2 - 2, 1, 2)
print(f'sqrt(2) = {root}')
▶ Output
Converged in 34 iterations, root ≈ 1.4142135623842478
sqrt(2) = 1.4142135623842478
📊 Production Insight
Overflow in the midpoint calculation produces Infinity (or traps on some embedded hardware), silently breaking the bracket logic.
Use a + (b-a)/2 to avoid (a+b) overflow.
Terminate on interval width, not function value — f(x) near zero doesn't guarantee small error.
🎯 Key Takeaway
Bisection is simple but the midpoint formula and termination condition matter.
Always validate sign change before starting.
Interval width is the only reliable error bound.

Error Analysis and Convergence

After n iterations, the interval width is (b-a)/2^n. The error in the root position is at most this width.

To achieve precision ε: n ≥ log2((b-a)/ε) iterations.

For [1,2] with ε=10^-10: n ≥ log2(1/10^-10) = 10×log2(10) ≈ 33.2 → 34 iterations.

This is linear convergence — one binary digit per iteration. Compare that to Newton-Raphson, which doubles the number of correct digits each step. But bisection never fails to gain those digits, while Newton sometimes doesn't converge at all.

bisection_error.py · PYTHON
import math

def bisection_with_trace(f, a, b, n_steps=10):
    fa = f(a)
    print('Step  a              b              mid            f(mid)')
    for i in range(n_steps):
        mid = a + (b - a) / 2
        fmid = f(mid)
        print(f'{i+1:4}  {a:.10f}  {b:.10f}  {mid:.10f}  {fmid:+.2e}')
        if fa * fmid < 0:
            b = mid                    # sign change in [a, mid]
        else:
            a, fa = mid, fmid          # sign change in [mid, b]

bisection_with_trace(lambda x: x**2 - 2, 1.0, 2.0, 8)
▶ Output
Step  a              b              mid            f(mid)
   1  1.0000000000  2.0000000000  1.5000000000  +2.50e-01
   2  1.0000000000  1.5000000000  1.2500000000  -4.38e-01
   3  1.2500000000  1.5000000000  1.3750000000  -1.09e-01
   4  1.3750000000  1.5000000000  1.4375000000  +6.64e-02
   5  1.3750000000  1.4375000000  1.4062500000  -2.25e-02
   6  1.4062500000  1.4375000000  1.4218750000  +2.17e-02
   7  1.4062500000  1.4218750000  1.4140625000  -4.27e-04
   8  1.4140625000  1.4218750000  1.4179687500  +1.06e-02
📊 Production Insight
Linear convergence means 30+ iterations for 10^-10 precision — fine for offline use, deadly for real-time control.
Floating-point rounding can cause an infinite loop once the interval width reaches machine-epsilon scale and the midpoint stops changing — see the sketch after this section.
Add a hard iteration limit to protect against pathological cases.
🎯 Key Takeaway
Error halves every iteration — predictable and controllable.
For high precision, bisection is slow but reliable.
Always cap max_iter to avoid silent hangs.
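As referenced in the Production Insight above, here's a minimal sketch of the stuck-midpoint effect, using math.nextafter (Python 3.9+) to construct two adjacent doubles:

stuck_midpoint.py · PYTHON
import math

a = 1.0
b = math.nextafter(a, 2.0)     # the very next representable double after 1.0
mid = a + (b - a) / 2
print(mid == a or mid == b)    # True — no double lies strictly between a and b,
                               # so a width-only loop with too-tight tol and no
                               # max_iter can spin forever
▶ Output
True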

Bisection vs Newton-Raphson

Property                 Bisection              Newton-Raphson
Convergence              Linear (1 bit/step)    Quadratic (doubles digits)
Iterations for 10^-10    ~34                    ~5
Requires derivative      No                     Yes
Guaranteed convergence   Yes (given bracket)    No
Initial requirement      Bracket [a,b]          Single guess x0

Best practice: Use bisection to get close (10^-3 precision), then switch to Newton-Raphson for fast final convergence. This hybrid idea is the spirit of Brent's method, though Brent substitutes derivative-free secant and inverse quadratic steps for Newton.

But there's a hidden cost: Newton-Raphson requires computing the derivative. For functions that are expensive to evaluate or have no analytic derivative, derivative-free methods — bisection, or the faster secant method — are the practical options. A sketch of the hybrid follows.
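A minimal sketch of the bisection-then-Newton hybrid described above, assuming a derivative df is available (the function name and thresholds are illustrative, not a library API):

hybrid_root.py · PYTHON
def hybrid_root(f, df, a, b, coarse_tol=1e-3, tol=1e-12, max_newton=20):
    """Phase 1: bisection to a coarse bracket; Phase 2: Newton polish."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError('f(a) and f(b) must have opposite signs')
    while (b - a) / 2 > coarse_tol:        # guaranteed linear progress
        mid = a + (b - a) / 2
        fmid = f(mid)
        if fa * fmid < 0:
            b = mid
        else:
            a, fa = mid, fmid
    x = a + (b - a) / 2                    # good starting point for Newton
    for _ in range(max_newton):            # quadratic final convergence
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = hybrid_root(lambda x: x**2 - 2, lambda x: 2 * x, 1.0, 2.0)
print(f'sqrt(2) ≈ {root}')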

🔥 scipy.optimize
scipy.optimize.brentq implements Brent's method — a hybrid of bisection with secant and inverse quadratic interpolation steps. It is guaranteed to converge and fast. Always prefer it over implementing bisection manually in production Python code.
📊 Production Insight
Newton fails near saddle points or when derivative is zero — bisection never does.
The hybrid approach (Brent) dominates production: reliability of bisection, speed of Newton.
If you can't compute derivatives (black-box functions), bisection + secant method works well.
🎯 Key Takeaway
Bisection is the safety net; Newton is the speedboat.
Brent's method is the production go-to.
No derivative? Stick with bisection or secant.

When Bisection Fails: Invalid Brackets and Discontinuities

Bisection's guarantee depends on two assumptions:

  1. f(a) and f(b) have opposite signs
  2. f is continuous on [a,b]

If the function has a vertical asymptote (like 1/x at 0), a sign change doesn't guarantee a root — you'll converge to a singularity, as the sketch below shows. Similarly, if there are multiple roots inside the interval, bisection will only find one (the one that triggers the sign change at each step).
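A minimal demonstration of converging to a singularity rather than a root — tan(x) changes sign across its asymptote at π/2 ≈ 1.5708, yet has no root in [1, 2]:

asymptote_trap.py · PYTHON
import math

a, b = 1.0, 2.0                 # tan(1) > 0, tan(2) < 0: 'valid' bracket, no root
fa = math.tan(a)
for _ in range(50):
    mid = a + (b - a) / 2
    if fa * math.tan(mid) < 0:
        b = mid
    else:
        a, fa = mid, math.tan(mid)

mid = a + (b - a) / 2
print(mid, math.tan(mid))       # mid ≈ pi/2, |f(mid)| is huge — a pole, not a root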

Another failure mode: the bracket is valid but the function is nearly flat around the root, so f(mid) becomes tiny long before mid is actually close to the root. The abs(b-a)/2 < tol check saves you here, but if you mistakenly use abs(f(mid)) < tol you may 'converge' to a point where f is small while the interval — and hence the error bound — is still large.

📊 Production Insight
Always plot the function before running bisection in production — catch discontinuities early.
A sign change across a vertical asymptote (e.g. 1/(x - r) at x = r) is not a root — verify that |f(mid)| actually shrinks as the interval narrows.
Multiple roots require divide-and-conquer: split interval and search each subinterval.
🎯 Key Takeaway
Bisection doesn't work for discontinuous functions, even with a sign change.
Verify continuity: if you can't plot, at least check f is finite at both endpoints.
One bracket, one root — for multiple roots you need multiple brackets.

Brent's Method: The Production-Grade Hybrid

Brent's method combines bisection's reliability with Newton-Raphson's speed. It uses inverse quadratic interpolation when possible, but falls back to bisection when interpolation would go out of bounds or converge too slowly.

The result: guaranteed convergence, and typically converges superlinearly (faster than bisection, slower than pure Newton). In practice it's the default choice for one-dimensional root-finding in scipy, MATLAB's fzero, and many other numerical libraries.

Implementation complexity is higher than plain bisection, but you don't write it yourself — you use a library. Understanding the internals helps when debugging convergence issues.

brent_example.py · PYTHON
from scipy.optimize import brentq

# Use Brent's method (no manual bisection needed)
root = brentq(lambda x: x**2 - 2, 1, 2)
print(f'sqrt(2) = {root:.15f}')

# Works even when the function has a singularity, provided the bracket avoids it
root2 = brentq(lambda x: 1/x - 1, 0.5, 2.0)
print(f'1/x - 1 = 0 at x = {root2}')
▶ Output
sqrt(2) = 1.414213562373095
1/x - 1 = 0 at x = 1.0
📊 Production Insight
scipy.optimize.brentq is the gold standard for 1D root-finding in Python.
In C++, Boost.Math's toms748_solve (boost/math/tools/roots.hpp) fills the same role — note that brent_find_minima is for minimisation, not root-finding.
Don't implement bisection manually unless you have a good reason (embedded systems without library support).
Even in embedded, consider using Brent if you have the ROM budget.
🎯 Key Takeaway
Brent is always better than pure bisection for production.
Use library implementations — they handle edge cases you haven't thought of.
If you must implement, hybridise: 20 iterations of bisection then switch to Newton.
🗂 Root-Finding Methods Compared
Bisection, Newton-Raphson, Secant, Brent
Property                Bisection                 Newton-Raphson    Secant                  Brent
Convergence rate        Linear (1 bit/step)       Quadratic         Superlinear (~1.618)    Superlinear
Guaranteed convergence  Yes (with valid bracket)  No                No                      Yes
Derivative required     No                        Yes (analytic)    No (numerical approx)   No
Start guess             Interval [a,b]            Single point x0   Two points              Interval [a,b]
Typical real-world use  Safety net, education     Fast near root    No-derivative case      Production default

🎯 Key Takeaways

  • Bisection: Intermediate Value Theorem guarantees a root when f(a)×f(b)<0.
  • Linear convergence: one correct binary digit per iteration — ~34 iterations for 10^-10 precision.
  • No derivative required — works for any continuous function.
  • Error after n steps: (b-a)/2^n — predictable and controllable.
  • Brent's method (scipy.optimize.brentq) combines bisection reliability with Newton speed.
  • Always validate the bracket and check for discontinuities before running.

⚠ Common Mistakes to Avoid

    Using f(mid) == 0 as termination condition
    Symptom

    Bisection takes 40+ iterations to find a root that never hits floating-point zero exactly, or misses the root entirely when the function is never exactly zero.

    Fix

    Terminate on interval width: (b - a) / 2 < tol. Optionally add a check on abs(f(mid)) < epsilon, but interval width is the reliable bound.

    Forgetting to validate the initial bracket
    Symptom

    Bisection returns a value outside the expected range or fails to converge. The algorithm assumes f(a)*f(b) < 0, but if that's not true, the behaviour is undefined.

    Fix

    Always check if f(a)*f(b) > 0 and raise a clear error. Also verify f is continuous if possible (e.g., evaluate f at points between a and b to catch asymptotes).

    Applying bisection to find multiple roots simultaneously
    Symptom

    Only one root is found even though there are multiple roots in the interval. The algorithm converges to whichever root's sign change happens first.

    Fix

    Divide the interval into subintervals, each containing at most one root. Use scanning (evaluating f at many points) to isolate root brackets before running bisection — see the sketch below.
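A minimal scanning sketch, as referenced above (the helper name and grid size are illustrative; a grid point landing exactly on a root makes the product zero, so real code should also check for exact zeros):

find_brackets.py · PYTHON
def find_brackets(f, a, b, n=999):
    """Scan [a, b] at n+1 points; return subintervals with a sign change."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return [(xs[i], xs[i + 1]) for i in range(n)
            if f(xs[i]) * f(xs[i + 1]) < 0]

# x**3 - x has roots at -1, 0 and 1: three brackets, one bisection run each
print(len(find_brackets(lambda x: x**3 - x, -2.0, 2.0)))  # 3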

Interview Questions on This Topic

  • Q (Junior): How many bisection steps are needed to find a root to within 10^-6 on the interval [0, 1]?
    n = ceil(log2((1-0)/1e-6)) = ceil(log2(1e6)) ≈ ceil(19.93) = 20 iterations. The error after n steps is exactly (b-a)/2^n, so for 10^-6 we need n such that 1/2^n ≤ 1e-6 → 2^n ≥ 1e6 → n ≥ log2(1e6) ≈ 19.93 → 20.
  • Q (Mid-level): What is the Intermediate Value Theorem and why does it guarantee bisection converges?
    IVT: If f is continuous on [a,b] and y is between f(a) and f(b), there exists c in [a,b] such that f(c)=y. For bisection, we take y=0. Since f(a) and f(b) have opposite signs, 0 lies between them. The algorithm maintains the invariant that the root is in the current interval, and the interval width halves each step, so it converges to the root. The continuity ensures there actually is a c where f(c)=0.
  • Q (Senior): When would you choose bisection over Newton-Raphson?
    Choose bisection when: (1) the function is expensive to compute and you can't derive the derivative analytically, (2) you need guaranteed convergence (e.g., safety-critical systems), (3) the initial guess for Newton might be far from the root causing divergence, (4) the function has a flat gradient near the root making Newton unstable, or (5) you only need a few digits of precision (bisection's linear convergence is acceptable). In most production code, you'd use Brent's method which combines both.
  • Q (Mid-level): What is linear convergence vs quadratic convergence?
    Linear convergence: the error shrinks by a constant factor each iteration. For bisection, e(n+1) ≈ 0.5·e(n) — each iteration adds one correct binary digit. Quadratic convergence: the error is squared each iteration; Newton-Raphson has e(n+1) ≈ C·e(n)², so the number of correct digits doubles each step. Example: if the error is 0.1, bisection's next error is 0.05 (one more bit), while Newton's next error is ~0.01 (twice the digits). That's why Newton takes ~5 iterations for 10^-10 compared to bisection's ~34 — see the sketch below.
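To see quadratic convergence concretely, here's a minimal trace of Newton's error on x² - 2 (the starting guess 1.5 is arbitrary):

newton_error.py · PYTHON
import math

x = 1.5
for i in range(5):
    x = x - (x * x - 2) / (2 * x)        # Newton step for f(x) = x^2 - 2
    print(i + 1, abs(x - math.sqrt(2)))  # error roughly squares each step:
                                         # ~2e-3, ~2e-6, ~2e-12, then machine zero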

Frequently Asked Questions

Can bisection find complex roots?

No — bisection requires a sign change, which doesn't apply to complex numbers. For complex roots, Newton-Raphson generalises naturally to the complex plane, and the Durand-Kerner method finds all roots of a polynomial simultaneously.

What happens if the root lies exactly at the midpoint in a floating-point sense?

In theory it's fine — the algorithm will terminate quickly. In practice, floating-point precision may cause f(mid) not to be exactly zero even if mathematically it should. That's why interval-based termination is safer than function-value termination.

Can I use bisection for optimisation (finding extrema)?

Indirectly — you can find zeros of the derivative f'(x) if you can compute it. But then you're root-finding, not directly optimising. For optimisation, Golden Section Search is the analogue of bisection that works without derivatives.
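Along those lines, a minimal golden-section search sketch (the function name is illustrative; assumes f is unimodal on [a, b], and re-evaluates f rather than caching for brevity):

golden_section.py · PYTHON
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Shrink [a, b] around a minimum of a unimodal f, derivative-free."""
    inv_phi = (math.sqrt(5) - 1) / 2           # ≈ 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                        # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                  # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return a + (b - a) / 2

print(golden_section_min(lambda x: (x - 1.5)**2, 0.0, 3.0))  # ≈ 1.5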

Why does scipy.optimize.brentq exist if bisection is simpler?

Brent's method is both guaranteed to converge (like bisection) and typically faster (superlinear convergence). It's the best of both worlds. Implementing it correctly is tricky — scipy does it for you. You should almost never write your own bisection when brentq is available.
