
Gaussian Elimination — Solving Linear Systems

📍 Part of: Linear Algebra → Topic 2 of 5
Master Gaussian elimination — forward elimination, back substitution, partial pivoting, and LU decomposition.
⚙️ Intermediate — basic DSA knowledge assumed
In this tutorial, you'll learn:
  • Gaussian elimination: reduce to upper triangular form via row operations, then back-substitute. O(n³).
  • Partial pivoting: always swap to largest absolute pivot — prevents division by small numbers, bounds error amplification.
  • Without partial pivoting: numerically unstable; a zero in the pivot position causes division by zero even on well-conditioned matrices, and tiny pivots amplify rounding error.
✦ Plain-English analogy ✦ Real code with output ✦ Interview questions
⚡ Quick Answer
Gaussian elimination solves systems of linear equations by systematically eliminating variables. For n equations with n unknowns, you reduce the system to upper triangular form (each successive equation involves one fewer unknown), then back-substitute from the last equation upward. It is the same process you learned in high school algebra — just systematised for computers.

Gaussian elimination is the foundational algorithm for solving linear systems Ax = b. It runs in O(n³) and, with partial pivoting, is numerically stable for most practical problems. It is what numpy.linalg.solve, scipy.linalg.solve, and every numerical linear algebra library use under the hood (via LU decomposition).

Understanding Gaussian elimination means understanding why partial pivoting matters (without it, small pivots cause numerical catastrophe), how LU decomposition factorises A for efficient multiple right-hand-side solving, and why direct methods (Gauss) are preferred over iterative methods (Jacobi, Gauss-Seidel) for dense systems.
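To make the multiple right-hand-side point concrete, here is a minimal pure-Python sketch of LU factorisation with partial pivoting. The helper names (lu_decompose, lu_solve_one) are illustrative, not a library API — in production you would use scipy.linalg.lu_factor and lu_solve. The O(n³) factorisation happens once; each additional right-hand side then costs only O(n²):

```python
def lu_decompose(A):
    """Return (LU, perm): LU holds U on and above the diagonal and L's
    multipliers below it; perm records the row order from partial pivoting."""
    n = len(A)
    LU = [list(row) for row in A]
    perm = list(range(n))
    for col in range(n):
        # Partial pivoting: bring the largest remaining pivot to the top
        max_row = max(range(col, n), key=lambda r: abs(LU[r][col]))
        LU[col], LU[max_row] = LU[max_row], LU[col]
        perm[col], perm[max_row] = perm[max_row], perm[col]
        for row in range(col + 1, n):
            LU[row][col] /= LU[col][col]          # store the multiplier (L entry)
            for j in range(col + 1, n):
                LU[row][j] -= LU[row][col] * LU[col][j]
    return LU, perm

def lu_solve_one(LU, perm, b):
    """O(n^2) solve for one right-hand side using the stored factors."""
    n = len(LU)
    # Forward substitution: L y = P b (L has an implicit unit diagonal)
    y = [float(b[p]) for p in perm]
    for i in range(n):
        for j in range(i):
            y[i] -= LU[i][j] * y[j]
    # Back substitution: U x = y
    x = y[:]
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            x[i] -= LU[i][j] * x[j]
        x[i] /= LU[i][i]
    return x

A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
LU, perm = lu_decompose(A)                    # O(n^3), paid once
for b in ([8, -11, -3], [1, 0, 0]):           # each extra solve is O(n^2)
    print([round(v, 4) for v in lu_solve_one(LU, perm, b)])
```

This is exactly why solver libraries expose the factorisation separately: amortising the O(n³) cost over many right-hand sides is the common case in simulation and optimisation loops.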

Forward Elimination with Partial Pivoting

gaussian_elim.py · PYTHON
def gaussian_elimination(A, b):
    """Solve Ax = b using Gaussian elimination with partial pivoting."""
    n = len(A)
    # Augmented matrix [A|b]
    M = [list(A[i]) + [b[i]] for i in range(n)]
    for col in range(n):
        # Partial pivoting: swap row with maximum pivot element
        max_row = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[max_row] = M[max_row], M[col]
        pivot = M[col][col]
        if abs(pivot) < 1e-12:
            raise ValueError('Matrix is singular or nearly singular')
        # Eliminate below pivot
        for row in range(col+1, n):
            factor = M[row][col] / pivot
            for j in range(col, n+1):
                M[row][j] -= factor * M[col][j]
    # Back substitution
    x = [0.0] * n
    for i in range(n-1, -1, -1):
        x[i] = M[i][n]
        for j in range(i+1, n):
            x[i] -= M[i][j] * x[j]
        x[i] /= M[i][i]
    return x

A = [[2,1,-1],[-3,-1,2],[-2,1,2]]
b = [8,-11,-3]
solution = gaussian_elimination(A, b)
print(f'x = {[round(v,4) for v in solution]}')
# Verify with numpy
import numpy as np
print(f'numpy: {np.linalg.solve(A, b)}')
▶ Output
x = [2.0, 3.0, -1.0]
numpy: [2. 3. -1.]

Why Partial Pivoting Matters

Without pivoting, Gaussian elimination fails when the pivot element is zero (division by zero) and becomes numerically unstable when the pivot is very small (amplifying rounding errors).

Example: A = [[0.001, 1],[1, 1]], b = [1, 2]. Without pivoting, the first pivot is 0.001 — the elimination multiplier is 1/0.001 = 1000. Floating point errors in the first row get amplified by 1000x. Partial pivoting swaps to use the row with the largest absolute value as pivot, keeping multipliers ≤ 1 and bounding error amplification.
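The failure mode is easy to reproduce. The sketch below uses an even more extreme pivot of 1e-17 (chosen so that float64 rounding visibly destroys the answer) and runs a 2×2 elimination with pivoting switched on and off; the exact solution is approximately (1, 1):

```python
def solve_2x2(A, b, pivot=True):
    """Eliminate then back-substitute on a 2x2 system; pivoting optional."""
    M = [list(A[0]) + [b[0]], list(A[1]) + [b[1]]]   # augmented matrix
    if pivot and abs(M[1][0]) > abs(M[0][0]):
        M[0], M[1] = M[1], M[0]        # partial pivoting: largest pivot on top
    factor = M[1][0] / M[0][0]
    for j in range(3):
        M[1][j] -= factor * M[0][j]
    y = M[1][2] / M[1][1]
    x = (M[0][2] - M[0][1] * y) / M[0][0]
    return [x, y]

A = [[1e-17, 1.0], [1.0, 1.0]]
b = [1.0, 2.0]
print('naive:  ', solve_2x2(A, b, pivot=False))   # [0.0, 1.0] — x wiped out
print('pivoted:', solve_2x2(A, b, pivot=True))    # [1.0, 1.0]
```

Without pivoting, the multiplier 1e17 turns `1 - 1e17` and `2 - 1e17` into the same rounded value, so the information about x is lost entirely; with pivoting the multiplier is 1e-17 and the answer survives.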

🔥
Production Usage: scipy.linalg.solve calls LAPACK's DGESV, which implements Gaussian elimination with partial pivoting (via LU factorisation); numpy.linalg.solve uses the same routine. Never ship hand-rolled Gaussian elimination without pivoting in production numerical code.

🎯 Key Takeaways

  • Gaussian elimination: reduce to upper triangular form via row operations, then back-substitute. O(n³).
  • Partial pivoting: always swap to largest absolute pivot — prevents division by small numbers, bounds error amplification.
  • Without partial pivoting: numerically unstable; a zero in the pivot position causes division by zero even on well-conditioned matrices, and tiny pivots amplify rounding error.
  • LU decomposition = Gaussian elimination stored as L (lower) and U (upper) factors — efficient for multiple right-hand sides.
  • Production: use numpy.linalg.solve or scipy.linalg.solve — never implement raw Gauss in production.

Interview Questions on This Topic

  • Q: Why is partial pivoting necessary in Gaussian elimination?
  • Q: What is the time complexity, and can it be improved?
  • Q: How does LU decomposition relate to Gaussian elimination?

Frequently Asked Questions

When should I use iterative methods instead of Gaussian elimination?

For sparse systems (mostly zeros) with n > 10,000: iterative methods (Conjugate Gradient, GMRES) exploit sparsity for O(nnz × iterations) vs O(n³) for dense Gauss. For dense systems with n < 10,000: Gaussian elimination (LAPACK) is almost always fastest.
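As a concrete illustration, here is a minimal matrix-free Conjugate Gradient sketch: the solver touches A only through a matrix-vector product, which is exactly how iterative methods exploit sparsity. The test matrix here is the 1D Laplacian tridiag(-1, 2, -1); a production code would use scipy.sparse.linalg.cg instead.

```python
def cg(matvec, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A, given only x -> A x."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                          # residual r = b - A x (x starts at 0)
    p = list(r)                          # search direction
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

def laplacian_matvec(v):
    """Tridiagonal matvec: (A v)_i = -v[i-1] + 2 v[i] - v[i+1], O(n) per call."""
    n = len(v)
    return [(-v[i-1] if i > 0 else 0.0) + 2 * v[i]
            - (v[i+1] if i < n - 1 else 0.0) for i in range(n)]

x = cg(laplacian_matvec, [1.0] * 5)
print([round(v, 4) for v in x])          # 1D Poisson solution: x_i = (i+1)(5-i)/2
```

Each iteration costs one matvec (O(nnz) for a sparse matrix) plus a few vector operations, which is where the O(nnz × iterations) figure above comes from.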

🔥
Naren · Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.

← Previous: Matrix Operations — Addition, Multiplication, Transpose | Next: LU Decomposition →
Forged with 🔥 at TheCodeForge.io — Where Developers Are Forged