
Gaussian Elimination — When Ill-Conditioned Matrices Lie

A condition number of 10^12 can produce residuals of 10^6 even when the solver reports success.
⚙️ Intermediate — basic DSA knowledge assumed
In this tutorial, you'll learn
  • Gaussian elimination: reduce to upper triangular form via row operations, then back-substitute. O(n³).
  • Partial pivoting: always swap to largest absolute pivot — prevents division by small numbers, bounds error amplification.
  • Without partial pivoting: numerically unstable, fails on singular or near-singular matrices.
Quick Answer
  • Gaussian elimination solves Ax = b in two phases: forward elimination to upper triangular, then back substitution
  • Partial pivoting swaps rows to keep multipliers ≤ 1 and bound error amplification
  • Time complexity: O(n³) for elimination, O(n²) for back substitution
  • LU decomposition stores elimination factors, enabling O(n²) solve for each new b
  • Production rule: use LAPACK via numpy or scipy — never implement raw Gauss for dense systems
  • Biggest mistake: assuming a non-zero pivot guarantees accuracy — check condition number
🚨 START HERE

Quick Numerical Solver Debug

When your linear solve gives questionable results, run these checks in order.
🟡 Residual large
Immediate Action: Compute ||Ax - b|| / ||b||
Commands:
np.linalg.norm(A @ x - b) / np.linalg.norm(b)
np.linalg.cond(A)
Fix Now: If cond > 1e12, use np.linalg.lstsq or apply diagonal scaling.
🟡 Singular matrix error
Immediate Action: Check rank and condition number
Commands:
np.linalg.matrix_rank(A)
np.linalg.cond(A)
Fix Now: If rank < n, the problem is truly singular. Remove dependent rows or use a pseudo-inverse.
🟡 NaNs in solution
Immediate Action: Check for zero/near-zero pivots in a manual implementation
Commands:
np.any(np.isnan(A)) or np.any(np.isinf(A))
min pivot after elimination: np.min(np.abs(np.diag(U)))
Fix Now: Add partial pivoting; if already present, increase the tolerance for zero-pivot detection.
Production Incident

Silently Wrong Results from a Nearly Singular Matrix

A financial risk model produced wildly inaccurate stress-test outputs. The matrix had a condition number of 10^12, and Gaussian elimination without iterative refinement gave garbage.
Symptom: The computed solution vector had large residuals when plugged back into Ax, yet the algorithm reported no error. The reported risk numbers were off by orders of magnitude.
Assumption: The developer assumed that because Gaussian elimination completed without a division by zero, the answer was correct.
Root cause: The matrix was nearly singular (condition number ~10^12). Floating-point rounding errors were amplified by the inverse, swamping the true solution. Without partial pivoting, the smallest pivot was ~10^-6, causing multiplier factors of 10^6.
Fix: Enable partial pivoting (always on in LAPACK). Add a check on the condition number before solving: if cond(A) > 10^12, switch to a regularised solver or use arbitrary precision. In numpy: use np.linalg.solve (pivots by default) and verify with np.linalg.cond.
Key Lesson
  • A non-error return from a solver does not guarantee a correct solution.
  • Always check the condition number of A when the problem is ill-posed.
  • Partial pivoting is necessary but not sufficient for very ill-conditioned systems.
  • In production, compute the residual after every solve — it's your safety net (a sketch of this pattern follows below).
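That safety net is easy to codify. Below is a minimal sketch of the pattern — the helper name checked_solve and both thresholds are illustrative choices, not from any library:

checked_solve.py · PYTHON
import numpy as np

def checked_solve(A, b, cond_limit=1e12, resid_limit=1e-8):
    """Solve Ax = b, then verify via condition number and relative residual."""
    cond = np.linalg.cond(A)
    if cond > cond_limit:
        raise ValueError(f'ill-conditioned system: cond(A) = {cond:.2e}')
    x = np.linalg.solve(A, b)
    resid = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
    if resid > resid_limit:
        raise ValueError(f'large relative residual: {resid:.2e}')
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(checked_solve(A, b))  # [0.2 0.6]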
Production Debug Guide

Symptom → Action for common numerical issues

Solution differs from expected (large residuals) → Compute the residual r = Ax - b. If ||r||/||b|| >> machine epsilon, check the condition number with np.linalg.cond. If it exceeds 10^10, use regularisation or increase precision.
np.linalg.solve raises LinAlgError: Singular matrix → Check whether the matrix is exactly singular (det = 0) or nearly singular (huge cond). Use np.linalg.matrix_rank to see if it is rank deficient.
NaNs or Inf in solution → Usually division by a near-zero pivot. Enable partial pivoting if using a custom implementation. Verify matrix entries are finite and not NaN.
Performance is poor for a sparse system → Switch to an iterative solver (e.g., scipy.sparse.linalg.cg or gmres). Gaussian elimination is O(n³) and ignores sparsity.

Gaussian elimination is the foundational algorithm for solving linear systems Ax = b. It runs in O(n³) and, with partial pivoting, is numerically stable for most practical problems. It is the engine behind numpy.linalg.solve, scipy.linalg.solve, and essentially every dense numerical linear algebra library (via LU decomposition).

Understanding Gaussian elimination means understanding why partial pivoting matters — without it, small pivots cause numerical catastrophe. It also means knowing how LU decomposition factorises A for efficient multiple right-hand-side solving, and why direct methods (Gauss) are preferred over iterative methods (Jacobi, Gauss-Seidel) for dense systems. Here's the thing: most engineers who reach for np.linalg.solve don't realise they're paying O(n³) every time. Factor once, solve many.

Forward Elimination with Partial Pivoting

Forward elimination reduces the augmented matrix [A|b] to upper triangular form. At each column, we first find the row with the largest absolute value in that column (partial pivoting) and swap it to the diagonal. This ensures all multipliers are ≤ 1, bounding error amplification. Then we subtract multiples of the pivot row from rows below to zero out the column. The process produces an upper triangular matrix U and a transformed right-hand side vector.

When the pivot is < 1e-12 after pivoting, the matrix is numerically singular and we raise an error. In production codes, the threshold is adjusted based on the matrix norm. Don't hardcode 1e-12 — it's a toy value. Real libraries scale the tolerance by the maximum element magnitude. If you're implementing this yourself, and you're in production, you're doing it wrong. Use LAPACK.

gaussian_elim.py · PYTHON
def gaussian_elimination(A, b):
    """Solve Ax = b using Gaussian elimination with partial pivoting."""
    n = len(A)
    # Augmented matrix [A|b] (copies the input rows)
    M = [list(A[i]) + [b[i]] for i in range(n)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot magnitude
        max_row = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[max_row] = M[max_row], M[col]
        pivot = M[col][col]
        if abs(pivot) < 1e-12:  # toy threshold — real codes scale this by the matrix norm
            raise ValueError('Matrix is singular or nearly singular')
        # Eliminate below the pivot
        for row in range(col+1, n):
            factor = M[row][col] / pivot
            for j in range(col, n+1):
                M[row][j] -= factor * M[col][j]
    # Back substitution
    x = [0.0] * n
    for i in range(n-1, -1, -1):
        x[i] = M[i][n]
        for j in range(i+1, n):
            x[i] -= M[i][j] * x[j]
        x[i] /= M[i][i]
    return x

A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
b = [8, -11, -3]
solution = gaussian_elimination(A, b)
print(f'x = {[round(v, 4) for v in solution]}')
# Verify with numpy
import numpy as np
print(f'numpy: {np.linalg.solve(A, b)}')
▶ Output
x = [2.0, 3.0, -1.0]
numpy: [ 2.  3. -1.]
📊 Production Insight
Partial pivoting is not enough when the matrix is ill-conditioned (large condition number).
In production, always check cond(A) after solving, especially for financial or engineering simulations.
Rule: if cond(A) > 10^12, consider using iterative refinement or higher precision.
Also: never trust a solve that doesn't compute a residual.
🎯 Key Takeaway
Forward elimination with partial pivoting converts Ax=b to Ux=c.
Swap rows to keep multipliers bounded.
Never skip pivoting in floating-point code — it's not optional.

Why Partial Pivoting Matters

Without pivoting, Gaussian elimination fails when the pivot element is zero (division by zero) and becomes numerically unstable when the pivot is very small (amplifying rounding errors).

Example: A = [[0.001, 1],[1, 1]], b = [1, 2]. Without pivoting, the first pivot is 0.001 — the elimination multiplier is 1/0.001 = 1000. Floating point errors in the first row get amplified by 1000x. Partial pivoting swaps to use the row with the largest absolute value as pivot, keeping multipliers ≤ 1 and bounding error amplification. That's the difference between a correct answer and garbage. And don't think 'my matrix is well-conditioned' — it only takes one bad pivot to corrupt everything.
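To make that concrete, here's a small demo. In double precision the 0.001 pivot above mostly survives, so this sketch exaggerates the pivot to 1e-20, where elimination without pivoting loses x₁ entirely. The 2×2 helper is deliberately naive, written only for illustration:

pivot_demo.py · PYTHON
import numpy as np

def solve_2x2_no_pivot(A, b):
    """Naive 2x2 elimination without pivoting — for demonstration only."""
    m = A[1][0] / A[0][0]              # multiplier explodes when A[0][0] is tiny
    u22 = A[1][1] - m * A[0][1]        # 1 - 1e20 rounds to -1e20: info destroyed
    c2 = b[1] - m * b[0]
    x2 = c2 / u22
    x1 = (b[0] - A[0][1] * x2) / A[0][0]
    return [x1, x2]

A = [[1e-20, 1.0], [1.0, 1.0]]
b = [1.0, 2.0]
print('no pivoting:   ', solve_2x2_no_pivot(A, b))   # [0.0, 1.0] — x1 is lost
print('with pivoting: ', np.linalg.solve(A, b))      # ~[1.0, 1.0] — correct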

🔥Production Usage
scipy.linalg.solve uses LAPACK's DGESV, which performs Gaussian elimination with partial pivoting (via the LU factorisation routine DGETRF). numpy.linalg.solve uses the same. Never implement raw Gaussian elimination without pivoting for production numerical code.
📊 Production Insight
Even with partial pivoting, high condition numbers can produce inaccurate results.
Example: Hilbert matrices are notoriously ill-conditioned. For n=12, cond > 10^16.
Rule: when cond approaches 1/epsilon, trust the solution only after residual check.
🎯 Key Takeaway
Partial pivoting prevents catastrophic error from small pivots.
It does not fix ill-conditioning — that requires regularisation or precision increase.
Always verify solutions for ill-conditioned problems.

Back Substitution — Solving the Upper Triangular System

After forward elimination, we have an upper triangular system Ux = c. We solve from the last equation upward: x_n = c_n / U_{nn}, then for i from n-1 down to 1: x_i = (c_i - sum_{j=i+1}^{n} U_{ij} x_j) / U_{ii}. This is O(n²) — much cheaper than elimination. The numerical stability of back substitution is good because it solves a triangular system directly; however, a very small diagonal element in U can still cause large errors. Watch for tiny diagonals — they hint at singularity or extreme ill-conditioning.

back_substitution.py · PYTHON
def back_substitute(U, c):
    n = len(U)
    x = [0.0] * n
    for i in reversed(range(n)):
        s = c[i]
        for j in range(i+1, n):
            s -= U[i][j] * x[j]
        if abs(U[i][i]) < 1e-12:
            raise ValueError('Zero diagonal in U — singular matrix')
        x[i] = s / U[i][i]
    return x

# Example: the earlier system, eliminated without pivoting
U = [[2, 1, -1], [0, 0.5, 0.5], [0, 0, -1]]
c = [8, 1, 1]
x = back_substitute(U, c)
print(x)  # [2.0, 3.0, -1.0]
▶ Output
[2.0, 3.0, -1.0]
📊 Production Insight
Back substitution's accuracy depends on the diagonal elements of U.
If a diagonal is very small (e.g., < 1e-10), the solution component can be huge and wrong.
Rule: after solving, compute the residual to catch such cases.
🎯 Key Takeaway
Back substitution is O(n²) and straightforward.
Watch for tiny diagonal elements — they indicate singularity or near-singularity.
Residual check is your safety net.
When to Use Gaussian Elimination vs Iterative Methods
If: Matrix is dense (n < 10000) → Use: Gaussian elimination with partial pivoting (LAPACK)
If: Matrix is sparse (most entries zero), large n → Use: an iterative solver (CG, GMRES) — O(nnz × iterations)
If: Multiple right-hand sides with the same A → Use: LU decomposition (factor once, solve each b in O(n²))
If: Matrix is symmetric positive definite → Use: Cholesky decomposition (factoring is ~2x faster than LU)

LU Decomposition — Reusing the Factorization

Gaussian elimination actually computes an LU decomposition of A (with partial pivoting: PA = LU). The lower triangular matrix L contains the multipliers, and U is the upper triangular result. Once we have P, L, U, solving Ax = b becomes solving Ly = Pb (forward substitution) and Ux = y (back substitution). The forward elimination step (O(n³)) is done once; each new right-hand side costs only O(n²). This is how library solvers work internally.

When you call numpy.linalg.solve, it computes the LU decomposition and then solves. You can get the factors explicitly via scipy.linalg.lu. The real win: if you need to solve for 1000 different b vectors, you factor once and solve 1000 times. That's O(n³ + 1000n²) vs O(1000n³). The savings are massive. Don't be the engineer who calls solve in a loop.

lu_decomp.py · PYTHON
import scipy.linalg
import numpy as np

A = np.array([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]])
b1 = np.array([8, -11, -3])
b2 = np.array([1, 1, 6])

# LU decomposition (scipy's convention: A = P @ L @ U)
P, L, U = scipy.linalg.lu(A)
print('L:\n', L)
print('U:\n', U)

# Solve for b1 using the factors
# Since A = P @ L @ U, first solve L y = P.T @ b1
Pb1 = P.T @ b1
y = scipy.linalg.solve_triangular(L, Pb1, lower=True)
x1 = scipy.linalg.solve_triangular(U, y, lower=False)
print('Solution for b1:', x1)

# Solve for b2 without re-factoring
Pb2 = P.T @ b2
y = scipy.linalg.solve_triangular(L, Pb2, lower=True)
x2 = scipy.linalg.solve_triangular(U, y, lower=False)
print('Solution for b2:', x2)
▶ Output
L:
 [[ 1.          0.          0.        ]
 [ 0.66666667  1.          0.        ]
 [-0.66666667  0.2         1.        ]]
U:
 [[-3.         -1.          2.        ]
 [ 0.          1.66666667  0.66666667]
 [ 0.          0.          0.2       ]]
Solution for b1: [ 2.  3. -1.]
Solution for b2: [1. 2. 3.]
Mental Model
Think of LU as 'recording' the elimination steps
Gaussian elimination without storing steps is like solving a puzzle once and forgetting how you did it.
  • L stores the multipliers from each elimination step
  • U is the final upper triangular matrix
  • P records row swaps (pivoting history)
  • With P,L,U you can solve for any b without re-eliminating
📊 Production Insight
LU decomposition is the standard method in LAPACK (DGESV).
Many engineers call solve() repeatedly for different b, unknowingly paying O(n³) each time.
Rule: factor once with scipy.linalg.lu_factor, then solve with lu_solve for each b.
🎯 Key Takeaway
Gaussian elimination = LU decomposition with pivoting.
Factor once, solve many — crucial for performance in multi-RHS problems.
scipy.linalg.lu_factor and lu_solve provide this directly.
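A minimal sketch of that factor-once pattern — the 500×500 random system and the 1000 right-hand sides are arbitrary stand-ins:

lu_reuse.py · PYTHON
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))

lu, piv = lu_factor(A)                 # O(n^3) factorisation, paid once
for _ in range(1000):
    b = rng.standard_normal(500)
    x = lu_solve((lu, piv), b)         # O(n^2) per right-hand side
print('last residual:', np.linalg.norm(A @ x - b))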

Computational Complexity and When to Avoid Gaussian Elimination

Dense Gaussian elimination costs O(n³) flops. For n=1000, that's about 1 billion operations — fine on modern hardware. For n=10,000, roughly a trillion operations can take minutes, depending on the BLAS. Libraries like LAPACK are highly optimised (blocked algorithms, BLAS level 3), but the cubic growth remains.

For large sparse systems, iterative solvers (Conjugate Gradient, GMRES) exploit sparsity and cost O(nnz × iterations), where nnz is the number of nonzeros and the iteration count is often ≪ n. The break-even point between dense and sparse approaches is usually around n = 1000-5000, depending on sparsity. If your matrix is 99% zeros, Gaussian elimination ignores that and chews up memory storing zeros. Don't do that. Check sparsity first.
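As a sketch of the sparse route, the snippet below builds a 1-D Poisson (tridiagonal) matrix — a convenient sparse, symmetric positive definite test case — and solves it with Conjugate Gradient. The size n is arbitrary:

sparse_cg.py · PYTHON
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 10_000
# Tridiagonal 1-D Poisson matrix: SPD, only ~3n nonzeros out of n^2 entries
A = sp.diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

x, info = cg(A, b)                     # info == 0 means CG converged
print('converged:', info == 0)
print('relative residual:', np.linalg.norm(A @ x - b) / np.linalg.norm(b))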

📊 Production Insight
In production, you often don't know the matrix is dense until runtime.
Rule: for unknown systems, first check sparsity pattern (nnz / n²). If < 1%, strongly consider iterative methods.
SciPy keeps the two paths separate: scipy.linalg.solve for dense systems, scipy.sparse.linalg.spsolve (or an iterative solver) for sparse ones.
🎯 Key Takeaway
Gaussian elimination's O(n³) cost kills performance for large systems.
Sparse systems demand iterative methods.
Know your matrix structure before choosing a solver.
Choosing a Solver Based on Problem Characteristics
If: Dense, n < 5000, one-off solve → Use: scipy.linalg.solve (Gaussian elimination)
If: Dense, multiple RHS with the same A → Use: lu_factor + lu_solve (factor once)
If: Sparse, large n, symmetric positive definite → Use: Conjugate Gradient (scipy.sparse.linalg.cg)
If: Sparse, general non-symmetric → Use: GMRES or Bi-CGSTAB
If: Ill-conditioned, cond > 1e12 → Use: regularised solver (Ridge/Tikhonov) or arbitrary precision

When Gaussian Elimination Fails: Ill-Conditioned Systems and Iterative Refinement

Even with partial pivoting, Gaussian elimination can produce inaccurate results for ill-conditioned systems. The condition number κ(A) = ||A|| ||A⁻¹|| measures how much errors in b or A amplify in the solution. When κ(A) ~ 10^t, you lose about t digits of precision. For double precision (16 digits), if κ=10^12, only 4 digits remain reliable.
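That rule of thumb is easy to operationalise. A sketch, assuming double precision's ~16-digit budget (the example matrix is deliberately near-singular):

digits_check.py · PYTHON
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-10]])   # nearly singular on purpose
kappa = np.linalg.cond(A)
digits_lost = np.log10(kappa)
print(f'cond(A) = {kappa:.2e}')
print(f'~{digits_lost:.0f} digits lost, ~{max(0.0, 16 - digits_lost):.0f} reliable digits remain')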

Iterative refinement addresses this: after computing a first solution x₀ by Gaussian elimination, compute the residual r = b - Ax₀, solve Ad = r (using the same LU factors), and correct x₁ = x₀ + d. Repeat until the residual is small. This recovers full precision if the system is not too ill-conditioned. For extremely ill-conditioned systems (κ > 10^14), even iterative refinement fails, and you need regularisation (e.g., Tikhonov) or arbitrary-precision arithmetic.

In production, always compute the condition number before trusting a solve. LAPACK's DGESVX (expert driver) provides error bounds and iterative refinement automatically.

iterative_refinement.py · PYTHON
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def iterative_refinement(A, b, max_iter=10, tol=1e-12):
    """Solve Ax = b with iterative refinement."""
    lu, piv = lu_factor(A)
    x = lu_solve((lu, piv), b)
    for i in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) / np.linalg.norm(b) < tol:
            break
        dx = lu_solve((lu, piv), r)
        x += dx
    return x

# Example with an ill-conditioned Hilbert matrix
n = 10
A = np.array([[1/(i+j+1) for j in range(n)] for i in range(n)])
cond = np.linalg.cond(A)
print(f'Condition number: {cond:.2e}')
b = np.ones(n)
x_direct = np.linalg.solve(A, b)
x_refined = iterative_refinement(A, b)
print('Direct residual:', np.linalg.norm(b - A @ x_direct) / np.linalg.norm(b))
print('Refined residual:', np.linalg.norm(b - A @ x_refined) / np.linalg.norm(b))
▶ Output
Condition number: 1.60e+13
Direct residual: 2.3e-08
Refined residual: 1.9e-15
⚠ Production Warning
For very ill-conditioned problems (cond > 10^12), consider using scipy.linalg.lapack.dgesvx which computes error bounds and iterative refinement. For cond > 10^14, switch to arbitrary-precision (sympy, mpmath) or regularised methods.
📊 Production Insight
Condition number is the single most important diagnostic for solution accuracy.
In production, compute cond(A) once and set a threshold to switch solver.
Iterative refinement recovers up to ~4 extra digits for moderately ill-conditioned systems.
Rule: never ship a production solver without condition number checking.
🎯 Key Takeaway
Ill-conditioning kills accuracy faster than lack of pivoting.
Check cond(A) before trusting any solve.
Iterative refinement fixes most moderate ill-conditioning.
Use LAPACK's expert driver (DGESVX) for production.
🗂 Comparison: Gaussian Elimination vs Iterative Methods
For solving linear systems Ax = b
Property | Gaussian Elimination (Direct) | Iterative Methods (CG, GMRES)
Complexity | O(n³) dense, O(n · bandwidth²) banded | O(nnz × iterations) — often much lower
Accuracy | High for well-conditioned systems | Depends on iteration count and preconditioner
Multiple RHS | Cheap after LU factorization, O(n²) each | Must re-solve for each b (no reuse)
Memory | O(n²) for the full matrix | O(nnz) with sparse storage
Best for | Dense systems, n < 10000 | Sparse systems, n > 10000 or very large
Numerical stability | Good with partial pivoting | Requires careful preconditioning
Implementation | LAPACK (scipy.linalg) | scipy.sparse.linalg

🎯 Key Takeaways

  • Gaussian elimination: reduce to upper triangular form via row operations, then back-substitute. O(n³).
  • Partial pivoting: always swap to largest absolute pivot — prevents division by small numbers, bounds error amplification.
  • Without partial pivoting: numerically unstable, fails on singular or near-singular matrices.
  • LU decomposition = Gaussian elimination stored as L (lower) and U (upper) factors — efficient for multiple right-hand sides.
  • Iterative refinement recovers accuracy for ill-conditioned systems using the same LU factors.
  • Production: use numpy.linalg.solve or scipy.linalg.solve — never implement raw Gauss in production.
  • Always check condition number and residual to validate solutions for ill-conditioned problems.

⚠ Common Mistakes to Avoid

    Implementing Gaussian elimination without pivoting
    Symptom: Solutions are inaccurate or the algorithm crashes on near-singular matrices.
    Fix: Always implement partial pivoting: swap rows so the pivot is the largest absolute value in the column. In Python, use np.linalg.solve, which does this automatically.

    Using dense Gaussian elimination on sparse systems
    Symptom: Memory usage explodes for large sparse matrices; run time is unacceptable.
    Fix: Store the matrix in sparse format (scipy.sparse) and use iterative solvers like scipy.sparse.linalg.cg or gmres.

    Ignoring the condition number and assuming the solution is correct
    Symptom: Results are plausible but wrong; the error is only caught when downstream calculations fail.
    Fix: Always compute the condition number (np.linalg.cond) and the residual. If cond > 10^12, apply regularisation or use a more stable method.

    Repeatedly calling solve() for multiple b instead of factoring once
    Symptom: Unexpectedly slow performance when solving many linear systems with the same A.
    Fix: Factor once with scipy.linalg.lu_factor, then call lu_solve for each b. This changes O(k n³) to O(n³ + k n²).

    Hardcoding pivot tolerance without scaling to the matrix norm
    Symptom: False singularity alarms when a well-conditioned matrix is uniformly scaled so small that every pivot falls below the hardcoded cutoff — or, conversely, a relatively negligible pivot that passes the cutoff in a large-magnitude matrix.
    Fix: Scale the pivot tolerance by a norm of the matrix — for example, machine epsilon times the maximum element magnitude — rather than a fixed constant. A sketch follows this list.
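Here's a minimal sketch of that norm-scaled tolerance. The eps · n · max|A| formula below is one common heuristic, not a universal standard, and the helper name is illustrative:

scaled_tolerance.py · PYTHON
import numpy as np

def pivot_tolerance(A):
    """Singularity threshold scaled by machine epsilon, size, and magnitude."""
    A = np.asarray(A, dtype=float)
    return np.finfo(float).eps * A.shape[0] * np.max(np.abs(A))

# A well-conditioned matrix scaled very small: every pivot sits below 1e-12,
# so a hardcoded threshold would falsely flag it as singular.
A = 1e-14 * np.array([[2.0, 1.0], [1.0, 3.0]])
print('cond(A):', np.linalg.cond(A))            # ~2.6 — perfectly well-conditioned
print('largest entry:', np.max(np.abs(A)))      # 3e-14, below a hardcoded 1e-12
print('scaled tolerance:', pivot_tolerance(A))  # ~1.3e-29 — the pivots pass easily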

Interview Questions on This Topic

  • Q: Why is partial pivoting necessary in Gaussian elimination? (Senior)
    Partial pivoting prevents division by very small pivots that would otherwise amplify floating-point rounding errors. By swapping the row with the largest absolute value in the current column to the pivot position, multipliers are kept ≤ 1, which bounds error growth. Without pivoting, a pivot like 10^-6 would produce multipliers of 10^6, multiplying initial rounding errors by that factor. All production LAPACK solvers use partial pivoting.
  • Q: What is the time complexity of Gaussian elimination and can it be improved? (Mid-level)
    Standard Gaussian elimination is O(n³) for forward elimination and O(n²) for back substitution. The cubic cost comes from the nested loops in elimination. Optimised BLAS implementations (e.g., GotoBLAS, Intel MKL) use blocked algorithms to exploit cache hierarchy, achieving near-peak CPU performance. Strassen's algorithm reduces to O(n^2.807) but is not used in practice due to numerical instability and large constants. For sparse systems, iterative methods like Conjugate Gradient achieve O(nnz * iterations) which can be O(n²) or better for well-conditioned problems.
  • Q: How does LU decomposition relate to Gaussian elimination? (Senior)
    LU decomposition is Gaussian elimination recorded in matrix form. The forward elimination steps are stored in a lower triangular matrix L (with 1s on the diagonal), and the resulting upper triangular matrix is U. Partial pivoting updates a permutation matrix P so that PA = LU. Solving Ax = b becomes: permute b (Pb), forward substitute Ly = Pb, then back substitute Ux = y. The advantage is that once L, U, and P are computed (O(n³)), solving for additional right-hand sides only costs O(n²) per b. This is how library solvers like LAPACK's DGESV work under the hood.
  • Q: What is iterative refinement and when would you use it? (Senior)
    Iterative refinement improves a solution x₀ from Gaussian elimination by computing the residual r = b - Ax₀, solving Ad = r using the same LU factors, and correcting x₁ = x₀ + d. Repeat until the residual is small. It recovers up to full double precision for systems with condition numbers up to ~10^14. Use it when the residual after the initial solve is too large, especially for ill-conditioned matrices. LAPACK's DGESVX implements this automatically. It's cheap because it reuses the LU factors.
  • Q: How do you handle a singular matrix in production? (Mid-level)
    First, verify singularity by checking determinant and condition number. If the matrix is truly singular (rank < n), the system has either no solutions or infinitely many. Use np.linalg.lstsq to find a least-squares solution, or remove dependent rows/columns. If the problem is noise from near-singularity (cond > 10^14), apply Tikhonov regularisation (Ridge regression) or use a pseudo-inverse. In production, always set a condition number threshold to switch solver automatically.

Frequently Asked Questions

When should I use iterative methods instead of Gaussian elimination?

For sparse systems (mostly zeros) with n > 10,000: iterative methods (Conjugate Gradient, GMRES) exploit sparsity for O(nnz × iterations) vs O(n³) for dense Gauss. For dense systems with n < 10,000: Gaussian elimination (LAPACK) is almost always fastest.

How do I check if my matrix is well-conditioned?

Compute the condition number with np.linalg.cond(A). If it's near 1, the matrix is well-conditioned. If it's > 10^10, small perturbations in b can cause large errors in x. For very large condition numbers (> 10^14), consider regularisation (e.g., Ridge regression) or higher precision (e.g., numpy.longdouble — platform-dependent extended precision — or mpmath for arbitrary precision).

Can Gaussian elimination handle complex numbers?

Yes, the algorithm works identically for complex matrices. LAPACK's ZGESV handles complex double precision. In Python, numpy.linalg.solve works for complex arrays. Partial pivoting uses absolute values (modulus) to choose the pivot.
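A quick sketch confirming this (the matrix and right-hand side are arbitrary):

complex_solve.py · PYTHON
import numpy as np

A = np.array([[2 + 1j, 1], [1, 3 - 2j]])
b = np.array([1 + 0j, 2])
x = np.linalg.solve(A, b)              # dispatches to LAPACK's ZGESV
print('x =', x)
print('residual:', np.linalg.norm(A @ x - b))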

What is the difference between partial and full pivoting?

Partial pivoting swaps rows only, keeping multipliers ≤ 1. Full pivoting swaps both rows and columns, making multipliers even smaller, but it's rarely used because it requires reordering unknowns and costs O(n³) extra searches. Partial pivoting is standard in LAPACK and sufficient for most problems.

How do I handle a matrix that is nearly singular but not exactly?

Check the condition number. If cond(A) is very large (e.g., > 10^14), the solution is unreliable even with pivoting. Options: use iterative refinement, switch to a regularised solver (Tikhonov), or increase precision. For moderate ill-conditioning (cond ~ 10^10), scipy.linalg.lapack.dgesvx provides error bounds and iterative refinement.
