
Eigenvalues and Eigenvectors — Explained with Applications

📍 Part of: Linear Algebra → Topic 4 of 5
Learn eigenvalues and eigenvectors — what they mean geometrically, how to compute them, and why they underpin PCA, Google PageRank, vibration analysis, and quantum mechanics.
🔥 Advanced — solid DSA foundation required
In this tutorial, you'll learn:
  • Av = λv: eigenvector v is unchanged in direction by A, scaled by eigenvalue λ.
  • Eigenvalues from characteristic polynomial det(A-λI)=0. Eigenvectors from null space of (A-λI).
  • PCA: eigenvectors of covariance matrix = principal components. Eigenvalues = variance explained.
✦ Plain-English analogy ✦ Real code with output ✦ Interview questions
⚡ Quick Answer
An eigenvector of a matrix is a direction that the matrix does not rotate — it only stretches or shrinks. The eigenvalue is how much it stretches. If you multiply a transformation matrix by its eigenvector, you get the same vector scaled by the eigenvalue. This 'invariant direction' concept appears everywhere: principal components in PCA, dominant webpage in PageRank, stable modes in structural engineering, and quantum states in physics.

Eigenvalues and eigenvectors are perhaps the most fundamental concepts in applied linear algebra. The defining equation is Av = λv: a matrix A, applied to its eigenvector v, produces the same vector scaled by the eigenvalue λ. Finding the directions that a transformation preserves (up to scaling) reveals the essential structure of the transformation.

Google's original PageRank was the dominant eigenvector of the web's link matrix. PCA computes the eigenvectors of a covariance matrix to find principal components. Quantum states are eigenvectors of Hamiltonian operators. Structural resonant frequencies are eigenvalues of stiffness matrices. Understanding eigenvectors is understanding how to extract the 'essential directions' from a linear system.

Computing Eigenvalues and Eigenvectors

Eigenvalues: solve det(A - λI) = 0 (characteristic polynomial). Eigenvectors: for each λ, solve (A - λI)v = 0.

eigen.py · PYTHON
import numpy as np

A = np.array([[4,2],[1,3]], dtype=float)

# NumPy eigendecomposition
eigenvalues, eigenvectors = np.linalg.eig(A)
print('Eigenvalues:', eigenvalues)   # [5, 2]
print('Eigenvectors (columns):\n', eigenvectors)

# Verify: Av = λv
for i in range(len(eigenvalues)):
    v = eigenvectors[:, i]
    lam = eigenvalues[i]
    print(f'λ={lam:.1f}: Av={A@v}, λv={lam*v}')
    print(f'  Match: {np.allclose(A@v, lam*v)}')
▶ Output
Eigenvalues: [5. 2.]
Eigenvectors (columns):
[[0.894 0.707]
[0.447 -0.707]]
λ=5.0: Av=[4.472 2.236], λv=[4.472 2.236]
Match: True
λ=2.0: Av=[1.414 -1.414], λv=[1.414 -1.414]
Match: True
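For a 2×2 matrix you can recover the same eigenvalues by hand: det(A − λI) expands to λ² − trace(A)·λ + det(A). A minimal sketch of that route, using the same A as above (np.roots simply finds polynomial roots):

```python
import numpy as np

A = np.array([[4, 2], [1, 3]], dtype=float)

# Characteristic polynomial of a 2x2 matrix: λ² - trace(A)·λ + det(A)
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]   # [1, -7, 10]
lams = np.sort(np.roots(coeffs))                 # roots of λ² - 7λ + 10
print('Eigenvalues from char. poly:', lams)      # [2. 5.]

# Eigenvector for λ = 5: any nonzero v solving (A - 5I)v = 0.
# A - 5I = [[-1, 2], [1, -2]], so v is proportional to [2, 1].
v = np.array([2.0, 1.0])
v /= np.linalg.norm(v)
print('v for λ=5:', v)                           # ≈ [0.894, 0.447]
```

This matches the NumPy output above: the characteristic polynomial gives λ = 5 and λ = 2, and the null space of (A − 5I) gives the first eigenvector column.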

Power Iteration — Finding the Dominant Eigenvector

Power iteration finds the eigenvector with the largest-magnitude eigenvalue by repeatedly multiplying by A and normalising. This is the algorithm behind the original PageRank.

power_iteration.py · PYTHON
import numpy as np

def power_iteration(A, n_iter=100, tol=1e-10):
    """Find the dominant eigenvalue and its eigenvector."""
    n = len(A)
    v = np.random.rand(n)
    v /= np.linalg.norm(v)
    eigenvalue = 0.0
    for _ in range(n_iter):
        v_new = A @ v
        eigenvalue = np.dot(v_new, v)  # Rayleigh quotient (v is unit length)
        v_new /= np.linalg.norm(v_new)
        if np.linalg.norm(v_new - v) < tol:
            return eigenvalue, v_new
        v = v_new
    return eigenvalue, v

A = np.array([[4, 2], [1, 3]], dtype=float)
lam, v = power_iteration(A)
print(f'Dominant eigenvalue: {lam:.4f}')  # 5.0000
print(f'Dominant eigenvector: {v}')
▶ Output
Dominant eigenvalue: 5.0000
Dominant eigenvector: [0.894 0.447]
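To make the PageRank connection concrete, the same iteration applied to a column-stochastic link matrix converges to the stationary rank vector, the eigenvector for λ = 1. The 3-page link graph below is a made-up toy example, not real PageRank data:

```python
import numpy as np

# Hypothetical 3-page web: column j holds the out-link probabilities of page j.
# Page 0 links to pages 1 and 2; page 1 links to page 2; page 2 links to page 0.
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

r = np.ones(3) / 3          # start from a uniform rank vector
for _ in range(100):
    r = L @ r
    r /= r.sum()            # keep it a probability distribution

print('Ranks:', r.round(3))                 # [0.4 0.2 0.4]
print('L·r ≈ r:', np.allclose(L @ r, r))    # r is the λ=1 eigenvector
```

Pages 0 and 2 end up with equal rank because each receives the other's full weight, while page 1 gets only half of page 0's. Real PageRank adds a damping factor so the iteration converges even when the link graph has dead ends or cycles.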

🎯 Key Takeaways

  • Av = λv: eigenvector v is unchanged in direction by A, scaled by eigenvalue λ.
  • Eigenvalues from characteristic polynomial det(A-λI)=0. Eigenvectors from null space of (A-λI).
  • PCA: eigenvectors of covariance matrix = principal components. Eigenvalues = variance explained.
  • Power iteration: repeated A×v/‖Av‖ converges to dominant eigenvector — PageRank's original algorithm.
  • Symmetric matrices always have real eigenvalues and orthogonal eigenvectors — numpy.linalg.eigh for symmetric.
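The last two takeaways can be sketched together: a covariance matrix is symmetric, so np.linalg.eigh returns real eigenvalues and orthonormal eigenvectors, and those eigenvectors are the principal components. The 2-D toy dataset below is made up for illustration (points spread mostly along the (1, 1) direction):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-D data: strong variation along (1, 1), small noise elsewhere
t = rng.normal(size=200)
X = np.column_stack([t + 0.1 * rng.normal(size=200),
                     t + 0.1 * rng.normal(size=200)])

Xc = X - X.mean(axis=0)               # centre the data
C = np.cov(Xc, rowvar=False)          # 2x2 covariance matrix (symmetric)

# eigh is for symmetric matrices: eigenvalues come back real, in ascending order
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]       # sort descending: largest variance first
evals, evecs = evals[order], evecs[:, order]

print('Variance explained:', evals / evals.sum())
print('First principal component:', evecs[:, 0])  # ≈ ±[0.707, 0.707] (sign arbitrary)
```

The first principal component points along (1, 1), the direction of maximum variance, and its eigenvalue dominates the total variance, which is exactly what PCA reports as "variance explained".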

Interview Questions on This Topic

  • Q: What is an eigenvector geometrically?
  • Q: How is PageRank related to eigenvectors?
  • Q: Why do symmetric matrices have real eigenvalues?
  • Q: How does PCA use eigenvectors?

Frequently Asked Questions

What is eigendecomposition vs SVD?

Eigendecomposition A = QΛQ^-1 requires A to be square and diagonalisable. SVD A = UΣV^T works for any matrix (including non-square, rank-deficient). SVD is more numerically stable and general. For rectangular matrices (overdetermined systems, data matrices in ML), always use SVD.
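A minimal sketch of the contrast: eigendecomposition is undefined for a rectangular matrix, while np.linalg.svd handles any shape directly. The two are linked through MᵀM, whose eigenvalues are the squared singular values of M (the matrix here is an arbitrary 3×2 example):

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # 3x2: not square, so no eigendecomposition exists

# SVD works for any shape: M = U Σ V^T
U, s, Vt = np.linalg.svd(M, full_matrices=False)
print('Singular values:', s.round(4))

# Reconstruct M from its factors
M_rec = U @ np.diag(s) @ Vt
print('Reconstruction OK:', np.allclose(M, M_rec))

# Link to eigendecomposition: MᵀM is square and symmetric, and its
# eigenvalues are the squared singular values of M.
evals = np.linalg.eigvalsh(M.T @ M)[::-1]   # descending, to match s
print('eig(MᵀM) = σ²:', np.allclose(evals, s**2))
```

This is also why SVD is the numerically safer route for data matrices: it avoids forming MᵀM explicitly, which squares the condition number.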

Naren · Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.

← Previous: LU Decomposition · Next: Singular Value Decomposition — SVD →
Forged with 🔥 at TheCodeForge.io — Where Developers Are Forged