Eigenvalues and Eigenvectors — Explained with Applications
- Av = λv: eigenvector v is unchanged in direction by A, scaled by eigenvalue λ.
- Eigenvalues from characteristic polynomial det(A-λI)=0. Eigenvectors from null space of (A-λI).
- PCA: eigenvectors of covariance matrix = principal components. Eigenvalues = variance explained.
Eigenvalues and eigenvectors are among the most fundamental concepts in applied linear algebra. The defining equation is Av = λv: when a matrix A is applied to one of its eigenvectors v, the result is the same vector scaled by the eigenvalue λ. Finding the directions a transformation preserves (up to scaling) reveals the essential structure of the transformation.
Google's original PageRank was the dominant eigenvector of the web's link matrix. PCA computes the eigenvectors of a covariance matrix to find principal components. Quantum states are eigenvectors of Hamiltonian operators. Structural resonant frequencies are eigenvalues of stiffness matrices. Understanding eigenvectors is understanding how to extract the 'essential directions' from a linear system.
Computing Eigenvalues and Eigenvectors
Eigenvalues: solve det(A - λI) = 0 (characteristic polynomial). Eigenvectors: for each λ, solve (A - λI)v = 0.
```python
import numpy as np

A = np.array([[4, 2], [1, 3]], dtype=float)

# NumPy eigendecomposition
eigenvalues, eigenvectors = np.linalg.eig(A)
print('Eigenvalues:', eigenvalues)               # [5. 2.]
print('Eigenvectors (columns):\n', eigenvectors)

# Verify: Av = λv for each eigenpair
for i in range(len(eigenvalues)):
    v = eigenvectors[:, i]
    lam = eigenvalues[i]
    print(f'λ={lam:.1f}: Av={A @ v}, λv={lam * v}')
    print(f'  Match: {np.allclose(A @ v, lam * v)}')
```

```
Eigenvalues: [5. 2.]
Eigenvectors (columns):
 [[ 0.894  0.707]
 [ 0.447 -0.707]]
λ=5.0: Av=[4.472 2.236], λv=[4.472 2.236]
  Match: True
λ=2.0: Av=[ 1.414 -1.414], λv=[ 1.414 -1.414]
  Match: True
```
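The same eigenvalues can be recovered by hand from the characteristic polynomial. For the matrix above, det(A − λI) = (4 − λ)(3 − λ) − 2·1 = λ² − 7λ + 10, whose roots are 5 and 2. A minimal check of that route — roots of the polynomial, then a null-space vector of (A − λI) — looks like this:

```python
import numpy as np

# Characteristic polynomial of A = [[4, 2], [1, 3]]:
# det(A - λI) = (4-λ)(3-λ) - 2*1 = λ² - 7λ + 10
coeffs = [1, -7, 10]
print(np.roots(coeffs))  # roots are the eigenvalues: [5. 2.]

# For each λ, an eigenvector spans the null space of (A - λI).
# The last right-singular vector of a singular matrix lies in its null space.
A = np.array([[4, 2], [1, 3]], dtype=float)
for lam in np.roots(coeffs):
    v = np.linalg.svd(A - lam * np.eye(2))[2][-1]
    print(f'λ={lam:.1f}: Av = λv? {np.allclose(A @ v, lam * v)}')  # True
```

This mirrors the NumPy result exactly; `np.linalg.eig` just does the equivalent computation with a numerically robust algorithm instead of polynomial root-finding.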
Power Iteration — Finding the Dominant Eigenvector
Power iteration finds the eigenvalue of largest absolute value by repeatedly multiplying a vector by A and normalising; the iterates converge to the dominant eigenvector. This is the algorithm behind the original PageRank.
```python
import numpy as np

def power_iteration(A, n_iter=100, tol=1e-10):
    """Find dominant eigenpair (largest-magnitude eigenvalue)."""
    n = len(A)
    v = np.random.rand(n)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v_new = A @ v
        eigenvalue = np.dot(v_new, v)  # Rayleigh quotient (v is unit length)
        v_new /= np.linalg.norm(v_new)
        if np.linalg.norm(v_new - v) < tol:
            return eigenvalue, v_new
        v = v_new
    return eigenvalue, v

A = np.array([[4, 2], [1, 3]], dtype=float)
lam, v = power_iteration(A)
print(f'Dominant eigenvalue: {lam:.4f}')  # 5.0000
print(f'Dominant eigenvector: {v}')
```

```
Dominant eigenvalue: 5.0000
Dominant eigenvector: [0.894 0.447]
```
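To make the PageRank connection concrete, here is a sketch on a hypothetical three-page web (the link structure and damping factor are illustrative, not Google's actual data). Each column of the link matrix holds a page's out-link probabilities; power iteration on the damped "Google matrix" converges to the stationary rank vector:

```python
import numpy as np

# Hypothetical 3-page web: column j = out-link probabilities of page j.
# Page 0 links to 1 and 2; page 1 links to 2; page 2 links to 0.
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

# Google matrix with damping factor d = 0.85
d = 0.85
n = L.shape[0]
G = d * L + (1 - d) / n * np.ones((n, n))

# Power iteration: for a column-stochastic matrix the dominant
# eigenvalue is 1, so the iterates converge to the rank vector r with Gr = r.
r = np.ones(n) / n
for _ in range(100):
    r = G @ r
    r /= r.sum()  # keep r a probability vector

print('PageRank:', r)                     # page 2 ranks highest here
print('Gr ≈ r:', np.allclose(G @ r, r))   # r is an eigenvector with λ = 1
```

Page 2 wins because it receives links from both other pages; the damping term (the `(1-d)/n` rank-one part) models a surfer randomly jumping to any page and guarantees convergence.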
🎯 Key Takeaways
- Av = λv: eigenvector v is unchanged in direction by A, scaled by eigenvalue λ.
- Eigenvalues from characteristic polynomial det(A-λI)=0. Eigenvectors from null space of (A-λI).
- PCA: eigenvectors of covariance matrix = principal components. Eigenvalues = variance explained.
- Power iteration: repeated A×v/‖Av‖ converges to dominant eigenvector — PageRank's original algorithm.
- Symmetric matrices always have real eigenvalues and orthogonal eigenvectors — use numpy.linalg.eigh for them.
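The last two takeaways combine naturally: a covariance matrix is symmetric, so PCA can use `eigh`. A minimal sketch on synthetic 2-D data (the data and seed are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 2-D data, stretched along one direction
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 1.0], [0.0, 1.0]])
X -= X.mean(axis=0)

C = np.cov(X, rowvar=False)  # covariance matrix is symmetric

# eigh is for symmetric/Hermitian matrices: real eigenvalues,
# orthonormal eigenvectors, returned in ascending eigenvalue order.
eigenvalues, eigenvectors = np.linalg.eigh(C)

# Eigenvectors are orthonormal: QᵀQ = I
print(np.allclose(eigenvectors.T @ eigenvectors, np.eye(2)))  # True

# Fraction of total variance explained by the top principal component
explained = eigenvalues[-1] / eigenvalues.sum()
print(f'Top component explains {explained:.0%} of the variance')
```

The columns of `eigenvectors` are the principal components, and each eigenvalue is the variance of the data along that component — exactly the summary in the takeaways.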
Interview Questions on This Topic
- Q: What is an eigenvector geometrically?
- Q: How is PageRank related to eigenvectors?
- Q: Why do symmetric matrices have real eigenvalues?
- Q: How does PCA use eigenvectors?
Frequently Asked Questions
What is eigendecomposition vs SVD?
Eigendecomposition A = QΛQ^-1 requires A to be square and diagonalisable. SVD A = UΣV^T works for any matrix (including non-square, rank-deficient). SVD is more numerically stable and general. For rectangular matrices (overdetermined systems, data matrices in ML), always use SVD.
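Both points are easy to check in NumPy — a small sketch with made-up matrices: for a symmetric positive-definite matrix the two decompositions agree (singular values equal eigenvalues), while for a rectangular matrix only SVD applies:

```python
import numpy as np

# Square symmetric positive-definite matrix: eigendecomposition and SVD
# coincide (singular values = eigenvalues, up to ordering).
S = np.array([[3.0, 1.0], [1.0, 2.0]])
eigvals = np.linalg.eigvals(S)
singvals = np.linalg.svd(S, compute_uv=False)
print(np.allclose(np.sort(eigvals), np.sort(singvals)))  # True

# Rectangular matrix: eig would raise LinAlgError, SVD works fine.
B = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])  # 2x3
U, s, Vt = np.linalg.svd(B, full_matrices=False)
print(np.allclose(U @ np.diag(s) @ Vt, B))  # reconstruction holds: True
```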