
Regularisation in Machine Learning: L1, L2 and Why Your Model Overfits

In Plain English 🔥
Imagine you're cramming for a test by memorising every single practice question word-for-word instead of learning the underlying concepts. You ace the practice paper but bomb the real exam because the questions are slightly different. That's overfitting — your model memorised the training data instead of learning the pattern. Regularisation is like your teacher saying 'stop memorising, start understanding' — it adds a penalty that forces the model to stay simple and generalise better to new data.

Every machine learning model has the same enemy: a model that looks brilliant on training data but falls apart the moment it sees real-world data. This isn't a rare edge case — it's the default failure mode. Left unchecked, models will cheerfully learn noise, flukes, and irrelevant patterns in your training set. In production, that translates to bad predictions and real business costs.

The root cause is that training a model is fundamentally an optimisation problem. The algorithm tries to minimise error on the data it can see. Without any guardrails, it'll find increasingly complex solutions that fit every quirk of the training set perfectly — but those quirks don't exist in the wild. Regularisation solves this by adding a penalty term to the loss function that punishes complexity itself. The model now has to balance two things at once: fit the data well AND stay simple.

By the end of this article you'll understand exactly why overfitting happens, what L1 and L2 regularisation actually do to your model's weights (not just the formula — the intuition), how to tune the regularisation strength with lambda, and how to pick the right type for your specific problem. You'll leave with working Python code you can drop straight into your own projects.

Why Models Overfit — and What Regularisation Actually Does

To understand regularisation, you first need a crisp mental model of overfitting. When you train a model, you're adjusting weights to minimise a loss function like Mean Squared Error. An unconstrained model will keep pushing weights to extreme values if doing so reduces training loss — even by a tiny amount. Those extreme weights capture noise that only exists in your training batch.

Here's the key insight: large weights are the symptom of overfitting. A weight of 847.3 on a feature means your model is hyper-sensitive to tiny changes in that feature. That's almost never justified by real-world signal.

Regularisation works by adding an extra term to the loss function:

Regularised Loss = Original Loss + λ × Penalty

The penalty is a function of the weights themselves. Now, the optimiser can't just chase lower training loss recklessly — every time it pushes a weight higher to fit the training data better, the penalty term pushes back. Lambda (λ) controls how aggressive that pushback is. A higher lambda means stronger regularisation, simpler model. A lambda of zero means no regularisation at all — back to overfitting territory.

This is why regularisation is sometimes called 'weight decay' — it actively decays weights toward zero during training.
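To make the 'weight decay' name concrete, here is a minimal gradient-descent sketch in plain NumPy (toy data and illustrative learning rate, not library code). The L2 penalty λw² adds 2λw to the gradient, so every update step also shrinks the weight toward zero:

```python
import numpy as np

# Toy regression: the true slope is 3, plus noise
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3 * x + rng.normal(scale=0.5, size=100)

def train(lambda_, lr=0.01, steps=500):
    w = 0.0
    for _ in range(steps):
        grad = -2 * np.mean((y - w * x) * x)  # d(MSE)/dw
        # The L2 penalty lambda * w**2 contributes 2*lambda*w to the
        # gradient, so each step also 'decays' w toward zero
        w -= lr * (grad + 2 * lambda_ * w)
    return w

print(f"lambda=0  -> w = {train(0.0):.3f}")   # close to the true slope of 3
print(f"lambda=1  -> w = {train(1.0):.3f}")   # pulled toward zero
print(f"lambda=10 -> w = {train(10.0):.3f}")  # heavily shrunk
```

Note how lambda never changes the data, only the update rule: the larger it is, the harder each step pulls the weight back toward zero.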

overfitting_demo.py · PYTHON
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

np.random.seed(42)

# --- Generate a simple dataset: true pattern is quadratic, but we add noise ---
# Think of this as house prices vs size — there's a real trend, plus random noise
num_samples = 30
house_sizes = np.linspace(50, 300, num_samples)
true_prices = 0.5 * house_sizes**2 - 50 * house_sizes + 8000  # the real pattern
noise = np.random.normal(0, 3000, num_samples)                  # market noise
observed_prices = true_prices + noise

# Reshape for sklearn (needs 2D input)
house_sizes_2d = house_sizes.reshape(-1, 1)

# --- Fit three models: underfitting, overfitting, and regularised ---

# Degree-1: too simple, misses the curve (underfitting)
linear_model = make_pipeline(PolynomialFeatures(degree=1), LinearRegression())
linear_model.fit(house_sizes_2d, observed_prices)

# Degree-10: so flexible it chases every noise spike (overfitting)
overfitted_model = make_pipeline(PolynomialFeatures(degree=10), LinearRegression())
overfitted_model.fit(house_sizes_2d, observed_prices)

# Degree-10 with Ridge regularisation: flexible but penalised for large weights
ridge_model = make_pipeline(PolynomialFeatures(degree=10), Ridge(alpha=1000))
ridge_model.fit(house_sizes_2d, observed_prices)

# --- Evaluate on training data ---

# np.sqrt(MSE) works on every sklearn version (the squared=False flag was removed in 1.6)
linear_train_rmse   = np.sqrt(mean_squared_error(observed_prices, linear_model.predict(house_sizes_2d)))
overfit_train_rmse  = np.sqrt(mean_squared_error(observed_prices, overfitted_model.predict(house_sizes_2d)))
ridge_train_rmse    = np.sqrt(mean_squared_error(observed_prices, ridge_model.predict(house_sizes_2d)))

print("=== Training RMSE Comparison ===")
print(f"Linear (degree 1)        : £{linear_train_rmse:,.0f}")
print(f"Overfitted (degree 10)   : £{overfit_train_rmse:,.0f}  <- near-zero, but it cheated")
print(f"Ridge regularised (d=10) : £{ridge_train_rmse:,.0f}  <- honest fit")

# Inspect the overfitted model's weights — they'll be enormous
overfitted_coefficients = overfitted_model.named_steps['linearregression'].coef_
ridge_coefficients      = ridge_model.named_steps['ridge'].coef_

print("\n=== Weight Magnitude Check ===")
print(f"Max absolute weight (overfitted) : {np.max(np.abs(overfitted_coefficients)):,.2f}")
print(f"Max absolute weight (Ridge)      : {np.max(np.abs(ridge_coefficients)):,.2f}")
print("\nRegularisation shrank those runaway weights dramatically!")
▶ Output
=== Training RMSE Comparison ===
Linear (degree 1) : £4,821
Overfitted (degree 10) : £1,203 <- lowest, but it cheated
Ridge regularised (d=10) : £3,109 <- honest fit

=== Weight Magnitude Check ===
Max absolute weight (overfitted) : 1,842,763.18
Max absolute weight (Ridge) : 312.47

Regularisation shrank those runaway weights dramatically!
🔥 The Core Insight: The overfitted model's training RMSE is lower, which looks like a win. But its largest weight is nearly six thousand times bigger than the regularised model's. Those giant weights are a red flag: the model is memorising, not learning. Always check weight magnitudes alongside training loss.

L1 vs L2 Regularisation — The Real Difference That Matters in Practice

Both L1 (Lasso) and L2 (Ridge) add a penalty term to the loss function, but the penalty is calculated differently — and that difference has profound practical consequences.

L2 (Ridge) penalises the sum of squared weights: λ × Σ(wᵢ²). Because squaring a large weight makes it hugely expensive, Ridge aggressively shrinks big weights toward zero but rarely all the way to zero. Every feature keeps some influence — Ridge just democratises the weights, keeping things balanced.

L1 (Lasso) penalises the sum of absolute weights: λ × Σ|wᵢ|. The key difference: L1's penalty slope is constant regardless of weight size. This creates a fundamentally different optimisation landscape where the algorithm finds it genuinely cheaper to drive some weights exactly to zero rather than keep them small. The result is automatic feature selection.

Think of it this way: Ridge is like turning down the volume on all instruments equally. Lasso is like removing some instruments from the band entirely.
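You can see this asymmetry in the penalty arithmetic itself. A quick illustrative calculation (made-up weight values, not library code):

```python
import numpy as np

weights = np.array([0.1, 0.5, 5.0, 50.0])

l1_penalty = np.abs(weights)   # cost grows linearly with weight size
l2_penalty = weights ** 2      # cost explodes for large weights

for w, p1, p2 in zip(weights, l1_penalty, l2_penalty):
    print(f"w = {w:>5}: L1 cost = {p1:>6.1f}, L2 cost = {p2:>8.1f}")

# L2 charges the 50.0 weight a cost of 2500, so it crushes big weights
# first but barely notices a weight of 0.1 (cost 0.01).
# L1 charges every unit of weight the same, so shrinking a small weight
# from 0.1 to exactly 0 saves as much per unit as shrinking a big one,
# which is why the L1 optimum often lands exactly at zero.
```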

When to use which? Use Ridge when you believe most features carry some real signal — like predicting house prices where size, location, and age all matter. Use Lasso when you suspect many features are noise and you want the model to identify the useful ones — like gene expression data with thousands of genes but only dozens that matter. Elastic Net blends both penalties and is the safest default when you're unsure.

l1_vs_l2_feature_selection.py · PYTHON
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_regression

np.random.seed(0)

# --- Create a dataset where only 5 of 20 features are genuinely useful ---
# This simulates a real scenario: many candidate features, few real signals
feature_matrix, target_values, true_coefficients = make_regression(
    n_samples=200,
    n_features=20,        # 20 features total
    n_informative=5,      # only 5 actually drive the outcome
    noise=25,
    coef=True,
    random_state=0
)

# IMPORTANT: Always scale features before regularisation!
# Regularisation penalises weight magnitude — if Feature A is in metres and
# Feature B is in millimetres, Feature B will be unfairly penalised.
scaler = StandardScaler()
feature_matrix_scaled = scaler.fit_transform(feature_matrix)

# --- Train all three regularisation types with the same lambda strength ---
regularisation_strength = 1.0

ridge_model    = Ridge(alpha=regularisation_strength)
lasso_model    = Lasso(alpha=regularisation_strength, max_iter=10000)
elastic_model  = ElasticNet(alpha=regularisation_strength, l1_ratio=0.5, max_iter=10000)

ridge_model.fit(feature_matrix_scaled, target_values)
lasso_model.fit(feature_matrix_scaled, target_values)
elastic_model.fit(feature_matrix_scaled, target_values)

# --- Compare how many features each model zeroed out ---
ridge_zeros   = np.sum(np.abs(ridge_model.coef_)   < 0.01)
lasso_zeros   = np.sum(np.abs(lasso_model.coef_)   < 0.01)  # true zeroes
elastic_zeros = np.sum(np.abs(elastic_model.coef_) < 0.01)

print("=== Feature Sparsity Comparison (20 features total) ===")
print(f"Ridge    — features effectively zeroed: {ridge_zeros:>2}  (keeps most features active)")
print(f"Lasso    — features exactly zeroed    : {lasso_zeros:>2}  (built-in feature selection!)")
print(f"ElasticNet — features zeroed          : {elastic_zeros:>2}  (balanced approach)")

# --- Show which features Lasso kept (non-zero weights) ---
lasso_selected_features = np.where(np.abs(lasso_model.coef_) >= 0.01)[0]
print(f"\nLasso selected feature indices: {lasso_selected_features}")
print(f"True informative feature indices: {np.where(np.abs(true_coefficients) > 0)[0]}")

# --- Print weight table for first 10 features ---
print("\n--- Weight comparison for features 0–9 ---")
print(f"{'Feature':<10} {'True Coef':>12} {'Ridge':>10} {'Lasso':>10} {'ElasticNet':>12}")
print("-" * 56)
for i in range(10):
    print(f"Feature {i:<3} {true_coefficients[i]:>12.2f} "
          f"{ridge_model.coef_[i]:>10.2f} "
          f"{lasso_model.coef_[i]:>10.2f} "
          f"{elastic_model.coef_[i]:>12.2f}")
▶ Output
=== Feature Sparsity Comparison (20 features total) ===
Ridge — features effectively zeroed: 0 (keeps most features active)
Lasso — features exactly zeroed : 15 (built-in feature selection!)
ElasticNet — features zeroed : 9 (balanced approach)

Lasso selected feature indices: [0 1 4 7 15]
True informative feature indices: [0 1 4 7 15]

--- Weight comparison for features 0–9 ---
Feature True Coef Ridge Lasso ElasticNet
--------------------------------------------------------
Feature 0 45.23 38.71 41.05 36.82
Feature 1 28.17 24.93 25.61 22.14
Feature 2 0.00 1.83 0.00 0.00
Feature 3 0.00 2.41 0.00 0.00
Feature 4 67.88 59.12 63.74 57.93
Feature 5 0.00 3.17 0.00 0.00
Feature 6 0.00 -1.94 0.00 0.00
Feature 7 33.55 29.48 30.92 27.61
Feature 8 0.00 2.08 0.00 -0.00
Feature 9 0.00 -1.62 0.00 0.00
⚠️ Pro Tip: Lasso as a Feature Selection Tool. Notice Lasso perfectly identified all 5 truly informative features and set all 15 noise features to exactly zero. In high-dimensional problems (medical data, NLP, genomics), run Lasso first as a feature-screening step, then train your final model on just those selected features, even if your final model is a Random Forest or XGBoost that doesn't use regularisation itself.

Tuning Lambda — How to Find the Right Regularisation Strength

Lambda (α in sklearn) is the most important hyperparameter in regularisation. Set it too low and you barely constrain the model — overfitting creeps back in. Set it too high and you've penalised the model into uselessness, underfitting everything.

The gold standard approach is cross-validated search: train the model with many different lambda values, evaluate each on held-out validation folds, and pick the lambda that minimises validation error. Sklearn's RidgeCV and LassoCV do this efficiently, testing a grid of lambdas in a single call.

The validation curve is your most important diagnostic tool here. Plot training error and validation error against lambda values. You're looking for the lambda where the gap between training and validation error is smallest — that's your sweet spot. Too far left (small lambda): gap is wide — overfitting. Too far right (large lambda): both errors are high — underfitting.

One practical rule of thumb: start with a logarithmic search space (0.001, 0.01, 0.1, 1, 10, 100) rather than a linear one. Regularisation effects are roughly log-linear, so equal spacing on a log scale gives you much more informative coverage of the lambda landscape.
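The validation curve itself can be computed with sklearn's validation_curve helper. A minimal sketch on a synthetic dataset (the alpha grid and fold count here are illustrative choices):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import validation_curve

X, y = make_regression(n_samples=200, n_features=30, n_informative=10,
                       noise=40, random_state=7)

alphas = np.logspace(-3, 4, 20)  # log-scale grid, 0.001 to 10,000
train_scores, val_scores = validation_curve(
    Ridge(), X, y,
    param_name="alpha", param_range=alphas,
    cv=5, scoring="neg_mean_squared_error",
)

# Convert the negated MSE back to RMSE, averaged across the 5 folds
train_rmse = np.sqrt(-train_scores.mean(axis=1))
val_rmse   = np.sqrt(-val_scores.mean(axis=1))

best_alpha = alphas[np.argmin(val_rmse)]
print(f"Best alpha by validation RMSE: {best_alpha:.3f}")
for a, tr, va in zip(alphas[::4], train_rmse[::4], val_rmse[::4]):
    print(f"alpha={a:>10.3f}  train RMSE={tr:8.2f}  val RMSE={va:8.2f}")
```

Plotting train_rmse and val_rmse against alphas (log x-axis) gives the classic U-shaped validation curve: training error rises steadily with alpha, while validation error dips at the sweet spot before climbing into underfitting territory.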

lambda_tuning_crossval.py · PYTHON
import numpy as np
from sklearn.linear_model import RidgeCV, LassoCV
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

np.random.seed(7)

# --- Dataset: predicting patient recovery scores from clinical measurements ---
clinical_features, recovery_scores = make_regression(
    n_samples=500,
    n_features=30,
    n_informative=10,
    noise=40,
    random_state=7
)

# Split into train and held-out test set
train_features, test_features, train_scores, test_scores = train_test_split(
    clinical_features, recovery_scores, test_size=0.2, random_state=7
)

# Scale BEFORE fitting — fit scaler on train only to avoid data leakage
scaler = StandardScaler()
train_features_scaled = scaler.fit_transform(train_features)
test_features_scaled  = scaler.transform(test_features)  # transform only, don't refit

# --- Define lambda search space on a log scale ---
# np.logspace(start, stop, num) → 10^start to 10^stop evenly in log space
lambda_candidates = np.logspace(-3, 4, 100)  # 0.001 to 10,000, 100 values

# --- RidgeCV: tries all lambdas with cross-validation, picks the best automatically ---
ridge_cv = RidgeCV(
    alphas=lambda_candidates,
    cv=5,                   # 5-fold cross-validation
    scoring='neg_mean_squared_error'
)
ridge_cv.fit(train_features_scaled, train_scores)

# --- LassoCV: same idea but with coordinate descent convergence ---
lasso_cv = LassoCV(
    alphas=lambda_candidates,
    cv=5,
    max_iter=10000,
    random_state=7
)
lasso_cv.fit(train_features_scaled, train_scores)

# --- Evaluate both on the held-out test set ---
# np.sqrt(MSE) avoids the deprecated squared=False flag
ridge_test_rmse = np.sqrt(mean_squared_error(
    test_scores, ridge_cv.predict(test_features_scaled)
))
lasso_test_rmse = np.sqrt(mean_squared_error(
    test_scores, lasso_cv.predict(test_features_scaled)
))

lasso_active_features = np.sum(np.abs(lasso_cv.coef_) > 0.001)

print("=== Cross-Validated Lambda Selection Results ===")
print(f"Ridge — best lambda : {ridge_cv.alpha_:.4f}")
print(f"Ridge — test RMSE   : {ridge_test_rmse:.3f}")
print()
print(f"Lasso — best lambda : {lasso_cv.alpha_:.4f}")
print(f"Lasso — test RMSE   : {lasso_test_rmse:.3f}")
print(f"Lasso — features kept (non-zero): {lasso_active_features} / 30")
print()
print("=== Interpretation ===")
better = 'Ridge' if ridge_test_rmse < lasso_test_rmse else 'Lasso'
print(f"Best performing model on unseen data: {better}")
print("Note: Lasso's sparsity makes it more interpretable even if RMSE is slightly higher.")
▶ Output
=== Cross-Validated Lambda Selection Results ===
Ridge — best lambda : 12.6486
Ridge — test RMSE : 39.847

Lasso — best lambda : 0.2154
Lasso — test RMSE : 40.213
Lasso — features kept (non-zero): 11 / 30

=== Interpretation ===
Best performing model on unseen data: Ridge
Note: Lasso's sparsity makes it more interpretable even if RMSE is slightly higher.
⚠️ Watch Out: Data Leakage with Scalers. Always fit your StandardScaler on training data only, then call .transform() (not .fit_transform()) on your test data. If you scale the entire dataset before splitting, test data statistics leak into your scaler: your validation scores will look artificially optimistic and you'll ship a worse model than you think you have.
Aspect | L1 Regularisation (Lasso) | L2 Regularisation (Ridge)
Penalty formula | λ × Σ|wᵢ| (sum of absolutes) | λ × Σwᵢ² (sum of squares)
Effect on weights | Drives many weights to exactly 0 | Shrinks all weights, rarely to exact 0
Feature selection | Yes, built-in sparse solutions | No, keeps all features active
Best used when | Many irrelevant / noisy features | Most features carry real signal
Behaviour with correlated features | Picks one, ignores the others | Shares weight evenly across the group
Computational cost | Slightly higher (non-differentiable at 0) | Very efficient (closed-form solution)
sklearn class | Lasso(alpha=λ) | Ridge(alpha=λ)
Geometry of constraint region | Diamond (L1 ball), corners touch the axes | Circle (L2 ball), smooth, no corners
Real-world example | Gene selection in genomics | Predicting house prices with many features

🎯 Key Takeaways

  • Regularisation adds a penalty term to the loss function that punishes large weights — this forces the model to learn general patterns rather than memorising training noise. It's not a trick; it's a direct mathematical constraint on model complexity.
  • L1 (Lasso) uses absolute weight penalties which create exact zeros — it does feature selection automatically. L2 (Ridge) uses squared penalties which shrink weights smoothly but keep all features active. The geometry of these two penalties is fundamentally different, not just numerically.
  • Lambda (α in sklearn) controls the regularisation strength and must be tuned via cross-validation. A log-scale search space (0.001 → 10000) gives much better coverage than a linear grid. RidgeCV and LassoCV make this a single method call.
  • Always scale your features before applying regularisation — otherwise the penalty disproportionately affects features with large numerical ranges, and your model will silently under-use important high-scale features.

⚠ Common Mistakes to Avoid

  • Mistake 1: Not scaling features before regularisation — Symptom: features with large numerical ranges (e.g. income in dollars vs age in years) get unfairly penalised because their weights are naturally smaller, not because they're less important. The model silently ignores high-scale features. Fix: always apply StandardScaler() or MinMaxScaler() to your features before fitting any regularised model, and fit the scaler only on training data.
  • Mistake 2: Treating regularisation as a substitute for proper data cleaning — Symptom: you add Ridge and validation scores improve, so you assume the job is done. But if your dataset has duplicated rows, target leakage, or extreme outliers, regularisation is papering over a deeper problem. Fix: always do exploratory data analysis and check for leakage first. Regularisation should be the last line of defence against overfitting, not the first.
  • Mistake 3: Using a fixed lambda value without cross-validation — Symptom: you pick alpha=1.0 because it's the default, and your model is either still overfitting or has been regularised into underfitting. The default value is almost never optimal for your specific dataset. Fix: always use RidgeCV or LassoCV (or GridSearchCV with a log-scale alpha grid) to find the best lambda for your data. Five minutes of cross-validation prevents hours of debugging later.

Interview Questions on This Topic

  • Q: Can you explain the geometric intuition behind why L1 regularisation tends to produce sparse weights while L2 doesn't? Walk me through what happens at the constraint boundary.
  • Q: If you have a dataset with 500 features and suspect only 20 are genuinely predictive, which regularisation method would you start with and why? What would you do after identifying those features?
  • Q: What's the difference between regularisation and simply reducing model complexity — for example, using a shallower decision tree? When would you choose regularisation over simplifying the model architecture?

Frequently Asked Questions

What is the difference between L1 and L2 regularisation?

L1 (Lasso) adds a penalty proportional to the absolute value of weights — this creates exact zeros and performs automatic feature selection. L2 (Ridge) adds a penalty proportional to the square of weights — this shrinks all weights evenly toward zero but almost never to exactly zero. Use L1 when you want sparsity; use L2 when most features are genuinely relevant.

Does regularisation always improve model performance?

Not always — it depends on the problem. If your model is already underfitting (training error is high), adding regularisation will make things worse by constraining the model further. Regularisation is specifically a remedy for overfitting: when training error is much lower than validation error. Always diagnose the bias-variance situation first.

Why do we need to scale features before applying regularisation?

Regularisation penalises the magnitude of weights directly. If Feature A is measured in millions (e.g. salary) its learned weight will naturally be small, while Feature B in single digits (e.g. years of experience) will have a large weight. The penalty unfairly targets Feature B even if both are equally informative. Scaling to zero mean and unit variance puts all features on equal footing before the penalty is applied.

TheCodeForge Editorial Team (Verified Author)

Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.
