Generalization Error in Machine Learning (2026 Guide)

Updated on February 01, 2026 · 6 minute read

[Figure: bias–variance trade-off and training vs. validation error curves, illustrating generalization error.]

Frequently Asked Questions

What is generalization error in machine learning?

Generalization error is a model’s expected error on new, unseen data drawn from the same distribution as the training set. In practice it’s estimated by test error; the difference between test error and training error is called the generalization gap.
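A minimal NumPy sketch of estimating the gap on synthetic data (the data-generating process, the polynomial degree, and all variable names here are illustrative assumptions, not a fixed recipe):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: quadratic signal plus noise (illustrative setup).
x = rng.uniform(-1, 1, 200)
y = x**2 + rng.normal(0, 0.1, 200)
x_train, x_test = x[:150], x[150:]
y_train, y_test = y[:150], y[150:]

# Fit a deliberately flexible polynomial so the gap is visible.
coefs = np.polyfit(x_train, y_train, deg=12)

def mse(xs, ys):
    return float(np.mean((np.polyval(coefs, xs) - ys) ** 2))

train_err = mse(x_train, y_train)
test_err = mse(x_test, y_test)
gap = test_err - train_err  # the generalization gap
```

On held-out data the same coefficients are simply re-evaluated; no refitting happens, which is what makes `test_err` an honest estimate.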

How do I know if my model is overfitting or underfitting?

Compare training and validation results. Underfitting usually shows high error on both; overfitting often shows low training error but much higher validation/test error.
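That comparison can be sketched in a few lines: sweep model complexity and watch where training and validation error diverge. The sine target, split sizes, and degrees below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 120)
y = np.sin(3 * x) + rng.normal(0, 0.2, 120)
x_tr, y_tr = x[:80], y[:80]
x_va, y_va = x[80:], y[80:]

def errs(deg):
    """Train/validation MSE for a polynomial fit of the given degree."""
    c = np.polyfit(x_tr, y_tr, deg)
    m = lambda xs, ys: float(np.mean((np.polyval(c, xs) - ys) ** 2))
    return m(x_tr, y_tr), m(x_va, y_va)

for deg in (1, 5, 15):
    tr, va = errs(deg)
    # deg=1:  both errors high          -> underfitting
    # deg=5:  both errors low           -> reasonable fit
    # deg=15: train low, val much higher -> overfitting
    print(f"deg={deg:2d}  train={tr:.3f}  val={va:.3f}")
```

Training error falls monotonically as the degree grows; it’s the validation error turning back up that flags overfitting.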

Does the bias–variance decomposition always apply?

The clean noise + bias² + variance decomposition is exact for squared-error regression under standard assumptions. For classification, the intuition still helps, but the math differs.
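Under those standard assumptions (targets generated as $y = f(x) + \varepsilon$ with $\mathbb{E}[\varepsilon] = 0$ and $\mathrm{Var}(\varepsilon) = \sigma^2$, expectation taken over training sets), the decomposition reads:

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\sigma^2}_{\text{noise}}
  + \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
```

The noise term is irreducible; only the bias and variance terms respond to model choices.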

What’s a quick way to reduce variance without changing the dataset?

Try regularization (like L2/weight decay), early stopping, or a simpler model. Cross-validation can also help you pick settings that generalize better.
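A minimal sketch of the L2 idea using closed-form ridge regression in NumPy (the problem sizes and the λ values are illustrative assumptions; real workflows would pick λ by cross-validation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Small, noisy regression problem where unregularized weights get large.
n, d = 40, 30
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(0, 1.0, n)

def ridge(X, y, lam):
    """Closed-form L2-regularized least squares (weight decay)."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

w_ols = ridge(X, y, 0.0)   # ordinary least squares (no penalty)
w_l2 = ridge(X, y, 10.0)   # shrunk toward zero -> lower variance

# Shrinkage: the regularized weights always have smaller norm.
print(np.linalg.norm(w_l2) < np.linalg.norm(w_ols))  # True
```

Shrinking the weights trades a little bias for a larger drop in variance, which is exactly the lever the answer above describes.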
