Generalization Error in Machine Learning (2026 Guide)
Updated on February 01, 2026 · 6 minute read
Generalization error is how much a model’s performance drops when you move from training data to new, unseen data. It is usually measured as test error, or as the generalization gap: the difference between test (or validation) error and training error.
Compare training and validation results. Underfitting usually shows high error on both; overfitting often shows low training error but much higher validation/test error.
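The diagnosis above can be sketched with a small synthetic experiment. This is an illustrative example, not a prescribed recipe: the sine-shaped data, noise level, and polynomial degrees are all arbitrary choices made here to make underfitting and overfitting visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy sine data, split into a training half and a validation half.
x = rng.uniform(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 40)
x_tr, y_tr = x[:20], y[:20]
x_va, y_va = x[20:], y[20:]

def fit_and_score(degree):
    """Fit a polynomial of the given degree on the training half,
    return (train_mse, val_mse)."""
    coefs = np.polyfit(x_tr, y_tr, degree)
    train_mse = np.mean((np.polyval(coefs, x_tr) - y_tr) ** 2)
    val_mse = np.mean((np.polyval(coefs, x_va) - y_va) ** 2)
    return train_mse, val_mse

# Degree 1 underfits (both errors high, small gap); degree 9 overfits
# (training error near zero, validation error much higher).
for degree in (1, 3, 9):
    tr, va = fit_and_score(degree)
    print(f"degree {degree}: train={tr:.3f}  val={va:.3f}  gap={va - tr:.3f}")
```

Sweeping the degree and plotting both curves is the classic way to see the gap open up as capacity grows.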
The clean noise + bias² + variance decomposition is exact for squared-error regression under standard assumptions. For classification, the intuition still helps, but the math differs.
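For squared error, the decomposition mentioned above can be written out pointwise. Assuming the standard setup where the target is y = f(x) + ε with zero-mean noise of variance σ², and the hat denotes a model trained on a random dataset:

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\sigma^2}_{\text{noise}}
  + \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\Big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\Big]}_{\text{variance}}
```

The expectation is over both the noise and the training set; simpler models tend to raise the bias term, more flexible models the variance term.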
Try regularization (like L2/weight decay), early stopping, or a simpler model. Cross-validation can also help you pick settings that generalize better.
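As a concrete sketch of L2 regularization, here is closed-form ridge regression in NumPy. The data dimensions, true weights, and the λ value are arbitrary choices for illustration; the point is only that the penalty shrinks the weight vector relative to plain least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small noisy linear problem: 30 samples, 20 features, only 3 truly active.
n, d = 30, 20
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true + rng.normal(0, 0.5, n)

def ridge(X, y, lam):
    """Closed-form L2-regularized least squares:
    w = (X^T X + lam * I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_ols = ridge(X, y, 0.0)   # lam = 0 recovers ordinary least squares
w_l2 = ridge(X, y, 10.0)   # L2 penalty shrinks the weights

print("||w_ols|| =", np.linalg.norm(w_ols))
print("||w_l2||  =", np.linalg.norm(w_l2))
```

In practice you would pick λ by cross-validation rather than fixing it by hand, which is exactly the "pick settings that generalize better" step above.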