Contrastive Learning in Self-Supervised Learning (2026 Guide)
Updated on January 31, 2026 · 6 minute read
Contrastive learning is a self-supervised approach that learns embeddings by comparing examples. It pulls representations of two “views” of the same instance closer (positives) and pushes representations of different instances apart (negatives), helping the model learn transferable features without labels.
Classic contrastive learning relies on negatives, typically drawn from other samples in the batch or a memory queue. Some modern self-supervised methods avoid explicit negatives, but when you use a contrastive objective like InfoNCE or NT-Xent, negatives (explicit or implicit) are part of the training signal.
Both losses implement a similar idea: identify the correct positive match among many candidates using a cross-entropy-style objective. NT-Xent makes the temperature-scaling step explicit and is commonly referenced in SimCLR-style setups, while “InfoNCE” is a broader name used across contrastive methods for the same family of objectives.
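To make the objective concrete, here is a minimal NT-Xent sketch in NumPy. It assumes the standard SimCLR-style setup: two augmented views of the same N samples, where row i of `z1` and row i of `z2` form a positive pair and every other row in the combined batch serves as a negative. The function name and temperature default are illustrative, not from a specific library.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) arrays of embeddings from two augmented views of the
    same N samples; row i of z1 and row i of z2 are a positive pair.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine similarities
    n = z.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # Each row's positive sits half a batch away: i <-> (i + N) mod 2N.
    pos = (np.arange(n) + n // 2) % n
    # Cross-entropy over candidates: -log softmax(sim)[i, pos[i]].
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(n), pos] - logsumexp)
    return loss.mean()
```

When the positive pairs are well aligned, the loss drops toward zero; when embeddings are random, it approaches log(2N − 1), the cost of guessing the positive uniformly among the remaining candidates. Setting the diagonal to −inf before the softmax is what makes the negatives "the rest of the batch" rather than the sample itself.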