Barlow Twins: Self-Supervised Learning via Redundancy Reduction

Authors: Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, Stéphane Deny

Published: 2021 (Conference Paper)

Source: International Conference on Machine Learning (ICML)

Algorithm: Barlow Twins

arXiv: 2103.03230

Summary

Barlow Twins proposes a simple self-supervised objective that aligns representations of two augmented views while pushing the batch cross-correlation matrix toward the identity, making invariance and redundancy reduction explicit in one loss. Its significance is that it avoids collapse without negative samples, stop-gradient asymmetry, a momentum encoder, or unusually large batches, while highlighting high-dimensional embeddings as a useful regime for non-contrastive SSL.
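Concretely, the objective as defined in the paper is

```latex
\mathcal{L}_{\mathrm{BT}}
= \underbrace{\sum_{i} \left(1 - \mathcal{C}_{ii}\right)^{2}}_{\text{invariance term}}
+ \lambda \underbrace{\sum_{i} \sum_{j \neq i} \mathcal{C}_{ij}^{2}}_{\text{redundancy-reduction term}},
\qquad
\mathcal{C}_{ij}
= \frac{\sum_{b} z^{A}_{b,i}\, z^{B}_{b,j}}
       {\sqrt{\sum_{b} \bigl(z^{A}_{b,i}\bigr)^{2}}\;\sqrt{\sum_{b} \bigl(z^{B}_{b,j}\bigr)^{2}}}
```

where b indexes the samples in a batch, i and j index the embedding dimensions, C is the cross-correlation matrix between the batch-normalized outputs z^A and z^B of the two views, and λ trades off invariance against redundancy reduction.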

Abstract

Self-supervised learning (SSL) is rapidly closing the gap with supervised methods on large computer vision benchmarks. A successful approach to SSL is to learn embeddings which are invariant to distortions of the input sample. However, a recurring issue with this approach is the existence of trivial constant solutions. Most current methods avoid such solutions by careful implementation details. We propose an objective function that naturally avoids collapse by measuring the cross-correlation matrix between the outputs of two identical networks fed with distorted versions of a sample, and making it as close to the identity matrix as possible. This causes the embedding vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors. The method is called Barlow Twins, owing to neuroscientist H. Barlow's redundancy-reduction principle applied to a pair of identical networks. Barlow Twins does not require large batches nor asymmetry between the network twins such as a predictor network, gradient stopping, or a moving average on the weight updates. Intriguingly it benefits from very high-dimensional output vectors. Barlow Twins outperforms previous methods on ImageNet for semi-supervised classification in the low-data regime, and is on par with current state of the art for ImageNet classification with a linear classifier head, and for transfer tasks of classification and object detection.
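To make the objective concrete, here is a minimal PyTorch sketch of the loss the abstract describes. The function name and toy inputs are illustrative, not the paper's code; the official implementation additionally uses a BatchNorm1d layer for the per-dimension normalization and an off-diagonal helper, and the default λ = 5e-3 below is the value reported in the paper.

```python
import torch


def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                      lambd: float = 5e-3) -> torch.Tensor:
    """Barlow Twins loss for two (N, D) batches of twin-network embeddings.

    z_a, z_b: embeddings of two augmented views of the same batch.
    lambd:    weight on the off-diagonal (redundancy-reduction) terms.
    """
    n, _ = z_a.shape
    # Normalize each embedding dimension along the batch: zero mean, unit std.
    z_a = (z_a - z_a.mean(dim=0)) / z_a.std(dim=0)
    z_b = (z_b - z_b.mean(dim=0)) / z_b.std(dim=0)
    # Empirical (D, D) cross-correlation matrix between the two views.
    c = (z_a.T @ z_b) / n
    # Invariance term: pull the diagonal toward 1.
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Redundancy-reduction term: push off-diagonal entries toward 0.
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambd * off_diag


# Toy usage: random 512-dim embeddings for a batch of 128.
z1 = torch.randn(128, 512, requires_grad=True)
z2 = torch.randn(128, 512, requires_grad=True)
loss = barlow_twins_loss(z1, z2)
loss.backward()
```

Because the loss is symmetric in the two views, no predictor network, stop-gradient, or momentum encoder is needed; the identity target alone rules out the trivial constant solution.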

Tags

  • Self-supervised learning

  • Representation learning

  • Computer vision

  • Redundancy reduction

  • Cross-correlation loss

  • Collapse prevention

  • Non-contrastive learning

  • ImageNet

  • Transfer learning