Visualizing and Understanding Contrastive Learning
Contrastive learning has revolutionized the field of computer vision by
learning rich representations from unlabeled data that generalize well to
diverse vision tasks. Consequently, it has become increasingly important to
explain these approaches and understand their inner working mechanisms. Given
that contrastive models are trained with interdependent and interacting inputs
and aim to learn invariance through data augmentation, the existing methods for
explaining single-image systems (e.g., image classification models) are
inadequate as they fail to account for these factors. Additionally, there is a
lack of evaluation metrics designed to assess pairs of explanations, and no
analytical studies have been conducted to investigate the effectiveness of
different techniques for explaining contrastive learning. In this work, we
design visual explanation methods that contribute towards understanding
similarity learning tasks from pairs of images. We further adapt existing
metrics, used to evaluate visual explanations of image classification systems,
to suit pairs of explanations and evaluate our proposed methods with these
metrics. Finally, we present a thorough analysis of visual explainability
methods for contrastive learning, establish their correlation with downstream
tasks, and demonstrate the potential of our approaches for investigating their
merits and drawbacks.