Negative-free contrastive learning has attracted much attention for its
simplicity and impressive performance in large-scale pretraining, but its
disentanglement property remains unexplored. In this paper, we empirically
study the disentanglement property of this genre of self-supervised methods
through several negative-free contrastive learning approaches. We find that
existing disentanglement metrics fail to make meaningful measurements for
high-dimensional representation models, so we propose a new disentanglement
metric based on the mutual information between the representation and the data factors.
With the proposed metric, we benchmark the disentanglement property of
negative-free contrastive learning for the first time, on both popular
synthetic datasets and the real-world dataset CelebA. Our study shows that the
investigated methods can learn a well-disentangled subset of the
representation. To the best of our knowledge, we are the first to extend the
study of disentangled representation learning to high-dimensional
representation spaces and to negative-free contrastive learning. The
implementation of the proposed metric is available at
\url{https://github.com/noahcao/disentanglement_lib_med}.
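The abstract does not spell out the exact form of the proposed metric, so the following is only a minimal, hypothetical sketch of one way to score disentanglement via mutual information between representation dimensions and ground-truth factors; the function name mi_matrix, the n_bins parameter, and the histogram (plug-in) MI estimator are illustrative assumptions, not the paper's method.

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def mi_matrix(representations, factors, n_bins=20):
        """Estimate mutual information between each representation
        dimension and each ground-truth data factor.

        representations: (n_samples, n_dims) array of continuous codes
        factors:         (n_samples, n_factors) array of discrete labels
        Returns an (n_dims, n_factors) matrix of MI estimates in nats.
        """
        n_dims = representations.shape[1]
        n_factors = factors.shape[1]
        mi = np.zeros((n_dims, n_factors))
        for d in range(n_dims):
            # Discretize the continuous dimension into equal-width bins
            # so a histogram-based MI estimator can be applied.
            edges = np.histogram_bin_edges(representations[:, d], bins=n_bins)
            binned = np.digitize(representations[:, d], edges)
            for f in range(n_factors):
                mi[d, f] = mutual_info_score(factors[:, f], binned)
        return mi

Under this reading, a representation dimension would count as well disentangled when its mutual information concentrates on a single factor; the metric actually proposed in the paper may aggregate or normalize these estimates differently.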