While self-supervised learning techniques are often used to mine implicit
knowledge from unlabeled data by modeling multiple views, it is unclear how to
perform effective representation learning in a complex and inconsistent
context. To this end, we propose a methodology, the consistency and
complementarity network (CoCoNet), which leverages strict global inter-view
consistency and local cross-view complementarity-preserving regularization to
comprehensively learn representations from multiple views. On the global stage,
we posit that crucial knowledge is implicitly shared among views and that
enhancing the encoder to capture such knowledge from data can improve the
discriminability of the learned representations. Hence, preserving the global
consistency of multiple views ensures the acquisition of common knowledge.
CoCoNet aligns the probabilistic distributions of views by utilizing an
efficient discrepancy metric based on the generalized sliced Wasserstein
distance. On the local stage, we propose a heuristic complementarity factor,
which integrates cross-view discriminative knowledge, and
it guides the encoders to learn not only view-wise discriminability but also
cross-view complementary information. Theoretically, we provide an
information-theoretic analysis of the proposed CoCoNet. Empirically, to
investigate the gains of our approach, we conduct extensive experimental
validation, which demonstrates that CoCoNet outperforms state-of-the-art
self-supervised methods by a significant margin and proves that such implicit
consistency- and complementarity-preserving regularization enhances the
discriminability of latent representations.

Comment: Accepted by IEEE Transactions on Knowledge and Data Engineering
(TKDE) 2022; Refer to https://ieeexplore.ieee.org/document/985763
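
The abstract names the generalized sliced Wasserstein distance as the discrepancy metric for aligning view distributions. As a rough illustration only, a Monte Carlo estimate of the plain (non-generalized) sliced 2-Wasserstein distance between two empirical sample sets can be sketched as below; the function name and parameters are our own, not from the paper, which should be consulted for the actual generalized variant and training objective.

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=128, seed=0):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance between
    two equally sized sample sets x, y of shape (n, d). Illustrative only."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Draw random unit directions on the d-dimensional sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto each direction: shape (n, n_projections).
    px = x @ theta.T
    py = y @ theta.T
    # The 1-D Wasserstein-2 distance between empirical measures is the L2
    # distance between sorted projections; average over all directions.
    px.sort(axis=0)
    py.sort(axis=0)
    return float(np.sqrt(np.mean((px - py) ** 2)))

x = np.random.default_rng(1).normal(size=(256, 16))
print(sliced_wasserstein(x, x))  # identical samples -> 0.0
```

The key property used here is that in one dimension the optimal transport plan is the monotone (sorted) matching, so each projected problem is solved exactly by sorting.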