Prototypical Contrastive Learning of Unsupervised Representations
This paper presents Prototypical Contrastive Learning (PCL), an unsupervised
representation learning method that addresses the fundamental limitations of
instance-wise contrastive learning. PCL not only learns low-level features for
the task of instance discrimination, but more importantly, it implicitly
encodes semantic structures of the data into the learned embedding space.
Specifically, we introduce prototypes as latent variables to help find the
maximum-likelihood estimation of the network parameters in an
Expectation-Maximization framework. We iteratively perform an E-step, which
estimates the distribution of prototypes via clustering, and an M-step, which
optimizes the network via contrastive learning. We propose the ProtoNCE loss, a
generalized version of the InfoNCE loss for contrastive learning, which encourages
representations to be closer to their assigned prototypes. PCL outperforms
state-of-the-art instance-wise contrastive learning methods on multiple
benchmarks with substantial improvement in low-resource transfer learning. Code
and pretrained models are available at https://github.com/salesforce/PCL
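As a rough illustration of the prototype term described above, here is a minimal
sketch of a ProtoNCE-style objective, assuming PyTorch and L2-normalized
embeddings; the function and variable names (proto_nce, features, prototypes,
assignments, concentration) are illustrative stand-ins, not the released
implementation.

```python
import torch
import torch.nn.functional as F

def proto_nce(features, prototypes, assignments, concentration):
    # features:      (N, D) L2-normalized sample embeddings
    # prototypes:    (K, D) L2-normalized cluster centroids from the E-step
    # assignments:   (N,)   index of the prototype each sample was clustered to
    # concentration: (K,)   per-prototype temperature (tighter clusters -> smaller value)
    logits = features @ prototypes.t()            # (N, K) similarity to every prototype
    logits = logits / concentration.unsqueeze(0)  # scale each column by its prototype's concentration
    return F.cross_entropy(logits, assignments)   # pull each sample toward its assigned prototype

# Toy usage with random data, purely to show the expected shapes.
features = F.normalize(torch.randn(8, 128), dim=1)
prototypes = F.normalize(torch.randn(4, 128), dim=1)
loss = proto_nce(features, prototypes, torch.randint(0, 4, (8,)), torch.full((4,), 0.1))
```

In the paper, this prototype term is combined with a standard InfoNCE
instance-discrimination term, with clustering repeated at several granularities;
the sketch shows only the per-prototype part.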
Representation Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide more or less the different explanatory
factors of variation behind the data. Although specific domain knowledge can be
used to help design representations, learning with generic priors can also be
used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms implementing such priors. This paper reviews
recent work in the area of unsupervised feature learning and deep learning,
covering advances in probabilistic models, auto-encoders, manifold learning,
and deep networks. This motivates longer-term unanswered questions about the
appropriate objectives for learning good representations, for computing
representations (i.e., inference), and the geometrical connections between
representation learning, density estimation, and manifold learning.
Self-supervised Learning for Electroencephalogram: A Systematic Survey
Electroencephalogram (EEG) is a non-invasive technique to record
bioelectrical signals. Integrating supervised deep learning techniques with EEG
signals has recently facilitated automatic analysis across diverse EEG-based
tasks. However, the label issues of EEG signals have constrained the
development of EEG-based deep models. Obtaining EEG annotations is difficult,
as it requires domain experts to guide collection and labeling, and the
variability of EEG signals among different subjects causes significant label
shifts. To solve the above challenges, self-supervised learning (SSL) has been
proposed to extract representations from unlabeled samples through
well-designed pretext tasks. This paper concentrates on integrating SSL
frameworks with temporal EEG signals to achieve efficient representations and
presents a systematic review of SSL for EEG signals. In this paper, 1) we
introduce the concept and theory of self-supervised learning and typical SSL
frameworks. 2) We provide a comprehensive review of SSL for EEG analysis,
including taxonomy, methodology, and technique details of the existing
EEG-based SSL frameworks, and discuss the differences between these methods. 3)
We investigate the adaptation of the SSL approach to various downstream tasks,
including the task description and related benchmark datasets. 4) Finally, we
discuss the potential directions for future SSL-EEG research.
Comment: 35 pages, 12 figures
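To make the notion of a pretext task concrete, below is a minimal sketch of one
common SSL family such surveys cover, contrastive learning on two augmented
views of the same unlabeled EEG window; it assumes PyTorch, and the encoder,
augmentation, and tensor shapes are hypothetical placeholders rather than any
specific method from this survey.

```python
import torch
import torch.nn.functional as F

def augment(x):
    # Illustrative augmentation only: additive Gaussian noise on raw EEG windows.
    return x + 0.1 * torch.randn_like(x)

def info_nce(z1, z2, temperature=0.1):
    # Standard InfoNCE between two batches of views; positives sit on the diagonal.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

# Placeholder encoder for (batch, channels, time) EEG windows; any temporal
# encoder (CNN, transformer, etc.) could take its place.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 256, 128))

eeg = torch.randn(16, 32, 256)   # 16 unlabeled windows, 32 channels, 256 time samples
loss = info_nce(encoder(augment(eeg)), encoder(augment(eeg)))
loss.backward()
```

After pretext training, the encoder's representations would typically be
evaluated by fine-tuning or linear probing on a labeled downstream EEG task.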