Prototypical Contrastive Learning of Unsupervised Representations
This paper presents Prototypical Contrastive Learning (PCL), an unsupervised
representation learning method that addresses the fundamental limitations of
instance-wise contrastive learning. PCL not only learns low-level features for
the task of instance discrimination, but more importantly, it implicitly
encodes semantic structures of the data into the learned embedding space.
Specifically, we introduce prototypes as latent variables to help find the
maximum-likelihood estimation of the network parameters in an
Expectation-Maximization framework. We iteratively perform the E-step, in which the distribution of prototypes is found via clustering, and the M-step, in which the network is optimized via contrastive learning. We propose the ProtoNCE loss, a generalized version of the InfoNCE loss that encourages representations to move closer to their assigned prototypes. PCL outperforms
state-of-the-art instance-wise contrastive learning methods on multiple
benchmarks with substantial improvement in low-resource transfer learning. Code
and pretrained models are available at https://github.com/salesforce/PCL
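As a rough illustration of the M-step objective described above, a ProtoNCE-style term can be sketched as a cross-entropy over prototype similarities scaled by a per-prototype concentration estimate. This is a minimal sketch, not the authors' code; the names `protonce_loss` and `phi` are illustrative, and the E-step clustering is assumed to have already produced the prototypes and assignments.

```python
import numpy as np

def protonce_loss(embeddings, prototypes, assignments, phi):
    """Sketch of a ProtoNCE-style term: pull each embedding toward its
    assigned prototype and away from the others.

    embeddings:  (N, D) L2-normalized sample embeddings
    prototypes:  (K, D) L2-normalized cluster centroids (from the E-step)
    assignments: (N,)   index of each sample's prototype
    phi:         (K,)   per-prototype concentration (temperature) estimates
    """
    # Scaled cosine similarities; tighter clusters (small phi) give sharper logits.
    logits = embeddings @ prototypes.T / phi            # (N, K)
    # Numerically stable log-softmax over prototypes.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy against each sample's assigned prototype.
    return -log_probs[np.arange(len(embeddings)), assignments].mean()
```

In the full method this term would be averaged over several clusterings with different numbers of prototypes and combined with the usual instance-wise InfoNCE term.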
Divide-and-Rule: Self-Supervised Learning for Survival Analysis in Colorectal Cancer
With the long-term rapid increase in the incidence of colorectal cancer (CRC), there is an urgent clinical need for improved risk stratification. The conventional pathology report is usually limited to a few histopathological features, while most of the tumor-microenvironment patterns that characterize aggressive tumor behavior are ignored. In this work, we
aim to learn histopathological patterns within cancerous tissue regions that
can be used to improve prognostic stratification for colorectal cancer. To do
so, we propose a self-supervised learning method that jointly learns a
representation of tissue regions as well as a metric of the clustering to
obtain their underlying patterns. These histopathological patterns are then
used to represent the interaction between complex tissues and predict clinical
outcomes directly. We further show that the proposed approach can use linear predictors to avoid overfitting in patient outcome prediction. To this end, we introduce a new, well-characterized clinicopathological dataset comprising a retrospective cohort of 374 patients with their survival times and treatment information. Histomorphological clusters obtained by our method
are evaluated by training survival models. The experimental results demonstrate statistically significant patient stratification, and our approach outperforms state-of-the-art deep clustering methods.
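As a hedged illustration of how histomorphological clusters could feed a linear survival predictor, the sketch below clusters tile embeddings with plain k-means (a stand-in for the paper's jointly learned clustering metric) and builds per-patient cluster-proportion features. All function names here are hypothetical and not from the paper's code.

```python
import numpy as np

def kmeans_labels(x, k, iters=20, seed=0):
    """Plain k-means (stand-in for the learned clustering of tissue regions)."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # Assign each tile embedding to its nearest centroid.
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        # Recompute centroids from current assignments.
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(axis=0)
    return labels

def patient_features(embeddings, patient_ids, k=4):
    """Per-patient histogram of cluster labels over that patient's tiles:
    a simple tissue-composition vector that a linear survival model
    (e.g., Cox regression) could consume without overfitting."""
    labels = kmeans_labels(embeddings, k)
    feats = {}
    for pid in np.unique(patient_ids):
        mask = patient_ids == pid
        feats[pid] = np.bincount(labels[mask], minlength=k) / mask.sum()
    return feats
```

Each patient is thus summarized by the proportions of the learned histomorphological patterns present in their slides, which is the kind of low-dimensional representation the abstract's linear predictors rely on.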
Self-Supervised Classification Network
We present Self-Classifier -- a novel self-supervised end-to-end
classification learning approach. Self-Classifier learns labels and
representations simultaneously in a single-stage end-to-end manner by
optimizing for same-class prediction of two augmented views of the same sample.
To avoid degenerate solutions (i.e., solutions in which all samples are assigned to the same class), we propose a mathematically motivated variant of the cross-entropy loss that asserts a uniform prior on the predicted labels. In our theoretical analysis, we prove that degenerate solutions are not
in the set of optimal solutions of our approach. Self-Classifier is simple to
implement and scalable. Unlike other popular unsupervised classification and
contrastive representation learning approaches, it does not require any form of
pre-training, expectation maximization, pseudo-labelling, external clustering,
a second network, stop-gradient operation or negative pairs. Despite its
simplicity, our approach sets a new state of the art for unsupervised classification of ImageNet, and achieves results comparable to the state of the art for unsupervised representation learning. Code: https://github.com/elad-amrani/self-classifier
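A simplified sketch of the uniform-prior idea (not the authors' exact loss): normalize the target view's class predictions over the batch axis, so that each class is treated as equally likely a priori, then take a cross-entropy against the other view's ordinary row-wise softmax. The function names below are illustrative.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def same_class_loss(logits_view1, logits_view2):
    """Sketch of a same-class prediction loss with a uniform label prior.

    logits_view1, logits_view2: (N, C) classifier outputs for two
    augmented views of the same N samples.
    """
    # Column-wise normalization over the batch encodes the uniform prior:
    # no single class can absorb the whole batch without paying a penalty.
    targets = softmax(logits_view2, axis=0)
    targets = targets / targets.sum(axis=1, keepdims=True)  # rows sum to 1
    # Ordinary per-sample class probabilities for the predicted view.
    log_preds = np.log(softmax(logits_view1, axis=1) + 1e-12)
    # Cross-entropy between the two views, averaged over the batch.
    return -(targets * log_preds).sum(axis=1).mean()
```

The degenerate all-one-class solution inflates this loss because the batch-axis normalization spreads the target mass across classes, which is the intuition behind the guarantee stated in the abstract.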
- …