CASSL: Curriculum Accelerated Self-Supervised Learning
Recent self-supervised learning approaches focus on using a few thousand data
points to learn policies for high-level, low-dimensional action spaces.
However, scaling this framework to high-dimensional control requires either
scaling up the data collection effort or using a clever sampling strategy for
training. We present a novel approach - Curriculum Accelerated Self-Supervised
Learning (CASSL) - to train policies that map visual information to high-level,
higher-dimensional action spaces. CASSL orders the sampling of training data
based on control dimensions: learning and sampling focus on a few
control parameters before other parameters. The right curriculum for learning
is suggested by variance-based global sensitivity analysis of the control
space. We apply our CASSL framework to learning how to grasp using an adaptive,
underactuated multi-fingered gripper, a challenging system to control. Our
experimental results indicate that CASSL provides significant improvement and
generalization compared to baseline methods such as staged curriculum learning
(8% increase) and complete end-to-end learning with random exploration (14%
improvement) tested on a set of novel objects.
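
For concreteness, here is a minimal numpy sketch of the curriculum-ordering
step: ranking control dimensions by first-order Sobol indices, the
variance-based global sensitivity measure the abstract refers to. The
grasp_score stand-in, the estimator settings, and the most-sensitive-first
ordering are illustrative assumptions, not the paper's code.

import numpy as np

def first_order_sobol(f, dim, n=20000, seed=0):
    # Saltelli-style pick-freeze estimator of the first-order Sobol
    # index S_i = Var(E[Y | X_i]) / Var(Y) for each input dimension.
    rng = np.random.default_rng(seed)
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    yA, yB = f(A), f(B)
    var_y = np.concatenate([yA, yB]).var()
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]               # vary only dimension i
        S[i] = np.mean(yB * (f(ABi) - yA)) / var_y
    return S

def grasp_score(x):
    # Hypothetical stand-in for a grasp-outcome signal: dimension 0 is
    # the most influential control parameter here, dimension 2 the least.
    return 3.0 * x[:, 0] + 1.0 * x[:, 1] + 0.1 * x[:, 2]

S = first_order_sobol(grasp_score, dim=3)
curriculum = np.argsort(-S)               # most sensitive dimensions first
print("Sobol indices:", S.round(3), "-> curriculum order:", curriculum)

A ranking like this would then decide which control dimensions the
self-supervised data collection concentrates on first.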
Self-Supervised Learning for Spinal MRIs
A significant proportion of patients scanned in a clinical setting have
follow-up scans. We show in this work that such longitudinal scans alone can be
used as a form of 'free' self-supervision for training a deep network. We
demonstrate this self-supervised learning for the case of T2-weighted sagittal
lumbar Magnetic Resonance Images (MRIs). A Siamese convolutional neural network
(CNN) is trained using two losses: (i) a contrastive loss on whether the scan
is of the same person (i.e. longitudinal) or not, together with (ii) a
classification loss on predicting the level of vertebral bodies. The
performance of this pre-trained network is then assessed on a grading
classification task. We experiment on a dataset of 1016 subjects, 423
possessing follow-up scans, with the end goal of learning the disc degeneration
radiological gradings attached to the intervertebral discs. We show that (i)
the pre-trained CNN outperforms a network trained from scratch on the
supervised classification task, and (ii) it requires far fewer annotated
training samples to reach the performance of the network trained from scratch.
Comment: 3rd Workshop on Deep Learning in Medical Image Analysis
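
A minimal PyTorch sketch of the two-loss Siamese setup described above may
help; the encoder architecture, margin, six vertebral levels, equal loss
weighting, and random stand-in tensors are illustrative assumptions rather
than the paper's configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseSpine(nn.Module):
    def __init__(self, n_levels=6, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.level_head = nn.Linear(feat_dim, n_levels)  # vertebral level

    def forward(self, x1, x2):
        z1, z2 = self.encoder(x1), self.encoder(x2)
        return z1, z2, self.level_head(z1), self.level_head(z2)

def contrastive_loss(z1, z2, same_subject, margin=1.0):
    # Pull longitudinal (same-subject) pairs together; push pairs from
    # different subjects at least `margin` apart in embedding space.
    d = F.pairwise_distance(z1, z2)
    return torch.mean(same_subject * d.pow(2)
                      + (1 - same_subject) * F.relu(margin - d).pow(2))

# One illustrative training step on random stand-in "scans".
model = SiameseSpine()
x1 = torch.randn(8, 1, 112, 224)
x2 = torch.randn(8, 1, 112, 224)
same = torch.randint(0, 2, (8,)).float()      # 1 = longitudinal pair
lvl1 = torch.randint(0, 6, (8,))              # vertebral-level labels
lvl2 = torch.randint(0, 6, (8,))
z1, z2, p1, p2 = model(x1, x2)
loss = (contrastive_loss(z1, z2, same)
        + F.cross_entropy(p1, lvl1) + F.cross_entropy(p2, lvl2))
loss.backward()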
Improvements to context based self-supervised learning
We develop a set of methods to improve on the results of self-supervised
learning using context. We start from a baseline of patch-based arrangement
context learning and build on it. Our methods address some overt problems
such as chromatic aberration as well as other potential problems such as
spatial skew and mid-level feature neglect. To avoid overfitting our design
choices to the common self-supervised benchmarks, we use different datasets
during development. Combined, our methods yield top
scores on all standard self-supervised benchmarks, including classification and
detection on PASCAL VOC 2007, segmentation on PASCAL VOC 2012, and "linear
tests" on the ImageNet and CSAIL Places datasets. We obtain an improvement over
our baseline method of between 4.0 to 7.1 percentage points on transfer
learning classification tests. We also show results on different standard
network architectures to demonstrate generalization as well as portability. All
data, models and programs are available at:
https://gdo-datasci.llnl.gov/selfsupervised/.
Comment: Accepted paper at CVPR 2018
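
To make the baseline concrete, below is a hedged numpy sketch of a
patch-arrangement sampler of the kind such context methods build on,
including one common countermeasure to the chromatic-aberration shortcut:
collapsing random color channels so inter-channel offsets carry no
positional cue. The patch size, gap, jitter, and channel trick are
illustrative assumptions, not the paper's exact recipe.

import numpy as np

def sample_patch_pair(img, patch=96, gap=48, jitter=7, rng=None):
    # Cut a center patch and one of its 8 grid neighbors; the pretext
    # task is to predict which neighbor position (0-7) was sampled.
    if rng is None:
        rng = np.random.default_rng()
    H, W, _ = img.shape
    cell = patch + gap
    cy = rng.integers(cell + jitter, H - 2 * cell - jitter)
    cx = rng.integers(cell + jitter, W - 2 * cell - jitter)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    label = int(rng.integers(0, 8))
    dy, dx = offsets[label]
    jy, jx = rng.integers(-jitter, jitter + 1, size=2)
    center = img[cy:cy + patch, cx:cx + patch].copy()
    ny, nx = cy + dy * cell + jy, cx + dx * cell + jx
    neighbor = img[ny:ny + patch, nx:nx + patch].copy()
    for p in (center, neighbor):
        # Blunt chromatic-aberration countermeasure: collapse two random
        # color channels toward their mean.
        drop = rng.choice(3, size=2, replace=False)
        p[..., drop] = p[..., drop].mean(axis=-1, keepdims=True)
    return center, neighbor, label

img = np.random.default_rng(0).random((512, 512, 3))
c, n, y = sample_patch_pair(img)
print(c.shape, n.shape, y)

The gap and random jitter between patches serve the same purpose as the
channel collapsing: they remove trivial low-level cues (edge continuity,
aberration patterns) that would let the network solve the pretext task
without learning useful features.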
Time-Contrastive Networks: Self-Supervised Learning from Video
We propose a self-supervised approach for learning representations and
robotic behaviors entirely from unlabeled videos recorded from multiple
viewpoints, and study how this representation can be used in two robotic
imitation settings: imitating object interactions from videos of humans, and
imitating human poses. Imitation of human behavior requires a
viewpoint-invariant representation that captures the relationships between
end-effectors (hands or robot grippers) and the environment, object attributes,
and body pose. We train our representations using a metric learning loss, where
multiple simultaneous viewpoints of the same observation are attracted in the
embedding space, while being repelled from temporal neighbors, which are often
visually similar but functionally different. In other words, the model
simultaneously learns to recognize what is common between different-looking
images, and what is different between similar-looking images. This signal
causes our model to discover attributes that do not change across viewpoint,
but do change across time, while ignoring nuisance variables such as
occlusions, motion blur, lighting and background. We demonstrate that this
representation can be used by a robot to directly mimic human poses without an
explicit correspondence, and that it can be used as a reward function within a
reinforcement learning algorithm. While representations are learned from an
unlabeled collection of task-related videos, robot behaviors such as pouring
are learned by watching a single 3rd-person demonstration by a human. Reward
functions obtained by following the human demonstrations under the learned
representation enable efficient reinforcement learning that is practical for
real-world robotic systems. Video results, open-source code and dataset are
available at https://sermanet.github.io/imitat
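
The multi-view attraction/repulsion objective can be sketched in a few
lines. Here is a minimal PyTorch illustration, assuming synchronized
per-frame embeddings from two cameras; the margin, negative-sampling
window, and single-triplet sampling are simplifications for exposition,
not the paper's training procedure.

import torch
import torch.nn.functional as F

def time_contrastive_loss(emb_view1, emb_view2, margin=0.2,
                          neg_window=(10, 30)):
    # emb_view*: (T, D) per-frame embeddings from two synced viewpoints.
    T = emb_view1.shape[0]
    lo, hi = neg_window
    t = torch.randint(0, T, (1,)).item()             # anchor time
    offset = torch.randint(lo, hi, (1,)).item()
    t_neg = (t + offset) % T                         # temporal negative
    anchor = F.normalize(emb_view1[t], dim=0)
    positive = F.normalize(emb_view2[t], dim=0)      # other view, same time
    negative = F.normalize(emb_view1[t_neg], dim=0)  # same view, other time
    d_pos = (anchor - positive).pow(2).sum()
    d_neg = (anchor - negative).pow(2).sum()
    return F.relu(d_pos - d_neg + margin)            # triplet hinge loss

# Illustrative call on random stand-in embeddings for T=100 frames.
e1, e2 = torch.randn(100, 32), torch.randn(100, 32)
print(time_contrastive_loss(e1, e2))

Drawing negatives from nearby frames of the same view is what forces the
embedding to encode task progress rather than appearance: visually similar
frames at different times must be pushed apart.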