Unsupervised Monocular Depth Estimation with Left-Right Consistency
Learning based methods have shown very promising results for the task of
depth estimation in single images. However, most existing approaches treat
depth prediction as a supervised regression problem and as a result, require
vast quantities of corresponding ground truth depth data for training. Just
recording quality depth data in a range of environments is a challenging
problem. In this paper, we innovate beyond existing approaches, replacing the
use of explicit depth data during training with easier-to-obtain binocular
stereo footage.
We propose a novel training objective that enables our convolutional neural
network to learn to perform single image depth estimation, despite the absence
of ground truth depth data. Exploiting epipolar geometry constraints, we
generate disparity images by training our network with an image reconstruction
loss. We show that solving for image reconstruction alone results in poor
quality depth images. To overcome this problem, we propose a novel training
loss that enforces consistency between the disparities produced relative to
both the left and right images, leading to improved performance and robustness
compared to existing approaches. Our method produces state-of-the-art results
for monocular depth estimation on the KITTI driving dataset, even outperforming
supervised methods that have been trained with ground truth depth.
Comment: CVPR 2017 oral
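The left-right consistency idea described above, penalizing disagreement between the left disparity map and the right disparity map warped by it, can be sketched in a few lines. This is a toy NumPy illustration under simplifying assumptions (nearest-neighbor sampling instead of a differentiable bilinear sampler, per-row warping), not the authors' implementation:

```python
import numpy as np

def warp_1d(image_row, disparity_row):
    """Sample image_row at x - disparity_row[x] (nearest-neighbor for brevity)."""
    w = len(image_row)
    xs = np.clip(np.round(np.arange(w) - disparity_row).astype(int), 0, w - 1)
    return image_row[xs]

def lr_consistency_loss(disp_left, disp_right):
    """Mean |d_L(x) - d_R(x - d_L(x))| over all rows of the disparity maps."""
    total = 0.0
    for dl_row, dr_row in zip(disp_left, disp_right):
        total += np.mean(np.abs(dl_row - warp_1d(dr_row, dl_row)))
    return total / disp_left.shape[0]
```

A geometrically consistent left/right disparity pair drives this term to zero, while inconsistent pairs are penalized, which is the extra signal added on top of the image reconstruction loss.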
Becoming the Expert - Interactive Multi-Class Machine Teaching
Compared to machines, humans are extremely good at classifying images into
categories, especially when they possess prior knowledge of the categories at
hand. If this prior information is not available, supervision in the form of
teaching images is required. To learn categories more quickly, people should
see important and representative images first, followed by less important
images later - or not at all. However, image-importance is individual-specific,
i.e. a teaching image is important to a student if it changes their overall
ability to discriminate between classes. Further, students keep learning, so
while image-importance depends on their current knowledge, it also varies with
time.
In this work we propose an Interactive Machine Teaching algorithm that
enables a computer to teach challenging visual concepts to a human. Our
adaptive algorithm chooses, online, which labeled images from a teaching set
should be shown to the student as they learn. We show that a teaching strategy
that probabilistically models the student's ability and progress, based on
their correct and incorrect answers, produces better 'experts'. We present
results using real human participants across several varied and challenging
real-world datasets.
Comment: CVPR 201
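One way to picture an adaptive teacher that "probabilistically models the student's ability and progress" is a Beta-Bernoulli estimate of per-class accuracy, teaching the hardest class next. The class names and the selection rule below are hypothetical simplifications for illustration, not the paper's actual model:

```python
class TeachingSession:
    """Toy adaptive teacher: Beta-Bernoulli estimate of per-class student ability."""

    def __init__(self, classes):
        # Beta(1, 1) prior on the probability the student answers correctly.
        self.correct = {c: 1 for c in classes}
        self.wrong = {c: 1 for c in classes}

    def ability(self, c):
        # Posterior mean probability of a correct answer for class c.
        return self.correct[c] / (self.correct[c] + self.wrong[c])

    def next_class(self):
        # Teach the class the student currently finds hardest.
        return min(self.correct, key=self.ability)

    def record(self, c, was_correct):
        # Update the per-class counts from the student's answer.
        if was_correct:
            self.correct[c] += 1
        else:
            self.wrong[c] += 1
```

Because the estimate updates online after every answer, the notion of which teaching image is "important" changes over time, mirroring the time-varying image-importance discussed above.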
Interpretable Transformations with Encoder-Decoder Networks
Deep feature spaces have the capacity to encode complex transformations of
their input data. However, understanding the relative feature-space
relationship between two transformed encoded images is difficult. For instance,
what is the relative feature space relationship between two rotated images?
What is decoded when we interpolate in feature space? Ideally, we want to
disentangle confounding factors, such as pose, appearance, and illumination,
from object identity. Disentangling these is difficult because they interact in
very nonlinear ways. We propose a simple method to construct a deep feature
space, with explicitly disentangled representations of several known
transformations. A person or algorithm can then manipulate the disentangled
representation, for example, to re-render an image with explicit control over
parameterized degrees of freedom. The feature space is constructed using a
transforming encoder-decoder network with a custom feature transform layer,
acting on the hidden representations. We demonstrate the advantages of explicit
disentangling on a variety of datasets and transformations, and as an aid for
traditional tasks, such as classification.
Comment: Accepted at ICCV 201
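A feature transform layer of the kind described above can be illustrated by applying an explicit rotation to designated 2-D subspaces of the hidden representation, so that an input rotation corresponds to a known, interpretable operation in feature space. This is a minimal NumPy sketch of the idea, not the network's actual layer:

```python
import numpy as np

def feature_transform(features, theta):
    """Rotate consecutive feature pairs by theta radians.

    features: (..., 2k) array; each (2i, 2i+1) pair is treated as a 2-D
    subspace on which the disentangled 'rotation' factor acts.
    """
    pairs = features.reshape(*features.shape[:-1], -1, 2)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return (pairs @ rot.T).reshape(features.shape)
```

Because 2-D rotations compose additively, transforming by theta1 and then theta2 equals transforming by theta1 + theta2, which is what makes the representation manipulable with explicit control over the parameterized degree of freedom.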
Hierarchical Subquery Evaluation for Active Learning on a Graph
To train good supervised and semi-supervised object classifiers, it is
critical that we not waste the time of the human experts who are providing the
training labels. Existing active learning strategies can have uneven
performance, being efficient on some datasets but wasteful on others, or
inconsistent just between runs on the same dataset. We propose perplexity-based
graph construction and a new hierarchical subquery evaluation algorithm to
combat this variability, and to release the potential of Expected Error
Reduction.
Under some specific circumstances, Expected Error Reduction has been one of
the strongest-performing informativeness criteria for active learning. Until
now, it has also been prohibitively costly to compute for sizeable datasets. We
demonstrate our highly practical algorithm, comparing it to other active
learning measures on classification datasets that vary in sparsity,
dimensionality, and size. Our algorithm is consistent over multiple runs and
achieves high accuracy, while querying the human expert for labels at a
frequency that matches their desired time budget.
Comment: CVPR 201
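The Expected Error Reduction criterion mentioned above scores each candidate query by the error the classifier would be expected to make after that label is revealed. A brute-force sketch, using a soft 1-nearest-neighbor model as a cheap stand-in for the paper's graph-based prediction, looks like this (all names here are illustrative, not the paper's algorithm):

```python
import numpy as np

def expected_error_reduction(X, labeled, labels, unlabeled, classes):
    """Pick the unlabeled index whose labeling minimizes expected future error."""

    def predict_proba(lab_idx, lab_y, query):
        # Soft 1-NN: inverse-distance-weighted class probabilities.
        d = np.linalg.norm(X[lab_idx] - X[query], axis=1) + 1e-9
        w = 1.0 / d
        p = np.array([w[np.array(lab_y) == c].sum() for c in classes])
        return p / p.sum()

    best, best_risk = None, np.inf
    for q in unlabeled:
        p_q = predict_proba(labeled, labels, q)
        risk = 0.0
        for yi, c in enumerate(classes):
            lab2, y2 = labeled + [q], labels + [c]
            rest = [u for u in unlabeled if u != q]
            # Expected error on the rest: sum of (1 - max class probability).
            err = sum(1 - predict_proba(lab2, y2, u).max() for u in rest)
            risk += p_q[yi] * err
        if risk < best_risk:
            best, best_risk = q, risk
    return best
```

The nested retraining over every candidate and every possible label is exactly what makes naive EER prohibitively costly on sizeable datasets, which is the cost the hierarchical subquery evaluation in this paper is designed to attack.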
Footprints and Free Space from a Single Color Image
Understanding the shape of a scene from a single color image is a formidable
computer vision task. However, most methods aim to predict the geometry of
surfaces that are visible to the camera, which is of limited use when planning
paths for robots or augmented reality agents. Such agents can only move when
grounded on a traversable surface, which we define as any surface belonging to
a class that humans can also walk over, such as grass, footpaths, and pavement.
Models which
predict beyond the line of sight often parameterize the scene with voxels or
meshes, which can be expensive to use in machine learning frameworks.
We introduce a model to predict the geometry of both visible and occluded
traversable surfaces, given a single RGB image as input. We learn from stereo
video sequences, using camera poses, per-frame depth and semantic segmentation
to form training data, which is used to supervise an image-to-image network. We
train models from the KITTI driving dataset, the indoor Matterport dataset, and
from our own casually captured stereo footage. We find that a surprisingly low
bar for spatial coverage of training scenes is required. We validate our
algorithm against a range of strong baselines, and include an assessment of our
predictions for a path-planning task.
Comment: Accepted to CVPR 2020 as an oral presentation
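Part of forming the training data described above is deriving a traversability target from per-frame semantic segmentation. A minimal sketch of that step might look as follows; the class names and id mapping are hypothetical, since the real label set depends on the dataset:

```python
import numpy as np

# Hypothetical set of walkable classes; the actual set depends on the dataset.
TRAVERSABLE = {"road", "sidewalk", "grass", "footpath"}

def traversable_mask(semantics, id_to_name):
    """Binary mask of pixels whose semantic class humans can walk over.

    semantics: integer class-id map of shape (H, W).
    id_to_name: mapping from class id to class name.
    """
    walkable_ids = {i for i, n in id_to_name.items() if n in TRAVERSABLE}
    return np.isin(semantics, list(walkable_ids))
```

Masks like this, aggregated across a stereo video sequence using the camera poses and per-frame depth, supervise the image-to-image network to predict both visible and occluded traversable surface.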
Self-Supervised Monocular Depth Hints
Monocular depth estimators can be trained with various forms of
self-supervision from binocular-stereo data to circumvent the need for
high-quality laser scans or other ground-truth data. The disadvantage, however,
is that the photometric reprojection losses used with self-supervised learning
typically have multiple local minima. These plausible-looking alternatives to
ground truth can restrict what a regression network learns, causing it to
predict depth maps of limited quality. As one prominent example, depth
discontinuities around thin structures are often incorrectly estimated by
current state-of-the-art methods.
Here, we study the problem of ambiguous reprojections in depth prediction
from stereo-based self-supervision, and introduce Depth Hints to alleviate
their effects. Depth Hints are complementary depth suggestions obtained from
simple off-the-shelf stereo algorithms. These hints enhance an existing
photometric loss function, and are used to guide a network to learn better
weights. They require no additional data, and are assumed to be right only
sometimes. We show that using our Depth Hints gives a substantial boost when
training several leading self-supervised-from-stereo models, not just our own.
Further, combined with other good practices, we produce state-of-the-art depth
predictions on the KITTI benchmark.
Comment: Accepted to ICCV 201
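The "right only sometimes" use of hints described above can be pictured per pixel: keep the usual photometric loss on the prediction, and add a supervised term toward the hint only where the hint's own reprojection loss is lower. This is a schematic NumPy sketch over precomputed per-pixel loss maps, with invented argument names, not the paper's loss implementation:

```python
import numpy as np

def depth_hints_loss(photo_pred, photo_hint, depth_pred, depth_hint):
    """Combine photometric loss with hint supervision where the hint is better.

    photo_pred / photo_hint: per-pixel photometric reprojection losses for the
    predicted depth and the stereo-algorithm hint depth, respectively.
    depth_pred / depth_hint: the corresponding per-pixel depths (positive).
    """
    # Trust the hint only at pixels where it reprojects better than the prediction.
    hint_better = photo_hint < photo_pred
    # Supervised term: log-depth distance between prediction and hint.
    supervised = np.abs(np.log(depth_pred) - np.log(depth_hint))
    return np.mean(photo_pred + hint_better * supervised)
```

Where the hint loses to the prediction, the gradient comes only from the photometric term, so a noisy off-the-shelf stereo hint can help without ever being treated as ground truth.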