Multimodal Scale Consistency and Awareness for Monocular Self-Supervised Depth Estimation
Dense depth estimation is essential to scene-understanding for autonomous
driving. However, recent self-supervised approaches on monocular videos suffer
from scale-inconsistency across long sequences. Utilizing data from the
ubiquitously copresent global positioning systems (GPS), we tackle this
challenge by proposing a dynamically-weighted GPS-to-Scale (g2s) loss to
complement the appearance-based losses. We emphasize that the GPS is needed
only during the multimodal training, and not at inference. The relative
distance between frames captured through the GPS provides a scale signal that
is independent of the camera setup and scene distribution, resulting in richer
learned feature representations. Through extensive evaluation on multiple
datasets, we demonstrate scale-consistent and -aware depth estimation during
inference, improving the performance even when training with low-frequency GPS
data. Comment: Accepted at the 2021 IEEE International Conference on Robotics and Automation (ICRA).
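As a concrete illustration of how a GPS-derived scale signal can complement appearance-based losses, here is a minimal PyTorch-style sketch of a g2s-style loss. The function name, tensor shapes, and the fixed weight are illustrative assumptions, not the paper's implementation (the paper uses a dynamic weighting).

```python
import torch

def g2s_loss(pred_translation, gps_pos_a, gps_pos_b, weight=1.0):
    """Hypothetical sketch of a GPS-to-Scale (g2s) style loss.

    pred_translation: (B, 3) inter-frame camera translation predicted by the
                      pose network (up-to-scale).
    gps_pos_a/b:      (B, 3) GPS-derived positions of the two frames; only
                      their relative distance (metric scale) is used.
    """
    gps_dist = torch.norm(gps_pos_b - gps_pos_a, dim=-1)   # metric baseline
    pred_dist = torch.norm(pred_translation, dim=-1)       # predicted baseline
    # Anchor the magnitude of the predicted translation to the GPS distance;
    # `weight` stands in for the paper's dynamic weighting schedule.
    return weight * torch.abs(pred_dist - gps_dist).mean()
```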
A Study on the Generality of Neural Network Structures for Monocular Depth Estimation
Monocular depth estimation has been widely studied, and significant
improvements in performance have recently been reported. However, most previous
works are evaluated on a few benchmark datasets, such as the KITTI dataset, and
none of the works provide an in-depth analysis of the generalization
performance of monocular depth estimation. In this paper, we deeply investigate
the various backbone networks (e.g., CNN and Transformer models) toward the
generalization of monocular depth estimation. First, we evaluate
state-of-the-art models on both in-distribution and out-of-distribution
datasets, which have never been seen during network training. Then, we
investigate the internal properties of the representations from the
intermediate layers of CNN-/Transformer-based models using synthetic
texture-shifted datasets. Through extensive experiments, we observe that
Transformers exhibit a strong shape-bias, whereas CNNs have a strong
texture-bias. We also discover that texture-biased models exhibit worse
generalization performance for monocular depth estimation than shape-biased
models. We demonstrate that similar aspects are observed in real-world driving
datasets captured under diverse environments. Lastly, we conduct a dense
ablation study with various backbone networks which are utilized in modern
strategies. The experiments demonstrate that the intrinsic locality of the CNNs
and the self-attention of the Transformers induce texture-bias and shape-bias,
respectively. Comment: Accepted in TPAMI.
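To make the texture-shifted evaluation concrete, a simple probe in the spirit of the paper might compare a model's predictions before and after a texture perturbation; a strongly texture-biased model should change more. This is an illustrative sketch, not the paper's protocol.

```python
import torch

@torch.no_grad()
def texture_shift_sensitivity(model, images, shifted_images):
    """Mean relative change in predicted depth under a texture shift.

    model:          depth network mapping (B, 3, H, W) -> (B, 1, H, W)
    images:         original frames
    shifted_images: the same frames with textures perturbed (e.g., stylized)
    Lower values suggest more texture-invariant (shape-biased) behaviour.
    """
    d0 = model(images)
    d1 = model(shifted_images)
    return (torch.abs(d1 - d0) / d0.clamp(min=1e-6)).mean().item()
```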
A Simple Baseline for Supervised Surround-view Depth Estimation
Depth estimation has been widely studied and serves as the fundamental step
of 3D perception for autonomous driving. Though significant progress has been
made for monocular depth estimation in the past decades, these attempts are
mainly conducted on the KITTI benchmark with only front-view cameras, which
ignores the correlations across surround-view cameras. In this paper, we
propose S3Depth, a Simple Baseline for Supervised Surround-view Depth
Estimation, to jointly predict the depth maps across multiple surrounding
cameras. Specifically, we employ a global-to-local feature extraction module
which combines CNN with transformer layers for enriched representations.
Further, an Adjacent-view Attention mechanism is proposed to enable intra-view
and inter-view feature propagation. The former is achieved by a self-attention
module within each view, while the latter is realized by an adjacent attention
module that computes attention across multiple cameras to exchange multi-scale
representations across surround-view feature maps.
Extensive experiments show that our method achieves superior performance over
existing state-of-the-art methods on both the DDAD and nuScenes datasets.
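The following PyTorch sketch shows one plausible shape an adjacent-view cross-attention block could take: each camera's feature tokens attend to those of its two neighbours on the camera ring. The class name, wrap-around neighbour indexing, and wiring are assumptions for illustration, not the S3Depth implementation.

```python
import torch
import torch.nn as nn

class AdjacentViewAttention(nn.Module):
    """Sketch of adjacent-view cross-attention over a surround-view rig."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats):
        # feats: (B, V, N, C) with V camera views and N tokens per view.
        B, V, N, C = feats.shape
        out = []
        for v in range(V):
            # Left/right neighbours on the camera ring (wrap-around assumed).
            neighbours = torch.cat([feats[:, (v - 1) % V],
                                    feats[:, (v + 1) % V]], dim=1)
            # Query: current view; key/value: its adjacent views.
            attended, _ = self.attn(feats[:, v], neighbours, neighbours)
            out.append(attended + feats[:, v])  # residual connection
        return torch.stack(out, dim=1)
```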
On the Synergies between Machine Learning and Binocular Stereo for Depth Estimation from Images: a Survey
Stereo matching is one of the longest-standing problems in computer vision
with close to 40 years of studies and research. Throughout the years the
paradigm has shifted from local, pixel-level decisions to various forms of
discrete and continuous optimization to data-driven, learning-based methods.
Recently, the rise of machine learning and the rapid proliferation of deep
learning enhanced stereo matching with new exciting trends and applications
unthinkable until a few years ago. Interestingly, the relationship between
these two worlds is two-way. While machine learning, and deep learning in
particular, advanced the state of the art in stereo matching, stereo itself enabled new
ground-breaking methodologies such as self-supervised monocular depth
estimation based on deep networks. In this paper, we review recent research in
the field of learning-based depth estimation from single and binocular images,
highlighting the synergies, the successes achieved so far, and the open
challenges the community is going to face in the immediate future. Comment: Accepted to TPAMI. Paper version of our CVPR 2019 tutorial:
"Learning-based depth estimation from stereo and monocular images: successes,
limitations and future challenges"
(https://sites.google.com/view/cvpr-2019-depth-from-image/home).
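The stereo-enabled self-supervision the survey refers to rests on a simple idea: predict a left-view disparity map, warp the right image into the left view, and penalize the photometric difference, with no depth labels needed. A minimal sketch of that classic loss, under assumed shapes and names:

```python
import torch
import torch.nn.functional as F

def stereo_photometric_loss(left, right, disparity):
    """Minimal sketch of stereo photometric self-supervision.

    left, right: (B, 3, H, W) rectified stereo pair, values in [0, 1]
    disparity:   (B, 1, H, W) predicted left-view disparity in pixels
    """
    B, _, H, W = left.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    # Shift each left pixel's x-coordinate by its disparity to find the
    # corresponding right-image location.
    xs = xs.to(left).expand(B, H, W) - disparity.squeeze(1)
    ys = ys.to(left).expand(B, H, W)
    grid = torch.stack([2 * xs / (W - 1) - 1,   # normalize to [-1, 1]
                        2 * ys / (H - 1) - 1], dim=-1)
    warped = F.grid_sample(right, grid, align_corners=True)
    return torch.abs(left - warped).mean()      # L1 photometric error
```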
Understanding the Dynamic Visual World: From Motion to Semantics
We live in a dynamic world, which is continuously in motion. Perceiving and interpreting the dynamic surroundings is an essential capability for an intelligent agent. Human beings have the remarkable capability to learn from limited data, with partial or little annotation, in sharp contrast to computational perception models that rely on large-scale, manually labeled data. Reliance on strongly supervised models with manually labeled data inherently prohibits us from modeling the dynamic visual world, as manual annotations are tedious, expensive, and not scalable, especially if we would like to solve multiple scene understanding tasks at the same time. Even worse, in some cases manual annotations are completely infeasible, such as the motion vector of each pixel (i.e., optical flow), since humans cannot reliably produce this type of labeling. In fact, as we move around in a dynamic world, motion, arising from the moving camera, independently moving objects, and scene geometry, carries abundant information revealing the structure and complexity of our dynamic visual world. As the famous psychologist James J. Gibson suggested, “we must perceive in order to move, but we also must move in order to perceive”. In this thesis, we investigate how to use the motion information contained in unlabeled or partially labeled videos to better understand and synthesize the dynamic visual world.
This thesis consists of three parts. In the first part, we focus on the “move to perceive” aspect. When moving through the world, it is natural for an intelligent agent to associate image patterns with the magnitude of their displacement over time: as the agent moves, far away mountains don’t move much; nearby trees move a lot. This natural relationship between the appearance of objects and their apparent motion is a rich source of information about the relationship between the distance of objects and their appearance in images. We present a pretext task of estimating the relative depth of elements of a scene (i.e., ordering the pixels in an image according to distance from the viewer), recovered from the motion field of unlabeled videos. The goal of this pretext task is to induce useful feature representations in deep Convolutional Neural Networks (CNNs). These induced representations, learned from 1.1 million video frames crawled from YouTube within one hour and without any manual labeling, provide valuable starting features for the training of neural networks for downstream tasks. This approach is promising to match, or even surpass, what ImageNet pre-training gives us today, which requires a huge amount of manual labeling, on tasks such as semantic image segmentation, since almost all of our training data comes for free.
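One common way to train on such pixel-ordering supervision is a pairwise ordinal (ranking) loss over sampled pixel pairs. The sketch below is an illustration of that general idea under assumed names and shapes, not the thesis's exact formulation.

```python
import torch

def relative_depth_ranking_loss(pred_depth, i, j, order, margin=0.0):
    """Pairwise ordinal loss for a relative-depth pretext task.

    pred_depth: (B, H*W) flattened predicted depths
    i, j:       (B, K) long indices of sampled pixel pairs
    order:      (B, K) +1 if pixel i is farther than pixel j, -1 otherwise
                (here, labels derived from the motion field)
    """
    di = torch.gather(pred_depth, 1, i)
    dj = torch.gather(pred_depth, 1, j)
    # Hinge on the signed depth difference: orderings that contradict the
    # motion-derived label incur a positive penalty.
    return torch.clamp(margin - order * (di - dj), min=0).mean()
```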
In the second part, we study the “perceive to move” aspect. As we humans look around, we do not solve a single vision task at a time. Instead, we perceive our surroundings in a holistic manner, using all visual cues jointly. By simultaneously solving multiple tasks together, one task can influence another. Specifically, we propose a neural network architecture, called SENSE, which shares common feature representations among four closely related tasks: optical flow estimation, disparity estimation from stereo, occlusion detection, and semantic segmentation. The key insight is that sharing features makes the network more compact and induces better feature representations. For real-world data, however, not all annotations of the four tasks mentioned above are always available at the same time. To this end, we design loss functions that exploit the interactions of different tasks without needing manual annotations, to better handle partially labeled data in a semi-supervised manner, leading to superior understanding of the dynamic visual world.
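The shared-encoder, multi-head pattern described here can be sketched in a few lines of PyTorch. The layer sizes and head shapes below are placeholders for illustration, not the SENSE architecture.

```python
import torch.nn as nn

class SharedEncoderMultiTask(nn.Module):
    """Toy sketch of a SENSE-style design: one shared encoder feeding
    lightweight task-specific heads for flow, disparity, occlusion,
    and semantic segmentation."""

    def __init__(self, channels=64, num_classes=19):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.flow_head = nn.Conv2d(channels, 2, 1)   # optical flow (u, v)
        self.disp_head = nn.Conv2d(channels, 1, 1)   # stereo disparity
        self.occ_head = nn.Conv2d(channels, 1, 1)    # occlusion logits
        self.seg_head = nn.Conv2d(channels, num_classes, 1)  # class logits

    def forward(self, x):
        f = self.encoder(x)  # features shared across all four tasks
        return (self.flow_head(f), self.disp_head(f),
                self.occ_head(f), self.seg_head(f))
```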
Understanding the motion contained in a video enables us to perceive the dynamic visual world in a novel manner. In the third part, we present an approach, called SuperSloMo, which synthesizes slow-motion videos from a standard frame-rate video. Converting a plain video into a slow-motion version enables us to see memorable moments in our life that are otherwise hard to see clearly with the naked eye: a difficult skateboard trick, a dog catching a ball, etc. Such a technique also has wide applications, such as generating smooth view transitions on head-mounted virtual reality (VR) devices, compressing videos, synthesizing videos with motion blur, etc.
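At its core, flow-based frame interpolation of this kind scales an estimated optical flow to an intermediate time step, backward-warps both input frames, and blends them. The sketch below shows that skeleton under a linear-motion assumption; SuperSloMo itself adds learned flow refinement and visibility maps on top of this idea.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Backward-warp img with a dense flow field (simplified sketch)."""
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack([xs, ys], dim=0).to(img).expand(B, 2, H, W) + flow
    grid = torch.stack([2 * grid[:, 0] / (W - 1) - 1,   # normalize x
                        2 * grid[:, 1] / (H - 1) - 1],  # normalize y
                       dim=-1)
    return F.grid_sample(img, grid, align_corners=True)

def interpolate_frame(frame0, frame1, flow01, t=0.5):
    """Synthesize the frame at time t in (0, 1) from two inputs and the
    flow from frame0 to frame1, assuming roughly linear motion."""
    flow_t0 = -t * flow01          # approximate flow from time t to frame0
    flow_t1 = (1 - t) * flow01     # approximate flow from time t to frame1
    w0, w1 = warp(frame0, flow_t0), warp(frame1, flow_t1)
    return (1 - t) * w0 + t * w1   # simple temporal blend
```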