GANVO: Unsupervised Deep Monocular Visual Odometry and Depth Estimation with Generative Adversarial Networks
In the last decade, supervised deep learning approaches have been extensively
employed in visual odometry (VO) applications, but such approaches are not
feasible in environments where labelled data is scarce. On the other hand,
unsupervised deep learning approaches for localization and mapping in unknown
environments from unlabelled data have received comparatively less attention in
VO research. In this study, we propose a generative unsupervised learning
framework that predicts the 6-DoF camera motion and a monocular depth map of
the scene from unlabelled RGB image sequences, using deep convolutional
Generative Adversarial Networks (GANs). We create a supervisory signal by
warping view sequences and using the re-projection error as the objective loss
function for the multi-view pose estimation and single-view depth generation
networks. Detailed quantitative and qualitative evaluations of the
proposed framework on the KITTI and Cityscapes datasets show that the proposed
method outperforms both traditional and existing unsupervised deep VO methods,
providing better results for both pose estimation and depth recovery.
Comment: ICRA 2019 - accepted.
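The warping-based supervisory signal described above can be made concrete with a short sketch: inverse-warp a source frame into the target view using the predicted depth and relative pose, then penalize the photometric re-projection error. This is a minimal, illustrative implementation assuming PyTorch and a pinhole camera model; function names, tensor layouts, and the plain L1 penalty are assumptions, not the authors' released code.

```python
# Illustrative sketch of unsupervised supervision via view warping.
# Assumes PyTorch; names and layouts are hypothetical, not the authors' code.
import torch
import torch.nn.functional as F

def warp_to_target(src_img, tgt_depth, T_tgt_to_src, K):
    """Inverse-warp a source view into the target frame.

    src_img:      (B, 3, H, W) source RGB frame
    tgt_depth:    (B, 1, H, W) predicted depth of the target frame
    T_tgt_to_src: (B, 4, 4)    predicted relative camera pose
    K:            (B, 3, 3)    camera intrinsics
    """
    B, _, H, W = src_img.shape
    dev = src_img.device

    # Homogeneous pixel grid of the target frame, shape (3, H*W).
    ys, xs = torch.meshgrid(torch.arange(H, device=dev),
                            torch.arange(W, device=dev), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).float().reshape(3, -1)

    # Back-project pixels to 3D rays and scale by the predicted depth.
    cam = torch.linalg.inv(K) @ pix.unsqueeze(0)        # (B, 3, H*W)
    cam = cam * tgt_depth.reshape(B, 1, -1)

    # Rigidly transform into the source camera and project with K.
    ones = torch.ones(B, 1, H * W, device=dev)
    src_cam = (T_tgt_to_src @ torch.cat([cam, ones], dim=1))[:, :3]
    src_pix = K @ src_cam
    src_pix = src_pix[:, :2] / src_pix[:, 2:].clamp(min=1e-6)

    # Normalize to [-1, 1] and bilinearly sample source colours.
    u = 2 * src_pix[:, 0] / (W - 1) - 1
    v = 2 * src_pix[:, 1] / (H - 1) - 1
    grid = torch.stack([u, v], dim=-1).reshape(B, H, W, 2)
    return F.grid_sample(src_img, grid, align_corners=True)

def reprojection_loss(tgt_img, src_img, tgt_depth, T_tgt_to_src, K):
    """L1 photometric error between the target frame and the warped source:
    the supervisory signal that trains depth and pose without labels."""
    return (tgt_img - warp_to_target(src_img, tgt_depth,
                                     T_tgt_to_src, K)).abs().mean()
```

Because both the depth map and the relative pose enter the warp, minimizing this single photometric error jointly trains the depth and pose networks without any ground-truth labels.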
BodyNet: Volumetric Inference of 3D Human Body Shapes
Human shape estimation is an important task for video editing, animation and
fashion industry. Predicting 3D human body shape from natural images, however,
is highly challenging due to factors such as variation in human bodies,
clothing and viewpoint. Prior methods addressing this problem typically attempt
to fit parametric body models with certain priors on pose and shape. In this
work we argue for an alternative representation and propose BodyNet, a neural
network for direct inference of volumetric body shape from a single image.
BodyNet is an end-to-end trainable network that benefits from (i) a volumetric
3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate
supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of these
components improves performance, as our experiments demonstrate. To
evaluate the method, we fit the SMPL model to our network output and show
state-of-the-art results on the SURREAL and Unite the People datasets,
outperforming recent approaches. Besides achieving state-of-the-art
performance, our method also enables volumetric body-part segmentation.
Comment: Appears in: European Conference on Computer Vision 2018 (ECCV 2018).
27 pages.
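Loss terms (i) and (ii) above can be illustrated with a hedged sketch: a per-voxel occupancy loss against a voxelized ground-truth body, plus a re-projection loss that compares projected silhouettes in two views. The orthographic max-projection and all names and shapes here are assumptions for illustration, not BodyNet's exact formulation.

```python
# Hedged sketch of BodyNet-style supervision: (i) per-voxel occupancy loss,
# (ii) silhouette re-projection loss. Orthographic max-projection and all
# names/shapes are assumptions for illustration.
import torch
import torch.nn.functional as F

def volumetric_loss(pred_logits, gt_voxels):
    """Binary cross-entropy between predicted and ground-truth occupancy.

    pred_logits: (B, D, H, W) occupancy logits
    gt_voxels:   (B, D, H, W) voxelized body shape in {0, 1}
    """
    return F.binary_cross_entropy_with_logits(pred_logits, gt_voxels)

def multiview_reprojection_loss(pred_logits, front_sil, side_sil):
    """Project occupancy onto two orthogonal views and match 2D silhouettes.

    A pixel is covered if any voxel along its viewing ray is occupied; under
    orthographic projection, a max over the ray axis approximates this.
    """
    occ = torch.sigmoid(pred_logits)
    front = occ.max(dim=1).values        # collapse depth  -> (B, H, W)
    side = occ.max(dim=3).values         # collapse width  -> (B, D, H)
    return (F.binary_cross_entropy(front, front_sil)
            + F.binary_cross_entropy(side, side_sil))
```

A full training objective would combine these with the intermediate 2D/3D pose and segmentation losses, e.g. `volumetric_loss(...) + lam * multiview_reprojection_loss(...)` for some tuned weight `lam` (the weighting scheme is likewise an assumption here).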
Learning to Reconstruct People in Clothing from a Single RGB Camera
We present a learning-based model to infer the personalized 3D shape of people from a few frames (1-8) of a monocular video in which the person is moving, in less than 10 seconds and with a reconstruction accuracy of 5mm. Our model learns to predict the parameters of a statistical body model and instance displacements that add clothing and hair to the shape. The model achieves fast and accurate predictions based on two key design choices. First, by predicting shape in a canonical T-pose space, the network learns to encode the images of the person into pose-invariant latent codes, in which the information from all frames is fused. Second, based on the observation that feed-forward predictions are fast but do not always align with the input images, we predict using both bottom-up and top-down streams (one per view), allowing information to flow in both directions. Learning relies only on synthetic 3D data. Once learned, the model can take a variable number of frames as input, and is able to reconstruct shapes even from a single image with an accuracy of 6mm. Results on 3 different datasets demonstrate the efficacy and accuracy of our approach.
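The canonical T-pose fusion idea can be sketched as follows: a shared encoder maps each frame to a pose-invariant latent code, the codes are fused by averaging, and decoder heads predict body-model shape parameters plus per-vertex displacements for clothing and hair. All module names and sizes below are illustrative (the 6890-vertex count presumes an SMPL-like body model), and the paper's top-down refinement stream is omitted.

```python
# Illustrative bottom-up sketch of canonical T-pose shape prediction with
# multi-frame latent fusion. Module names and sizes are assumptions; the
# 6890-vertex count presumes an SMPL-like body model.
import torch
import torch.nn as nn

class CanonicalShapeNet(nn.Module):
    def __init__(self, latent_dim=256, num_betas=10, num_verts=6890):
        super().__init__()
        # Shared CNN encoder: one pose-invariant code per input frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Heads decode the fused code into body-model shape parameters and
        # per-vertex T-pose displacements that add clothing and hair.
        self.shape_head = nn.Linear(latent_dim, num_betas)
        self.disp_head = nn.Linear(latent_dim, num_verts * 3)

    def forward(self, frames):
        """frames: (B, N, 3, H, W), with a variable number N of frames."""
        B, N = frames.shape[:2]
        codes = self.encoder(frames.flatten(0, 1)).view(B, N, -1)
        fused = codes.mean(dim=1)        # fuse pose-invariant codes
        betas = self.shape_head(fused)
        disps = self.disp_head(fused).view(B, -1, 3)
        return betas, disps
```

Averaging the per-frame codes makes the network agnostic to the number of input frames, which is consistent with the abstract's claim that the trained model accepts anywhere from one to eight frames.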