
    High-Accuracy Facial Depth Models derived from 3D Synthetic Data

    In this paper, we explore how synthetically generated 3D face models can be used to construct high-accuracy ground-truth depth. This allows us to train Convolutional Neural Networks (CNNs) to solve facial depth estimation problems. These models provide sophisticated control over image variations, including pose, illumination, facial expression and camera position. 2D training samples, typically in RGB format, can be rendered from these models together with depth information. Using synthetic facial animations, dynamic facial expression or facial action data can be rendered for a sequence of image frames together with ground-truth depth and additional metadata such as head pose, light direction, etc. The synthetic data is used to train a CNN-based facial depth estimation system, which is validated on both synthetic and real images. Potential fields of application include 3D reconstruction, driver monitoring systems, robotic vision systems, and advanced scene understanding.
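    The setup described above amounts to supervised per-pixel depth regression on rendered RGB/depth pairs. The sketch below illustrates that idea in PyTorch; the dataset class, network layout and hyperparameters are illustrative assumptions, not the authors' actual implementation.

# Minimal sketch of supervised depth training on synthetic RGB/depth pairs.
# Names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset

class SyntheticFaceDepthDataset(Dataset):
    """Hypothetical dataset yielding (rgb, depth) tensors rendered from 3D face models."""
    def __init__(self, samples):
        self.samples = samples  # list of (rgb [3,H,W], depth [1,H,W]) tensor pairs
    def __len__(self):
        return len(self.samples)
    def __getitem__(self, idx):
        return self.samples[idx]

class DepthCNN(nn.Module):
    """Small encoder-decoder stand-in for the facial depth CNN described above."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(dataset, epochs=10, lr=1e-4):
    model = DepthCNN()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()  # per-pixel depth regression loss
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    for _ in range(epochs):
        for rgb, depth in loader:
            opt.zero_grad()
            loss = loss_fn(model(rgb), depth)
            loss.backward()
            opt.step()
    return model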

    MONOCULAR DEPTH PREDICTION IN PHOTOGRAMMETRIC APPLICATIONS

    Despite the recent success of learning-based monocular depth estimation algorithms and the release of large-scale datasets for training, these methods are limited to depth map prediction and still struggle to yield reliable results in 3D space without additional scene cues. Indeed, although state-of-the-art approaches produce quality depth maps, they generally fail to recover the 3D structure of the scene robustly. This work explores supervised CNN architectures for monocular depth estimation and evaluates their potential for 3D reconstruction. Since most available training datasets are not designed toward this goal and are limited to specific indoor scenarios, a new metric, large-scale synthetic benchmark (ArchDepth) is introduced that renders near-real-world outdoor scenarios. An encoder-decoder architecture is used for training, and the generalization of the approach is evaluated via depth inference on unseen views in synthetic and real-world scenarios. The depth map predictions are also projected into 3D space using a separate module. Results are qualitatively and quantitatively evaluated and compared with state-of-the-art algorithms for single-image 3D scene recovery.
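    Projecting depth predictions into 3D space, as the separate module mentioned above does, typically means back-projecting each pixel through the pinhole camera model. The sketch below shows this standard step; the intrinsics fx, fy, cx, cy are assumed inputs, not values from the paper.

# Minimal sketch of back-projecting a depth map to a 3D point cloud with the
# pinhole camera model; the intrinsics are assumed, not taken from the paper.
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """depth: (H, W) array of metric depths; returns (H*W, 3) XYZ points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)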

    Multimodal Scale Consistency and Awareness for Monocular Self-Supervised Depth Estimation

    Dense depth estimation is essential to scene understanding for autonomous driving. However, recent self-supervised approaches on monocular videos suffer from scale inconsistency across long sequences. Utilizing data from the ubiquitously co-present Global Positioning System (GPS), we tackle this challenge by proposing a dynamically weighted GPS-to-Scale (g2s) loss to complement the appearance-based losses. We emphasize that GPS is needed only during multimodal training, not at inference. The relative distance between frames captured through GPS provides a scale signal that is independent of the camera setup and scene distribution, resulting in richer learned feature representations. Through extensive evaluation on multiple datasets, we demonstrate scale-consistent and scale-aware depth estimation during inference, improving performance even when training with low-frequency GPS data. Comment: Accepted at the 2021 IEEE International Conference on Robotics and Automation (ICRA).
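    The g2s idea can be pictured as an extra loss term that ties the norm of the predicted inter-frame translation to the metric distance measured by GPS. The sketch below is a hedged illustration of that idea; the dynamic weighting schedule and the function signature are assumptions, not the paper's exact formulation.

# Hedged sketch of a GPS-to-scale style loss: penalise the gap between the
# norm of the predicted inter-frame translation and the GPS-measured distance.
# The linear weight ramp is an illustrative assumption, not the paper's scheme.
import torch

def g2s_loss(pred_translation, gps_distance, epoch, max_epochs):
    """
    pred_translation: (B, 3) translations predicted by the pose network.
    gps_distance:     (B,) metric distances between consecutive frames from GPS.
    """
    pred_norm = pred_translation.norm(dim=1)
    scale_err = torch.abs(pred_norm - gps_distance).mean()
    weight = epoch / max_epochs  # ramp the weight up as training proceeds
    return weight * scale_err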

    BENCHMARKING THE EXTRACTION OF 3D GEOMETRY FROM UAV IMAGES WITH DEEP LEARNING METHODS

    3D reconstruction from single and multi-view stereo images is still an open research topic, despite the large number of solutions proposed in recent decades. The surge of deep learning methods has stimulated the development of new approaches to monocular (MDE, Monocular Depth Estimation), stereo, and Multi-View Stereo (MVS) 3D reconstruction, showing promising results that are often comparable to, or even better than, traditional methods. The more recent development of NeRF (Neural Radiance Fields) has further increased interest in this kind of solution. Most of the proposed approaches, however, focus on terrestrial applications (e.g., autonomous driving or 3D reconstruction of small artefacts), while airborne and UAV acquisitions are often overlooked. The recent introduction of new datasets, such as UseGeo, has therefore provided the opportunity to assess how state-of-the-art MDE, MVS and NeRF 3D reconstruction algorithms perform on airborne UAV images, allowing their comparison with a LiDAR ground truth. This paper presents the results achieved by two MDE, two MVS and two NeRF approaches leveraging deep learning, trained and tested using the UseGeo dataset. The comparison with ground truth shows the current state of the art of these solutions and provides useful indications for their future development and improvement.
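    Comparison against a LiDAR ground truth is usually reported with standard depth error metrics such as absolute relative error and RMSE. The sketch below computes these two common metrics on a masked ground-truth depth map; the exact metric set used in this benchmark may differ.

# Sketch of standard monocular-depth error metrics (absolute relative error and
# RMSE) against a LiDAR-derived ground-truth depth map; the paper's exact
# evaluation protocol may differ.
import numpy as np

def depth_metrics(pred, gt):
    """pred, gt: (H, W) depth maps; invalid ground-truth pixels are marked with 0."""
    mask = gt > 0                      # evaluate only where ground truth exists
    pred, gt = pred[mask], gt[mask]
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    return {"abs_rel": abs_rel, "rmse": rmse}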