
    AQUALOC: An Underwater Dataset for Visual-Inertial-Pressure Localization

    Get PDF
    We present a new dataset dedicated to the development of simultaneous localization and mapping methods for underwater vehicles navigating close to the seabed. The data sequences composing this dataset are recorded in three different environments: a harbor at a depth of a few meters, a first archaeological site at a depth of 270 meters, and a second site at a depth of 380 meters. The data acquisition is performed using Remotely Operated Vehicles equipped with a monocular monochromatic camera, a low-cost inertial measurement unit, a pressure sensor, and a computing unit, all embedded in a single enclosure. The sensors' measurements are recorded synchronously on the computing unit, and seventeen sequences have been created from all the acquired data. These sequences are made available both as ROS bags and as raw data. For each sequence, a trajectory has also been computed offline using a Structure-from-Motion library in order to allow comparison with real-time localization methods. With the release of this dataset, we wish to provide data that are difficult to acquire and to encourage the development of vision-based localization methods dedicated to the underwater environment. The dataset can be downloaded from: http://www.lirmm.fr/aqualoc/
    Comment: The International Journal of Robotics Research, SAGE Publications, 201
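    Since the sequences ship as ROS bags, they can be inspected with the standard rosbag Python API. Below is a minimal sketch of iterating over one sequence; the file and topic names are assumptions for illustration, and the actual names are documented with the dataset.

        # Minimal sketch: walk through one AQUALOC sequence stored as a ROS bag.
        # File and topic names are assumed for illustration; check the dataset
        # documentation for the real ones.
        import rosbag

        bag = rosbag.Bag("harbor_sequence_01.bag")  # hypothetical file name
        topics = ["/camera/image_raw", "/imu/data", "/pressure"]  # assumed topics
        for topic, msg, t in bag.read_messages(topics=topics):
            if topic == "/camera/image_raw":
                print(f"{t.to_sec():.3f}  image  {msg.width}x{msg.height}")
            elif topic == "/imu/data":
                print(f"{t.to_sec():.3f}  imu    gyro_z={msg.angular_velocity.z:.4f}")
            else:
                print(f"{t.to_sec():.3f}  pressure sample")
        bag.close()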

    Real-time Monocular Visual Odometry for Turbid and Dynamic Underwater Environments

    Full text link
    In the context of robotic underwater operations, the visual degradations induced by the properties of the medium make the exclusive use of cameras for localization difficult. Hence, most localization methods are based on expensive navigational sensors associated with acoustic positioning. On the other hand, visual odometry and visual SLAM have been extensively studied for aerial and terrestrial applications, but state-of-the-art algorithms fail underwater. In this paper we tackle the problem of using a simple low-cost camera for underwater localization and propose a new monocular visual odometry method dedicated to the underwater environment. We evaluate different tracking methods and show that optical-flow-based tracking is better suited to underwater images than classical approaches based on descriptors. We also propose a keyframe-based visual odometry approach that relies heavily on nonlinear optimization. The proposed algorithm has been assessed on both simulated and real underwater datasets and outperforms state-of-the-art visual SLAM methods under many of the most challenging conditions. The main application of this work is the localization of Remotely Operated Vehicles (ROVs) used for underwater archaeological missions, but the developed system can be used in any other application where visual information is available.
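    To make the tracking comparison concrete, the sketch below shows the general optical-flow (KLT) style of frame-to-frame tracking the abstract advocates, followed by a standard two-view pose estimate with OpenCV. It illustrates the technique in general, not the authors' pipeline, and assumes a calibrated pinhole camera matrix K.

        # Sketch of optical-flow feature tracking for monocular visual odometry.
        # Illustrative only; assumes two consecutive grayscale frames and a
        # calibrated camera matrix K.
        import cv2

        def track_and_estimate_pose(prev_gray, gray, K):
            # Detect corners in the previous frame.
            p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                         qualityLevel=0.01, minDistance=7)
            # Track them into the current frame with pyramidal Lucas-Kanade.
            p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
            good0 = p0[status.ravel() == 1].reshape(-1, 2)
            good1 = p1[status.ravel() == 1].reshape(-1, 2)
            # Robust two-view geometry: essential matrix, then relative pose.
            # With a monocular camera the translation is known only up to scale.
            E, inliers = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC,
                                              prob=0.999, threshold=1.0)
            _, R, t, _mask = cv2.recoverPose(E, good0, good1, K, mask=inliers)
            return R, t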

    Multi-task learning of elevation and semantics from aerial images

    Get PDF
    Aerial and satellite imagery is a rich source for land surface analysis, which may yield land use maps or elevation models. In this investigation, we present a neural network framework for learning semantics and local height together. We show how this joint multi-task learning benefits each task on the large dataset of the 2018 Data Fusion Contest. Moreover, our framework also yields an uncertainty map, which allows assessing the predictions of the model. Code is available at https://github.com/marcelampc/mtl_aerial_images
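    The shared-encoder, two-head pattern described here can be sketched in a few lines of PyTorch. The layer sizes, class count, and loss weighting below are illustrative assumptions, not the architecture of the paper (the released code at the link above is authoritative).

        # Sketch of joint semantic segmentation + height regression with a
        # shared encoder and two task heads. Sizes and weights are assumed.
        import torch
        import torch.nn as nn

        class MultiTaskNet(nn.Module):
            def __init__(self, num_classes=6):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
                self.sem_head = nn.Conv2d(64, num_classes, 1)  # per-pixel classes
                self.height_head = nn.Conv2d(64, 1, 1)         # per-pixel height

            def forward(self, x):
                feats = self.encoder(x)
                return self.sem_head(feats), self.height_head(feats)

        net = MultiTaskNet()
        img = torch.randn(1, 3, 256, 256)
        sem_logits, height = net(img)
        # Joint loss: cross-entropy for semantics plus L1 for height
        # (the 0.5 weight is an assumption).
        loss = (nn.CrossEntropyLoss()(sem_logits, torch.randint(0, 6, (1, 256, 256)))
                + 0.5 * nn.L1Loss()(height, torch.randn(1, 1, 256, 256)))
        loss.backward()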

    In Honor of Fred Gray: The Meaning of Montgomery

    Get PDF
    We present a new method for depth estimation based on a stereoscopic camera pair in which each camera has a different focus setting. Depth is estimated using a criterion derived from a maximum likelihood estimator, which jointly analyzes the data likelihood with respect to the disparity and the defocus blur of each camera. The benefit of this approach over classical stereoscopy is studied, in particular for scenes with repetitive patterns, and we then present experimental results on outdoor scenes from a real infrared stereoscopic system.
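    Although the paper's exact likelihood criterion is not reproduced here, the coupling of disparity and defocus can be illustrated with a simple cost volume: for each disparity hypothesis, one image is blurred by the relative defocus that depth would imply before being compared photometrically. The Gaussian blur model and the sigma_of_disparity mapping below are assumed stand-ins for a calibrated defocus model.

        # Illustrative sketch (not the paper's estimator): a cost volume that
        # couples disparity with a depth-dependent relative defocus blur.
        import numpy as np
        from scipy.ndimage import gaussian_filter, shift

        def joint_cost(img_a, img_b, disparities, sigma_of_disparity):
            """Return an (H, W, D) cost volume over disparity hypotheses."""
            h, w = img_a.shape
            cost = np.empty((h, w, len(disparities)))
            for i, d in enumerate(disparities):
                # Relative blur implied by the depth of this hypothesis
                # (assumed Gaussian model).
                blurred_a = gaussian_filter(img_a, sigma=sigma_of_disparity(d))
                # Shift image B by the disparity and compare photometrically.
                shifted_b = shift(img_b, (0, -d), order=1, mode="nearest")
                cost[:, :, i] = (blurred_a - shifted_b) ** 2
            return cost

        # Winner-takes-all depth per pixel; a real system would aggregate costs
        # over windows and regularize:
        # idx = joint_cost(a, b, range(64), lambda d: 0.05 * d).argmin(axis=2)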

    Technical Report: Co-learning of geometry and semantics for online 3D mapping

    Get PDF
    This paper is a technical report about our submission for the ECCV 2018 3DRMS Workshop Challenge on Semantic 3D Reconstruction \cite{Tylecek2018rms}. In this paper, we address 3D semantic reconstruction for autonomous navigation using co-learning of depth maps and semantic segmentation. The core of our pipeline is a deep multi-task neural network that tightly refines depth and also produces accurate semantic segmentation maps. Its inputs are an image and a raw depth map produced from a pair of images by standard stereo vision. The resulting semantic 3D point clouds are then merged to create a consistent 3D mesh, in turn used to produce dense semantic 3D reconstruction maps. The performance of each step of the proposed method is evaluated on the dataset and multiple tasks of the 3DRMS Challenge, and consistently surpasses state-of-the-art approaches.
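    For intuition about the network's input, a raw depth map of the kind described can be produced from a rectified stereo pair with OpenCV's semi-global matcher, as sketched below. The report only says "standard stereo vision", so the choice of matcher and all parameter values here are assumptions.

        # Sketch: raw depth from a rectified stereo pair via semi-global block
        # matching. Matcher choice and parameters are illustrative assumptions.
        import cv2
        import numpy as np

        left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                        blockSize=5)
        # OpenCV returns fixed-point disparities scaled by 16.
        disparity = matcher.compute(left, right).astype(np.float32) / 16.0

        # Depth from disparity for a rectified pair: Z = f * B / d, with an
        # assumed focal length f (pixels) and baseline B (meters).
        f, B = 700.0, 0.2
        depth = np.where(disparity > 0, f * B / disparity, 0.0)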

    Deep neural networks for depth estimation from defocus blur

    No full text
    We propose a new monocular, passive 3D camera based on the combination of an unconventional lens dedicated to Depth from Defocus (DFD) and a new DFD algorithm based on deep network learning. DFD is a depth estimation technique that uses the relation between defocus blur and depth. To avoid the depth ambiguity and dead zone of classical DFD, the proposed camera has an uncorrected longitudinal chromatic aberration and thus captures, in a single snapshot, an RGB image whose defocus blur differs across the red, green, and blue channels. The deep neural network directly learns the function between defocus blur and depth, which avoids a time-consuming calibration of the camera's defocus blur at each depth. We assess our approach by showing experimental depth maps obtained with the proposed method on outdoor scenes.
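    The depth cue exploited here can be emulated for intuition: with an uncorrected chromatic lens, each color channel focuses at a different distance, so the per-channel blur encodes depth. The sketch below simulates this under a thin-lens, Gaussian-blur approximation; the per-channel focus distances and blur scale are assumed values, not the actual lens design.

        # Illustrative sketch: simulate chromatic defocus where the R, G and B
        # channels focus at different depths (assumed values), using Gaussian
        # blur as the defocus model.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        FOCUS_M = {"R": 2.0, "G": 3.0, "B": 5.0}  # assumed focus distances (m)
        BLUR_SCALE = 3.0                          # assumed blur gain

        def chromatic_defocus(rgb, depth_m):
            """Blur each channel according to its defocus at the scene depth."""
            out = np.empty_like(rgb, dtype=np.float64)
            for c, focus in enumerate(FOCUS_M.values()):
                # Thin-lens heuristic: blur grows with |1/depth - 1/focus|.
                sigma = BLUR_SCALE * abs(1.0 / depth_m - 1.0 / focus)
                out[..., c] = gaussian_filter(rgb[..., c].astype(np.float64),
                                              sigma)
            return out

        # A network can then be trained to regress depth from the per-channel
        # blur, e.g. on patches of such simulated images paired with depths.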
