17 research outputs found

    DeepFactors: Real-time probabilistic dense monocular SLAM

    Get PDF
    The ability to estimate rich geometry and camera motion from monocular imagery is fundamental to future interactive robotics and augmented reality applications. Different approaches have been proposed that vary in scene geometry representation (sparse landmarks, dense maps), the consistency metric used for optimising the multi-view problem, and the use of learned priors. We present a SLAM system that unifies these methods in a probabilistic framework while still maintaining real-time performance. This is achieved through the use of a learned compact depth map representation and the reformulation of three different types of error (photometric, reprojection and geometric) so that they can be used within standard factor graph software. We evaluate our system on trajectory estimation and depth reconstruction on real-world sequences and present various examples of estimated dense geometry.
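
    A minimal sketch of the idea, assuming a toy linear decoder and synthetic stand-ins for the warping and projection steps (none of these names, shapes, or weights come from the paper): a compact depth code is refined against a weighted combination of photometric, reprojection and geometric residuals, which is the kind of objective a factor-graph solver would minimise jointly with the camera poses.

```python
# Hedged sketch, not the authors' implementation: optimise a compact depth "code"
# against photometric, reprojection and geometric residuals.
import numpy as np

H, W, CODE_DIM = 8, 8, 4
rng = np.random.default_rng(0)
basis = rng.normal(size=(CODE_DIM, H, W))      # toy linear "decoder" for depth
mean_depth = np.full((H, W), 2.0)

def decode_depth(code):
    # depth map = mean depth + linear combination of basis maps (stand-in for a learned decoder)
    return mean_depth + np.tensordot(code, basis, axes=1)

# Synthetic observations standing in for a second view.
img_ref = rng.normal(size=(H, W))
img_warped = lambda depth: img_ref + 0.01 * depth       # fake warped image
depth_from_other_view = mean_depth + 0.05               # fake depth reprojected from another keyframe
kp_obs = rng.normal(size=(10, 2))
project = lambda depth: kp_obs + 0.001 * depth.mean()   # fake keypoint projection

def total_cost(code, w_photo=1.0, w_repr=1.0, w_geom=1.0):
    depth = decode_depth(code)
    r_photo = (img_ref - img_warped(depth)).ravel()          # photometric factor
    r_repr = (project(depth) - kp_obs).ravel()               # reprojection factor
    r_geom = (depth - depth_from_other_view).ravel()         # geometric (depth consistency) factor
    return (w_photo * r_photo @ r_photo
            + w_repr * r_repr @ r_repr
            + w_geom * r_geom @ r_geom)

# Crude random refinement of the code; a real system would run Gauss-Newton
# inside factor-graph software, jointly with the camera poses.
code = np.zeros(CODE_DIM)
best = total_cost(code)
for _ in range(200):
    candidate = code + 0.01 * rng.normal(size=CODE_DIM)
    cost = total_cost(candidate)
    if cost < best:
        code, best = candidate, cost
print("refined cost:", best)
```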

    The concurrent decline of soil lead and children’s blood lead in New Orleans

    Full text link
    Lead (Pb) is extremely toxic and a major cause of chronic diseases worldwide. Pb is associated with health disparities, particularly within low-income populations. In biological systems, Pb mimics calcium and, among other effects, interrupts cell signaling. Furthermore, Pb exposure results in epigenetic changes that affect multigenerational gene expression. Exposure to Pb has decreased through primary prevention, including removal of Pb solder from canned food, regulating lead-based paint, and especially eliminating Pb additives in gasoline. While researchers observe a continuous decline in children’s blood lead (BPb), reservoirs of exposure persist in topsoil, which stores the legacy dust from leaded gasoline and other sources. Our surveys of metropolitan New Orleans reveal that median topsoil Pb in communities (n = 274) decreased 44%, from 99 mg/kg to 54 mg/kg (P = 2.09 × 10⁻⁸), with a median depletion rate of ∼2.4 mg·kg⁻¹·y⁻¹ over 15 y. Between the 2000–2005 and 2011–2016 survey periods, children’s BPb declined 64%, from 3.6 μg/dL to 1.2 μg/dL (P = 2.02 × 10⁻⁸⁵), a decrease of ∼0.2 μg·dL⁻¹·y⁻¹ over a median of 12 y. Here, we explore the decline of children’s BPb by examining a metabolism-of-cities framework of inputs, transformations, storages, and outputs. Our findings indicate that decreasing Pb in topsoil is an important factor in the continuous decline of children’s BPb. Similar reductions are expected in other major US cities. The most contaminated urban communities, usually inhabited by vulnerable populations, require further reductions of topsoil Pb to fulfill primary prevention for the nation’s children.
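
    A rough sanity check of the quoted rates, using only the medians given in the abstract (the reported 44% and 64% declines and the ∼2.4 mg·kg⁻¹·y⁻¹ rate are medians of per-community and per-child changes, so ratios computed directly from the quoted medians need not match them exactly):

```python
# Back-of-the-envelope check using the values quoted in the abstract.
soil_start, soil_end, soil_years = 99.0, 54.0, 15    # mg/kg, years
bpb_start, bpb_end, bpb_years = 3.6, 1.2, 12         # ug/dL, years

print(f"soil Pb decline:  {100 * (soil_start - soil_end) / soil_start:.0f}%")   # ~45% from the medians
print(f"soil Pb rate:     {(soil_start - soil_end) / soil_years:.1f} mg/kg/y")  # ~3.0 from the medians
print(f"blood Pb decline: {100 * (bpb_start - bpb_end) / bpb_start:.0f}%")      # ~67% from the medians
print(f"blood Pb rate:    {(bpb_start - bpb_end) / bpb_years:.1f} ug/dL/y")     # ~0.2, matching the abstract
```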

    DeepFusion: real-time dense 3D reconstruction for monocular SLAM using single-view depth and gradient predictions

    No full text
    While the keypoint-based maps created by sparse monocular Simultaneous Localisation and Mapping (SLAM) systems are useful for camera tracking, dense 3D reconstructions may be desired for many robotic tasks. Solutions involving depth cameras are limited in range and to indoor spaces, and dense reconstruction systems based on minimising the photometric error between frames are typically poorly constrained and suffer from scale ambiguity. To address these issues, we propose a 3D reconstruction system that leverages the output of a Convolutional Neural Network (CNN) to produce fully dense depth maps for keyframes that include metric scale. Our system, DeepFusion, is capable of producing real-time dense reconstructions on a GPU. It fuses the output of a semi-dense multi-view stereo algorithm with the depth and gradient predictions of a CNN in a probabilistic fashion, using learned uncertainties produced by the network. While the network only needs to be run once per keyframe, we are able to optimise for the depth map with each new frame so as to constantly make use of new geometric constraints. Based on its performance on synthetic and real-world datasets, we demonstrate that DeepFusion is capable of performing at least as well as other comparable systems.
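
    A minimal sketch of the kind of probabilistic fusion described, assuming independent per-pixel Gaussian errors and using synthetic arrays in place of real stereo and network outputs (all variable names and values are illustrative): the semi-dense stereo depth and the CNN prediction are combined by inverse-variance weighting, with the CNN prediction filling in pixels where stereo has no estimate, which is what keeps the fused map fully dense and metrically scaled.

```python
# Illustrative sketch only: precision-weighted fusion of semi-dense stereo depth
# with a CNN depth prediction that carries learned per-pixel uncertainty.
import numpy as np

H, W = 4, 5
rng = np.random.default_rng(1)

stereo_depth = rng.uniform(1.0, 3.0, size=(H, W))
stereo_var = np.full((H, W), 0.04)                # per-pixel variance from stereo matching
stereo_valid = rng.random((H, W)) > 0.5           # stereo is only semi-dense

cnn_depth = rng.uniform(1.0, 3.0, size=(H, W))
cnn_var = np.full((H, W), 0.25)                   # learned uncertainty from the network

# Precision-weighted mean where both sources exist; fall back to the CNN elsewhere.
prec_stereo = np.where(stereo_valid, 1.0 / stereo_var, 0.0)
prec_cnn = 1.0 / cnn_var
fused_depth = (prec_stereo * stereo_depth + prec_cnn * cnn_depth) / (prec_stereo + prec_cnn)
fused_var = 1.0 / (prec_stereo + prec_cnn)

print(fused_depth.round(2))
print(fused_var.round(3))
```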

    Towards the probabilistic fusion of learned priors into standard pipelines for 3D reconstruction

    Get PDF
    The best way to combine the results of deep learning with standard 3D reconstruction pipelines remains an open problem. While systems that pass the output of traditional multi-view stereo approaches to a network for regularisation or refinement currently seem to get the best results, it may be preferable to treat deep neural networks as separate components whose results can be probabilistically fused into geometry-based systems. Unfortunately, the error models required to do this type of fusion are not well understood, with many different approaches being put forward. Recently, a few systems have achieved good results by having their networks predict probability distributions rather than single values. We propose using this approach to fuse a learned single-view depth prior into a standard 3D reconstruction system. Our system is capable of incrementally producing dense depth maps for a set of keyframes. We train a deep neural network to predict discrete, nonparametric probability distributions for the depth of each pixel from a single image. We then fuse this "probability volume" with another probability volume based on the photometric consistency between subsequent frames and the keyframe image. We argue that combining the probability volumes from these two sources results in a volume that is better conditioned. To extract depth maps from the volume, we minimise a cost function that includes a regularisation term based on network-predicted surface normals and occlusion boundaries. Through a series of experiments, we demonstrate that each of these components improves the overall performance of the system.
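
    A hedged sketch of the fusion step, with random arrays standing in for the network's discrete depth distributions and the photometric-consistency volume (image size, bin count and variable names are illustrative, and the simple argmax and expectation read-outs below stand in for the paper's regularised cost minimisation): treating the two sources as independent, their per-pixel distributions over depth bins are combined by adding log-probabilities and renormalising.

```python
# Illustrative sketch only: fuse a learned single-view depth probability volume
# with a photometric-consistency probability volume in log space.
import numpy as np

H, W, D = 4, 4, 16                                  # image size and number of depth bins
depth_bins = np.linspace(0.5, 5.0, D)
rng = np.random.default_rng(2)

def normalise(vol):
    return vol / vol.sum(axis=-1, keepdims=True)

p_network = normalise(rng.random((H, W, D)))        # single-view prior predicted by the CNN
p_photo = normalise(rng.random((H, W, D)))          # from multi-view photometric consistency

log_fused = np.log(p_network + 1e-12) + np.log(p_photo + 1e-12)
p_fused = normalise(np.exp(log_fused - log_fused.max(axis=-1, keepdims=True)))

# Two simple per-pixel read-outs from the fused volume.
depth_map_mode = depth_bins[np.argmax(p_fused, axis=-1)]
depth_map_mean = (p_fused * depth_bins).sum(axis=-1)
print(depth_map_mode.round(2))
print(depth_map_mean.round(2))
```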