    Pop-up SLAM: Semantic Monocular Plane SLAM for Low-texture Environments

    Existing simultaneous localization and mapping (SLAM) algorithms are not robust in challenging low-texture environments because only a few salient features are available. The resulting sparse or semi-dense map also conveys little information for motion planning. Though some works utilize plane or scene-layout cues for dense map regularization, they require decent state estimation from other sources. In this paper, we propose a real-time monocular plane SLAM to demonstrate that scene understanding can improve both state estimation and dense mapping, especially in low-texture environments. The plane measurements come from a pop-up 3D plane model applied to each single image. We also combine planes with point-based SLAM to improve robustness. On a public TUM dataset, our algorithm generates a dense semantic 3D model with a pixel depth error of 6.2 cm while existing SLAM algorithms fail. On a 60 m long dataset with loops, our method creates a much better 3D model with a state estimation error of 0.67%.
    Comment: International Conference on Intelligent Robots and Systems (IROS) 201
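    Combining planes with point-based SLAM typically means adding point-to-plane constraints to the optimization. As a minimal illustrative sketch (not the paper's code; plane parameterization and names are assumptions), a plane (n, d) contributes a signed-distance residual for each associated 3D point:

```python
import numpy as np

# Hypothetical sketch: the point-to-plane residual a plane-augmented
# point SLAM backend might minimize, for a plane n.x + d = 0.
def point_to_plane_residual(point, n, d):
    """Signed distance of a 3D point to the plane with normal n and offset d."""
    n = n / np.linalg.norm(n)  # ensure unit normal
    return float(n @ point + d)

p = np.array([1.0, 2.0, 0.5])
n = np.array([0.0, 0.0, 1.0])  # horizontal plane z = 0
print(point_to_plane_residual(p, n, 0.0))  # 0.5
```

    Driving such residuals to zero regularizes the sparse point map toward the popped-up planes, which is one way scene understanding can aid both mapping and state estimation.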

    Performance prediction of point-based three-dimensional volumetric measurement systems

    Point-based three-dimensional volumetric measurement systems are defined as multi-view vision systems which reconstruct a three-dimensional scene by first identifying key points on the views and then performing the reconstruction. Examples of these are defocusing digital particle image velocimetry (DDPIV) (Pereira et al 2000 Exp. Fluids 29 S78–84) and 3D particle tracking velocimetry (3DPTV) (Papantoniou and Maas 1990 5th Int. Symp. on the Application of Laser Techniques in Fluid Mechanics), which reconstruct clouds of flow tracers in order to estimate flow velocities. The reconstruction algorithms in these systems are variations of an epipolar line search. This paper presents a generalized error analysis of such methods, both in reconstruction precision (error in the reconstructed scene) and reconstruction quality (number of ambiguities or 'ghosts' produced).
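    The ambiguity the abstract calls a 'ghost' arises when an epipolar line search returns more than one candidate match. A minimal sketch of such a search, assuming a known fundamental matrix F and treating the tolerance band and variable names as illustrative:

```python
import numpy as np

# Illustrative sketch (not the paper's code): match a point x1 from view 1
# against detections in view 2 by distance to its epipolar line. More than
# one candidate inside the tolerance band means a potential "ghost".
def epipolar_candidates(x1, F, points2, tol=1.0):
    """Indices of points2 within tol pixels of the epipolar line of x1."""
    l = F @ np.append(x1, 1.0)          # epipolar line coefficients in view 2
    l = l / np.hypot(l[0], l[1])        # normalize so distances are in pixels
    d = np.abs(points2 @ l[:2] + l[2])  # point-to-line distances
    return np.flatnonzero(d < tol)
```

    Counting how often several indices come back, as a function of seeding density and tolerance, is exactly the kind of reconstruction-quality statistic the paper's error analysis formalizes.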

    Defocusing digital particle image velocimetry and the three-dimensional characterization of two-phase flows

    Defocusing digital particle image velocimetry (DDPIV) is the natural extension of planar PIV techniques to the third spatial dimension. In this paper we give details of the defocusing optical concept by which scalar and vector information can be retrieved within large volumes. The optical model and computational procedures are presented with the specific purpose of mapping the number density, the size distribution, the associated local void fraction and the velocity of bubbles or particles in two-phase flows. Every particle or bubble is characterized in terms of size and of spatial coordinates, used to compute a true three-component velocity field by spatial three-dimensional cross-correlation. The spatial resolution and uncertainty limits are established through numerical simulations. The performance of the DDPIV technique is established in terms of number density and void fraction. Finally, the velocity evaluation methodology, using the spatial cross-correlation technique, is described and discussed in terms of velocity accuracy.
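    The core of spatial three-dimensional cross-correlation is that the displacement of a particle cloud between two reconstructed volumes shows up as the location of the correlation peak. A toy sketch of that idea (sizes and names are illustrative, and real interrogation windows would be far denser):

```python
import numpy as np

# Toy sketch of 3D cross-correlation velocimetry: the shift between two
# particle-occupancy volumes is read off as the cross-correlation peak.
def peak_displacement(vol_a, vol_b):
    """Integer 3D shift that best aligns vol_a with vol_b (FFT correlation)."""
    corr = np.fft.ifftn(np.fft.fftn(vol_a).conj() * np.fft.fftn(vol_b)).real
    shift = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap circular shifts into the signed range [-N/2, N/2)
    return tuple(s - n if s > n // 2 else s for s, n in zip(shift, corr.shape))

a = np.zeros((16, 16, 16)); a[4, 5, 6] = 1.0
b = np.roll(a, (1, 2, 3), axis=(0, 1, 2))  # particles displaced by (1, 2, 3)
print(peak_displacement(a, b))             # (1, 2, 3)
```

    Dividing the recovered displacement by the interframe time yields the three-component velocity for that interrogation volume.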

    Pose and Shape Reconstruction of a Noncooperative Spacecraft Using Camera and Range Measurements

    Recent interest in on-orbit proximity operations has pushed towards the development of autonomous GNC strategies. In this sense, optical navigation enables a wide variety of possibilities as it can provide information not only about the kinematic state but also about the shape of the observed object. Various mission architectures have been either tested in space or studied on Earth. The present study deals with on-orbit relative pose and shape estimation with the use of a monocular camera and a distance sensor. The goal is to develop a filter which estimates an observed satellite's relative position, velocity, attitude, and angular velocity, along with its shape, from the measurements obtained by a camera and a distance sensor mounted on board a chaser which is on a relative trajectory around the target. The filter's efficiency is demonstrated in a simulation on a virtual target object. The results of the simulation, even though relevant to a simplified scenario, show that the estimation process is successful and can be considered a promising strategy for a correct and safe docking maneuver.
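    The complementarity of the two sensors can be sketched with a simple measurement model (purely illustrative, not the paper's filter): the monocular camera constrains the bearing to the target while the range sensor supplies the missing scale, so together they fix the relative position the filter updates.

```python
import numpy as np

# Illustrative measurement model: camera -> bearing (unit direction),
# range sensor -> distance; together they determine relative position.
def measure(rel_pos):
    """Split a relative position into a bearing and a range measurement."""
    rng = np.linalg.norm(rel_pos)
    return rel_pos / rng, rng

def reconstruct(bearing, rng):
    """Recombine the two measurements into a relative position estimate."""
    return bearing * rng

bearing, rng = measure(np.array([3.0, 4.0, 0.0]))
print(reconstruct(bearing, rng))  # [3. 4. 0.]
```

    A real filter would of course fold noisy versions of both measurements into a recursive estimator over pose, velocities, and shape, but the geometric split above is the reason monocular-plus-range resolves the scale ambiguity of a camera alone.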

    SPLODE: Semi-Probabilistic Point and Line Odometry with Depth Estimation from RGB-D Camera Motion

    Active depth cameras suffer from several limitations, which cause incomplete and noisy depth maps, and may consequently affect the performance of RGB-D odometry. To address this issue, this paper presents a visual odometry method based on point and line features that leverages both measurements from a depth sensor and depth estimates from camera motion. Depth estimates are generated continuously by a probabilistic depth estimation framework for both types of features to compensate for the lack of depth measurements and inaccurate feature depth associations. The framework explicitly models the uncertainty of triangulating depth from both point and line observations to validate and obtain precise estimates. Furthermore, depth measurements are exploited by propagating them through a depth map registration module and using a frame-to-frame motion estimation method that considers 3D-to-2D and 2D-to-3D reprojection errors independently. Results on RGB-D sequences captured in large indoor and outdoor scenes, where depth sensor limitations are critical, show that the combination of depth measurements and estimates through our approach is able to overcome the absence and inaccuracy of depth measurements.
    Comment: IROS 201
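    The two error types named in the abstract can be written down concretely. A hedged sketch assuming a standard pinhole model (the intrinsics K and all names here are illustrative, not the paper's implementation): the 3D-to-2D error is measured in pixels after projection, while the 2D-to-3D error is measured in metres after back-projecting a pixel with its depth.

```python
import numpy as np

# Illustrative pinhole intrinsics (typical RGB-D camera values, assumed).
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

def err_3d_to_2d(P, uv):
    """Pixel error between the projection of 3D point P and observation uv."""
    p = K @ P
    return float(np.linalg.norm(p[:2] / p[2] - uv))

def err_2d_to_3d(uv, depth, P):
    """Metric error between the back-projected pixel (uv, depth) and point P."""
    Q = depth * (np.linalg.inv(K) @ np.append(uv, 1.0))
    return float(np.linalg.norm(Q - P))
```

    Treating the two residuals independently, as the abstract describes, lets features with reliable depth constrain the motion in metric space while depth-poor features still contribute image-space constraints.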