55 research outputs found

    Scaled Monocular Visual SLAM

    The fundamental shortcoming underlying monocular-based localization and mapping solutions (SfM, Visual SLAM) is the fact that the obtained maps and motion are solved only up to an unknown scale. Yet, the literature provides interesting solutions to scale estimation using cues from the focus or defocus of a camera. In this paper, we take advantage of the scale offered by image focus to properly initialize Visual SLAM with a correct metric scale. We provide experiments showing the success of the proposed method and discuss its limitations.
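The scale ambiguity this abstract refers to can be demonstrated in a few lines: scaling the scene structure and the camera translation by the same factor leaves every image measurement unchanged, so no monocular method can recover metric scale without an external cue (such as the focus cue used here). A minimal sketch with made-up intrinsics and pose:

```python
import numpy as np

# Hypothetical pinhole camera (all values are illustrative, not from the paper).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([[0.1], [0.0], [0.0]])

def project(K, R, t, X):
    """Project 3-D points X (3xN) through a pinhole camera with pose (R, t)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# A few points in front of the camera.
X = np.array([[0.0, 1.0, -1.0],
              [0.0, 0.5,  0.3],
              [4.0, 5.0,  6.0]])

s = 2.7  # arbitrary global scale factor
u1 = project(K, R, t, X)
u2 = project(K, R, s * t, s * X)  # scale structure AND translation together
assert np.allclose(u1, u2)        # identical images: scale is unobservable
```

Because the projections `u1` and `u2` coincide for any `s > 0`, image data alone cannot fix the scale, which is exactly the gap the focus-based initialization aims to fill.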

    GSLAM: Initialization-robust Monocular Visual SLAM via Global Structure-from-Motion

    Many monocular visual SLAM algorithms are derived from incremental structure-from-motion (SfM) methods. This work proposes a novel monocular SLAM method which integrates recent advances made in global SfM. In particular, we present two main contributions to visual SLAM. First, we solve the visual odometry problem by a novel rank-1 matrix factorization technique which is more robust to errors in map initialization. Second, we adopt a recent global SfM method for the pose-graph optimization, which leads to a multi-stage linear formulation and enables L1 optimization for better robustness to false loops. The combination of these two approaches generates more robust reconstruction and is significantly faster (4X) than recent state-of-the-art SLAM systems. We also present a new dataset recorded with ground-truth camera motion in a Vicon motion capture room, and compare our method to prior systems on it and on established benchmark datasets. Comment: 3DV 2017. Project Page: https://frobelbest.github.io/gsla
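The paper's specific visual-odometry factorization is not spelled out in this abstract, but the core tool it names, a rank-1 matrix factorization, can be illustrated generically: the best rank-1 approximation of a (noisy) measurement matrix in the least-squares sense is given by its top singular pair. A sketch under that generic reading (the matrix below is synthetic, a stand-in for whatever measurements GSLAM factorizes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic noisy rank-1 measurement matrix M ~ u v^T; the actual quantities
# factorized in the paper differ, this only shows the numerical technique.
u = rng.standard_normal((6, 1))
v = rng.standard_normal((8, 1))
M = u @ v.T + 0.01 * rng.standard_normal((6, 8))

# Best rank-1 approximation via the top singular value/vectors (Eckart-Young).
U, S, Vt = np.linalg.svd(M)
M1 = S[0] * np.outer(U[:, 0], Vt[0])

# The rank-1 model explains almost all of the noisy data.
assert np.linalg.norm(M - M1) / np.linalg.norm(M) < 0.1
```

The appeal of such formulations for odometry is robustness: the factorization averages over all measurements at once instead of depending on a fragile initial map estimate.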

    Position and motion estimation for visual robot control with planar targets

    This paper addresses two problems in visually-controlled robots. The first consists of positioning the end-effector of a robot manipulator on a plane of interest by using a monocular vision system. The problem amounts to estimating the transformation between the coordinates of an image point and its three-dimensional location, supposing that only the camera intrinsic parameters are known. The second problem consists of positioning the robot end-effector with respect to an object of interest free to move on a plane, and amounts to estimating the camera displacement in a stereo vision system in the presence of motion constraints. For these problems, some solutions are proposed through dedicated optimizations based on decoupling the effects of rotation and translation, and on an a-priori imposition of the degrees of freedom of the system. These solutions are illustrated via simulations and experiments. ©2009 ACA. The 7th Asian Control Conference (ASCC 2009), Hong Kong, China, 27-29 August 2009. In Proceedings of the Asian Control Conference, 2009, p. 372-37

    Estimation of the rigid transformation between two cameras: from the Fundamental Matrix vs. from Homographies

    3D reconstruction is an important step in the analytical calculation of the image Jacobian in a visual robot control process. In a two-camera stereo system, this reconstruction depends on knowledge of the rigid transformation between the two cameras, represented by the rotation and translation between them. These two parameters result from a calibration of the stereo pair, but they can also be recovered from the epipolar geometry of the system, or from a homography obtained from features belonging to a flat object in the scene. In this paper, we assess these latter two alternatives, taking as reference a Euclidean reconstruction with image distortion eliminated. We analyze three cases: when the distortion inherent in the camera is corrected, when it is not corrected, and when Gaussian noise is added to the feature detection.
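The homography route mentioned above rests on a standard relation: for a plane n^T X = d seen by two cameras related by (R, t), the induced image homography is H = K (R + t n^T / d) K^{-1}, so H carries the same rigid transformation that calibration or epipolar geometry would provide. A minimal numerical check with invented values for K, R, t, and the plane (none of these come from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical intrinsics and rigid transformation between the two cameras.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
a = 0.1  # small rotation about the optical axis
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [      0.0,        0.0, 1.0]])
t = np.array([[0.2], [0.05], [0.0]])

# Plane n^T X = d expressed in the first camera's frame.
n = np.array([[0.0], [0.0], [1.0]])
d = 5.0

# Random points lying on that plane (z = d).
X1 = np.vstack([rng.uniform(-1.0, 1.0, (2, 10)), np.full((1, 10), d)])

def pixels(K, X):
    x = K @ X
    return x[:2] / x[2]

# Plane-induced homography between the two images.
H = K @ (R + t @ n.T / d) @ np.linalg.inv(K)

x1 = pixels(K, X1)              # image of the plane points in camera 1
x2 = pixels(K, R @ X1 + t)      # image in camera 2 via the full rigid motion
x2_H = H @ np.vstack([x1, np.ones((1, 10))])
x2_H = x2_H[:2] / x2_H[2]
assert np.allclose(x2, x2_H)    # H reproduces the stereo geometry exactly
```

This is why decomposing H (or, alternatively, the essential matrix derived from the fundamental matrix) recovers R and t; the paper's contribution is comparing how well each route survives distortion and feature noise.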
