    FUSING VIDEO AND SPARSE DEPTH DATA IN STRUCTURE FROM MOTION

    This paper considers the geometric constraints involved in combining structure from motion with a sparse set of depth measurements. The goal is to improve motion estimation for autonomous navigation and to increase the fidelity of reconstructed 3D scene models. The system is implemented on an iRobot B21r robot equipped with a video camera and a planar laser range finder, which provides relatively accurate depth measurements for a small set of scene points. Using a probabilistic model of scene smoothness, the depth information is used to modify the classical epipolar error function so that data from both sensors is incorporated simultaneously. We present results from real-world experiments and compare different prior assumptions about the scene structure.
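    The abstract describes augmenting the classical epipolar error with residuals from the sparse laser depth measurements. As a rough illustration of that idea (not the paper's actual formulation), the sketch below combines a per-correspondence algebraic epipolar residual with a weighted depth residual for the subset of points that have laser measurements; the function name, weighting scheme, and NaN convention for missing depth are all assumptions.

    ```python
    import numpy as np

    def combined_cost(E, x1, x2, z_meas, z_pred, sigma_e=1.0, sigma_d=0.1):
        """Hypothetical combined error term, sketched from the abstract.

        E       : 3x3 essential matrix estimate
        x1, x2  : Nx3 homogeneous image correspondences in two views
        z_meas  : length-N laser depths (NaN where no measurement exists)
        z_pred  : length-N depths predicted by the current 3D reconstruction
        sigma_e, sigma_d : assumed noise scales for the two sensor terms
        """
        # Algebraic epipolar residual x2^T E x1 for every correspondence.
        epi = np.einsum('ij,jk,ik->i', x2, E, x1)
        cost = np.sum((epi / sigma_e) ** 2)
        # Depth residuals only where the planar laser provides a measurement;
        # this is where the sparse depth data constrains the reconstruction.
        mask = ~np.isnan(z_meas)
        cost += np.sum(((z_meas[mask] - z_pred[mask]) / sigma_d) ** 2)
        return cost
    ```

    Minimizing such a joint cost over motion and structure parameters would pull the reconstruction toward both the image correspondences and the laser depths, which is the kind of simultaneous incorporation of both sensors the abstract describes.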