Long-term motion estimation from images

Abstract

Cameras are promising sensors for estimating the motion of autonomous vehicles without GPS and for automatic scene modeling. Furthermore, a wide variety of shape-from-motion algorithms exist for simultaneously estimating the camera's six-degree-of-freedom motion and the three-dimensional structure of the scene, without prior assumptions about the camera's motion or an existing map of the scene. However, existing shape-from-motion algorithms do not address the problem of accumulated long-term drift in the estimated motion and scene structure, which is critical in autonomous vehicle applications. The paper introduces a proof-of-concept system that exploits a new tracker, the variable state dimension filter (VSDF), and SIFT keypoints to recognize previously visited locations and limit drift in long-term camera motion estimates. The performance of this system on an extended image sequence is described.
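
The abstract's drift-limiting idea relies on recognizing previously visited locations by matching SIFT keypoints. The sketch below is not the authors' implementation; it is a minimal illustration, assuming OpenCV's SIFT and brute-force matcher, of how a current image's descriptors might be compared against descriptors stored for earlier keyframes to detect a revisit. The ratio-test threshold, match-count threshold, and helper names are illustrative assumptions.

```python
# Minimal place-recognition sketch (assumed design, not the paper's system):
# match SIFT descriptors of the current image against stored keyframe
# descriptors and report a revisit when enough matches pass Lowe's ratio test.
import cv2

RATIO = 0.75        # ratio-test threshold (assumed value)
MIN_MATCHES = 40    # matches required to accept a revisit (assumed value)

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def describe(image_gray):
    """Detect SIFT keypoints and compute descriptors for one grayscale image."""
    keypoints, descriptors = sift.detectAndCompute(image_gray, None)
    return keypoints, descriptors

def recognize_revisit(current_desc, keyframe_descs):
    """Return the index of the best-matching stored keyframe, or None."""
    best_index, best_count = None, 0
    for i, stored_desc in enumerate(keyframe_descs):
        if current_desc is None or stored_desc is None:
            continue
        knn = matcher.knnMatch(current_desc, stored_desc, k=2)
        good = [pair[0] for pair in knn
                if len(pair) == 2 and pair[0].distance < RATIO * pair[1].distance]
        if len(good) > best_count:
            best_index, best_count = i, len(good)
    return best_index if best_count >= MIN_MATCHES else None
```

In a full system such as the one the abstract describes, a detected revisit would then be fed back into the motion estimator (here, the VSDF) to constrain accumulated drift; that correction step is not shown above.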
