7 research outputs found

    Vision-Based Navigation III: Pose and Motion from Omnidirectional Optical Flow and a Digital Terrain Map

    An algorithm for pose and motion estimation using corresponding features in omnidirectional images and a digital terrain map is proposed. In a previous paper, such an algorithm was presented for a regular camera. Using a Digital Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables recovery of the absolute position and orientation of the camera. To do this, the DTM is used to formulate a constraint between corresponding features in two consecutive frames. In this paper, these constraints are extended to handle non-central projection, as is the case with many omnidirectional systems. The use of omnidirectional data is shown to improve the robustness and accuracy of the navigation algorithm. The feasibility of the algorithm is established through lab experiments with two kinds of omnidirectional acquisition system: the first is a polydioptric camera, the second a catadioptric camera.
    Comment: 6 pages, 9 figures
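    A minimal sketch (not the paper's implementation) of how a terrain map can anchor such a constraint, assuming a central pinhole model and a regular-grid DTM rather than the non-central omnidirectional projection treated in the paper: the viewing ray of a feature in frame 1 is intersected with the DTM, and the resulting 3-D point is reprojected into frame 2 under a hypothesised pose; the residual is what a pose/motion estimator would minimise. The function names, marching step, intrinsics and toy geometry are all illustrative assumptions.

    import numpy as np

    def ray_dtm_intersection(origin, direction, dtm, cell_size, max_range=5000.0, step=1.0):
        # March along the viewing ray until it drops below the terrain surface.
        t = 0.0
        while t < max_range:
            p = origin + t * direction
            i, j = int(p[1] / cell_size), int(p[0] / cell_size)
            if 0 <= i < dtm.shape[0] and 0 <= j < dtm.shape[1] and p[2] <= dtm[i, j]:
                return p                      # first ground hit (world frame, metres)
            t += step
        return None

    def dtm_constraint_residual(p_world, R2, t2, K, uv2):
        # Reproject the DTM-anchored 3-D point into frame 2 under a pose
        # hypothesis (R2, t2) and compare with the observed feature uv2.
        p_cam = R2 @ (p_world - t2)
        uvw = K @ p_cam
        return uvw[:2] / uvw[2] - uv2

    # Toy usage: flat terrain at elevation 0, nadir-looking camera 100 m above it.
    dtm = np.zeros((200, 200))                                    # elevation grid (metres)
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed pinhole intrinsics
    cam1_pos = np.array([100.0, 100.0, 100.0])
    ray = np.array([0.3, 0.1, -1.0]); ray /= np.linalg.norm(ray)  # bearing of a feature in frame 1

    p = ray_dtm_intersection(cam1_pos, ray, dtm, cell_size=1.0)
    if p is not None:
        R2 = np.diag([1.0, -1.0, -1.0])                # world -> camera-2 rotation (z looks down)
        t2 = cam1_pos + np.array([5.0, 0.0, 0.0])      # hypothesised camera-2 position
        uv2_obs = np.array([330.0, 250.0])             # matched feature in frame 2 (pixels)
        print(dtm_constraint_residual(p, R2, t2, K, uv2_obs))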

    Omnidirectional Camera Model and Epipolar Geometry Estimation by RANSAC with bucketing

    We present a robust method of image-point sampling in RANSAC for a class of omnidirectional cameras (view angle above 180°) possessing central projection, to obtain simultaneous estimation of a camera model and epipolar geometry. We focus on a problem that arises in RANSAC-based estimation for omnidirectional images when most of the correspondences are established near the center of the view field. Such correspondences satisfy the camera model for almost any degree of image non-linearity. They are often selected by RANSAC as inliers, estimation stops prematurely, the most informative points near the border of the view field are not used, and an incorrect camera model is estimated. We show that this problem can be remedied by not using points near the center of the view-field circle for camera-model estimation and by controlling the point sampling in RANSAC. The camera calibration is done from image correspondences only, without any calibration objects or any assumption about the scene except rigidity. We demonstrate the method in real experiments with the high-quality but cheap and widely available Nikon FC-E8 fish-eye lens.
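    A minimal sketch, in the same spirit but not the authors' code, of one way to control the sampling as described: correspondences inside an inner fraction of the view-field circle are discarded, the remaining points are bucketed by polar angle, and each minimal sample draws from distinct buckets. The function name, bucket count, sample size and radii are illustrative assumptions.

    import numpy as np

    def bucketed_sample(points, centre, radius, n_sample=9, inner_frac=0.3, n_buckets=12, rng=None):
        # points: (N, 2) image coordinates of correspondences in one image.
        rng = np.random.default_rng() if rng is None else rng
        rel = points - centre
        r = np.linalg.norm(rel, axis=1)
        theta = np.arctan2(rel[:, 1], rel[:, 0])

        # 1. Discard points near the centre: they fit almost any image non-linearity.
        idx = np.flatnonzero(r > inner_frac * radius)

        # 2. Bucket the remaining points by polar angle.
        buckets = ((theta[idx] + np.pi) / (2 * np.pi) * n_buckets).astype(int) % n_buckets

        # 3. Draw the minimal sample from distinct, randomly ordered buckets.
        chosen = []
        for b in rng.permutation(n_buckets):
            members = idx[buckets == b]
            if len(members):
                chosen.append(rng.choice(members))
            if len(chosen) == n_sample:
                break
        return np.array(chosen)

    # Toy usage with synthetic correspondences on a 1024x1024 fish-eye image.
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 1024, size=(500, 2))
    sample = bucketed_sample(pts, centre=np.array([512.0, 512.0]), radius=512.0, rng=rng)
    print(sample)   # indices of correspondences to feed one RANSAC hypothesis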