
    Vision-Based Navigation III: Pose and Motion from Omnidirectional Optical Flow and a Digital Terrain Map

    An algorithm for pose and motion estimation using corresponding features in omnidirectional images and a digital terrain map is proposed. In a previous paper, such an algorithm was considered for a regular camera. Using a Digital Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables recovery of the absolute position and orientation of the camera. To do so, the DTM is used to formulate a constraint between corresponding features in two consecutive frames. In this paper, these constraints are extended to handle non-central projection, as is the case with many omnidirectional systems. The use of omnidirectional data is shown to improve the robustness and accuracy of the navigation algorithm. The feasibility of the algorithm is established through lab experiments with two kinds of omnidirectional acquisition systems: the first is a polydioptric camera, while the second is a catadioptric camera.
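    The DTM constraint is easy to illustrate. The sketch below (function names, the bilinear height interpolation, and the coarse ray-marching step are illustrative assumptions, not the paper's formulation) intersects a viewing ray from the first pose with the terrain and reprojects the recovered ground point into the second frame; the angular residual between the predicted and observed rays is the constraint that the pose and motion estimate must satisfy.

```python
import numpy as np

def dtm_height(dtm, cell, x, y):
    """Bilinearly interpolate terrain height at world (x, y).
    `dtm` is a 2D height grid with spacing `cell` (assumed layout)."""
    i, j = x / cell, y / cell
    i0, j0 = int(np.floor(i)), int(np.floor(j))
    di, dj = i - i0, j - j0
    return (dtm[i0, j0] * (1 - di) * (1 - dj) + dtm[i0 + 1, j0] * di * (1 - dj)
            + dtm[i0, j0 + 1] * (1 - di) * dj + dtm[i0 + 1, j0 + 1] * di * dj)

def intersect_ray_with_dtm(origin, direction, dtm, cell, step=1.0, max_range=5000.0):
    """March along a viewing ray until it drops below the terrain surface."""
    t = 0.0
    while t < max_range:
        p = origin + t * direction
        if p[2] <= dtm_height(dtm, cell, p[0], p[1]):
            return p  # approximate ground intersection
        t += step
    return None

def reprojection_residual(feat1_ray, feat2_ray, pose1, pose2, dtm, cell):
    """DTM constraint: the ground point seen along `feat1_ray` in frame 1
    should also lie along `feat2_ray` in frame 2 (unit rays in camera coords,
    poses given as camera-to-world rotation R and camera centre t)."""
    R1, t1 = pose1
    R2, t2 = pose2
    ground = intersect_ray_with_dtm(t1, R1 @ feat1_ray, dtm, cell)
    if ground is None:
        return None
    predicted = R2.T @ (ground - t2)      # ground point expressed in camera-2 frame
    predicted /= np.linalg.norm(predicted)
    return 1.0 - predicted @ feat2_ray    # angular misalignment of the two rays
```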

    Efficient generic calibration method for general cameras with single centre of projection

    Generic camera calibration is a non-parametric calibration technique that is applicable to any type of vision sensor. However, the standard generic calibration method was developed with generality as its goal and is therefore sub-optimal for the common case of cameras with a single centre of projection (e.g. pinhole, fisheye, and hyperboloidal catadioptric cameras). This paper proposes novel improvements to the standard generic calibration method for central cameras that reduce its complexity and improve its accuracy and robustness. The improvements are achieved by exploiting the geometric constraints that result from a single centre of projection. Input data for the algorithm are acquired using active grids, whose performance is characterised. A new linear estimation stage for the generic algorithm is proposed that incorporates classical pinhole calibration techniques, and it is shown to be significantly more accurate than the linear estimation stage of the standard method. A linear method for pose estimation is also proposed and evaluated against the existing polynomial method. Distortion correction and motion reconstruction experiments are conducted with real data from a hyperboloidal catadioptric sensor for both the standard and the proposed methods. The results show that the accuracy and robustness of the proposed method are superior to those of the standard method.
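    As a rough illustration of what a pinhole-style linear stage can contribute to a generic (per-pixel ray) model, the sketch below estimates a projection matrix from active-grid correspondences with the classical DLT and then back-projects each pixel to a ray through the recovered centre of projection. The function names and the use of a plain DLT are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Classical pinhole DLT: estimate a 3x4 projection matrix P from
    3D grid points and their observed pixel positions (>= 6 points)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        Xh = [X, Y, Z, 1.0]
        rows.append([*Xh, 0, 0, 0, 0, *(-u * np.asarray(Xh))])
        rows.append([0, 0, 0, 0, *Xh, *(-v * np.asarray(Xh))])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

def camera_centre(P):
    """The single centre of projection is the right null vector of P."""
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    return C[:3] / C[3]

def pixel_ray(P, u, v):
    """Back-project a pixel to a unit ray through the centre of projection,
    yielding one entry of the per-pixel ray table used by the generic model."""
    M = P[:, :3]
    d = np.linalg.solve(M, np.array([u, v, 1.0]))
    return d / np.linalg.norm(d)
```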

    Exploiting line metric reconstruction from non-central circular panoramas

    In certain non-central imaging systems, straight lines project onto a non-planar surface that encapsulates the 4 degrees of freedom of the 3D line. Consequently, the geometry of the 3D line can be recovered from a minimum of four image points. However, classical non-central catadioptric systems do not provide enough effective baseline for a practical implementation of this method. In this paper, we propose a multi-camera configuration resembling the circular panoramic model, which results in a particular non-central projection and allows the stitching of a non-central panorama. From a single panorama we obtain well-conditioned 3D reconstructions of lines, which are especially valuable in texture-less scenarios. No prior information about the direction or arrangement of the lines in the scene is assumed. The proposed method is evaluated on both synthetic and real images.
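    The four-point line recovery underlying this approach can be sketched with a few lines of linear algebra. Assuming (purely for illustration) that the four image points have been back-projected to rays in Plücker coordinates, the unknown 3D line must satisfy one incidence constraint per ray; candidate lines then live in the null space of a 4x6 system, subject to the Klein-quadric constraint.

```python
import numpy as np

def pluecker_from_point_dir(p, d):
    """Plücker coordinates (direction, moment) of the ray through p with direction d."""
    d = d / np.linalg.norm(d)
    return np.hstack([d, np.cross(p, d)])

def line_from_four_rays(rays):
    """Candidate 3D lines meeting four given rays (Plücker 6-vectors).
    Two Plücker lines (d1, m1) and (d2, m2) are incident iff d1.m2 + d2.m1 = 0."""
    A = np.array([np.hstack([r[3:], r[:3]]) for r in rays])  # 4x6 incidence system
    _, _, Vt = np.linalg.svd(A)
    B1, B2 = Vt[-1], Vt[-2]                                   # 2D null-space basis
    # Enforce the Klein-quadric constraint d.m = 0 on L = a*B1 + B2.
    q = lambda L: L[:3] @ L[3:]
    c2 = q(B1)
    c1 = B1[:3] @ B2[3:] + B2[:3] @ B1[3:]
    c0 = q(B2)
    lines = []
    for a in np.roots([c2, c1, c0]):
        if abs(a.imag) < 1e-9:
            L = a.real * B1 + B2
            lines.append(L / np.linalg.norm(L[:3]))           # normalise the direction part
    return lines
```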

    Fitting line projections in non-central catadioptric cameras with revolution symmetry

    Line-images in non-central cameras contain much richer information about the original 3D line than line projections in central cameras. In most non-central catadioptric cameras, the projection surface of a 3D line is a ruled surface encapsulating the complete information of the line, and the resulting line-image is a curve that carries its 4 degrees of freedom. This is a qualitative advantage over the central case, although extracting this curve is considerably more difficult. In this paper, we focus on the analytical description of line-images in non-central catadioptric systems with revolution symmetry. As a direct application, we present a method for automatic line-image extraction in calibrated conical and spherical catadioptric cameras. To design this method, we analytically derive the metric distance from a point to a line-image for non-central catadioptric systems. We also propose a measure, which we call the effective baseline, that quantifies the quality of a 3D line reconstruction obtained from the minimum number of rays. This measure is used to evaluate the random hypotheses of a robust scheme, reducing the number of trials needed in the process. The proposal is tested and evaluated in simulations and with both synthetic and real images.
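    A rough sketch of how such an effective-baseline test can prune hypotheses inside a robust extraction loop is given below. The baseline proxy (viewpoint spread perpendicular to the candidate line), the ray layout, and the helpers `fit_line` and `point_to_curve_dist` are all assumptions for illustration; the paper defines its own analytic distance and baseline measure.

```python
import numpy as np

def effective_baseline_proxy(ray_origins, line_direction):
    """A plausible proxy for the 'effective baseline' of a minimal sample:
    spread of the sample viewpoints perpendicular to the candidate 3D line.
    (Illustrative stand-in only; the paper's measure may differ.)"""
    d = line_direction / np.linalg.norm(line_direction)
    proj = [o - (o @ d) * d for o in ray_origins]  # drop the component along the line
    return max(np.linalg.norm(p - q) for p in proj for q in proj)

def extract_line_image(points, rays, fit_line, point_to_curve_dist,
                       n_trials=200, min_baseline=0.05, inlier_tol=1.0):
    """RANSAC-style line-image extraction: minimal 4-ray samples whose
    effective baseline is too small are discarded before scoring consensus.
    `rays[i]` is an (origin, direction) pair corresponding to image point i."""
    rng = np.random.default_rng(0)
    best_line, best_inliers = None, []
    for _ in range(n_trials):
        idx = rng.choice(len(points), size=4, replace=False)
        sample = [rays[i] for i in idx]
        line_dir, line_pt = fit_line(sample)              # candidate 3D line from 4 rays
        if line_dir is None:
            continue
        if effective_baseline_proxy([o for o, _ in sample], line_dir) < min_baseline:
            continue                                      # ill-conditioned sample, skip early
        inliers = [p for p in points
                   if point_to_curve_dist(p, line_dir, line_pt) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_line, best_inliers = (line_dir, line_pt), inliers
    return best_line, best_inliers
```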

    A Factorization Based Self-Calibration for Radially Symmetric Cameras

    The paper proposes a novel approach for planar self-calibration of radially symmetric cameras. We model the images of these cameras using the notions of a distortion center and concentric distortion circles around it. The rays corresponding to the pixels lying on a single distortion circle form a right circular cone. Each of these cones is associated with two unknowns: its optical center and its focal length (opening angle). In the central case, we consider all distortion circles to share the same optical center, whereas in the non-central case they have different optical centers lying on the same optical axis. Based on this model we provide a factorization-based self-calibration algorithm for planar scenes from dense image matches. Our formulation provides a rich set of constraints to validate the correctness of the distortion center. We also propose possible extensions of this algorithm.
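    The camera model itself is straightforward to make concrete. The sketch below (names and the lookup interface are assumptions, not the paper's notation) back-projects a pixel under the radially symmetric model: the azimuth is taken directly from the pixel's angle around the distortion centre, while the distortion circle it lies on supplies the cone opening angle and, in the non-central case, the position of the cone's vertex on the optical axis.

```python
import numpy as np

def pixel_to_ray(u, v, centre, cone_of_radius):
    """Back-project a pixel under a radially symmetric camera model.
    `centre` is the distortion centre; `cone_of_radius(r)` returns the
    calibrated (opening_angle, axial_offset) of the distortion circle of
    radius r -- the lookup that the self-calibration would have to produce."""
    cx, cy = centre
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx)                   # azimuth is preserved by the symmetry
    theta, z0 = cone_of_radius(r)              # cone opening angle and vertex on the axis
    direction = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
    origin = np.array([0.0, 0.0, z0])          # z0 is constant in the central case
    return origin, direction
```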

    04251 -- Imaging Beyond the Pinhole Camera

    From 13.06.04 to 18.06.04, the Dagstuhl Seminar 04251 "Imaging Beyond the Pinhole Camera: 12th Seminar on Theoretical Foundations of Computer Vision" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.