31 research outputs found

    Fitting line projections in non-central catadioptric cameras with revolution symmetry

    Get PDF
    Line-images in non-central cameras contain much richer information about the original 3D line than line projections in central cameras. The projection surface of a 3D line in most non-central catadioptric cameras is a ruled surface, encapsulating the complete information of the 3D line. The resulting line-image is a curve which contains the 4 degrees of freedom of the 3D line. This is a qualitative advantage over the central case, although extracting this curve is quite difficult. In this paper, we focus on the analytical description of line-images in non-central catadioptric systems with revolution symmetry. As a direct application, we present a method for automatic line-image extraction in calibrated conical and spherical catadioptric cameras. To design this method we have analytically solved the metric distance from a point to a line-image for non-central catadioptric systems. We also propose a distance, which we call the effective baseline, measuring the quality of the reconstruction of a 3D line from the minimum number of rays. This measure is used to evaluate the random hypotheses of a robust scheme, reducing the number of trials in the process. The proposal is tested and evaluated in simulations and with both synthetic and real images.
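
    The minimal-case reconstruction behind the effective-baseline measure can be illustrated with standard Plücker line geometry: a 3D line meeting four generic projection rays is found by intersecting the null space of four linear incidence constraints with the Klein quadric. The sketch below is a generic implementation of that textbook construction, not the authors' code; the function names and tolerances are ours.

```python
import numpy as np

def pluecker(p, d):
    """Pluecker coordinates (d, m) of the line through point p with direction d."""
    d = d / np.linalg.norm(d)
    return np.hstack([d, np.cross(p, d)])

def side(L1, L2):
    """Reciprocal product of two Pluecker lines; it vanishes iff they intersect."""
    return L1[:3] @ L2[3:] + L1[3:] @ L2[:3]

def lines_meeting_four_rays(rays):
    """Return the (up to two) 3D lines intersecting four given rays.

    Each incidence side(L, R_i) = 0 is linear in L; four of them leave a
    2D null space, and the Klein quadric side(L, L) = 0 selects the solutions.
    """
    A = np.stack([np.hstack([R[3:], R[:3]]) for R in rays])   # 4 x 6
    _, _, Vt = np.linalg.svd(A)
    u, v = Vt[-2], Vt[-1]                                      # null-space basis
    # L = a*u + v  =>  a^2*side(u,u) + 2a*side(u,v) + side(v,v) = 0
    su, suv, sv = side(u, u), side(u, v), side(v, v)
    disc = suv**2 - su * sv
    if disc < 0 or abs(su) < 1e-12:
        return []                    # degenerate configuration: skip this attempt
    return [((-suv + s * np.sqrt(disc)) / su) * u + v for s in (+1, -1)]
```

    A robust scheme like the one described above would then score each candidate against the remaining image points and against the effective-baseline criterion before accepting it.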

    A Fisher-Rao metric for paracatadioptric images of lines

    Get PDF
    In a central paracatadioptric imaging system a perspective camera takes an image of a scene reflected in a paraboloidal mirror. A 360° field of view is obtained, but the image is severely distorted. In particular, straight lines in the scene project to circles in the image. These distortions make it difficult to detect projected lines using standard image processing algorithms. The distortions are removed using a Fisher-Rao metric defined on the space of projected lines in the paracatadioptric image. The space of projected lines is divided into subsets such that on each subset the Fisher-Rao metric is closely approximated by the Euclidean metric. Each subset is sampled at the vertices of a square grid and values are assigned to the sampled points using an adaptation of the trace transform. The result is a set of digital images to which standard image processing algorithms can be applied. The effectiveness of this approach to line detection is illustrated using two algorithms, both based on the Sobel edge operator. The task of line detection is reduced to the task of finding isolated peaks in a Sobel image. An experimental comparison is made between these two algorithms and a third algorithm taken from the literature, based on the Hough transform.
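
    As a rough illustration of the final step described above (finding isolated peaks in a Sobel image), the sketch below computes the Sobel gradient magnitude and keeps strong, isolated local maxima. It stands in for that last stage only; constructing the Fisher-Rao-resampled parameter-space images that precede it is the substance of the paper and is not reproduced here. Names and thresholds are ours.

```python
import cv2
import numpy as np
from scipy.ndimage import maximum_filter

def sobel_peaks(img, peak_thresh=0.5, window=15):
    """Return (row, col) positions of isolated peaks in the Sobel magnitude.

    In the paper's pipeline the input would be one of the resampled
    parameter-space images; here any float grayscale image works.
    """
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12
    # A pixel is a peak if it equals the local maximum of its window
    # and is strong enough relative to the global maximum.
    local_max = maximum_filter(mag, size=window)
    peaks = (mag == local_max) & (mag > peak_thresh)
    return np.argwhere(peaks)
```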

    OmniSCV: An omnidirectional synthetic image generator for computer vision

    Get PDF
    Omnidirectional and 360° images are becoming widespread in industry and in consumer society, causing omnidirectional computer vision to gain attention. Their wide field of view allows the gathering of a great amount of information about the environment from a single image. However, the distortion of these images requires the development of specific algorithms for their treatment and interpretation. Moreover, a large number of images is essential for the correct training of learning-based computer vision algorithms. In this paper, we present a tool for generating datasets of omnidirectional images with semantic and depth information. These images are synthesized from a set of captures acquired in a realistic virtual environment for Unreal Engine 4 through an interface plugin. We cover a variety of well-known projection models such as equirectangular and cylindrical panoramas, different fish-eye lenses, catadioptric systems, and empiric models. Furthermore, we include in our tool photorealistic non-central projection systems such as non-central panoramas and non-central catadioptric systems. To the best of our knowledge, this is the first reported tool in the literature for generating photorealistic non-central images. Moreover, since the omnidirectional images are synthesized, we provide pixel-wise semantic and depth information as well as perfect knowledge of the calibration parameters of the cameras. This allows the creation of ground-truth information with pixel precision for training learning algorithms and testing 3D vision approaches. To validate the proposed tool, different computer vision algorithms are tested: line extraction from dioptric and catadioptric central images, 3D layout recovery and SLAM using equirectangular panoramas, and 3D reconstruction from non-central panoramas.
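
    The ground-truth geometry such a tool can export is fully determined by the projection model; for an equirectangular panorama, for instance, the pixel-to-ray mapping is just spherical coordinates. Below is a minimal sketch of that mapping under one common axis convention (the function name and conventions are ours, not code from OmniSCV):

```python
import numpy as np

def equirect_ray(u, v, width, height):
    """Unit ray direction for pixel (u, v) of an equirectangular panorama,
    with y up, longitude spanning [-pi, pi) left to right and latitude
    [pi/2, -pi/2] top to bottom (one convention among several)."""
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])
```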

    Exploiting line metric reconstruction from non-central circular panoramas

    Get PDF
    In certain non-central imaging systems, a straight line is projected via a non-planar surface encapsulating the 4 degrees of freedom of the 3D line. Consequently, the geometry of the 3D line can be recovered from a minimum of four image points. However, classical non-central catadioptric systems do not provide enough effective baseline for a practical implementation of the method. In this paper we propose a multi-camera system configuration resembling the circular panoramic model which results in a particular non-central projection allowing the stitching of a non-central panorama. From a single panorama we obtain a well-conditioned 3D reconstruction of lines, which is especially interesting in texture-less scenarios. No previous information about the direction or arrangement of the lines in the scene is assumed. The proposed method is evaluated on both synthetic and real images.
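
    The non-central circular panoramic model can be sketched directly: each image column is captured from a different viewpoint on a circle, so every pixel carries its own ray origin, which is what supplies the effective baseline for line reconstruction. The parameterization below uses assumed conventions (radial viewing direction, linear vertical angle) and is illustrative, not the paper's exact model.

```python
import numpy as np

def circular_panorama_ray(u, v, width, height, radius, vfov=np.pi / 2):
    """Origin and unit direction of the projection ray for pixel (u, v).

    Each column is captured from its own viewpoint on a circle of the given
    radius, looking radially outward; the vertical angle varies linearly
    over the assumed field of view vfov.
    """
    phi = 2.0 * np.pi * u / width            # azimuth of the column's viewpoint
    theta = (0.5 - v / height) * vfov        # elevation within the column
    origin = radius * np.array([np.cos(phi), np.sin(phi), 0.0])
    direction = np.array([np.cos(theta) * np.cos(phi),
                          np.cos(theta) * np.sin(phi),
                          np.sin(theta)])
    return origin, direction
```

    Four such rays sampled along the image curve of a projected line feed directly into a minimal line solver like the one sketched earlier in this listing.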

    Parallel Lines for Calibration of Non-Central Conical Catadioptric Cameras

    Get PDF
    In this paper we propose a new calibration method for non-central catadioptric cameras that use a conical mirror. The method uses parallel lines, extracted from a single omnidirectional image, instead of the typical checkerboard, to obtain the calibration parameters of the system.
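
    The forward model underlying any conical-mirror calibration is plain reflection geometry: a projection ray is mirrored about the cone's surface normal at the hit point. A minimal sketch of those two ingredients, under our own conventions (cone with apex at the origin and axis +z), not the paper's:

```python
import numpy as np

def reflect(d, n):
    """Mirror direction d about a unit surface normal n."""
    return d - 2 * (d @ n) * n

def cone_normal(p, half_angle):
    """Unit outward normal of the cone r = z * tan(half_angle)
    at a surface point p (assumed to lie on the cone)."""
    r = np.hypot(p[0], p[1])
    n = np.array([p[0] / r, p[1] / r, -np.tan(half_angle)])
    return n / np.linalg.norm(n)
```

    Calibration then amounts to adjusting the mirror and camera parameters so that the reflected rays of the extracted image curves are consistent with sets of parallel 3D lines.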

    Atlanta scaled layouts from non-central panoramas

    Get PDF
    In this work we present a novel approach for 3D layout recovery of indoor environments using a non-central acquisition system. From a single non-central panorama, full and scaled 3D lines can be independently recovered by geometric reasoning without additional data or scale assumptions. However, the sensitivity to noise and the complex geometric modeling of these panoramas have left them, and the algorithms they require, little investigated. Our new pipeline extracts the boundaries of the structural lines of an indoor environment with a neural network and exploits the properties of non-central projection systems in a new geometric processing stage to recover scaled 3D layouts. The results of our experiments show that we improve on state-of-the-art methods for layout recovery and line extraction in non-central projection systems. We completely solve the problem in both Manhattan and Atlanta environments, handling occlusions and retrieving the metric scale of the room without extra measurements. To the best of the authors' knowledge, ours is the first work using deep learning on non-central panoramas and recovering scaled layouts from single panoramas.
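
    Once scaled 3D structural lines are available, assembling a layout reduces to elementary line geometry, e.g. locating corners where two wall boundaries (nearly) meet. The helper below is a generic building block for that step, not part of the paper's pipeline:

```python
import numpy as np

def nearest_point_between_lines(p1, d1, p2, d2):
    """Midpoint of the shortest segment between lines p1 + t*d1 and p2 + s*d2.

    Noisy reconstructed lines rarely intersect exactly, so the midpoint of
    their closest approach is a reasonable corner estimate.
    """
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    den = a * c - b * b                 # zero iff the lines are parallel
    t = (b * e - c * d) / den
    s = (a * e - b * d) / den
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))
```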

    Enhancing 3D Visual Odometry with Single-Camera Stereo Omnidirectional Systems

    Full text link
    We explore low-cost solutions for efficiently improving the 3D pose estimation problem of a single camera moving in an unfamiliar environment. The visual odometry (VO) task -- as it is called when using computer vision to estimate egomotion -- is of particular interest to mobile robots as well as humans with visual impairments. The payload capacity of small robots like micro-aerial vehicles (drones) requires the use of portable perception equipment, which is constrained by size, weight, energy consumption, and processing power. Using a single camera as the passive sensor for the VO task satisfies these requirements, and it motivates the solutions proposed in this thesis. To deliver the portability goal with a single off-the-shelf camera, we take two approaches: the first, and the most extensively studied here, revolves around an unorthodox camera-mirrors configuration (catadioptrics) achieving a stereo omnidirectional system (SOS); the second relies on expanding the visual features from the scene into higher dimensionalities to track the pose of a conventional camera in a photogrammetric fashion. The first goal has many interdependent challenges, which we address as part of this thesis: SOS design, projection model, adequate calibration procedure, and application to VO. We show several practical advantages of the single-camera SOS due to its complete 360-degree stereo views, which other conventional 3D sensors lack because of their limited field of view. Since our omnidirectional stereo (omnistereo) views are captured by a single camera, a truly instantaneous pair of panoramic images is available for 3D perception tasks. Finally, we address the VO problem as a direct multichannel tracking approach, which increases the pose estimation accuracy of the baseline method (i.e., using only grayscale or color information) with photometric error minimization at the heart of the “direct” tracking algorithm. Currently, this solution has been tested on standard monocular cameras, but it could also be applied to an SOS. We believe the challenges we attempted to solve have not previously been considered with the level of detail needed for successfully performing VO with a single camera as the ultimate goal, in both real-life and simulated scenes.
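
    The core of the direct multichannel tracking idea is the residual: warp the current image toward the reference and stack the per-channel photometric differences, then minimize over the pose. The toy below substitutes a pure 2D translation for the full 6-DoF pose through the SOS model, just to show the shape of the objective; everything here is our simplification, not the thesis implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.ndimage import map_coordinates

def photometric_residual(shift, ref, cur):
    """Stacked multichannel photometric error for a 2D translation model.

    ref, cur: float images of shape (H, W, C). The current image is warped
    by shift = (dx, dy) and compared channel-by-channel to the reference.
    """
    h, w = ref.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    res = []
    for c in range(ref.shape[2]):
        warped = map_coordinates(cur[..., c], [yy + shift[1], xx + shift[0]],
                                 order=1, mode='nearest')
        res.append((warped - ref[..., c]).ravel())
    return np.concatenate(res)

# Example: recover a small shift between two images
# (scipy uses a finite-difference Jacobian by default).
# est = least_squares(photometric_residual, np.zeros(2), args=(ref, cur)).x
```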

    Plane-Based Calibration of Central Catadioptric Cameras

    Get PDF
    We present a novel calibration technique for all central catadioptric cameras using images of planar grids. We adopt the well-known sphere camera model to describe the catadioptric projection. We show that, using the so-called lifted coordinates, a linear relation mapping the grid points to the corresponding points on the image plane can be written as a 6 × 6 matrix H_cata, which acts like the classical 3 × 3 homography for perspective cameras. We show how to compute the image of the absolute conic (IAC) from at least 3 homographies and how to recover from it the intrinsic parameters of the catadioptric camera. In the case of paracatadioptric cameras one such homography is enough to estimate the IAC, thus allowing calibration from a single image.
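
    The lifted coordinates are the degree-2 Veronese monomials of a homogeneous point, and H_cata can be estimated linearly in that space in the same DLT fashion as an ordinary homography. The sketch below uses one common monomial ordering and an unconstrained least-squares estimate; it is illustrative only, and the paper's method additionally exploits the internal structure of valid lifted matrices.

```python
import numpy as np

def lift(q):
    """Degree-2 Veronese lifting of a homogeneous point q = (x, y, z)."""
    x, y, z = q
    return np.array([x*x, x*y, y*y, x*z, y*z, z*z])

def estimate_H_cata(grid_pts, img_pts):
    """Unconstrained DLT estimate of the 6 x 6 lifted map from point matches,
    using lift(img) ~ H @ lift(grid) up to scale."""
    rows = []
    for p, q in zip(grid_pts, img_pts):
        lp, lq = lift(p), lift(q)
        # Equality up to scale: lq[i]*(H lp)[j] - lq[j]*(H lp)[i] = 0 for i < j.
        for i in range(6):
            for j in range(i + 1, 6):
                row = np.zeros(36)           # H flattened row-major
                row[j*6:(j+1)*6] += lq[i] * lp
                row[i*6:(i+1)*6] -= lq[j] * lp
                rows.append(row)
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(6, 6)
```

    With 36 entries up to scale and at most 5 independent equations per match, this unconstrained estimate needs at least 7 point correspondences.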