
    Catadioptric stereo-vision system using a spherical mirror

    In the computer vision field, the reconstruction of target surfaces is usually achieved with 3D optical scanners that integrate digital cameras and light emitters. However, these solutions are limited by a narrow field of view, which requires multiple acquisitions from different viewpoints to reconstruct complex free-form geometries. Combining mirrors and lenses (catadioptric systems) can overcome this issue. In this work, a stereo catadioptric optical scanner has been developed by assembling two digital cameras, a spherical mirror and a multimedia white-light projector. The adopted configuration defines a non-single-viewpoint system, so a non-central catadioptric camera model has been developed. An analytical solution to compute the projection of a scene point onto the image plane (forward projection) and vice versa (backward projection) is presented. The proposed optical setup enables omnidirectional stereo vision, allowing the reconstruction of target surfaces with a single acquisition. Preliminary results, obtained by measuring a hollow specimen, demonstrate the effectiveness of the described approach.
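The backward projection described in this abstract reduces, at its core, to intersecting a camera ray with the spherical mirror and applying the law of reflection. The sketch below illustrates that geometric step only; the sphere centre, radius and ray values are made up for illustration and are not the paper's calibration parameters.

```python
import numpy as np

def reflect_ray_on_sphere(origin, direction, centre, radius):
    """Intersect a ray with a spherical mirror and return the reflected ray."""
    d = direction / np.linalg.norm(direction)
    oc = origin - centre
    # Solve |origin + t*d - centre|^2 = radius^2 (quadratic in t, a = 1).
    b = 2.0 * np.dot(oc, d)
    c = np.dot(oc, oc) - radius**2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the mirror
    t = (-b - np.sqrt(disc)) / 2.0        # nearest intersection
    p = origin + t * d                    # reflection point on the mirror
    n = (p - centre) / radius             # outward surface normal
    r = d - 2.0 * np.dot(d, n) * n        # law of reflection
    return p, r

# Example: camera at the origin looking along +z at a sphere on the axis.
p, r = reflect_ray_on_sphere(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                             np.array([0.0, 0.0, 5.0]), 1.0)
```

Because the reflected rays do not meet in a single point, the system is non-central, which is why the authors need a dedicated non-central camera model rather than a standard single-viewpoint one.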

    04251 -- Imaging Beyond the Pinhole Camera

    From 13.06.04 to 18.06.04, the Dagstuhl Seminar 04251 "Imaging Beyond the Pin-hole Camera. 12th Seminar on Theoretical Foundations of Computer Vision" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs)

    We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics, such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to suit other catadioptric-based omnistereo vision systems under different circumstances.
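The "triangulation of back-projected rays" mentioned above is commonly done by taking the midpoint of the shortest segment between the two rays, since noisy rays rarely intersect exactly. A minimal sketch of that step, under that assumption (the paper may use a different estimator), could look like:

```python
import numpy as np

def triangulate_rays(p1, d1, p2, d2):
    """Midpoint triangulation of two rays p_i + t_i * d_i."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    b = np.dot(d1, d2)           # cosine of the angle between the rays
    d = np.dot(d1, w0)
    e = np.dot(d2, w0)
    denom = 1.0 - b * b
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel")
    t1 = (b * e - d) / denom
    t2 = (e - b * d) / denom
    q1 = p1 + t1 * d1            # closest point on ray 1
    q2 = p2 + t2 * d2            # closest point on ray 2
    return 0.5 * (q1 + q2)       # midpoint of the common perpendicular

# Two rays that meet exactly at (0, 0, 1):
x = triangulate_rays(np.array([-1.0, 0, 0]), np.array([1.0, 0, 1]),
                     np.array([ 1.0, 0, 0]), np.array([-1.0, 0, 1]))
```

The distance between `q1` and `q2` also gives a natural input to the uncertainty model: the farther apart the closest points, the less reliable the triangulated depth.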

    Efficient generic calibration method for general cameras with single centre of projection

    Generic camera calibration is a non-parametric calibration technique that is applicable to any type of vision sensor. However, the standard generic calibration method was developed with the goal of generality and is therefore sub-optimal for the common case of cameras with a single centre of projection (e.g. pinhole, fisheye, hyperboloidal catadioptric). This paper proposes novel improvements to the standard generic calibration method for central cameras that reduce its complexity and improve its accuracy and robustness. Improvements are achieved by taking advantage of the geometric constraints resulting from a single centre of projection. Input data for the algorithm is acquired using active grids, the performance of which is characterised. A new linear estimation stage for the generic algorithm is proposed, incorporating classical pinhole calibration techniques, and it is shown to be significantly more accurate than the linear estimation stage of the standard method. A linear method for pose estimation is also proposed and evaluated against the existing polynomial method. Distortion correction and motion reconstruction experiments are conducted with real data for a hyperboloidal catadioptric sensor for both the standard and proposed methods. Results show the accuracy and robustness of the proposed method to be superior to those of the standard method.
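In the generic (non-parametric) model this abstract builds on, calibration amounts to a lookup table that maps each pixel to a 3D ray; for a central camera, all rays pass through one centre of projection. The sketch below only illustrates that representation by filling the table from an ideal pinhole model with made-up intrinsics (`f`, `cx`, `cy`); a real generic calibration estimates these rays from observations instead.

```python
import numpy as np

def build_ray_table(width, height, f, cx, cy):
    """Per-pixel unit ray table for an ideal central (pinhole) camera."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    rays = np.stack([(u - cx) / f,            # x component of each ray
                     (v - cy) / f,            # y component
                     np.ones_like(u, float)], # z = 1 before normalisation
                    axis=-1)
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)

rays = build_ray_table(640, 480, f=500.0, cx=320.0, cy=240.0)

# Back-projecting the principal point gives the optical axis:
axis = rays[240, 320]
```

The single-centre constraint exploited by the paper is exactly what makes such a table redundant enough to be estimated with linear, pinhole-style techniques rather than a fully general ray-per-pixel fit.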

    External multi-modal imaging sensor calibration for sensor fusion: A review

    Multi-modal data fusion has gained popularity due to its diverse applications, leading to an increased demand for external sensor calibration. Despite several proven calibration solutions, they fail to fully satisfy all the evaluation criteria, including accuracy, automation, and robustness. Thus, this review aims to contribute to this growing field by examining recent research on multi-modal imaging sensor calibration and proposing future research directions. The literature review comprehensively explains the various characteristics and conditions of different multi-modal external calibration methods, including traditional motion-based calibration and feature-based calibration. Target-based calibration and targetless calibration are two types of feature-based calibration, which are discussed in detail. Furthermore, the paper highlights systematic calibration as an emerging research direction. Finally, this review identifies crucial factors for evaluating calibration methods and provides a comprehensive discussion of their applications, with the aim of providing valuable insights to guide future research directions. Future research should focus primarily on the capability of online targetless calibration and systematic multi-modal sensor calibration. Ministerio de Ciencia, Innovación y Universidades | Ref. PID2019-108816RB-I0
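The target-based external calibration this review surveys ultimately estimates a rigid transform between two sensor frames. One classical closed-form core for that step, once corresponding 3D target points are extracted in both sensors, is the Kabsch/Procrustes solution via SVD; the sketch below shows that building block with synthetic data (it is not a specific method from the review).

```python
import numpy as np

def rigid_transform(src, dst):
    """Return R, t such that dst ~= R @ src + t (points are N x 3)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ S @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic check: rotate points 90 degrees about z and translate.
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
t_true = np.array([0.5, -1.0, 2.0])
src = np.random.default_rng(0).normal(size=(20, 3))
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
```

Targetless methods replace the explicit corresponding points with features mined from the scene itself, but the extrinsic parameters being estimated are the same `R` and `t`.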

    Comparing radial and tangential geometries for cylindrical panoramas

    Cameras generally have a field of view only large enough to capture a portion of their surroundings. The goal of immersion is to replace many of your senses with virtual ones, so that the virtual environment feels as real as possible. Panoramic cameras are used to capture the entire 360° view, also known as a panoramic image. Virtual reality makes use of these panoramic images to provide a more immersive experience compared to seeing images on a 2D screen. This thesis, in the field of computer vision, focuses on establishing a multi-camera geometry to generate a cylindrical panorama image and on implementing it with the cheapest cameras possible. The specific goal of this project is to propose a camera geometry that reduces artifact problems related to parallax in the panorama image. We present a new approach to cylindrical panoramic imaging from multiple cameras placed evenly around a circle. Instead of looking outward, which is the traditional "radial" configuration, we propose to make the optical axes tangent to the camera circle, a "tangential" configuration. Besides an analysis and comparison of the radial and tangential geometries, we provide an experimental setup with real panoramas obtained in realistic conditions.
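The two rig geometries compared in this thesis differ only in how each camera's optical axis relates to the circle it sits on. A minimal sketch of both configurations (camera count and circle radius are illustrative values, not the thesis setup):

```python
import numpy as np

def rig_geometry(n_cams, rho, tangential=False):
    """Positions and optical-axis directions of n cameras on a circle."""
    theta = 2 * np.pi * np.arange(n_cams) / n_cams
    pos = rho * np.stack([np.cos(theta), np.sin(theta)], axis=-1)
    if tangential:
        # Axis tangent to the circle (perpendicular to the radius).
        axes = np.stack([-np.sin(theta), np.cos(theta)], axis=-1)
    else:
        # Traditional "radial" configuration: axis points outward.
        axes = np.stack([np.cos(theta), np.sin(theta)], axis=-1)
    return pos, axes

pos, ax_rad = rig_geometry(8, rho=0.1)
_,   ax_tan = rig_geometry(8, rho=0.1, tangential=True)

# In the tangential configuration each axis is orthogonal to its radius:
dots = np.sum(pos * ax_tan, axis=-1)
```

Turning the axes tangent changes where adjacent fields of view overlap relative to the camera baselines, which is what drives the parallax-artifact behaviour the thesis analyses.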

    Omnidirectional Stereo Vision for Autonomous Vehicles

    Environment perception with cameras is an important requirement for many applications of autonomous vehicles and robots. This work presents a stereoscopic omnidirectional camera system for autonomous vehicles which resolves the problem of a limited field of view and provides a 360° panoramic view of the environment. We present a new projection model for these cameras and show that the camera setup overcomes major drawbacks of traditional perspective cameras in many applications.
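To make the 360° panoramic view concrete, one common projection used for such panoramas (not necessarily the paper's new model) maps a 3D point to image coordinates by its azimuth around the vertical axis and its height on a cylinder of assumed focal length `f`; all names here are illustrative.

```python
import numpy as np

def project_cylindrical(X, Y, Z, f, width):
    """Map a 3D point to (u, v) on a cylindrical panorama of given width."""
    azimuth = np.arctan2(X, Z)                   # horizontal viewing angle
    u = (azimuth + np.pi) / (2 * np.pi) * width  # full 360 deg across width
    v = f * Y / np.hypot(X, Z)                   # height on the cylinder
    return u, v

# A point straight ahead lands in the middle column, at the horizon row:
u, v = project_cylindrical(0.0, 0.0, 1.0, f=200.0, width=2048)
```

Unlike a perspective projection, the azimuth term has no singularity behind the camera, which is what removes the limited-field-of-view problem the abstract refers to.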
