Omnidirectional Stereo Vision for Autonomous Vehicles
Environment perception with cameras is an important requirement for many applications in autonomous vehicles and robotics. This work presents a stereoscopic omnidirectional camera system for autonomous vehicles that resolves the problem of a limited field of view and provides a 360° panoramic view of the environment. We present a new projection model for these cameras and show that the camera setup overcomes major drawbacks of traditional perspective cameras in many applications.
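The abstract above mentions a new projection model for omnidirectional cameras without detailing it. As a rough illustration of how 360° cameras are commonly modeled, the following sketch projects a 3D viewing ray onto an equirectangular panorama; it is a generic textbook mapping, not the projection model proposed in the paper.

```python
import numpy as np

def ray_to_equirect(ray, width, height):
    """Project a 3D direction onto an equirectangular (360-degree) image.

    Longitude maps to the horizontal axis, latitude to the vertical axis.
    This is a generic spherical mapping for illustration only.
    """
    x, y, z = ray / np.linalg.norm(ray)
    lon = np.arctan2(x, z)   # -pi .. pi, 0 along the +z axis
    lat = np.arcsin(y)       # -pi/2 .. pi/2
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return u, v
```

A ray along the optical axis (`[0, 0, 1]`) lands at the image centre, and rotating the ray sweeps the full horizontal extent of the panorama, which is what removes the limited-field-of-view problem of a perspective camera.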
3D Scene Geometry Estimation from 360 Imagery: A Survey
This paper provides a comprehensive survey on pioneer and state-of-the-art 3D
scene geometry estimation methodologies based on single, two, or multiple
images captured under the omnidirectional optics. We first revisit the basic
concepts of the spherical camera model, and review the most common acquisition
technologies and representation formats suitable for omnidirectional (also
called 360, spherical or panoramic) images and videos. We then survey
monocular layout and depth inference approaches, highlighting the recent
advances in learning-based solutions suited for spherical data. The classical
stereo matching is then revisited in the spherical domain, where methodologies
for detecting and describing sparse and dense features become crucial. The
stereo matching concepts are then extrapolated for multiple view camera setups,
categorizing them among light fields, multi-view stereo, and structure from
motion (or visual simultaneous localization and mapping). We also compile and
discuss commonly adopted datasets and figures of merit indicated for each
purpose and list recent results for completeness. We conclude this paper by
pointing out current and future trends.
Comment: Published in ACM Computing Surveys
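The spherical camera model that the survey revisits can be illustrated by its back-projection: mapping an equirectangular pixel to a unit viewing ray on the sphere. The sketch below shows the standard textbook model, not code from the survey.

```python
import numpy as np

def equirect_to_ray(u, v, width, height):
    """Back-project an equirectangular pixel to a unit ray (spherical camera model).

    Inverse of the usual longitude/latitude mapping; a generic sketch.
    """
    lon = (u / width - 0.5) * 2 * np.pi    # horizontal axis -> longitude
    lat = (v / height - 0.5) * np.pi       # vertical axis -> latitude
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])
```

Monocular depth or layout inference on spherical data ultimately assigns a distance along each such ray, which is why the survey treats the spherical model as the common foundation for all the methods it covers.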
Comparison of radial and tangential geometries for cylindrical panoramas
Cameras generally have a field of view only large enough to capture a portion of their surroundings. The goal of immersion is to replace many of the viewer's senses with virtual ones, so that the virtual environment feels as real as possible. Panoramic cameras are used to capture the entire 360° view, producing so-called panoramic images. Virtual reality makes use of these panoramic images to provide a more immersive experience than viewing images on a 2D screen. This thesis, in the field of computer vision, focuses on establishing a multi-camera geometry for generating cylindrical panoramic images and on implementing it with the cheapest cameras possible. The specific goal of this project is to propose a camera geometry that reduces the parallax-related artifact problems in the panoramic image. We present a new approach to cylindrical panoramic imaging in which multiple cameras are placed evenly around a circle. Instead of looking outward, the traditional "radial" configuration, we propose to make the optical axes tangent to the camera circle, a "tangential" configuration. Besides an analysis and comparison of the radial and tangential geometries, we provide an experimental setup with real panoramas obtained under realistic conditions.
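The radial versus tangential rig geometries compared in this thesis can be sketched by computing each camera's position and optical-axis direction on the circle. The function below is a hypothetical 2D illustration of the two configurations, not the authors' implementation.

```python
import numpy as np

def camera_rig(n_cameras, radius, tangential=False):
    """Positions and optical-axis directions for cameras evenly spaced on a circle.

    radial:     axes point outward from the circle centre (traditional setup);
    tangential: axes are rotated 90 degrees, tangent to the camera circle.
    Returns a list of (position, axis) pairs in the rig plane.
    """
    rig = []
    for k in range(n_cameras):
        a = 2 * np.pi * k / n_cameras
        outward = np.array([np.cos(a), np.sin(a)])
        pos = radius * outward
        if tangential:
            axis = np.array([-np.sin(a), np.cos(a)])  # outward rotated by +90 deg
        else:
            axis = outward
        rig.append((pos, axis))
    return rig
```

In the tangential configuration each camera looks along the circle rather than away from it, so neighbouring fields of view overlap closer to the rig, which is the geometric property the thesis exploits to reduce parallax artifacts in the stitched panorama.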
Camera positioning for 3D panoramic image rendering
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
Virtual camera realisation and the proposition of a trapezoidal camera architecture are the two broad contributions of this thesis. Firstly, multiple cameras and their arrangement constitute a critical component that affects the integrity of visual content acquisition for multi-view video. Currently, linear, convergent, and divergent arrays are the prominent camera topologies adopted. However, the large number of cameras required and their synchronisation are two of the prominent challenges usually encountered. The use of virtual cameras can significantly reduce the number of physical cameras used with respect to any of the known camera structures, hence adequately reducing some of the other implementation issues. This thesis explores the use of image-based rendering, with and without geometry, in implementations leading to the realisation of virtual cameras. The virtual camera implementation was carried out from the perspective of a depth map (geometry) and of multiple image samples (no geometry). Prior to the virtual camera realisation, depth-map generation was investigated using region match measures widely known for solving the image point correspondence problem. The constructed depth maps were compared with ones generated using the dynamic programming approach. In both the geometry and no-geometry approaches, the virtual cameras lead to the rendering of views from a textured depth map, the construction of a 3D panoramic image of a scene by stitching multiple image samples and performing superposition on them, and the computation of a virtual scene from a stereo pair of panoramic images. The quality of these rendered images was assessed through objective or subjective analysis in the Imatest software. Furthermore, metric reconstruction of a scene was performed by re-projection of pixel points from multiple image samples with a single centre of projection, using a sparse bundle adjustment algorithm. The statistical summary obtained after the application of this algorithm provides a gauge of the efficiency of the optimisation step. The optimised data were then visualised in the Meshlab software environment, providing the reconstructed scene. Secondly, with any of the well-established camera arrangements, all cameras are usually constrained to the same horizontal plane. Occlusion therefore becomes an extremely challenging problem, and a robust camera set-up is required to recover the hidden parts of scene objects. To adequately meet the visibility condition for scene objects, given that occlusion of the same scene objects can occur, a multi-plane camera structure is highly desirable. This thesis therefore also explores a trapezoidal camera structure for image acquisition. The approach is to assess the feasibility and potential of several physical cameras of the same model being sparsely arranged on the edges of an efficient trapezoid graph, implemented in both Matlab and Maya. The depth maps rendered in Matlab are of better quality.
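The region match measures mentioned above for solving the image point correspondence problem are typically applied as block matching over a rectified stereo pair. A minimal sum-of-squared-differences (SSD) sketch, not the thesis's actual implementation, might look like:

```python
import numpy as np

def ssd_disparity(left, right, patch=5, max_disp=8):
    """Dense disparity map by SSD block matching on a rectified grayscale pair.

    For each left-image patch, search the same scanline of the right image
    over max_disp candidate shifts and keep the lowest-cost one.
    """
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                # candidate window in the right image, shifted left by d pixels
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(float)
                cost = np.sum((ref - cand) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Region measures such as SSD give a winner-take-all disparity per pixel; the dynamic programming approach the thesis compares against instead optimises whole scanlines, which typically yields smoother depth maps.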
Single-channel stereoscopic imaging system using rotating deflector
Dept. of Biomedical Engineering (Master's thesis). In a conventional dual-channel stereoscopic imaging system (SIS), two cameras are often used to take images from different visual orientations, creating a three-dimensional (3D) image. Because two cameras are used, visual fatigue may be caused by differences between the cameras involving temporal synchronization, geometrical calibration, and color balance. Furthermore, owing to its mechanical composition, the imaging system is generally bulky. To eliminate the possible limitations of current conventional dual-camera SISs, research was conducted to develop a 3D SIS using a single camera. Its purpose is to create image disparity (ID), a key factor in producing stereoscopic images. Using a transparent rotating deflector (TRD), ID was mimicked on the assumption that light refraction through the TRD would create the necessary ID. First, the system's efficacy was tested through a thorough simulation and experiment based on Snell's law. Light propagation through the TRD was modeled using ZEMAX, and the ID was calculated for various TRD refractive indices and thicknesses. On the basis of the simulation and calculation, a TRD-based SIS (TRD-SIS) was developed using manual rotation of the TRD. Second, a real-time TRD-SIS was set up to allow real-time stereoscopic imaging and display. A complementary metal–oxide–semiconductor (CMOS) camera was used along with a stepping motor controlled by a microcontroller unit. The acquired images were visualized in 3D using an active 3D method. Finally, the system was evaluated in terms of two factors: (1) heat generation and (2) image characteristics. The temperature changes in the optical components were measured at the motor surface and motor driver. The image characteristics were evaluated by calculating the coefficient of variation of acquired images of a white reflectance target. In addition, a method of controlling heat generation using a heat sink and motor fan was devised.
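The disparity created by refraction through the rotating deflector can be understood with the classical flat-plate lateral-shift formula that follows from Snell's law: a ray crossing a tilted plate emerges parallel to its original direction but displaced sideways. The sketch below computes that shift; the thickness and refractive-index values used in the example are illustrative, not taken from the thesis.

```python
import math

def lateral_shift(thickness_mm, n, incidence_deg):
    """Lateral displacement of a ray crossing a tilted flat plate (Snell's law).

    shift = t * sin(theta_i - theta_r) / cos(theta_r),
    with sin(theta_r) = sin(theta_i) / n. As the deflector rotates, the
    incidence angle changes, and the changing shift mimics image disparity.
    """
    ti = math.radians(incidence_deg)
    tr = math.asin(math.sin(ti) / n)   # refraction angle inside the plate
    return thickness_mm * math.sin(ti - tr) / math.cos(tr)
```

At normal incidence the shift is zero, and it grows with the tilt angle, so rotating the plate in front of a single camera sweeps the effective viewpoint back and forth, producing the left/right views a dual-camera rig would otherwise capture.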
Calibration and modeling of a hybrid, panoramic stereovision system
This thesis presents our contributions to the resolution of two problems encountered in the fields of computer vision and photogrammetry: camera calibration and stereovision. These two problems have been extensively studied in recent years, and the existing calibration techniques differ greatly depending on the type of camera to be calibrated (classical or panoramic, with a fixed or a zoom lens, etc.). Our first contribution is a compact and accurate calibration bench, based on diffractive optical elements, which is suitable for a large proportion of existing cameras. A simple and accurate model describing the projection of the grid formed on the image, together with a calibration method for each camera type, is proposed. The technique is very robust, and optimal results were achieved for all the cameras calibrated. With the multiplication of camera types and the diversity of projection models, a generic image-formation model has become very attractive. Our second contribution is a unified projection model suitable for both conventional and panoramic cameras. In this model, every camera is modeled by a rectilinear projection combined with composed cubic spline functions, which can represent all kinds of radial and tangential distortions. Such an approach makes it possible to model hybrid and panoramic stereovision systems geometrically and to convert a panoramic image into a classical image. Consequently, the processing challenges of a hybrid or panoramic stereovision system are turned into simple classical stereovision problems. Keywords: calibration, panoramic vision, distortion, fisheye, zoom, panomorph, epipolar geometry, three-dimensional reconstruction, hybrid stereovision, panoramic stereovision
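The idea of modeling distortion with splines rather than a fixed polynomial can be illustrated with a radial spline fitted to calibration control points. In the sketch below the control-point values are invented for illustration (a mild barrel distortion), and SciPy's `CubicSpline` stands in for the composed splines of the thesis.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical control points: ideal radius -> measured (distorted) radius,
# as might come from observing a calibration grid. Values are illustrative only.
r_ideal = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
r_dist = np.array([0.0, 0.199, 0.392, 0.575, 0.744, 0.895])  # barrel-like

distort = CubicSpline(r_ideal, r_dist)

def apply_distortion(pts):
    """Warp normalized image points with the spline radial-distortion model.

    Each point is scaled along its radius by distort(r) / r.
    """
    r = np.linalg.norm(pts, axis=1)
    scale = np.where(r > 0, distort(r) / np.maximum(r, 1e-12), 1.0)
    return pts * scale[:, None]
```

Because the spline interpolates the measured control points exactly, it can follow distortion profiles (fisheye, panomorph) that a low-order polynomial fits poorly, which is the motivation for the generic model in this thesis.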