The flow of baseline estimation using a single omnidirectional camera
The baseline is the distance between two cameras, but this information cannot be obtained from a single camera directly. The baseline is one of the key parameters for finding the depth of objects by triangulation in stereo images. Here, a flow of baselines is produced by moving the camera along the horizontal axis from its original location. Using this baseline estimation, we can determine the depth of an object using only an omnidirectional camera. This research focuses on determining the flow of the baseline before calculating the disparity map. To estimate the flow and to track the object, we use three and four points on the surface of an object from two previously chosen data sets (panoramic images). By moving the camera horizontally, we obtain the tracks of these points. The obtained tracks are visually similar, and each track represents the coordinates of one tracking point. Two of the four tracks have a graphical representation similar to a second-order polynomial.
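The fitting step described above can be sketched as follows. This is a minimal illustration with synthetic data (the offsets and coefficients are invented, not from the paper): a tracked point's image coordinate is recorded as the camera translates horizontally, and a second-order polynomial is fit to the resulting track.

```python
import numpy as np

# Hypothetical track: the image coordinate of one surface point recorded
# while the camera translates horizontally (values are illustrative only).
camera_offsets = np.linspace(0.0, 1.0, 11)                       # horizontal camera motion
track_y = 2.0 * camera_offsets**2 - 0.5 * camera_offsets + 3.0   # synthetic noise-free track

# Fit a second-order polynomial to the track, as the abstract reports
# for two of the four tracked points.
coeffs = np.polyfit(camera_offsets, track_y, deg=2)
```

With noise-free synthetic data the fit recovers the generating coefficients exactly; with real tracks the residual of the fit indicates how well the quadratic model describes the motion.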
Improved Observation and Communication with a Distributed Compound Vision Surveillance System
Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs)
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision systems under different circumstances.
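The triangulation of back-projected rays mentioned above can be illustrated with a standard midpoint method: given two rays (one per mirror viewpoint), the 3D point is estimated as the midpoint of the shortest segment connecting them. This is a generic sketch, not the paper's probabilistic model; the viewpoints and directions below are invented for the example.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation of two back-projected rays.

    o1, o2: ray origins (e.g., the two effective viewpoints of a folded
    catadioptric rig); d1, d2: ray direction vectors. Returns the midpoint
    of the closest-approach segment between the rays.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    a, c, e = d1 @ d1, d1 @ d2, d2 @ d2
    f, g = d1 @ b, d2 @ b
    denom = a * e - c * c          # zero only for parallel rays
    t1 = (e * f - c * g) / denom
    t2 = (c * f - a * g) / denom
    p1 = o1 + t1 * d1              # closest point on ray 1
    p2 = o2 + t2 * d2              # closest point on ray 2
    return 0.5 * (p1 + p2)
```

For intersecting rays the midpoint is the exact intersection; for noisy correspondences the length of the segment between `p1` and `p2` is a natural input to an uncertainty model.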
Applications of Omnidirectional Vision to Scene Perception for Mobile Systems
Ce mĂ©moire prĂ©sente une synthĂšse des travaux que jâai menĂ©s Ă lâESIGELEC au sein de son institut de recherche lâIRSEEM. Mes activitĂ©s de recherche ont portĂ© dans un premier temps sur la conception et lâĂ©valuation de dispositifs de mesure de la dynamique de la marche de personnes atteintes de pathologies de la hanche, dans le cadre de ma thĂšse effectuĂ©e Ă lâuniversitĂ© de Rouen en lien le Centre Hospitalo-Universitaire de Rouen. En 2003, jâai rejoint les Ă©quipes de recherche qui se constituaient avec la mise sur pieds de lâIRSEEM, Institut de Recherche en SystĂšmes Electroniques EmbarquĂ©s, crĂ©Ă© en 2001. Dans ce laboratoire, jâai structurĂ© et dĂ©veloppĂ© une activitĂ© de recherche dans le domaine de la vision par ordinateur appliquĂ©e au vĂ©hicule intelligent et Ă la robotique mobile autonome. Dans un premier temps, jâai concentrĂ© mes travaux Ă lâĂ©tude de systĂšmes de vision omnidirectionnelle tels que les capteurs catadioptriques centraux et leur utilisation pour des applications mobiles embarquĂ©es ou dĂ©barquĂ©es : modĂ©lisation et calibrage, reconstruction tridimensionnelle de scĂšnes par stĂ©rĂ©ovision et dĂ©placement du capteur. Dans un second temps, je me suis intĂ©ressĂ© Ă la conception et la mise en Ćuvre de systĂšmes de vision Ă projection non centrale (capteurs catadioptriques Ă miroirs composĂ©s, camĂ©ra plĂ©noptique). Ces travaux ont Ă©tĂ© effectuĂ©s au travers en collaboration avec le MIS de lâUniversitĂ© Picardie Jules Verne et lâISIR de lâUniversitĂ© Pierre et Marie Curie. Enfin, dans le cadre dâun programme de recherche en collaboration avec lâUniversitĂ© du Kent, jâai consacrĂ© une partie de mes travaux Ă lâadaptation de mĂ©thodes de traitement dâimages et de classification pour la dĂ©tection de visages sur images omnidirectionnelles (adaptation du dĂ©tecteur de Viola et Jones) et Ă la reconnaissance biomĂ©trique dâune personne par analyse de sa marche. 
Aujourdâhui, mon activitĂ© sâinscrit dans le prolongement du renforcement des projets de lâIRSEEM dans le domaine de la robotique mobile et du vĂ©hicule autonome : mise en place dâun plateau de mesures pour la navigation autonome, coordination de projets de recherche en prise avec les besoins industriels. Mes perspectives de recherche ont pour objet lâĂ©tude de nouvelles solutions pour la perception du mouvement et la localisation en environnement extĂ©rieur et sur les mĂ©thodes et moyens nĂ©cessaires pour objectiver la performance et la robustesse de ces solutions sur des scĂ©narios rĂ©alistes
Omnidirectional Stereo Vision for Autonomous Vehicles
Environment perception with cameras is an important requirement for many autonomous-vehicle and robotics applications. This work presents a stereoscopic omnidirectional camera system for autonomous vehicles that resolves the problem of a limited field of view and provides a 360° panoramic view of the environment. We present a new projection model for these cameras and show that the camera setup overcomes major drawbacks of traditional perspective cameras in many applications.
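The abstract does not detail its projection model, but the general idea of an omnidirectional projection can be sketched with a simple assumed model: mapping an equirectangular panorama pixel to a unit viewing ray on the sphere. All names and conventions here are illustrative, not taken from the paper.

```python
import numpy as np

def panorama_to_ray(u, v, width, height):
    """Map an equirectangular panorama pixel (u, v) to a unit viewing ray.

    Assumed convention (illustrative only): u in [0, width) spans 360 degrees
    of azimuth, v in [0, height) spans 180 degrees of elevation, and the
    image center looks along the +x axis.
    """
    azimuth = 2.0 * np.pi * (u / width) - np.pi
    elevation = np.pi * (0.5 - v / height)
    x = np.cos(elevation) * np.cos(azimuth)
    y = np.cos(elevation) * np.sin(azimuth)
    z = np.sin(elevation)
    return np.array([x, y, z])
```

Inverting such a mapping (ray to pixel) is what lets a stereo matcher search for correspondences along known curves in the panorama instead of the straight epipolar lines of perspective cameras.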
An omnidirectional stereo vision system using a single camera
We describe a new omnidirectional stereo imaging system that uses a concave lens and a convex mirror to produce a stereo pair of images on the sensor of a conventional camera. The light incident from a scene point is split and directed to the camera in two parts. One part reaches the camera directly after reflection from the convex mirror and forms a single-viewpoint omnidirectional image. The second part is formed by passing a sub-beam of the light reflected from the mirror through a concave lens, and forms a displaced single-viewpoint image in which the disparity depends on the depth of the scene point. A closed-form expression for depth is derived. Since the optical components used are simple and commercially available, the resulting system is compact and inexpensive. This, and the simplicity of the required image processing algorithms, make the proposed system attractive for real-time applications such as autonomous navigation and object manipulation. The experimental prototype we have built is described.
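The paper's own closed-form depth expression depends on its specific mirror/lens geometry and is not reproduced in the abstract. As a generic illustration of how disparity yields depth, the standard rectified-stereo relation Z = f·B/d can be sketched as follows (all parameter values are invented):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard rectified-stereo depth: Z = f * B / d.

    Illustrative only: the paper derives its own closed-form expression
    for its concave-lens/convex-mirror configuration, which differs from
    this textbook formula.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

The key property exploited in both cases is the same: disparity is inversely proportional to depth, so nearby points shift more between the two views than distant ones.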
CONTRIBUTION TO OMNIDIRECTIONAL STEREOVISION AND CATADIOPTRIC IMAGE PROCESSING: APPLICATION TO AUTONOMOUS SYSTEMS
Computer vision and digital image processing are two disciplines aiming to endow computers with a sense of perception and image analysis similar to that of humans. Artificial visual perception can be greatly enhanced when a large field of view is available. This thesis deals with the use of omnidirectional cameras as a means of expanding the field of view of computer vision systems. The visual perception of depth (3D) by means of omnistereo configurations, and special processing algorithms adapted to catadioptric images, are the main subjects studied in this thesis. First, a survey of 3D omnidirectional vision systems is conducted. It highlights the main approaches for obtaining depth information and provides valuable indications for choosing a configuration according to the application requirements. Then the design of an omnistereo sensor is addressed: we present a new configuration of the proposed sensor, formed by a single catadioptric camera and dedicated to robotic applications. An experimental investigation of depth estimation accuracy was conducted to validate the new configuration. Digital images acquired by catadioptric cameras present various special geometric properties, such as non-uniform resolution and severe radial distortions. The application of conventional algorithms to process such images is limited in terms of performance. For that reason, new algorithms adapted to the spherical geometry of catadioptric images have been developed. The gathered omnidirectional computer vision techniques were finally used in two real applications. The first concerns the integration of catadioptric cameras into a mobile robot. The second focuses on the design of a solar tracker based on a catadioptric camera. The results confirm that adopting such sensors for autonomous systems offers more performance and flexibility compared to conventional sensors.
Enhancing 3D Visual Odometry with Single-Camera Stereo Omnidirectional Systems
We explore low-cost solutions for efficiently improving the 3D pose estimation problem of a single camera moving in an unfamiliar environment. The visual odometry (VO) task -- as it is called when using computer vision to estimate egomotion -- is of particular interest to mobile robots as well as humans with visual impairments. The payload capacity of small robots like micro-aerial vehicles (drones) requires the use of portable perception equipment, which is constrained by size, weight, energy consumption, and processing power. Using a single camera as the passive sensor for the VO task satisfies these requirements, and it motivates the proposed solutions presented in this thesis.
To deliver the portability goal with a single off-the-shelf camera, we have taken two approaches. The first one, and the most extensively studied here, revolves around an unorthodox camera-mirrors configuration (catadioptrics) achieving a stereo omnidirectional system (SOS). The second approach relies on expanding the visual features from the scene into higher dimensionalities to track the pose of a conventional camera in a photogrammetric fashion. The first goal has many interdependent challenges, which we address as part of this thesis: SOS design, projection model, adequate calibration procedure, and application to VO. We show several practical advantages of the single-camera SOS due to its complete 360-degree stereo views, which other conventional 3D sensors lack due to their limited field of view. Since our omnidirectional stereo (omnistereo) views are captured by a single camera, a truly instantaneous pair of panoramic images is possible for 3D perception tasks. Finally, we address the VO problem as a direct multichannel tracking approach, which increases the pose estimation accuracy of the baseline method (i.e., using only grayscale or color information) under the photometric error minimization at the heart of the "direct" tracking algorithm. Currently, this solution has been tested on standard monocular cameras, but it could also be applied to an SOS.
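The photometric error minimization at the heart of direct tracking can be illustrated with a deliberately tiny 1D sketch: find the integer shift of a template within an image that minimizes the sum of squared intensity differences. Real direct VO optimizes a continuous warp over SE(3) with gradient-based methods; this toy version (all data invented) only conveys the objective being minimized.

```python
import numpy as np

def best_shift(template, image):
    """Brute-force direct alignment in 1D.

    Evaluates the photometric error (sum of squared intensity differences)
    of the template against every window of the image and returns the
    shift with the smallest error.
    """
    n = len(template)
    errors = [np.sum((image[s:s + n] - template) ** 2)
              for s in range(len(image) - n + 1)]
    return int(np.argmin(errors))
```

A "multichannel" variant of this idea sums the same error over several feature channels instead of a single intensity channel, which is the extension the thesis credits for the accuracy gain.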
We believe the challenges we attempted to solve have not previously been considered with the level of detail needed to successfully perform VO with a single camera, as the ultimate goal, in both real-life and simulated scenes.