45 research outputs found

    Omnidirectional Stereo Vision for Autonomous Vehicles

    Environment perception with cameras is an important requirement for many applications in autonomous vehicles and robotics. This work presents a stereoscopic omnidirectional camera system for autonomous vehicles that resolves the problem of a limited field of view and provides a 360° panoramic view of the environment. We present a new projection model for these cameras and show that the camera setup overcomes major drawbacks of traditional perspective cameras in many applications.
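
The abstract does not detail the proposed projection model, but the basic idea of mapping arbitrary 3D viewing directions onto a 360° panoramic image can be sketched with a generic equirectangular projection. This is an illustrative sketch only, not the paper's model; the image dimensions and axis conventions are arbitrary choices:

```python
import math

def direction_to_panorama(d, width, height):
    """Map a unit 3D viewing direction to (u, v) pixel coordinates in an
    equirectangular 360-degree panorama. Longitude spans the full image
    width, latitude the height. (Generic sketch, not the paper's model.)"""
    x, y, z = d
    lon = math.atan2(x, z)                    # [-pi, pi], 0 = straight ahead
    lat = math.asin(max(-1.0, min(1.0, y)))   # [-pi/2, pi/2]
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# A ray pointing straight ahead lands in the image centre: (1024.0, 512.0).
u, v = direction_to_panorama((0.0, 0.0, 1.0), 2048, 1024)
```

Unlike a perspective camera, this mapping stays well-defined for every direction on the sphere, which is what removes the limited-field-of-view problem the abstract refers to.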

    Catadioptric stereo-vision system using a spherical mirror

    In the computer vision field, the reconstruction of target surfaces is usually achieved with 3D optical scanners that integrate digital cameras and light emitters. However, these solutions are limited by their narrow field of view, which requires multiple acquisitions from different views to reconstruct complex free-form geometries. Combining mirrors and lenses (catadioptric systems) can overcome this issue. In this work, a stereo catadioptric optical scanner has been developed by assembling two digital cameras, a spherical mirror, and a multimedia white-light projector. The adopted configuration defines a non-single-viewpoint system, so a non-central catadioptric camera model has been developed. An analytical solution is presented for computing the projection of a scene point onto the image plane (forward projection) and vice versa (backward projection). The proposed optical setup allows omnidirectional stereo vision, enabling the reconstruction of target surfaces with a single acquisition. Preliminary results, obtained by measuring a hollow specimen, demonstrate the effectiveness of the described approach.
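
Because a non-single-viewpoint system has no common optical centre, forward projection requires finding, per scene point, the mirror point that reflects it into the camera. The paper derives an analytical solution; as a hedged numeric sketch, Fermat's principle (the reflection point is a stationary point of the total path length) can be applied in the plane containing the camera, the mirror centre, and the scene point. The geometry below is an arbitrary symmetric example:

```python
import math

def reflection_point(cam, scene, radius, iters=200):
    """Numerically locate the point on a circle of given radius (a sphere
    cross-section centred at the origin) that reflects a ray from `cam`
    to `scene`. Uses Fermat's principle: the reflection point minimises
    the total path length along the mirror. (Generic numeric sketch; the
    paper derives an analytical forward projection instead.)"""
    def path_len(theta):
        px, py = radius * math.cos(theta), radius * math.sin(theta)
        return math.hypot(cam[0] - px, cam[1] - py) + \
               math.hypot(scene[0] - px, scene[1] - py)
    lo, hi = 0.0, math.pi / 2              # bracket chosen for this geometry
    for _ in range(iters):                 # ternary search; path is unimodal here
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if path_len(m1) < path_len(m2):
            hi = m2
        else:
            lo = m1
    t = (lo + hi) / 2
    return radius * math.cos(t), radius * math.sin(t)

# Symmetric geometry: camera above, scene point to the right of a unit mirror.
# By symmetry the reflection point lies at 45 degrees on the mirror.
p = reflection_point((0.0, 3.0), (3.0, 0.0), 1.0)
```

At the returned point, the law of reflection holds with respect to the radial mirror normal, which is what makes the Fermat formulation equivalent to tracing the reflected ray.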

    Calibration and Reconstruction in Non-Central Axial Catadioptric Systems

    PhD thesis in Electrical and Computer Engineering, in the branch of Automation and Robotics, presented to the Department of Electrical and Computer Engineering of the Faculty of Sciences and Technology of the University of Coimbra.
    This PhD thesis focuses on non-central axial catadioptric vision systems, i.e. systems with an axially symmetric mirror and a pinhole camera whose optical center lies on the mirror axis. We propose novel methods for calibration and 3D reconstruction from the image of points and lines. By “calibration” we mean the reconstruction of the vision system geometry, in terms of mirror shape and mirror/camera relative position and orientation. We also aim at estimating the pose of the camera w.r.t. the world coordinate frame, i.e. the extrinsic parameters. We assume that the pinhole camera is internally calibrated a priori. The algorithms rely on the image of a planar calibration pattern, e.g. a checkerboard. We propose five distinct algorithms. One method estimates the position of the mirror axis in the image (to determine the camera/mirror relative orientation) using the cross-ratio as an invariant. Another estimates the extrinsic parameters and the camera/mirror distance given knowledge of the mirror shape. It relies on establishing a 3D/1D linear relation between world points and image features and on using the Direct Linear Transformation (DLT) algorithm to obtain a subset of the system parameters; the remaining parameters are estimated by non-linear optimization, one variable at a time. As an extension of the previous method, we propose estimating the mirror shape as part of the calibration process. This method requires the image of points and lines, and uses the fact that all points on an image circle centered at the origin have back-projection rays that intersect at a single point, effectively forming a central projection system. We also propose an algorithm for the particular case of catadioptric systems with spherical mirrors, where calibration is achieved by fitting quartic curves to the images of lines in a calibration pattern. An analytical solution is derived, which is later refined by a non-linear optimization procedure. Finally, we consider a fully calibrated non-central axial catadioptric system and reconstruct the 3D position of a line from a single image of that line (which is possible because the system is non-central). The line is reconstructed from 3 or more image points, given the distance ratio of 3 points on the line (a fair assumption in, for example, structured environments with repetitive architectural features such as windows or tiles).
    We use the cross-ratio as an invariant to constrain the line localization and then perform the reconstruction from a set of image points through non-linear optimization. Experiments with simulated and real images evaluate the accuracy and robustness of the methods. FCT - PROTEC SFRH/BD/50281/200
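
Several of the methods above hinge on the projective invariance of the cross-ratio. A minimal self-contained check of that invariance, using exact rational arithmetic; the four points and the 1D projective map are arbitrary illustrative choices:

```python
from fractions import Fraction

def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points given by their
    1D coordinates: ((c - a)(d - b)) / ((c - b)(d - a))."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def homography(x, h):
    """1D projective map x -> (h0*x + h1) / (h2*x + h3)."""
    return (h[0] * x + h[1]) / (h[2] * x + h[3])

pts = [Fraction(p) for p in (0, 1, 3, 7)]
h = tuple(Fraction(v) for v in (2, 1, 1, 3))      # arbitrary projective map
before = cross_ratio(*pts)
after = cross_ratio(*(homography(p, h) for p in pts))
# before == after == 9/7: the cross-ratio survives the projective map.
```

This invariance is exactly what lets image measurements constrain 3D quantities (mirror axis position, line localization) despite the projective distortion of the imaging process.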

    Single View 3D Reconstruction under an Uncalibrated Camera and an Unknown Mirror Sphere

    In this paper, we develop a novel self-calibration method for single-view 3D reconstruction using a mirror sphere. Unlike other mirror-sphere-based reconstruction methods, our method requires neither the intrinsic parameters of the camera nor the position and radius of the sphere to be known. Based on an eigen decomposition of the matrix representing the conic image of the sphere, and by enforcing a repeated-eigenvalue constraint, we derive an analytical solution for recovering the focal length of the camera given its principal point. We then introduce a robust algorithm for estimating both the principal point and the focal length of the camera by minimizing the differences between the focal lengths estimated from multiple images of the sphere. We also present a novel approach for estimating both quantities from just a single image of the sphere. With the estimated camera intrinsic parameters, the position(s) of the sphere can be readily retrieved from the eigen decomposition(s), and a scaled 3D reconstruction follows. Experimental results on both synthetic and real data demonstrate the feasibility and accuracy of our approach. © 2016 IEEE.
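
The repeated-eigenvalue constraint has a simple geometric reading: the back-projected viewing rays tangent to a sphere form a right circular cone, and the 3×3 quadratic form of such a cone always has a doubled eigenvalue. A sketch of that property only (not the paper's derivation; the sphere position and radius are arbitrary examples):

```python
import numpy as np

def tangent_cone_matrix(center, radius):
    """Quadratic form M of the cone of rays from the origin tangent to a
    sphere: directions d on the cone satisfy d^T M d = 0. The cone is
    right circular, so M has a repeated eigenvalue -- the property the
    self-calibration method exploits. (Illustrative sketch only.)"""
    c = np.asarray(center, dtype=float)
    a = c / np.linalg.norm(c)                  # cone axis toward sphere centre
    sin_a = radius / np.linalg.norm(c)         # half-angle alpha: sin = r/|c|
    cos2 = 1.0 - sin_a ** 2
    return np.outer(a, a) - cos2 * np.eye(3)

M = tangent_cone_matrix([1.0, 2.0, 5.0], 1.0)
vals = np.sort(np.linalg.eigvalsh(M))
# Two of the three eigenvalues coincide (both equal -cos^2(alpha));
# the third, along the cone axis, equals sin^2(alpha) > 0.
```

In the paper's setting the cone is recovered from the conic image of the sphere via the unknown intrinsics, so forcing this doubled eigenvalue is what pins down the focal length.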

    Camera Calibration with Non-Central Local Camera Models

    Camera calibration is an essential prerequisite for many computer vision algorithms such as stereo vision and visual odometry. The goal of camera calibration is to determine both the pose of the cameras and their imaging model. The imaging model of a camera describes the relationship between the 3D world and the image plane. Currently, simple global camera models are commonly estimated in a calibration process that can be carried out with comparatively little effort and a large error tolerance. To evaluate the resulting camera model, the reprojection error is typically used as the quality measure. However, even simple camera models that cannot precisely describe the imaging behavior of an optical system can achieve low reprojection errors, so poorly calibrated camera models repeatedly go unrecognized. To counter this, this work proposes a new continuous non-central camera model based on B-splines. This imaging model can accurately represent different lenses as well as non-central displacements, such as those caused by placing the camera behind a windshield. Despite its generality, the model can be estimated with an easy-to-use checkerboard calibration procedure. To assess calibration results, a calibration benchmark is proposed in place of the mean reprojection error. The ground truth of the camera model is described by a discrete ray-based model. To estimate this model, a calibration procedure using an active display as the target is presented, together with a local parameterization of the viewing rays and a way to estimate the display surface jointly with the intrinsic camera parameters.
    Estimating the surface reduces the mean point-to-line distance by a factor of more than 20; only then can the estimated camera model serve as ground truth. The proposed camera model and the associated calibration procedures are evaluated in an extensive study, in simulation and in the real world, using the new calibration benchmark. It is shown that even in the simplified case of a flat glass plate placed in front of the camera, the proposed model is superior to both a central and a non-central global camera model. Finally, the practicality of the proposed model is demonstrated by calibrating an automated vehicle equipped with six cameras pointing in different directions; the new model reduces the mean reprojection error by a factor of two to three for all cameras. The calibration benchmark makes it possible to compare the results of different calibration methods and to accurately determine the accuracy of an estimated camera model against the ground truth. The reduction in calibration error achieved by the newly proposed camera model helps to increase the accuracy of downstream algorithms such as stereo vision, visual odometry, and 3D reconstruction.
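
The benchmark above scores a calibrated model by the mean point-to-line distance between target points and the calibrated viewing rays. The per-observation error can be sketched as a plain point-to-ray distance (a generic sketch; the function name and inputs are illustrative, not the thesis API):

```python
import math

def point_to_ray_distance(p, origin, direction):
    """Perpendicular distance from a 3D point to a viewing ray: the
    per-observation error behind a mean point-to-line benchmark metric.
    (Illustrative sketch; `origin`/`direction` describe one calibrated ray.)"""
    n = math.sqrt(sum(d * d for d in direction))
    d = [di / n for di in direction]             # unit direction
    v = [pi - oi for pi, oi in zip(p, origin)]   # origin -> point
    t = sum(vi * di for vi, di in zip(v, d))     # projection onto the ray
    foot = [oi + t * di for oi, di in zip(origin, d)]
    return math.sqrt(sum((pi - fi) ** 2 for pi, fi in zip(p, foot)))

# A point sitting 2 units off a ray along +z gives distance 2.
dist = point_to_ray_distance((2.0, 0.0, 5.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```

Averaging this distance over all observed points and rays gives a model-agnostic error measure, which is what lets the benchmark compare central and non-central camera models on equal footing.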

    Calibration of non-conventional imaging systems


    The "Surface Model" – An Uncertain Continuous Representation of the Generic Camera Model and Its Calibration

    Using digital cameras for measurement purposes requires knowledge of the mapping between 3D world points and 2D positions on the image plane. Many different mathematical models provide this mapping for specific imaging systems. Grossberg and Nayar proposed a discrete generic camera model, which does not make any assumptions about the structure of the system: it describes a digital camera by assigning an arbitrary viewing ray to each pixel of the camera image. This makes the model applicable to any kind of camera, in particular also to non-central ones such as omnidirectional catadioptrics. However, this model is difficult to use in practice, as there is no direct method for mapping a 3D point to the image or for determining rays at subpixel image positions. In this work, the Surface Model, an uncertain continuous representation of the generic camera model, is introduced. It uses a spline surface in 6D Plücker space to describe the camera. The interpolation abilities of the spline surface allow the viewing ray and its uncertainty to be easily determined for any (subpixel) position. Furthermore, the description facilitates the mapping from 3D world points to the image. The calibration of the generic model has to be performed pixel-wise and is technically involved and time-consuming. In this work, hand-held sparse planar chessboard patterns are used for calibration. The uncertainties of the corresponding image point measurements are taken into account and propagated throughout the complete calibration procedure to obtain an uncertain model, which provides result uncertainties in the form of covariance matrices for ray determination and point projection. Simulations validate each step, and the practical applicability of the procedure is shown by calibrating several real cameras of different types.
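
A minimal sketch of the Plücker-coordinate representation that the Surface Model interpolates: a line is stored as a direction/moment pair (d, m), and the distance from any 3D point to the line follows directly from those six numbers. These are the generic Plücker formulas, not the thesis implementation:

```python
import numpy as np

def plucker_line(p, q):
    """Plücker coordinates (d, m) of the line through points p and q:
    unit direction d and moment m = p x d."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = q - p
    d = d / np.linalg.norm(d)
    return d, np.cross(p, d)

def point_line_distance(x, d, m):
    """Distance from point x to the Plücker line (d, m), with unit d:
    || d x x + m ||."""
    x = np.asarray(x, float)
    return np.linalg.norm(np.cross(d, x) + m)

d, m = plucker_line([0, 0, 0], [1, 1, 0])
on = point_line_distance([2, 2, 0], d, m)    # point on the line -> 0
off = point_line_distance([0, 0, 1], d, m)   # one unit off the line -> 1
```

Because each viewing ray is just six real numbers in this representation, a spline surface over the image plane can interpolate rays (and propagate their covariances) at any subpixel position.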

    Enhancing 3D Visual Odometry with Single-Camera Stereo Omnidirectional Systems

    We explore low-cost solutions for efficiently improving the 3D pose estimation problem of a single camera moving in an unfamiliar environment. The visual odometry (VO) task -- as it is called when using computer vision to estimate egomotion -- is of particular interest to mobile robots as well as humans with visual impairments. The payload capacity of small robots like micro-aerial vehicles (drones) requires the use of portable perception equipment, which is constrained by size, weight, energy consumption, and processing power. Using a single camera as the passive sensor for the VO task satisfies these requirements, and it motivates the proposed solutions presented in this thesis. To deliver the portability goal with a single off-the-shelf camera, we have taken two approaches: The first one, and the most extensively studied here, revolves around an unorthodox camera-mirrors configuration (catadioptrics) achieving a stereo omnidirectional system (SOS). The second approach relies on expanding the visual features from the scene into higher dimensionalities to track the pose of a conventional camera in a photogrammetric fashion. The first goal has many interdependent challenges, which we address as part of this thesis: SOS design, projection model, adequate calibration procedure, and application to VO. We show several practical advantages for the single-camera SOS due to its complete 360-degree stereo views, that other conventional 3D sensors lack due to their limited field of view. Since our omnidirectional stereo (omnistereo) views are captured by a single camera, a truly instantaneous pair of panoramic images is possible for 3D perception tasks. Finally, we address the VO problem as a direct multichannel tracking approach, which increases the pose estimation accuracy of the baseline method (i.e., using only grayscale or color information) under the photometric error minimization as the heart of the “direct” tracking algorithm. 
    Currently, this solution has been tested on standard monocular cameras, but it could also be applied to an SOS. We believe the challenges we attempted to solve have not previously been considered with the level of detail needed to successfully perform VO with a single camera as the ultimate goal, in both real-life and simulated scenes.
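
Direct tracking minimizes photometric error over all channels rather than matching sparse features. A toy multichannel version, recovering an integer shift between two 1D signals by brute-force photometric error minimization (illustrative only; real direct VO optimizes a full warp with gradient-based methods):

```python
import numpy as np

def estimate_shift(ref, cur, max_shift):
    """Recover the integer translation between two multichannel 1D signals
    by minimising the photometric (SSD) error summed over all channels --
    a toy version of direct multichannel tracking."""
    best, best_err = 0, float("inf")
    n = ref.shape[1]
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:                       # cur is ref shifted right by s
            a, b = ref[:, :n - s], cur[:, s:]
        else:
            a, b = ref[:, -s:], cur[:, :n + s]
        err = np.mean((a - b) ** 2)      # photometric error for this shift
        if err < best_err:
            best, best_err = s, err
    return best

rng = np.random.default_rng(0)
ref = rng.random((3, 50))                # 3 channels, e.g. RGB scanlines
cur = np.roll(ref, 2, axis=1)            # observation shifted by 2 pixels
shift = estimate_shift(ref, cur, 5)      # recovers shift == 2
```

Using all channels jointly is the "multichannel" idea: each extra channel adds photometric constraints, which is what improves the pose estimate over grayscale-only tracking.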