11 research outputs found

    Noncentral catadioptric systems with quadric mirrors : geometry and calibration

    Get PDF
    PhD thesis in Electrical Engineering (Informatics), presented to the Faculty of Sciences and Technology of the University of Coimbra. In this PhD thesis we study and analyze the geometry of noncentral catadioptric systems composed of a pinhole or orthographic camera and a mirror shaped as a non-degenerate quadric: an ellipsoid (possibly a sphere), a hyperboloid, or a paraboloid. The geometry of these vision systems is parameterized by analyzing image formation and comprises the intrinsic parameters of the camera, the parameters of the mirror surface, and the poses of the camera with respect to the mirror and world reference frames. Image formation is studied in a purely geometrical way, focusing mainly on the projection model and on the calibration of the vision system.
The main contributions include the proof that, in a noncentral catadioptric system with a perspective camera and a non-degenerate quadric mirror, the reflection point on the mirror surface (projecting any given 3D world point to the image) lies on a quartic curve given by the intersection of two quadric surfaces. The corresponding projection model is also derived and is expressed as an implicit nonlinear function of a single unknown. Regarding the calibration of these vision systems, we developed a calibration method that assumes knowledge of the intrinsic parameters of the perspective camera and of a set of 3D points expressed in a local reference frame (world structure). Information about the apparent contour of the mirror is also used to enhance the accuracy of the estimation. Another calibration method is proposed, assuming a previous calibration of the system in the sense of a general camera model (correspondences between image points and incident lines in space). Additionally, the camera-mirror and camera-world poses are estimated using algebraic metrics and linear equations (derived for a calibration method that is also presented); here the camera is considered to be pre-calibrated. Experiments with extensive simulations and with real images are performed to test the robustness and accuracy of the presented methods. The main conclusions are that these vision systems are highly nonlinear and that their calibration is possible with good accuracy, although very high accuracy is difficult to achieve, especially if the vision system is aimed at accuracy-driven applications. Nevertheless, the world-structure information can be complemented with additional cues, such as the quadric's apparent contour, to improve the quality of the calibration results. Indeed, the use of the apparent contour can by itself dramatically improve the accuracy of the estimation.
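    The reduction of the projection model to an implicit nonlinear equation in a single unknown can be illustrated for the simplest case, a spherical mirror: in the plane containing the camera center, the 3D point, and the sphere center, the reflection point lies on a circle parameterized by one angle, and the law of reflection yields one scalar equation in that angle. The sketch below is a simplification of the thesis' general quadric model, not its actual algorithm; all names are illustrative.

```python
import math

def reflection_point(cam, pt, r, steps=2000):
    """Find the angle t of the reflection point m = r*(cos t, sin t) on a
    circular mirror (2D section of a sphere centered at the origin) such
    that the ray from pt reflects at m toward cam (law of reflection)."""
    def residual(t):
        mx, my = r * math.cos(t), r * math.sin(t)
        nx, ny = math.cos(t), math.sin(t)            # outward unit normal
        ix, iy = mx - pt[0], my - pt[1]              # incoming ray direction
        d = ix * nx + iy * ny
        rx, ry = ix - 2 * d * nx, iy - 2 * d * ny    # reflected direction
        ox, oy = cam[0] - mx, cam[1] - my            # direction toward camera
        return rx * oy - ry * ox                     # cross product: 0 if aligned

    # Scan the arc visible to both camera and point for a sign change of the
    # residual, then refine the root by bisection (the "single unknown").
    prev_t, prev_f = None, None
    for k in range(steps + 1):
        t = 2 * math.pi * k / steps
        mx, my = r * math.cos(t), r * math.sin(t)
        nx, ny = math.cos(t), math.sin(t)
        # keep only mirror points facing both the camera and the world point
        if (cam[0] - mx) * nx + (cam[1] - my) * ny <= 0 or \
           (pt[0] - mx) * nx + (pt[1] - my) * ny <= 0:
            prev_t, prev_f = None, None
            continue
        f = residual(t)
        if prev_f is not None and prev_f * f <= 0:
            lo, hi, flo = prev_t, t, prev_f
            for _ in range(80):                      # bisection refinement
                mid = 0.5 * (lo + hi)
                fm = residual(mid)
                if flo * fm <= 0:
                    hi = mid
                else:
                    lo, flo = mid, fm
            return 0.5 * (lo + hi)
        prev_t, prev_f = t, f
    return None

# Symmetric configuration: camera at (3,0) and point at (0,3) around a unit
# circle put the reflection point at 45 degrees.
t = reflection_point((3.0, 0.0), (0.0, 3.0), 1.0)
```

For general quadric mirrors the thesis shows the reflection point lies on a quartic curve instead, but the one-unknown structure of the problem is the same.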

    Omnidirectional Stereo Vision for Autonomous Vehicles

    Get PDF
    Environment perception with cameras is an important requirement for many applications in autonomous vehicles and robots. This work presents a stereoscopic omnidirectional camera system for autonomous vehicles which resolves the problem of a limited field of view and provides a 360° panoramic view of the environment. We present a new projection model for these cameras and show that the camera setup overcomes major drawbacks of traditional perspective cameras in many applications.

    Enhancing 3D Visual Odometry with Single-Camera Stereo Omnidirectional Systems

    Full text link
    We explore low-cost solutions for efficiently improving the 3D pose estimation problem of a single camera moving in an unfamiliar environment. The visual odometry (VO) task -- as it is called when using computer vision to estimate egomotion -- is of particular interest to mobile robots as well as humans with visual impairments. The payload capacity of small robots like micro-aerial vehicles (drones) requires the use of portable perception equipment, which is constrained by size, weight, energy consumption, and processing power. Using a single camera as the passive sensor for the VO task satisfies these requirements, and it motivates the solutions proposed in this thesis. To deliver the portability goal with a single off-the-shelf camera, we have taken two approaches: the first one, and the most extensively studied here, revolves around an unorthodox camera-mirrors configuration (catadioptrics) achieving a stereo omnidirectional system (SOS); the second approach relies on expanding the visual features from the scene into higher dimensionalities to track the pose of a conventional camera in a photogrammetric fashion. The first approach has many interdependent challenges, which we address as part of this thesis: SOS design, projection model, adequate calibration procedure, and application to VO. We show several practical advantages of the single-camera SOS due to its complete 360-degree stereo views, which other conventional 3D sensors lack due to their limited field of view. Since our omnidirectional stereo (omnistereo) views are captured by a single camera, a truly instantaneous pair of panoramic images is available for 3D perception tasks. Finally, we address the VO problem as a direct multichannel tracking approach, which increases the pose estimation accuracy of the baseline method (i.e., using only grayscale or color information) with photometric error minimization at the heart of the “direct” tracking algorithm.
Currently, this solution has been tested on standard monocular cameras, but it could also be applied to an SOS. We believe the challenges we attempt to solve have not previously been considered at the level of detail needed to successfully perform VO with a single camera in both real-life and simulated scenes.
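    The "direct" tracking idea can be illustrated with a toy photometric error: for a candidate motion, warp the current multichannel image back onto the reference and sum squared intensity differences over all channels; the pose estimate is the minimizer of this error. A minimal sketch with a pure integer-translation warp (a drastic simplification of a real direct VO pipeline; function and array names are illustrative):

```python
import numpy as np

def photometric_error(ref, cur, shift):
    """Sum of squared multichannel intensity differences between the
    reference image and the current image warped by an integer 2D shift.
    ref, cur: arrays of shape (H, W, C); shift: (dy, dx)."""
    dy, dx = shift
    h, w, _ = ref.shape
    # compare ref[y, x] against cur[y + dy, x + dx] on the overlapping region
    r = ref[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    c = cur[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    return float(np.sum((r.astype(float) - c.astype(float)) ** 2))

# Synthetic check: `cur` is `ref` circularly shifted by (2, 3); the error is
# zero at the true shift and positive elsewhere.
rng = np.random.default_rng(0)
ref = rng.random((32, 32, 3))
cur = np.roll(ref, (2, 3), axis=(0, 1))
```

In an actual direct method the warp is a full 6-DoF pose with subpixel interpolation, but the multichannel residual being minimized has exactly this shape.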

    Metric and appearance based visual SLAM for mobile robots

    Get PDF
    Simultaneous Localization and Mapping (SLAM) is a key capability for autonomous mobile robots and has been studied extensively during the last two decades. It is the process of building the map of an unknown environment while concurrently determining the location of the robot within this map. Different kinds of sensors, such as the Global Positioning System (GPS), Inertial Measurement Units (IMUs), laser range finders and sonar, are used for data acquisition in SLAM. In recent years, passive visual sensors have been utilized for the visual SLAM (vSLAM) problem because of their increasing ubiquity. This thesis is concerned with metric and appearance-based vSLAM for mobile robots. From the point of view of metric-based vSLAM, a performance improvement technique is developed: template-matching-based video stabilization and the Harris corner detector are integrated, and extracting Harris corner features from the stabilized video consistently increases the accuracy of the localization. Data coming from a video camera and odometry are fused in an Extended Kalman Filter (EKF) to determine the pose of the robot and build the map of the environment. Simulation results validate the performance improvement obtained by the proposed technique. Moreover, a visual perception system is proposed for appearance-based vSLAM and used for under-vehicle classification. The proposed system consists of three main parts: monitoring, detection and classification. In the first part, a new catadioptric camera system is designed, in which a perspective camera points downwards to a convex mirror mounted on the body of a mobile robot. Thanks to the catadioptric mirror, scenes opposite the camera's optical axis direction can be viewed. In the second part, speeded-up robust features (SURF) are used to detect hidden objects under vehicles. The fast appearance-based mapping algorithm (FAB-MAP) is then exploited for the classification of the means of transportation in the third part.
Experimental results show the feasibility of the proposed system. The proposed solution is implemented using a non-holonomic mobile robot. In the implementation, the undersides of laboratory tables stand in for vehicle undercarriages. A database that includes different under-vehicle images is used. All the algorithms are implemented in Microsoft Visual C++ and OpenCV 2.4.4.
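    The EKF fusion of odometry and camera data follows the standard predict/update cycle: odometry drives the motion prediction (growing the pose covariance) and each visual observation corrects the state (shrinking it). A minimal linear 1D sketch of that cycle (not the thesis' actual state model; all values are illustrative):

```python
# 1D robot position; odometry gives a noisy displacement, the camera a
# noisy absolute position fix.
x, P = 0.0, 1.0        # state estimate and its variance
Q, R = 0.04, 0.25      # odometry (process) and camera (measurement) noise

def predict(x, P, u):
    """Propagate the state with an odometry displacement u; variance grows."""
    return x + u, P + Q

def update(x, P, z):
    """Correct the state with a camera position measurement z; variance shrinks."""
    K = P / (P + R)                     # Kalman gain
    return x + K * (z - x), (1 - K) * P

x, P = predict(x, P, u=1.0)             # moved ~1 m according to odometry
P_pred = P
x, P = update(x, P, z=1.1)              # camera measurement says 1.1 m
```

In the thesis the state is the full robot pose plus landmark map and the update is linearized (the "extended" in EKF), but the gain/correction structure is the same.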

    Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs)

    Full text link
    We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) computed from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to suit other catadioptric-based omnistereo vision setups under different circumstances.
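    Triangulation from stereo correspondences amounts to intersecting two back-projected rays that, with noise, are skew; a common estimate is the midpoint of the shortest segment between them, with the segment length as a crude uncertainty proxy. A minimal sketch of that step (illustrative names, not the paper's API):

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between rays o1 + s*d1 and o2 + t*d2.
    Returns the 3D point estimate and the inter-ray gap (uncertainty proxy)."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Normal equations for minimizing |(o1 + s d1) - (o2 + t d2)|^2 over s, t
    b = o2 - o1
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    s, t = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    p1, p2 = o1 + s * d1, o2 + t * d2
    return 0.5 * (p1 + p2), np.linalg.norm(p1 - p2)

# Two rays from vertically separated viewpoints (like the folded-mirror
# omnistereo baseline) observing the point (1, 2, 5)
target = np.array([1.0, 2.0, 5.0])
o1, o2 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.3, 0.0])
p, gap = triangulate_midpoint(o1, target - o1, o2, target - o2)
```

With noisy rays the gap becomes nonzero, and the paper's probabilistic model propagates the pixel-level uncertainty through this geometry.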

    Calibration and Reconstruction in Non-Central Axial Catadioptric Systems

    Get PDF
    PhD thesis in Electrical and Computer Engineering (Automation and Robotics), presented to the Department of Electrical and Computer Engineering of the Faculty of Sciences and Technology of the University of Coimbra. This PhD thesis focuses on non-central axial catadioptric vision systems, i.e. systems with an axially symmetric mirror and a pinhole camera whose optical center lies on the mirror axis. We propose novel methods for calibration and 3D reconstruction from the image of points and lines. By "calibration" we mean the reconstruction of the vision system geometry, in terms of mirror shape and mirror/camera relative position and orientation. We also aim at estimating the pose of the camera w.r.t. the world coordinate frame, i.e. the extrinsic parameters.
We assume that the pinhole camera is internally calibrated a priori. The algorithms rely on the image of a planar calibration pattern, e.g. a checkerboard. We propose five distinct algorithms. One method estimates the position of the mirror axis in the image (to determine the camera/mirror relative orientation) using the invariance of the cross-ratio. Another method estimates the extrinsic parameters and the camera/mirror distance given knowledge of the mirror shape. It relies on establishing a 3D/1D linear relation between world points and image features, and on using the Direct Linear Transformation (DLT) algorithm to obtain a subset of the system parameters. The remaining parameters are estimated using non-linear optimization, one variable at a time. As an extension of the previous method, we also propose estimating the mirror shape as part of the calibration process. This method uses the image of points and lines. It exploits the fact that all points on any image circle centered at the origin have backprojection rays that intersect at a single point, effectively forming a central projection system. We also propose an algorithm for the particular case of catadioptric systems with spherical mirrors, where calibration is achieved by fitting quartic curves to the images of lines in a calibration pattern. An analytical solution is derived and then refined by a non-linear optimization procedure. Finally, considering the case of a fully calibrated non-central axial catadioptric system, we reconstruct the 3D position of a line from a single image of that line (which is possible because the system is non-central). The line is reconstructed from 3 or more image points, given the knowledge of the distance ratio of 3 points on the line (a fair assumption in, for example, structured environments with repetitive architectural features such as windows or tiles).
We use the invariance of the cross-ratio to constrain the line localization and then perform the reconstruction from a set of image points through non-linear optimization. Experiments with simulated and real images are performed to evaluate the accuracy and robustness of the methods.
FCT - PROTEC SFRH/BD/50281/200
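    The cross-ratio invariance exploited by several of these algorithms is easy to verify numerically: for four collinear points, the ratio (d13/d23)/(d14/d24) is preserved by any projective transformation, which is what lets known distance ratios on a 3D line constrain its image. A small sketch (the homography used is arbitrary, for illustration only):

```python
import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Cross-ratio of four collinear 2D points: (|13|/|23|) / (|14|/|24|)."""
    d = lambda a, b: np.linalg.norm(a - b)
    return (d(p1, p3) / d(p2, p3)) / (d(p1, p4) / d(p2, p4))

def apply_h(H, p):
    """Apply a 3x3 homography to an inhomogeneous 2D point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Four collinear points on the line y = 2x + 1
pts = [np.array([x, 2.0 * x + 1.0]) for x in (0.0, 1.0, 3.0, 7.0)]
H = np.array([[1.0, 0.2, 3.0],        # an arbitrary projective map
              [0.1, 0.9, -1.0],
              [0.01, 0.02, 1.0]])
before = cross_ratio(*pts)
after = cross_ratio(*[apply_h(H, p) for p in pts])
```

Since the mapped points remain collinear, `before` and `after` agree; the thesis uses exactly this invariant to pin down the mirror axis and the 3D line.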

    An Efficient Calibration Method for a Stereo Camera System with Heterogeneous Lenses Using an Embedded Checkerboard Pattern

    Get PDF
    We present two simple approaches to calibrate a stereo camera setup with heterogeneous lenses: a wide-angle fish-eye lens and a narrow-angle lens on the left and right sides, respectively. Instead of using a conventional black-white checkerboard pattern, we design an embedded checkerboard pattern by combining two differently colored patterns. In both approaches, we split the captured stereo images into RGB channels and extract the R and inverted G channels from the left and right camera images, respectively. In our first approach, we consider the checkerboard pattern as the world coordinate system and calculate the left and right transformation matrices corresponding to it. We use these two transformation matrices to estimate the relative pose of the right camera by multiplying the inverse of the left transformation with the right one. In the second approach, we calculate a planar homography transformation to identify common object points in left-right image pairs and process them with the well-known Zhang camera calibration method. We analyze the robustness of these two approaches by comparing reprojection errors and image rectification results. Experimental results show that the second method is more accurate than the first one.
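    The relative-pose step of the first approach is a one-line matrix composition once each camera's board transform is known; the exact order of the inverse depends on whether the transforms map board-to-camera or camera-to-board. A minimal sketch using the board-to-camera convention (all numbers are illustrative, not the paper's data):

```python
import numpy as np

def se3(axis, angle_deg, t):
    """Build a 4x4 rigid transform from an axis-angle rotation and translation."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    a = np.deg2rad(angle_deg)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)  # Rodrigues
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Board-to-camera transforms, as obtained from per-camera calibration
T_left = se3([0, 1, 0], 10, [0.0, 0.0, 0.5])
T_right = se3([0, 1, 0], -5, [0.12, 0.0, 0.5])

# right-camera coords -> board coords -> left-camera coords, so the
# right-to-left relative pose composes one transform with the other's inverse
T_rel = T_left @ np.linalg.inv(T_right)

# sanity data: the same board point seen in each camera frame
X_board = np.array([0.1, 0.2, 0.0, 1.0])
x_left, x_right = T_left @ X_board, T_right @ X_board
```

`T_rel` then maps any point from the right camera frame into the left camera frame, which is the quantity needed for rectification.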

    Self-Calibration of Multi-Camera Systems for Vehicle Surround Sensing

    Get PDF
    Multi-camera systems are being deployed in a variety of vehicles and mobile robots today. To eliminate the need for cost- and labor-intensive maintenance and calibration, continuous self-calibration is highly desirable. In this book we present such an approach for the self-calibration of multi-camera systems for vehicle surround sensing. In an extensive evaluation we assess our algorithm quantitatively using real-world data.

    Self-Calibration of Multi-Camera Systems for Vehicle Surround Sensing

    Get PDF
    Multi-camera systems are already deployed in a wide variety of vehicles and mobile robots today. Applications range from simple assistance functions, such as generating a virtual surround view, to the environment perception required for partially and fully automated driving. To derive metric quantities such as distances and angles from the camera images and to build a consistent model of the environment, the imaging behavior of the individual cameras as well as their relative poses must be known. In particular, determining the relative poses of the cameras, described by the extrinsic calibration, is laborious, since it can only be carried out on the assembled system. Moreover, non-negligible changes due to external influences are to be expected over the lifetime of the vehicle. To avoid the high time and cost overhead of regular maintenance, a self-calibration procedure is required that continuously re-estimates the extrinsic calibration parameters. Self-calibration typically exploits overlapping fields of view to estimate the extrinsic calibration from image correspondences. If the fields of view of several cameras do not overlap, however, the calibration parameters can also be derived from the relative motions observed by the individual cameras. The motion of typical road vehicles, though, does not allow all calibration parameters to be determined. To enable the full estimation of the parameters, additional constraint equations, e.g. derived from observing the ground plane, can be incorporated. In this work, a theoretical analysis shows which parameters can be uniquely determined from combinations of different constraint equations.
To fully capture a vehicle's surroundings, lenses with a very large field of view, such as fisheye lenses, are typically used. This work proposes a method for establishing image correspondences that accounts for the geometric distortions caused by fisheye lenses and strongly changing viewpoints. Building on this, we present a robust method for tracking the parameters of the ground plane. Based on the theoretical observability analysis and the proposed methods, we present a robust, recursive calibration procedure built on an extended Kalman filter. The proposed calibration procedure is characterized in particular by its small number of internal parameters and its high flexibility regarding the incorporated constraint equations, and it relies solely on the image data of the multi-camera system. In an extensive experimental evaluation with real-world data, we compare the results obtained with different constraint equations and motion models against parameters determined from a reference calibration. The best results were achieved by combining all of the proposed constraint equations. Several examples show that the accuracy achieved is sufficient for a wide range of applications.
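    Calibration from relative motions, as used when fields of view do not overlap, reduces to a hand-eye constraint: if X is the fixed camera-to-vehicle transform, each motion pair satisfies A·X = X·B, and the rotation part implies that camera-frame rotation axes are the vehicle-frame axes rotated by R_X. Given several non-parallel axes, R_X follows from an orthogonal Procrustes fit. A synthetic batch sketch of that idea (not the book's recursive filter; all motions are made up):

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix about a unit axis by an angle (Rodrigues formula)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def rot_axis(R):
    """Unit rotation axis from the skew-symmetric part of R."""
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return w / np.linalg.norm(w)

def procrustes(A_axes, B_axes):
    """Rotation R minimizing sum |a_i - R b_i|^2 (Kabsch algorithm)."""
    M = sum(np.outer(a, b) for a, b in zip(A_axes, B_axes))
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt

# Ground-truth camera-to-vehicle rotation and three vehicle motions
R_x = rodrigues([0, 0, 1], 0.5) @ rodrigues([1, 0, 0], 0.3)
B = [rodrigues(ax, 0.4) for ax in ([0, 0, 1], [1, 1, 0], [0, 1, 2])]
A = [R_x @ Rb @ R_x.T for Rb in B]   # corresponding camera-frame rotations

R_est = procrustes([rot_axis(Ra) for Ra in A], [rot_axis(Rb) for Rb in B])
```

Planar vehicle motion makes some of these axes (nearly) parallel, which is exactly why the book adds ground-plane constraints to restore full observability.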

    Automatic 3d modeling of environments (a sparse approach from images taken by a catadioptric camera)

    Get PDF
    The automatic 3D modeling of an environment from images is still an active topic in Computer Vision. Standard methods have three steps: moving a camera in the environment to take an image sequence, reconstructing the geometry of the environment, and applying a dense stereo method to obtain a surface model of the environment. In the second step, interest points are detected and matched in the images, then camera poses and a sparse cloud of 3D points corresponding to the interest points are simultaneously estimated. In the third step, all pixels of the images are used to reconstruct a surface of the environment, e.g. by estimating a dense cloud of 3D points. Here we propose to generate a surface directly from the sparse point cloud and its visibility information provided by the geometry reconstruction step. The advantages are low time and space complexities; this is useful, e.g., for obtaining compact models of large and complete environments such as a city. To do so, we propose a surface reconstruction method that sculpts a 3D Delaunay triangulation of the reconstructed points. The visibility information is used to classify the tetrahedra as free-space or matter. A surface is then extracted that best separates these tetrahedra, using a greedy method and a minority of Steiner points. The 2-manifold constraint is enforced on the surface to allow standard surface post-processing such as denoising or refinement by photo-consistency optimization.
This method is also extended to the incremental case: each time a new key-frame is selected in the input video, new 3D points and a new camera pose are estimated, then the reconstructed surface is updated. We study the time complexity in both cases (incremental or not). In experiments, a low-cost catadioptric camera is used to generate textured 3D models of complete environments including buildings, ground and vegetation. A drawback of our methods is that thin scene components, e.g. tree branches and electric posts, cannot be correctly reconstructed.
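    The free-space/matter classification at the core of the sculpting step can be sketched in a few lines: every tetrahedron crossed by a visibility segment (from a camera center to a point it observed) is labeled free-space, and the rest remain candidate matter. The toy version below samples points along the segment and uses a barycentric inside-test; real implementations walk the triangulation combinatorially instead (names and data are illustrative):

```python
import numpy as np

def in_tetra(p, verts, tol=1e-9):
    """Barycentric test: is point p inside the tetrahedron given by 4 vertices?"""
    v0, v1, v2, v3 = (np.asarray(v, float) for v in verts)
    M = np.column_stack([v1 - v0, v2 - v0, v3 - v0])
    b = np.linalg.solve(M, np.asarray(p, float) - v0)
    return bool(np.all(b >= -tol) and b.sum() <= 1 + tol)

def carve_free_space(tetra, cam, point, samples=256):
    """Label as free-space every tetrahedron crossed by the visibility
    segment from the camera center to the reconstructed point."""
    cam, point = np.asarray(cam, float), np.asarray(point, float)
    free = set()
    for t in np.linspace(0.0, 1.0, samples, endpoint=False):
        p = cam + t * (point - cam)          # sample along the segment
        for i, verts in enumerate(tetra):
            if i not in free and in_tetra(p, verts):
                free.add(i)
    return free

# Two tetrahedra stacked along the viewing ray, plus one below the z=0 plane
tetra = [
    [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)],     # crossed near the point
    [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)],     # crossed mid-ray
    [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, -1)],    # off the ray: stays matter
]
free = carve_free_space(tetra, cam=(2, 2, 2), point=(0, 0, 0))
```

The extracted surface then separates the carved free-space cells from the matter cells, which is where the greedy 2-manifold extraction of the thesis takes over.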