
    Visual Odometry and Sparse Scene Reconstruction for UAVs with a Multi-Fisheye Camera System

    Autonomously operating UAVs require fast localization for navigation, for actively exploring unknown areas, and for creating maps. For pose estimation, many UAV systems use a combination of GPS receivers and inertial measurement units (IMUs). However, GPS signal coverage may be lost occasionally, especially in the close vicinity of objects, and precise IMUs are too heavy to be carried by lightweight UAVs. This, together with the high cost of high-quality IMUs, motivates the use of inexpensive vision-based sensors for localization using visual odometry or visual SLAM (simultaneous localization and mapping) techniques. The first contribution of this thesis is a more general approach to bundle adjustment, based on an extended version of the projective coplanarity equation, which enables the use of omnidirectional multi-camera systems that may consist of fisheye cameras capturing a large field of view in a single shot. We use ray directions as observations instead of image points, so our approach does not rely on a specific projection model, assuming only a central projection. In addition, our approach allows the integration and estimation of points at infinity, which classical bundle adjustment cannot handle. We show that the integration of far or infinitely far points stabilizes the estimation of the rotation angles of the camera poses. In the second contribution, we employ this bundle adjustment approach in a highly integrated system for incremental pose estimation and mapping on lightweight UAVs. Based on the image sequences of a multi-camera system, our system uses tracked feature points to incrementally build a sparse map and incrementally refines this map using the iSAM2 algorithm. Our system can optionally integrate GPS information at the level of carrier-phase observations, even in underconstrained situations, e.g. when only two satellites are visible, for georeferenced pose estimation. This way, we are able to use all available information in underconstrained GPS situations to keep the mapped 3D model accurate and georeferenced. In the third contribution, we present an approach for reusing existing methods for dense stereo matching with fisheye cameras, which has the advantage that highly optimized existing methods can be applied as a black box, without modification, even with cameras whose field of view exceeds 180 degrees. We provide a detailed accuracy analysis of the obtained dense stereo results. The analysis shows the growing uncertainty of observed image points of fisheye cameras due to increasing blur towards the image border. The core of the contribution is a rigorous variance component estimation, which allows the variance of the observed disparities at an image point to be estimated as a function of the distance of that point to the principal point. We show that this improved stochastic model provides a more realistic prediction of the uncertainty of the triangulated 3D points.
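The ray-direction observation model and the treatment of points at infinity can be illustrated with a small sketch (my own illustration under simplified conventions, not the thesis' implementation): a scene point is stored homogeneously as (x, y, z, w), so w = 0 encodes a direction at infinity, and the residual compares ray directions rather than image coordinates, making the equation independent of any particular central projection model.

```python
import numpy as np

def ray_residual(R, t, X_h, observed_ray):
    """Angular residual between an observed ray direction and the
    direction predicted for a homogeneous scene point X_h = (x, y, z, w).

    Camera model assumed here: X_c = R @ X_w + t.  For a finite point
    X_w = x / w, the camera-frame direction is proportional to
    R @ x + w * t; for w = 0 (point at infinity) the translation drops
    out entirely, so only the rotation is constrained -- which is why
    such points stabilize the rotation estimate."""
    x, w = X_h[:3], X_h[3]
    d = R @ x + w * t                      # predicted ray direction
    d = d / np.linalg.norm(d)
    v = observed_ray / np.linalg.norm(observed_ray)
    return np.arccos(np.clip(d @ v, -1.0, 1.0))   # angle in radians
```

For instance, a point at infinity along the x-axis yields a zero residual for any camera translation, since only the rotation enters the prediction.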

    Enhancing 3D Visual Odometry with Single-Camera Stereo Omnidirectional Systems

    We explore low-cost solutions for efficiently improving the 3D pose estimation of a single camera moving in an unfamiliar environment. The visual odometry (VO) task -- as egomotion estimation with computer vision is called -- is of particular interest to mobile robots as well as to humans with visual impairments. The payload capacity of small robots like micro-aerial vehicles (drones) requires portable perception equipment, which is constrained by size, weight, energy consumption, and processing power. Using a single camera as the passive sensor for the VO task satisfies these requirements, and it motivates the solutions proposed in this thesis. To achieve the portability goal with a single off-the-shelf camera, we take two approaches. The first, and the most extensively studied here, revolves around an unorthodox camera-mirror configuration (catadioptrics) achieving a stereo omnidirectional system (SOS). The second relies on expanding the visual features from the scene into higher dimensionalities to track the pose of a conventional camera in a photogrammetric fashion. The first goal has many interdependent challenges, which we address as part of this thesis: SOS design, projection model, adequate calibration procedure, and application to VO. We show several practical advantages of the single-camera SOS due to its complete 360-degree stereo views, which other conventional 3D sensors lack because of their limited field of view. Since our omnidirectional stereo (omnistereo) views are captured by a single camera, a truly instantaneous pair of panoramic images is available for 3D perception tasks. Finally, we address the VO problem as a direct multichannel tracking approach, which increases the pose estimation accuracy of the baseline method (i.e., using only grayscale or color information), with photometric error minimization at the heart of the “direct” tracking algorithm. Currently, this solution has been tested on standard monocular cameras, but it could also be applied to an SOS. We believe the challenges that we attempted to solve had not previously been considered with the level of detail needed to successfully perform VO with a single camera as the ultimate goal, in both real-life and simulated scenes.
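The direct multichannel tracking idea, minimizing a photometric error summed over all image channels, can be sketched in a few lines. This is a toy illustration, not the thesis' implementation: the SE(3) pose search over a depth-based warp is replaced by a hypothetical exhaustive integer-shift search, so that only the multichannel error model remains visible.

```python
import numpy as np

def multichannel_ssd(ref, cur, dy, dx):
    """Photometric error for a candidate shift (dy, dx), summed over
    every channel of the (H, W, C) images; non-negative shifts only,
    to keep the slicing simple."""
    H, W, _ = ref.shape
    a = ref[:H - dy, :W - dx]
    b = cur[dy:, dx:]
    return float(np.sum((a - b) ** 2))

def track_shift(ref, cur, max_shift=3):
    """Stand-in 'tracker': exhaustively minimize the multichannel error
    over integer shifts, where a direct method would instead minimize
    the same error over an SE(3) pose."""
    best = min(((multichannel_ssd(ref, cur, dy, dx), (dy, dx))
                for dy in range(max_shift + 1)
                for dx in range(max_shift + 1)),
               key=lambda p: p[0])
    return best[1]
```

Adding channels (e.g. gradient or color channels on top of grayscale) simply adds terms to the same sum, which is how the multichannel formulation tightens the error landscape without changing the algorithm.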

    Konforme geometrische Algebra und deren Anwendungen auf stochastische Optimierungsprobleme im Bereich 3D-Vision

    In the present work, the modeling capabilities of conformal geometric algebra (CGA) are harnessed to approach typical problems from the research field of 3D vision. This increasingly popular methodology is then extended in a new fashion by integrating a least-squares technique into the framework of CGA. Specifically, choosing the linear Gauss-Helmert model as the basis, the most general variant of least-squares adjustment can be brought into operation. The result is a new, versatile parameter estimation technique, termed the GH method, that reconciles two different mathematical areas, algebra and stochastics, under the umbrella of geometry. The main concern of the thesis is to highlight the advantages inherent in this combination. Monocular pose estimation, from the field of 3D vision, is the application focus of this thesis: given a picture of a scene, the pose is the position and orientation of the image-capturing vision system with respect to an external coordinate system. The developed parameter estimation technique is applied to different variants of this problem. Parameters are encoded by the algebra elements, called multivectors; these can be geometric objects such as a circle, geometric operators such as a rotation, or likewise the pose. In the conducted pose experiments, observations are image pixels with associated uncertainties. The high accuracy achieved throughout all experiments confirms the competitiveness of the proposed estimation technique. Central to this work is also the consideration of omnidirectional vision using a paracatadioptric imaging sensor. It is demonstrated that CGA provides an ideal framework to model the related image formation. Two variants of the perspective pose estimation problem are adapted to the omnidirectional case. A new formalization of the epipolar geometry of two images in terms of CGA is developed, from which new insights into the structures behind the essential and the fundamental matrix are drawn.
Renowned standard approaches are shown to implicitly make use of CGA. Finally, an application of the GH method to estimating epipoles is presented, and experimental results substantiate the quality of this approach. In addition to the detailed discussion of parameter estimation, this text also gives a comprehensive introduction to geometric algebra, its tensor representation, the conformal space, and the respective conformal geometric algebra. A valuable contribution is especially the analytic investigation of the geometric capabilities of CGA.
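The Gauss-Helmert model underlying the GH method can be sketched on a classical example: fitting a circle to 2D points whose coordinates are all treated as uncertain observations. The sketch below uses unit observation weights and plain numpy arrays rather than CGA multivectors; it is a minimal illustration of the adjustment scheme, not the thesis' estimator.

```python
import numpy as np

def gauss_helmert_circle(pts, a, b, r, iters=10):
    """Gauss-Helmert adjustment for a circle (a, b, r) through 2D points,
    with constraints g_i(x, l) = (xi - a)^2 + (yi - b)^2 - r^2 = 0 and
    both coordinates of every point as observations (unit weights)."""
    l0 = pts.astype(float).ravel()   # original observations (x0,y0,x1,y1,...)
    l = l0.copy()                    # adjusted observations
    n = len(pts)
    idx = np.arange(n)
    for _ in range(iters):
        xs, ys = l[0::2], l[1::2]
        g = (xs - a) ** 2 + (ys - b) ** 2 - r ** 2
        # Jacobians w.r.t. parameters (A) and observations (B)
        A = np.column_stack([-2 * (xs - a), -2 * (ys - b), np.full(n, -2.0 * r)])
        B = np.zeros((n, 2 * n))
        B[idx, 2 * idx] = 2 * (xs - a)
        B[idx, 2 * idx + 1] = 2 * (ys - b)
        w = g + B @ (l0 - l)                  # misclosure vector
        W = np.linalg.inv(B @ B.T)            # constraint weights (P = I)
        dx = -np.linalg.solve(A.T @ W @ A, A.T @ W @ w)
        v = -B.T @ W @ (A @ dx + w)           # observation residuals
        a, b, r = a + dx[0], b + dx[1], r + dx[2]
        l = l0 + v
    return a, b, r
```

The same iteration applies to any implicit constraint g(x, l) = 0, which is what makes the Gauss-Helmert model the most general least-squares variant mentioned in the abstract.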

    Relative Pose Estimation Using Non-overlapping Multicamera Clusters

    This thesis considers the Simultaneous Localization and Mapping (SLAM) problem using a set of perspective cameras arranged such that there is no overlap in their fields of view. With the known and fixed extrinsic calibration of each camera within the cluster, a novel real-time pose estimation system is presented that is able to accurately track the motion of a camera cluster relative to an unknown target object or environment, and to concurrently generate a model of the structure, using only image-space measurements. A new parameterization of point feature position using a spherical coordinate update is presented; it isolates the system parameters that depend on global scale, allowing the shape parameters of the system to converge even while the scale parameters remain uncertain. Furthermore, a flexible initialization scheme is proposed which allows the optimization to converge accurately using only the measurements from the cameras at the first time step. An analysis is presented identifying the configurations of cluster motion and target structure geometry for which the optimization solution becomes degenerate and the global scale is ambiguous. The results not only confirm the previously known critical motions for a two-camera cluster, but also provide a complete description of the degeneracies related to the point feature constellations. The proposed algorithms are implemented and verified in experiments with a camera cluster constructed from multiple perspective cameras mounted on a quadrotor vehicle and augmented with tracking markers to collect high-precision ground-truth motion measurements from an optical indoor positioning system. The accuracy and performance of the proposed pose estimation system are confirmed for various motion profiles in both indoor and challenging outdoor environments.
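The scale-isolating property of a spherical-coordinate feature parameterization can be shown in a few lines. The anchor frame and angle conventions below are my own simplifications, not the thesis' exact formulation; the point is that the two angles fix the bearing (shape information observable without scale), while a single radial parameter carries all scale dependence.

```python
import numpy as np

def spherical_to_point(anchor, azimuth, elevation, rho):
    """Feature position from spherical parameters: two angles give a
    unit bearing vector, rho scales it from the anchor.  Only rho is
    scale-dependent, so the angular (shape) part of the estimate can
    converge while rho remains uncertain."""
    d = np.array([np.cos(elevation) * np.cos(azimuth),
                  np.cos(elevation) * np.sin(azimuth),
                  np.sin(elevation)])          # unit bearing
    return anchor + rho * d
```

Doubling rho (a pure scale change) moves the point along the same bearing without touching the angles, which is exactly the decoupling that lets shape converge under scale ambiguity.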

    Noncentral catadioptric systems with quadric mirrors : geometry and calibration

    PhD thesis in Electrical Engineering (Informatics) presented to the Faculty of Sciences and Technology of the University of Coimbra. In this PhD thesis we study and analyze the geometry of noncentral catadioptric systems composed of a pinhole or orthographic camera and a mirror shaped as a non-degenerate quadric, including ellipsoids (which may be spheres), hyperboloids, and paraboloids. The geometry of these vision systems is parameterized by analyzing image formation and comprises the intrinsic parameters of the camera, the parameters of the mirror surface, and the poses of the camera with respect to the mirror and to the world reference frames. Image formation is studied in a purely geometric way, focusing mainly on the projection model and on the calibration of the vision system. The main contributions include the proof that, in a noncentral catadioptric system with a perspective camera and a non-degenerate quadric, the reflection point on the mirror surface (projecting any given 3D world point to the image) lies on a quartic curve given by the intersection of two quadric surfaces. The corresponding projection model is also derived and is expressed as an implicit nonlinear equation in a single unknown. Regarding the calibration of these vision systems, we developed a calibration method assuming knowledge of the intrinsic parameters of the perspective camera and of a set of 3D points expressed in a local reference frame (world structure). Information about the apparent contour of the mirror is also used to improve the accuracy of the estimation. Another calibration method is proposed, assuming a previous calibration of the system in the sense of a general camera model (correspondences between image points and incident rays in space). Additionally, the camera-mirror and camera-world poses are estimated using algebraic metrics and linear equations (derived for a calibration method that is also presented); the camera is assumed to be pre-calibrated. Experiments with extensive simulations and with real images are performed to test the robustness and accuracy of the presented methods. The main conclusions are that these vision systems are highly nonlinear and that their calibration is possible with good accuracy, but very high accuracy is difficult to achieve, especially if the vision system is aimed at accuracy-driven applications. Nevertheless, the structure of the world can be complemented with additional information, such as the apparent contour of the quadric, to improve the quality of the calibration results. Indeed, using the apparent contour of the mirror can by itself dramatically improve the accuracy of the estimation.
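The noncentral reflection geometry can be probed numerically: by Fermat's principle, the physical reflection point is a stationary point of the camera-mirror-target path length, so a brute-force search over the mirror surface locates it. The sketch below uses a sphere (the simplest of the quadrics discussed) and grid search; it is an illustrative stand-in, not the thesis' closed-form quartic-curve construction.

```python
import numpy as np

def reflection_point_sphere(cam, target, center, radius, n_grid=400):
    """Locate the mirror reflection point on a sphere by minimizing the
    camera-to-mirror-to-target path length (Fermat's principle) over a
    dense angular grid of surface points."""
    u = np.linspace(0, np.pi, n_grid)          # polar angle
    v = np.linspace(0, 2 * np.pi, n_grid)      # azimuth
    uu, vv = np.meshgrid(u, v)
    pts = center + radius * np.stack([np.sin(uu) * np.cos(vv),
                                      np.sin(uu) * np.sin(vv),
                                      np.cos(uu)], axis=-1)
    path = (np.linalg.norm(pts - cam, axis=-1)
            + np.linalg.norm(pts - target, axis=-1))
    i = np.unravel_index(np.argmin(path), path.shape)
    return pts[i]
```

In a symmetric configuration, with the camera at (2, 0, 1) and the target at (2, 0, -1) around a unit sphere at the origin, the law of reflection places the reflection point at (1, 0, 0), which the grid search recovers up to its angular resolution.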

    Pattern Recognition

    Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion, applications, and others. The signals processed are commonly one-, two-, or three-dimensional; the processing is done in real time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small amount of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms, and encompasses several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented works. The authors of these 25 works present and advocate recent achievements of their research related to the field of pattern recognition.

    Geodesic Active Fields: A Geometric Framework for Image Registration

    Image registration is the concept of mapping homologous points in a pair of images. In other words, one is looking for an underlying deformation field that matches one image to a target image. The spectrum of applications of image registration is extremely large: It ranges from bio-medical imaging and computer vision, to remote sensing or geographic information systems, and even involves consumer electronics. Mathematically, image registration is an inverse problem that is ill-posed, which means that the exact solution might not exist or not be unique. In order to render the problem tractable, it is usual to write the problem as an energy minimization, and to introduce additional regularity constraints on the unknown data. In the case of image registration, one often minimizes an image mismatch energy, and adds an additive penalty on the deformation field regularity as smoothness prior. Here, we focus on the registration of the human cerebral cortex. Precise cortical registration is required, for example, in statistical group studies in functional MR imaging, or in the analysis of brain connectivity. In particular, we work with spherical inflations of the extracted hemispherical surface and associated features, such as cortical mean curvature. Spatial mapping between cortical surfaces can then be achieved by registering the respective spherical feature maps. Despite the simplified spherical geometry, inter-subject registration remains a challenging task, mainly due to the complexity and inter-subject variability of the involved brain structures. In this thesis, we therefore present a registration scheme, which takes the peculiarities of the spherical feature maps into particular consideration. First, we realize that we need an appropriate hierarchical representation, so as to coarsely align based on the important structures with greater inter-subject stability, before taking smaller and more variable details into account. 
Based on arguments from brain morphogenesis, we propose an anisotropic scale-space of mean-curvature maps, built around the Beltrami framework. Second, inspired by concepts from vision-related elements of psychophysical Gestalt theory, we hypothesize that anisotropic Beltrami regularization suits the requirements of image registration regularization better than traditional Gaussian filtering. Different objects in an image should be allowed to move separately, and regularization should be limited to within the individual Gestalts. We render the regularization feature-preserving by limiting diffusion across edges in the deformation field, in clear contrast to indifferent linear smoothing. We do so by embedding the deformation field as a manifold in a higher-dimensional space and minimizing the associated Beltrami energy, which represents the hyperarea of this embedded manifold, as the measure of deformation field regularity. Further, instead of simply adding this regularity penalty to the image mismatch in place of the standard penalty, we propose to incorporate the local image mismatch as a weighting function into the Beltrami energy. The image registration problem is thus reformulated as a weighted minimal surface problem. This approach has several appealing aspects, including (1) invariance to re-parametrization and the ability to work with images defined on non-flat, Riemannian domains (e.g., curved surfaces, scale-spaces), and (2) intrinsic modulation of the local regularization strength as a function of the local image mismatch and/or noise level. On a side note, we show that the proposed scheme can easily keep up with recent trends in image registration towards diffeomorphic and inverse-consistent deformation models. The proposed registration scheme, called Geodesic Active Fields (GAF), is non-linear and non-convex. We therefore propose an efficient optimization scheme based on splitting.
Data mismatch and deformation field regularity are optimized over two different deformation fields, which are constrained to be equal. The constraint is handled with an augmented Lagrangian scheme, and the resulting optimization problem is solved efficiently by alternating minimization of simpler sub-problems. In particular, we show that the proposed method can easily compete with state-of-the-art registration methods, such as Demons. Finally, we provide an implementation of the fast GAF method on the sphere, so as to register the triangulated cortical feature maps. We build an automatic parcellation algorithm for the human cerebral cortex, which combines the delineations available on a set of atlas brains in a Bayesian approach, so as to automatically delineate the corresponding regions on a subject brain given its feature map. In a leave-one-out cross-validation study on 39 brain surfaces with 35 manually delineated gyral regions, we show that pairwise subject-atlas registration with the proposed spherical registration scheme significantly improves the individual alignment of cortical labels between subject and atlas brains, and, consequently, that the estimated automatic parcellations after label fusion are of better quality.
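The weighted Beltrami energy at the heart of the GAF formulation, the hyperarea of the deformation field embedded as a manifold, modulated by the local image mismatch, can be discretized in a few lines. The finite-difference sketch below assumes a flat 2D domain with unit grid spacing and my own notation; it illustrates the energy being minimized, not the optimization scheme itself.

```python
import numpy as np

def weighted_beltrami_energy(u, v, f=None, beta=1.0):
    """Discrete weighted Beltrami energy of a 2D deformation field (u, v):
    the graph of (u, v) is a 2-manifold embedded in R^4 with induced
    metric g = I2 + beta^2 * J^T J (J = Jacobian of the deformation),
    and the energy is sum(f * sqrt(det g)) over the grid.  f is the
    local weighting (e.g. image mismatch); None means uniform weight."""
    uy, ux = np.gradient(u)          # derivatives along rows, columns
    vy, vx = np.gradient(v)
    g11 = 1.0 + beta ** 2 * (ux ** 2 + vx ** 2)
    g22 = 1.0 + beta ** 2 * (uy ** 2 + vy ** 2)
    g12 = beta ** 2 * (ux * uy + vx * vy)
    area = np.sqrt(g11 * g22 - g12 ** 2)   # local hyperarea element
    if f is not None:
        area = f * area
    return float(area.sum())
```

For a zero deformation the energy reduces to the grid area, and any gradient in the field strictly increases it, which is what makes it a regularity measure; the mismatch weighting f then relaxes regularization exactly where the images already disagree.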

    Indoor Scene Understanding using Non-Conventional Cameras

    Humans understand the environments around them effortlessly and under a wide variety of conditions, mainly thanks to our visual perception. Developing computer vision algorithms that achieve a similar visual understanding is highly desirable, so that machines can perform complex tasks and interact with the real world, with the main goal of helping and entertaining humans. In this thesis, we are especially interested in the problems that arise in the pursuit of visual understanding of indoor spaces, where humans spend most of their time, as well as in the search for the most suitable sensor to achieve that understanding. Regarding sensors, we propose the use of non-conventional cameras, specifically panoramic images and 3D sensors. Regarding indoor scene understanding, we focus on three key aspects: estimation of the 3D layout of the scene (the arrangement of walls, ceiling, and floor); object detection, localization, and segmentation; and category-level object modeling, for which novel and efficient solutions are provided. The thesis addresses the following underlying challenges. First, we investigate methods for 3D room reconstruction from a single 360° image, used to achieve the highest level of scene modeling and understanding. To this end, we combine traditional ideas, such as the Manhattan-world assumption, under which the scene can be described by three mutually orthogonal principal directions, with deep learning techniques that allow us to estimate pixel-wise probabilities in the image for detecting the structural elements of the room. The proposed models allow us to correctly estimate even parts of the room that are not visible in the image, achieving reconstructions faithful to reality and thus generalizing to more complex scene models. At the same time, new methods for working with panoramic images are proposed, most notably a special convolution that deforms the kernel to compensate for the distortions of the equirectangular projection characteristic of such images. Second, considering the importance of context for scene understanding, we study the problem of object localization and segmentation, adapting the problem to exploit the full potential of 360° images. We also exploit scene-object interactions to lift 2D object detections in the image into the 3D model of the room. The last line of work of this thesis focuses on analyzing object shape directly in 3D, working with point clouds. To this end, we propose an explicit model of object deformation, together with a notion of object symmetry, to learn, in an unsupervised manner, keypoints of the objects' geometry that are representative of them and that are in correspondence, both geometric and semantic, across all objects of a given category. Our models advance the state of the art in the aforementioned tasks, each of them evaluated on several datasets and the corresponding benchmarks.
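The distortion-compensating convolution can be illustrated in its simplest possible form: stretching the horizontal sampling offsets of a kernel row by 1/cos(latitude), so that the kernel keeps a roughly constant angular footprint on the sphere even near the poles, where the equirectangular projection stretches pixels horizontally. The function below is a crude stand-in for the deformed-kernel convolution described in the abstract, not its actual implementation.

```python
import numpy as np

def equirect_kernel_offsets(row, H, ksize=3):
    """Horizontal sampling offsets (in pixels) of one kernel row for an
    equirectangular image with H rows.  The nominal integer offsets are
    dilated by 1 / cos(latitude), since a fixed angular step on the
    sphere spans ever more pixels as rows approach the poles."""
    lat = (0.5 - (row + 0.5) / H) * np.pi      # latitude of this row
    base = np.arange(ksize) - ksize // 2       # nominal offsets, e.g. -1, 0, 1
    scale = 1.0 / max(np.cos(lat), 1e-6)       # clamp at the exact pole
    return base * scale
```

At the equator the offsets are essentially the standard -1, 0, 1, while near a pole they dilate by two orders of magnitude; a full deformable convolution would additionally bend the kernel via a tangent-plane projection rather than a pure horizontal stretch.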

    Spatial Displays and Spatial Instruments

    The conference proceedings topics are divided into two main areas: (1) issues of spatial and picture perception raised by graphical electronic displays of spatial information; and (2) design questions raised by the practical experience of designers defining new spatial instruments for use in new aircraft and spacecraft. Each topic is considered from both a theoretical and an applied perspective. Emphasis is placed on discussion of phenomena and determination of design principles.