8 research outputs found

    Parallel Lines for Calibration of Non-Central Conical Catadioptric Cameras

    In this paper we propose a new calibration method for non-central catadioptric cameras that use a conical mirror. The method uses parallel lines, extracted from a single omnidirectional image, instead of the typical checkerboard, to obtain the calibration parameters of the system.

    Exploiting line metric reconstruction from non-central circular panoramas

    In certain non-central imaging systems, straight lines project onto a non-planar surface that encapsulates the 4 degrees of freedom of the 3D line. Consequently, the geometry of the 3D line can be recovered from a minimum of four image points. However, classical non-central catadioptric systems do not provide enough effective baseline for a practical implementation of the method. In this paper we propose a multi-camera configuration resembling the circular panoramic model, resulting in a particular non-central projection that allows the stitching of a non-central panorama. From a single panorama we obtain well-conditioned 3D reconstructions of lines, which are especially interesting in texture-less scenarios. No prior information about the direction or arrangement of the lines in the scene is assumed. The proposed method is evaluated on both synthetic and real images.
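
    A minimal numpy sketch of this minimal case, recovering a 3D line from four back-projected rays in Plücker coordinates; the function names and the null-space formulation are ours, not the paper's, and degenerate ray configurations are ignored.

        import numpy as np

        def plucker_ray(origin, direction):
            """Plücker coordinates (d, m) of the ray through origin along direction."""
            d = np.asarray(direction, float)
            d = d / np.linalg.norm(d)
            return np.hstack([d, np.cross(origin, d)])

        def line_from_four_rays(rays):
            """Recover the 3D line meeting four rays, each a Plücker 6-vector.

            Incidence of the unknown line X = (d, m) with ray (d_i, m_i) is linear,
            d . m_i + m . d_i = 0, so four rays give a 4x6 system with a 2D null
            space; the quadratic Plücker constraint d . m = 0 then fixes X up to
            scale (four generic lines admit up to two common transversals)."""
            rays = np.asarray(rays, float)
            A = np.hstack([rays[:, 3:], rays[:, :3]])   # rows [m_i | d_i]
            _, _, Vt = np.linalg.svd(A)
            X1, X2 = Vt[-2], Vt[-1]                     # null-space basis
            # the Plücker constraint on X = a*X1 + X2 is a quadratic in a
            f1 = X1[:3] @ X1[3:]
            f12 = X1[:3] @ X2[3:] + X2[:3] @ X1[3:]
            f2 = X2[:3] @ X2[3:]
            roots = np.roots([f1, f12, f2])
            a = roots[np.argmin(np.abs(roots.imag))].real  # most-real root
            return a * X1 + X2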

    Fitting line projections in non-central catadioptric cameras with revolution symmetry

    Line-images in non-central cameras contain much richer information about the original 3D line than line projections in central cameras. In most non-central catadioptric cameras, the projection surface of a 3D line is a ruled surface that encapsulates the complete information of the 3D line. The resulting line-image is a curve containing the 4 degrees of freedom of the 3D line. This is a qualitative advantage over the central case, although extracting this curve is quite difficult. In this paper we focus on the analytical description of line-images in non-central catadioptric systems with revolution symmetry. As a direct application, we present a method for automatic line-image extraction in calibrated conical and spherical catadioptric cameras. To design this method, we analytically solve the metric distance from a point to a line-image for non-central catadioptric systems. We also propose a measure we call the effective baseline, which quantifies the quality of the reconstruction of a 3D line from the minimum number of rays; it is used to evaluate the random attempts of a robust scheme, reducing the number of trials in the process. The proposal is tested and evaluated in simulations and with both synthetic and real images.
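
    The paper derives the point-to-line-image distance analytically in the image; as a simplified stand-in, the sketch below scores candidates by ray-to-line distance in 3D instead, reuses line_from_four_rays from the sketch above, and replaces the effective-baseline measure with a crude spread-of-origins proxy. All names, the proxy, and the thresholds are our assumptions, not the paper's formulation.

        import numpy as np

        def ray_line_distance(ray, line):
            """Distance between two 3D lines in Plücker form; ray has unit direction."""
            n2 = np.linalg.norm(line[:3])
            d1, m1 = ray[:3], ray[3:]
            d2, m2 = line[:3] / n2, line[3:] / n2
            cross = np.cross(d1, d2)
            n = np.linalg.norm(cross)
            if n < 1e-9:                        # near-parallel: point-to-line distance
                q = np.cross(d2, m2)            # point of `line` closest to the origin
                return np.linalg.norm(np.cross(q, d1) - m1)
            return abs(d1 @ m2 + d2 @ m1) / n

        def baseline_proxy(sample):
            """Spread of the rays' closest points to the origin (crude conditioning test)."""
            pts = np.array([np.cross(r[:3], r[3:]) for r in sample])
            return pts.std(axis=0).sum()

        def robust_line(rays, iters=500, tol=0.01, min_baseline=0.05, seed=0):
            """RANSAC over minimal 4-ray samples from an (N, 6) array of rays."""
            rng = np.random.default_rng(seed)
            best, best_in = None, []
            for _ in range(iters):
                sample = rays[rng.choice(len(rays), 4, replace=False)]
                if baseline_proxy(sample) < min_baseline:
                    continue                    # prune ill-conditioned samples early
                L = line_from_four_rays(sample)
                inliers = [r for r in rays if ray_line_distance(r, L) < tol]
                if len(inliers) > len(best_in):
                    best, best_in = L, inliers
            return best, best_in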

    Atlanta scaled layouts from non-central panoramas

    In this work we present a novel approach for 3D layout recovery of indoor environments using a non-central acquisition system. From a single non-central panorama, full and scaled 3D lines can be independently recovered by geometric reasoning, without additional data or scale assumptions. However, their sensitivity to noise and their complex geometric modeling have left these panoramas, and the algorithms they require, little investigated. Our pipeline extracts the boundaries of the structural lines of an indoor environment with a neural network and exploits the properties of non-central projection systems in a new geometric processing stage to recover scaled 3D layouts. Our experiments show that we improve on state-of-the-art methods for layout recovery and line extraction in non-central projection systems. We solve the problem in both Manhattan and Atlanta environments, handling occlusions and retrieving the metric scale of the room without extra measurements. To the best of the authors' knowledge, this is the first work to apply deep learning to non-central panoramas and to recover scaled layouts from single panoramas.
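
    A common acquisition model for such panoramas places the optical center on a horizontal circle and maps rows to elevation as in an equirectangular image; the numpy sketch below back-projects a pixel to its ray under that assumed model (the radius R and the radial viewing direction are illustrative assumptions, not the paper's exact model). Because the ray origin changes with the column, structural lines seen across several columns can be triangulated with metric scale, which is what the geometric stage exploits.

        import numpy as np

        def panorama_ray(u, v, W, H, R):
            """Back-project pixel (u, v) of a W x H non-central circular panorama.
            The optical center revolves on a horizontal circle of radius R and
            looks radially outward; rows follow an equirectangular elevation map.
            Returns (origin, direction)."""
            phi = 2.0 * np.pi * u / W           # azimuth of the revolving camera
            theta = np.pi * (0.5 - v / H)       # elevation from the image row
            origin = np.array([R * np.cos(phi), R * np.sin(phi), 0.0])
            direction = np.array([np.cos(theta) * np.cos(phi),
                                  np.cos(theta) * np.sin(phi),
                                  np.sin(theta)])
            return origin, direction

        # usage: the ray of the pixel on the horizon line of column 256
        origin, direction = panorama_ray(256, 256, W=1024, H=512, R=0.5)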

    Learning the surroundings: 3D scene understanding from omnidirectional images

    Neural networks have become widespread all around the world and are used in many different applications. These methods are able to recognize music and audio, generate full texts from simple ideas, and extract detailed and relevant information from images and videos. The possibilities of neural networks and deep learning methods are countless, making them the main tool for research and for new applications in our daily life. At the same time, omnidirectional and 360 images are becoming widespread in industry and in consumer society, causing omnidirectional computer vision to gain attention. From 360 images we capture all the information surrounding the camera in a single shot.

    The combination of deep learning methods and omnidirectional computer vision has attracted many researchers to this new field. A single omnidirectional image provides enough information about the environment for a neural network to understand its surroundings and interact with them. For applications such as navigation and autonomous driving, omnidirectional cameras provide information all around the robot, person, or vehicle, whereas conventional perspective cameras lack this context information due to their narrow field of view. Even if some applications can include several conventional cameras to increase the system's field of view, in tasks where weight matters (e.g. guidance of visually impaired people or navigation of autonomous drones), the fewer cameras we need to include, the better.

    In this thesis we focus on the joint use of omnidirectional cameras, deep learning, geometry, and photometric methods. We evaluate different approaches to handle omnidirectional images, adapting previous methods to the distortion of omnidirectional projection models and proposing new solutions to tackle the challenges of this kind of image. For indoor scene understanding, we propose a novel neural network that jointly obtains semantic segmentation and depth maps from single equirectangular panoramas. With a new convolutional approach, our network leverages the context information provided by the panoramic image and exploits the combined information of semantics and depth. On the same topic, we combine deep learning and geometric solvers to recover the scaled structural layout of indoor environments from single non-central panoramas. This combination provides a fast implementation, thanks to the learning approach, and accurate results, thanks to the geometric solvers. Additionally, we propose several approaches for adapting neural networks to the distortion of omnidirectional projection models, for outdoor navigation and for domain adaptation of previous solutions. All in all, this thesis seeks novel and innovative solutions that take advantage of omnidirectional cameras while overcoming the challenges they pose.
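
    As a generic illustration (not the thesis's architecture), this PyTorch sketch shows one way to obtain semantic segmentation and depth jointly: a shared encoder with two decoder heads, with circular padding as a simple nod to the horizontal wrap-around of equirectangular panoramas. Layer sizes and the class count are arbitrary.

        import torch
        import torch.nn as nn

        class JointPanoNet(nn.Module):
            """Shared encoder, two heads: per-pixel class logits and depth."""
            def __init__(self, n_classes=14):
                super().__init__()
                # circular padding wraps both axes; a full implementation would
                # wrap only horizontally (top and bottom rows are not adjacent)
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1, padding_mode='circular'),
                    nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1, padding_mode='circular'),
                    nn.ReLU(),
                )
                self.sem_head = nn.Sequential(
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, n_classes, 4, stride=2, padding=1),
                )
                self.depth_head = nn.Sequential(
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
                )

            def forward(self, x):
                f = self.encoder(x)
                return self.sem_head(f), self.depth_head(f)

        # usage: logits and depth for a 512 x 1024 equirectangular panorama
        net = JointPanoNet()
        sem, depth = net(torch.randn(1, 3, 512, 1024))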

    Applications de la vision omnidirectionnelle à la perception de scènes pour des systèmes mobiles

    This report presents a synthesis of the work I have carried out at ESIGELEC within its research institute, IRSEEM. My research activities initially concerned the design and evaluation of devices for measuring the gait dynamics of people with hip pathologies, as part of my PhD thesis at the University of Rouen in collaboration with the Rouen University Hospital. In 2003, I joined the research teams being formed with the establishment of IRSEEM (Institut de Recherche en Systèmes Electroniques Embarqués), created in 2001. In this laboratory, I structured and developed a research activity in computer vision applied to intelligent vehicles and autonomous mobile robotics. I first focused my work on the study of omnidirectional vision systems, such as central catadioptric sensors, and their use for on-board and off-board mobile applications: modeling and calibration, three-dimensional scene reconstruction by stereovision, and sensor motion. I then turned to the design and implementation of vision systems with non-central projection (catadioptric sensors with compound mirrors, plenoptic cameras). This work was carried out in collaboration with the MIS laboratory of the Université de Picardie Jules Verne and the ISIR of the Université Pierre et Marie Curie. Finally, within a research program in collaboration with the University of Kent, I devoted part of my work to adapting image processing and classification methods for face detection in omnidirectional images (adaptation of the Viola-Jones detector) and to biometric person recognition by gait analysis. Today, my activity continues the strengthening of IRSEEM's projects in mobile robotics and autonomous vehicles: setting up a measurement platform for autonomous navigation and coordinating research projects aligned with industrial needs. My research perspectives concern the study of new solutions for motion perception and localization in outdoor environments, and the methods and means needed to objectively assess the performance and robustness of these solutions in realistic scenarios.

    Design/cost tradeoff studies. Appendix A. Supporting analyses and tradeoffs, book 1. Earth Observatory Satellite system definition study (EOS)

    A listing of the Earth Observatory Satellite (EOS) candidate missions is presented for use as a baseline in describing the EOS payloads. The missions are identified in terms of first, second, and third generation payloads, and the specific applications of the EOS satellites are defined. The subjects considered are: (1) orbit analysis, (2) space shuttle interfaces, (3) the thematic mapping subsystem, (4) the high resolution pointable imager subsystem, (5) the data collection system, (6) the synthetic aperture radar, (7) the passive multichannel microwave radiometer, and (8) the wideband communications and handling equipment. Illustrations of the satellite and launch vehicle configurations are provided, and block diagrams of the electronic circuits are included.