
    PROBE-GK: Predictive Robust Estimation using Generalized Kernels

    Many algorithms in computer vision and robotics make strong assumptions about uncertainty, and rely on the validity of these assumptions to produce accurate and consistent state estimates. In practice, dynamic environments may degrade sensor performance in predictable ways that cannot be captured with static uncertainty parameters. In this paper, we employ fast nonparametric Bayesian inference techniques to more accurately model sensor uncertainty. By setting a prior on observation uncertainty, we derive a predictive robust estimator, and show how our model can be learned from sample images, both with and without knowledge of the motion used to generate the data. We validate our approach through Monte Carlo simulations, and report significant improvements in localization accuracy relative to a fixed noise model in several settings, including on synthetic data, the KITTI dataset, and our own experimental platform. Comment: In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'16), Stockholm, Sweden, May 16-21, 2016
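    The core idea of replacing a static noise model with a predicted per-observation uncertainty can be sketched in a hypothetical 1-D form (this is an illustration of inverse-variance weighting, not the paper's actual nonparametric model):

```python
import numpy as np

def fused_estimate(measurements, variances):
    """Fuse scalar measurements using per-observation predicted variances
    (inverse-variance weighting). A static noise model would instead assign
    one shared variance, i.e. equal weight, to every measurement."""
    w = 1.0 / np.asarray(variances, dtype=float)   # information of each observation
    z = np.asarray(measurements, dtype=float)
    return float(np.sum(w * z) / np.sum(w))
```

    With two trusted measurements near 1 and one degraded measurement at 10, the predicted-variance estimate stays near 1, whereas a fixed noise model would yield the plain mean, 4.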

    Keyframe-based visual–inertial odometry using nonlinear optimization

    Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
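    The combined cost described above, a sum of Mahalanobis-weighted reprojection and inertial error terms, can be sketched as follows (a minimal evaluation routine under assumed error vectors and information matrices, not the paper's implementation):

```python
import numpy as np

def vio_cost(reproj_errors, reproj_infos, imu_errors, imu_infos):
    """Evaluate a visual-inertial cost of the form
    J = sum_i e_i^T W_i e_i  (reprojection terms)
      + sum_j e_j^T W_j e_j  (inertial terms),
    where each W is the information (inverse covariance) matrix of its term."""
    J = 0.0
    for e, W in zip(reproj_errors, reproj_infos):
        J += float(e @ W @ e)
    for e, W in zip(imu_errors, imu_infos):
        J += float(e @ W @ e)
    return J
```

    In an optimization-based back end, this scalar is what a nonlinear least-squares solver minimizes over the states in the bounded keyframe window.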

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
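    The de-facto standard formulation mentioned above is the maximum-a-posteriori factor-graph form: assuming Gaussian measurement noise, the posterior over the robot and landmark variables $X$ given the measurements $Z = \{z_k\}$ is maximized by solving a nonlinear least-squares problem,

```latex
X^{\star} = \arg\max_{X} \, p(X \mid Z)
          = \arg\min_{X} \sum_{k} \left\lVert h_k(X_k) - z_k \right\rVert_{\Omega_k}^{2}
```

    where $h_k$ is the measurement model of factor $k$, $X_k$ the subset of variables it involves, and $\Omega_k$ its information matrix.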

    An Improved Approach For Multi-Robot Localization

    Cooperative multi-robot localization techniques use sensor measurements to estimate the poses (locations and orientations) of robots relative to a given map of the environment. Existing approaches update a robot's pose instantly whenever it detects another robot. However, such instant updates may not always be necessary or effective, since both robots' pose estimates could be highly uncertain at the time of the detection. In this thesis, we develop a new information exchange mechanism for collaborative multi-robot localization. We also propose a new scheme to calculate how much information is contained in a robot's belief by using entropy. Instead of updating beliefs whenever a detection occurs, our approach first compares the beliefs of the robots involved in the detection, and then decides whether the information exchange is necessary. It therefore avoids unnecessary information exchange whenever one robot perceives another. On the other hand, this approach does allow information exchange between detecting robots, and such exchange always contributes positively to the localization process, improving the effectiveness and efficiency of multi-robot localization. The technique has been implemented and tested using two mobile robots as well as simulations. The results indicate significant improvements in localization speed and accuracy when compared to single-robot localization.
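    The entropy-based exchange decision can be sketched for a discrete pose belief (a hypothetical threshold rule for illustration; the thesis's actual criterion may differ):

```python
import math

def belief_entropy(probs):
    """Shannon entropy (in bits) of a discrete pose belief.
    Low entropy = a confident, informative belief."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

def should_exchange(belief_a, belief_b, threshold_bits=1.0):
    """Exchange information only if at least one robot's belief is
    confident enough (entropy below the threshold) to actually help
    the other; otherwise skip the update."""
    return min(belief_entropy(belief_a), belief_entropy(belief_b)) < threshold_bits
```

    When both robots are uncertain (e.g. both beliefs near uniform), the detection is ignored rather than allowed to propagate noisy information.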

    Reliable localization methods for intelligent vehicles based on environment perception

    International Mention in the doctoral degree. Not long ago, autonomous vehicles and Intelligent Transport Systems (ITS) were seen as a potential future of transportation. Today, thanks to the technological advances of recent years, the feasibility of such systems is no longer in question. Some of these autonomous driving technologies are already sharing our roads, and even commercial vehicles include more Advanced Driver-Assistance Systems (ADAS) every year. As a result, transportation is becoming more efficient and the roads are considerably safer. One of the fundamental pillars of an autonomous system is self-localization. An accurate and reliable estimate of the vehicle's pose in the world is essential to navigation. In the context of outdoor vehicles, the Global Navigation Satellite System (GNSS) is the predominant localization system. However, these systems are far from perfect, and their performance degrades in environments with limited satellite visibility. Additionally, their dependence on the environment can make them unreliable if it changes. Accordingly, the goal of this thesis is to exploit perception of the environment to enhance localization systems in intelligent vehicles, with special attention to their reliability. To this end, this thesis presents several contributions. First, a study on exploiting 3D semantic information in LiDAR odometry is presented, providing interesting insights into how each type of element in the scene contributes to the odometry output. The experimental results were obtained using a public dataset and validated on a real-world platform. Second, a method to estimate the localization error using landmark detections is proposed, which is later exploited by a landmark placement optimization algorithm. This method, validated in a simulation environment, is able to determine a set of landmarks such that the localization error never exceeds a predefined limit.
Finally, a cooperative localization algorithm based on a Genetic Particle Filter is proposed to utilize vehicle detections in order to enhance the estimation provided by GNSS systems. Multiple experiments are carried out in different simulation environments to validate the proposed method. Doctoral Programme in Electrical, Electronic and Automatic Engineering, Universidad Carlos III de Madrid. Secretary: Joshué Manuel Pérez Rastelli. Secretary: Jorge Villagrá Serrano. Committee member: Enrique David Martí Muño
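    The cooperative step, fusing a GNSS fix with a relative detection of a neighboring vehicle, can be illustrated with a plain 1-D particle-filter reweighting (a hypothetical sketch; the thesis uses a Genetic Particle Filter, whose selection and crossover steps are not shown here):

```python
import math

def gaussian(x, mu, sigma):
    """Unnormalized-friendly Gaussian density, used as a likelihood."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def reweight(particles, gnss_fix, gnss_sigma,
             detected_offset=None, neighbor_pos=None, det_sigma=1.0):
    """Weight 1-D position particles by the GNSS likelihood and, when a
    neighboring vehicle is detected, also by the relative-offset likelihood."""
    weights = []
    for p in particles:
        w = gaussian(p, gnss_fix, gnss_sigma)
        if detected_offset is not None and neighbor_pos is not None:
            # Likelihood that the neighbor appears at the measured offset.
            w *= gaussian(neighbor_pos - p, detected_offset, det_sigma)
        weights.append(w)
    s = sum(weights)
    return [w / s for w in weights]
```

    The detection term sharpens the posterior around particles consistent with both the GNSS fix and the observed inter-vehicle offset.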

    The simultaneous localization and mapping (SLAM): An overview

    Positioning is a need for many applications related to mapping and navigation, in both civilian and military domains. The significant developments in satellite-based techniques, sensors, telecommunications, computer hardware and software, image processing, etc. have made it possible to solve the positioning problem efficiently and instantaneously, and have thereby empowered the applications and advancement of autonomous navigation. One of the most interesting positioning techniques to emerge is what robotics calls Simultaneous Localization and Mapping (SLAM). The solution of the SLAM problem has improved quickly in recent decades, using either active sensors such as RAdio Detection And Ranging (Radar) and Light Detection and Ranging (LiDAR), or passive sensors such as cameras. Positioning and mapping is one of the main tasks of Geomatics engineers, so it is of high importance for them to understand SLAM; this is not easy, given the huge body of documentation and algorithms available and the variety of SLAM solutions in terms of mathematical models, complexity, sensors used, and types of application. In this paper, a clear and simplified explanation of SLAM is introduced from a Geomatics viewpoint, avoiding the complicated algorithmic details behind the presented techniques. In this way, a general overview of SLAM is presented, showing the relationship between its different components and stages, such as the core front-end and back-end parts and their relation to the SLAM paradigm. Furthermore, we explain the major mathematical techniques of filtering and pose graph optimization, for both visual and LiDAR SLAM, and summarize the efficient contributions of deep learning to the SLAM problem. Finally, we address examples of some existing practical applications of SLAM.
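    The back-end pose graph optimization mentioned above can be shown on a toy 1-D example (three poses, two odometry edges, one loop closure; a minimal illustration, not any particular SLAM system):

```python
import numpy as np

def solve_pose_graph():
    """Toy 1-D pose graph solved as linear least squares.
    Poses x0..x2; a prior fixes x0 = 0; odometry says each step moves +1;
    a loop-closure edge says x2 - x0 = 1.8, contradicting the odometry."""
    A = np.array([[ 1.0,  0.0, 0.0],   # prior on x0
                  [-1.0,  1.0, 0.0],   # odometry x0 -> x1 (= 1)
                  [ 0.0, -1.0, 1.0],   # odometry x1 -> x2 (= 1)
                  [-1.0,  0.0, 1.0]])  # loop closure x0 -> x2 (= 1.8)
    b = np.array([0.0, 1.0, 1.0, 1.8])
    # The back-end minimizes ||A x - b||^2 over all poses jointly.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

    The loop closure pulls the odometry-only estimate (x2 = 2.0) back toward 1.87, distributing the accumulated drift over the whole trajectory; this joint correction is exactly what distinguishes graph optimization from pure dead reckoning.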

    Stereo Visual SLAM for Mobile Robots Navigation

    This thesis focuses on the combination of the fields of mobile robotics and computer vision, with the goal of developing methods that allow a mobile robot to localize itself within its environment while building a map of it, using only a set of images as input. This problem is called visual SLAM (for "Simultaneous Localization And Mapping"), and it remains an open topic despite the great research effort of recent years. Specifically, in this thesis we use stereo cameras to capture two images simultaneously from slightly different positions, thus directly providing 3D information. Among robot localization problems, this thesis addresses two: robot tracking and simultaneous localization and mapping (SLAM). The first does not take the map of the environment into account; instead, it computes the robot's trajectory by incrementally composing estimates of its motion between consecutive time instants. When images are used to compute this trajectory, the problem is called "visual odometry", and it is easier to solve than visual SLAM. In fact, it is often integrated as part of a complete SLAM system. This thesis contributes two visual odometry systems. One is based on an efficient closed-form solution, while the other is based on a non-linear optimization process that implements a new method for fast outlier detection and removal. SLAM methods, for their part, also address the construction of a map of the environment, with the goal of noticeably improving the robot's localization and thus avoiding the error accumulation incurred by visual odometry.
Moreover, the map built can be used to handle demanding situations such as recovering the robot's localization after it gets lost, or performing global localization. This thesis presents two complete visual SLAM systems. One has been implemented within the framework of non-parametric probabilistic filters, while the other is based on a new relative "bundle adjustment" method that has been integrated with recent computer vision techniques. Another contribution of this thesis is the publication of two datasets containing stereo images captured in unmodified urban environments, together with a GPS-based estimate of the robot's true path (the "ground truth"). These datasets serve as a benchmark for validating visual odometry and SLAM methods.
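    The incremental composition of per-frame motion estimates that defines visual odometry can be sketched in SE(2) (a toy dead-reckoning example with homogeneous matrices, not the thesis's actual stereo pipeline):

```python
import numpy as np

def se2(x, y, theta):
    """Build a 3x3 homogeneous SE(2) transform from a planar motion."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1.0]])

def integrate(rel_motions):
    """Visual-odometry-style dead reckoning: chain the relative motion
    estimated between consecutive frames into a global trajectory.
    Note that any error in one step propagates to all later poses."""
    T = np.eye(3)
    poses = [T]
    for M in rel_motions:
        T = T @ M          # compose the new relative motion onto the pose
        poses.append(T)
    return poses
```

    Driving four identical "forward 1 m, turn 90°" motions traces a square and returns exactly to the start; with noisy per-frame estimates the loop would not close, which is the drift that SLAM's map and loop closures correct.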