257 research outputs found

    Fusing sonars and LRF data to perform SLAM in reduced visibility scenarios

    Simultaneous Localization and Mapping (SLAM) approaches have evolved considerably in recent years. However, many situations remain difficult to handle, such as smoky, dusty, or foggy environments, where the range sensors commonly used for SLAM are highly disturbed by noise induced in the measurement process by particles of smoke, dust, or steam. This work presents a sensor fusion method for range sensing in SLAM under reduced visibility conditions. The proposed method exploits the complementary characteristics of a Laser Range Finder (LRF) and an array of sonars in order to map smoky environments. The method was validated through experiments in a smoky indoor scenario, and the results showed that it adequately copes with the induced disturbances, thus decreasing the impact of smoke particles on the mapping task.
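    A per-beam fusion rule of this kind can be sketched as follows; note that the function name, thresholds, and decision logic here are illustrative assumptions, not the paper's actual method:

```python
def fuse_ranges(lrf_r, sonar_r, smoke_gap=0.5, max_range=8.0):
    """Per-direction fusion of LRF and sonar range readings.

    Trust the LRF (higher angular resolution) unless its return is
    suspiciously shorter than the sonar's, which suggests the laser
    beam was reflected by airborne particles rather than a surface.
    """
    fused = []
    for lr, sr in zip(lrf_r, sonar_r):
        if lr >= max_range and sr < max_range:
            fused.append(sr)   # LRF saw nothing, sonar found a surface
        elif sr - lr > smoke_gap:
            fused.append(sr)   # LRF likely returned off smoke; keep sonar
        else:
            fused.append(lr)   # normal case: LRF is the more precise sensor
    return fused

# e.g. the first beam's short LRF return (1.0 m vs sonar 3.0 m) is rejected
print(fuse_ranges([1.0, 8.0, 3.0], [3.0, 4.0, 3.1]))  # → [3.0, 4.0, 3.0]
```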

    Models and Algorithms for Ultra-Wideband Localization in Single- and Multi-Robot Systems

    Location is a piece of information that empowers almost any type of application. In contrast to the outdoors, where global navigation satellite systems provide geo-spatial positioning, there are still millions of square meters of indoor space that are unaccounted for by location sensing technology. Moreover, predictions show that people's activities are likely to shift more and more towards urban and indoor environments: the United Nations predicts that by 2020, over 80% of the world's population will live in cities. Meanwhile, indoor localization is not a simply solved problem: people, indoor furnishings, walls, and building structures are, in the eyes of a positioning sensor, all obstacles that create a very challenging environment. Many sensory modalities have difficulty overcoming such harsh conditions when used alone. For this reason, and also because we aim for a portable, miniaturizable, cost-effective solution with centimeter-level accuracy, we choose to solve the indoor localization problem with a hybrid approach that consists of two complementary components: ultra-wideband localization and collaborative localization. In pursuit of the final, hybrid product, our research leads us to ask what benefits collaborative localization can provide to ultra-wideband localization, and vice versa. The road down this path includes diving into these orthogonal sub-domains of indoor localization to produce two independent localization solutions, before finally combining them to conclude our work. As for all systems that can be quantitatively examined, we recognize that the quality of our final product is defined by the rigor of our evaluation process. Thus, a core element of our work is the experimental setup, which we design in a modular fashion and which we complexify incrementally according to the various stages of our studies.
    With the goal of implementing an evaluation system that is systematic, repeatable, and controllable, our approach is centered around the mobile robot. We harness this platform to emulate mobile targets, and track it in real-time with a highly reliable ground truth positioning system. Furthermore, we take advantage of the miniature size of our mobile platform, and include multiple entities to form a multi-robot system. This augmented setup then allows us to use the same experimental rigor to evaluate our collaborative localization strategies. Finally, we exploit the consistency of our experiments to perform cross-comparisons of the various results throughout the presented work. Ultra-wideband counts among the most interesting technologies for absolute indoor localization known to date. Owing to its fine delay resolution and its ability to penetrate various materials, ultra-wideband provides potentially high ranging accuracy, even in cluttered, non-line-of-sight environments. However, despite its desirable traits, the resolution of non-line-of-sight signals remains a hard problem: if a non-line-of-sight signal is not recognized as such, it leads to significant errors in the position estimate. Our work improves upon the state of the art by addressing the peculiarities of ultra-wideband signal propagation with models that capture the spatiality as well as the multimodal nature of the error statistics. Simultaneously, we take care to develop an underlying error model that is compact and can be calibrated by means of efficient algorithms. In order to facilitate the usage of our multimodal error model, we use a localization algorithm based on particle filters. Our collaborative localization strategy distinguishes itself from prior work by emphasizing cost-efficiency, full decentralization, and scalability. The localization method is based on relative positioning and uses two quantities: relative range and relative bearing.
    We develop a relative robot detection model that integrates these measurements and is embedded in our particle-filter-based localization framework. In addition to the robot detection model, we consider an algorithmic component, namely a reciprocal particle sampling routine, which is designed to facilitate the convergence of a robot's position estimate. Finally, in order to reduce the complexity of our collaborative localization algorithm, and to reduce the amount of positioning data to be communicated between the robots, we develop a particle clustering method, which is used in conjunction with our robot detection model. The final stage of our research investigates the combined roles of collaborative localization and ultra-wideband localization. Numerous experiments validate our overall localization strategy, and show that performance can be significantly improved when using two complementary sensory modalities. Since the fusion of ultra-wideband positioning sensors with exteroceptive sensors has hardly been considered so far, our studies present pioneering work in this domain. Several insights indicate that collaboration, even through noisy sensors, is a useful tool to reduce localization errors. In particular, we show that our collaboration strategy can provide the means to minimize the localization error, given that the collaborative design parameters are optimally tuned. Our final results show median localization errors below 10 cm in cluttered environments.
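    The core idea of a particle-filter range update with a multimodal (line-of-sight plus biased non-line-of-sight) error model can be sketched as below. The mixture weights, biases, and standard deviations are illustrative assumptions, not the thesis's calibrated values:

```python
import math

def uwb_likelihood(expected, measured, p_nlos=0.2, sigma_los=0.05,
                   bias_nlos=0.4, sigma_nlos=0.3):
    """Two-mode range-error model: a sharp LOS Gaussian plus a
    positively biased, wider NLOS component (hypothetical parameters)."""
    def gauss(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return ((1 - p_nlos) * gauss(measured, expected, sigma_los)
            + p_nlos * gauss(measured, expected + bias_nlos, sigma_nlos))

def update(particles, beacon, measured):
    """Re-weight and normalise particles [(x, y, w), ...] after one
    range measurement to a fixed beacon at (bx, by)."""
    bx, by = beacon
    out = []
    for x, y, w in particles:
        d = math.hypot(x - bx, y - by)
        out.append((x, y, w * uwb_likelihood(d, measured)))
    total = sum(w for _, _, w in out) or 1.0
    return [(x, y, w / total) for x, y, w in out]
```

    After the update, the particle whose predicted range matches the measurement keeps nearly all of the weight, which is what drives convergence of the position estimate.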

    Collaborative autonomy in heterogeneous multi-robot systems

    As autonomous mobile robots become increasingly connected and widely deployed in different domains, managing multiple robots and their interaction is key to the future of ubiquitous autonomous systems. Indeed, robots are no longer individual entities; many robots today are deployed as part of larger fleets or in teams. The benefits of multi-robot collaboration, especially in heterogeneous groups, are numerous. Significantly higher degrees of situational awareness and understanding of the environment can be achieved when robots with different operational capabilities are deployed together. Examples of this include the Perseverance rover and the Ingenuity helicopter that NASA has deployed on Mars, or the highly heterogeneous robot teams that explored caves and other complex environments during the last DARPA Subterranean (SubT) Challenge. This thesis delves into the wide topic of collaborative autonomy in multi-robot systems, encompassing some of the key elements required for achieving robust collaboration: solving collaborative decision-making problems; securing their operation, management, and interaction; providing means for autonomous coordination in space and accurate global or relative state estimation; and achieving collaborative situational awareness through distributed perception and cooperative planning. The thesis covers novel formation control algorithms and new ways to achieve accurate absolute or relative localization within multi-robot systems. It also explores the potential of distributed ledger technologies as an underlying framework to achieve collaborative decision-making in distributed robotic systems. Throughout the thesis, I introduce novel approaches to utilizing cryptographic elements and blockchain technology for securing the operation of autonomous robots, showing that sensor data and mission instructions can be validated in an end-to-end manner.
    I then shift the focus to localization and coordination, studying ultra-wideband (UWB) radios and their potential. I show how UWB-based ranging and localization can enable aerial robots to operate in GNSS-denied environments, with a study of the constraints and limitations. I also study the potential of UWB-based relative localization between aerial and ground robots for more accurate positioning in areas where GNSS signals degrade. In terms of coordination, I introduce two new algorithms for formation control that require zero to minimal communication, provided a sufficient degree of awareness of neighboring robots is available. These algorithms are validated in simulation and real-world experiments. The thesis concludes with the integration of a new approach to cooperative path planning algorithms and UWB-based relative localization for dense scene reconstruction using lidar and vision sensors on ground and aerial robots.
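    A communication-free formation controller of this flavor can be sketched as a simple proportional law driven purely by on-board relative sensing; the function and parameter names below are hypothetical, not the thesis's algorithms:

```python
def formation_step(rel_neighbors, desired_offsets, gain=0.5):
    """One control step for a single robot, with no communication.

    `rel_neighbors` maps a neighbour id to its observed relative
    position (e.g. from on-board range-and-bearing sensing);
    `desired_offsets` maps the same id to where that neighbour should
    sit in the formation, in the robot's own frame. The robot moves
    so that the observed relative positions match the desired ones.
    """
    vx = vy = 0.0
    for nid, (x, y) in rel_neighbors.items():
        dx, dy = desired_offsets[nid]
        vx += gain * (x - dx)
        vy += gain * (y - dy)
    return vx, vy

# neighbour seen 2 m ahead but should be 1 m ahead: close the gap
print(formation_step({1: (2.0, 0.0)}, {1: (1.0, 0.0)}))  # → (0.5, 0.0)
```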

    Towards Collaborative Simultaneous Localization and Mapping: a Survey of the Current Research Landscape

    Motivated by the tremendous progress we witnessed in recent years, this paper presents a survey of the scientific literature on the topic of Collaborative Simultaneous Localization and Mapping (C-SLAM), also known as multi-robot SLAM. With fleets of self-driving cars on the horizon and the rise of multi-robot systems in industrial applications, we believe that Collaborative SLAM will soon become a cornerstone of future robotic applications. In this survey, we introduce the basic concepts of C-SLAM and present a thorough literature review. We also outline the major challenges and limitations of C-SLAM in terms of robustness, communication, and resource management. We conclude by exploring the area's current trends and promising research avenues.

    Long-term localization of unmanned aerial vehicles based on 3D environment perception

    Unmanned Aerial Vehicles (UAVs) are currently used in countless civil and commercial applications, and the trend is rising. Outdoor obstacle-free operation based on the Global Positioning System (GPS) can generally be considered solved thanks to the availability of mature commercial products. However, some applications require their use in confined spaces or indoors, where GPS signals are not available. In order to allow for the safe introduction of autonomous aerial robots in GPS-denied areas, there is still a need for greater reliability in several key technologies, such as localization, obstacle avoidance, and planning, to achieve robust operation. Existing approaches for autonomous navigation in GPS-denied areas are not robust enough when it comes to aerial robots, or fail in long-term operation. This dissertation addresses the localization problem, proposing a methodology suitable for aerial robots moving in a three-dimensional (3D) environment using a combination of measurements from a variety of on-board sensors.
    We have focused on fusing three types of sensor data: images and 3D point clouds acquired from stereo or structured light cameras, inertial information from an on-board Inertial Measurement Unit (IMU), and distance measurements to several Ultra Wide-Band (UWB) radio beacons installed in the environment. The overall approach makes use of a 3D map of the environment, for which a mapping method that exploits the synergies between point clouds and radio-based sensing is also presented, in order to be able to use the whole methodology in any given scenario. The main contributions of this dissertation focus on a thoughtful combination of technologies in order to achieve robust, reliable, and computationally efficient long-term localization of UAVs in indoor environments. This work has been validated and demonstrated over the past four years in the context of different research projects related to the localization and state estimation of aerial robots in GPS-denied areas, in particular the European Robotics Challenges (EuRoC) project, in which the author is participating in the competition among top research institutions in Europe. Experimental results demonstrate the feasibility of our full approach, both in accuracy and computational efficiency, tested through real indoor flights and validated with data from a motion capture system.

    Sensor Modalities and Fusion for Robust Indoor Localisation


    Realization Limits of Impulse-Radio UWB Indoor Localization Systems

    In this work, the realization limits of an impulse-based Ultra-Wideband (UWB) localization system for indoor applications have been thoroughly investigated and verified by measurements. The analysis spans from the position calculation algorithms, through hardware realization and modeling, up to localization experiments conducted in realistic scenarios. The main focus was on identifying and characterizing the limiting factors, as well as on developing methods to overcome them.