105 research outputs found

    UWB-based system for UAV Localization in GNSS-Denied Environments: Characterization and Dataset

    Small unmanned aerial vehicles (UAVs) have penetrated multiple domains over the past years. In GNSS-denied or indoor environments, aerial robots require a robust and stable localization system, often with external feedback, in order to fly safely. Motion capture systems are typically used indoors when accurate localization is needed, but they are expensive and most require a fixed setup. Recently, visual-inertial odometry and similar methods have advanced to a point where autonomous UAVs can rely on them for localization. Their main limitations come from the environment itself and from long-term autonomy, where error accumulates if loop closure cannot be performed efficiently. For instance, low visibility due to dust or smoke in post-disaster scenarios might render odometry methods inapplicable. In this paper, we study and characterize an ultra-wideband (UWB) system for navigation and localization of aerial robots indoors based on Decawave's DWM1001 UWB node. The system is portable, inexpensive and can be battery powered in its totality. We show the viability of this system for autonomous flight of UAVs, and provide open-source methods and data that enable its widespread application even with movable anchor systems. We characterize the accuracy based on the position of the UAV with respect to the anchors, its altitude and speed, and the distribution of the anchors in space. Finally, we analyze the accuracy of the self-calibration of the anchors' positions. Comment: Accepted to the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020).
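
At its core, anchor-based positioning of this kind reduces to multilateration: recovering a tag position from measured distances to anchors at known locations. The sketch below is not Decawave's firmware or the authors' released code, only a minimal least-squares formulation of that geometry; the function name and inputs are hypothetical.

```python
import numpy as np

def multilaterate(anchors, ranges):
    """Estimate a 3-D tag position from ranges to anchors at known positions.

    Subtracting the first anchor's range equation from the others cancels
    the quadratic term in the unknown position, leaving a linear system
    that is solved in a least-squares sense.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    p0, r0 = anchors[0], ranges[0]
    # For each anchor i > 0: ||x - p_i||^2 - ||x - p0||^2 = r_i^2 - r0^2,
    # which expands to 2 (p_i - p0) . x = r0^2 - r_i^2 + |p_i|^2 - |p0|^2.
    A = 2.0 * (anchors[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With four well-spread anchors the linearized system is full rank and the solution is exact for noise-free ranges; with more anchors, least squares averages out measurement noise.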

    A novel outlier removal method for two-dimensional radar odometry

    Autonomous navigation of platforms in complex environments plays a key role in many applications. However, environmental conditions can degrade the performance of electro-optical sensors, which has motivated the recent development of radar odometry. Like its electro-optical counterparts, however, radar odometry suffers from the presence of outliers in the scene. This work presents a method to classify radar echoes as inliers or outliers for two-dimensional radar odometry, based on their range rate and bearing angle. The range rate and bearing angle are combined into a classification value, different for each target. At each acquisition time, the median of all classification values is computed. Since the classification values of stationary targets, i.e. the inliers, cluster around the median, while those of moving targets, i.e. the outliers, lie farther from it, the two groups can be separated. This is also useful for sense-and-avoid purposes. The method has been tested in a simulated scenario to show its effectiveness in detecting outliers, and in a real-world scenario to demonstrate a significant improvement in the reconstruction of the platform's trajectory, keeping the final error around 10% of the travelled distance. Further improvement is envisaged by integrating the method into the target tracking strategy.
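
The abstract does not give the exact formula for the classification value, so the sketch below uses one plausible formulation: for a purely translating platform, a stationary reflector at bearing theta from the velocity direction returns a range rate of roughly -v*cos(theta), so -range_rate/cos(theta) equals the common platform speed v for all inliers. The threshold and the broadside cutoff below are hypothetical choices.

```python
import numpy as np

def classify_echoes(range_rates, bearings, threshold=0.5):
    """Flag radar echoes as inliers (stationary) or outliers (moving).

    Each echo gets a classification value -range_rate / cos(bearing),
    which estimates the platform speed if the target is stationary.
    Values clustering around the median are inliers; values far from
    the median indicate targets with their own motion. Echoes near
    broadside (cos(bearing) ~ 0) are skipped as uninformative.
    Returns a boolean inlier mask.
    """
    range_rates = np.asarray(range_rates, dtype=float)
    bearings = np.asarray(bearings, dtype=float)
    c = np.cos(bearings)
    usable = np.abs(c) > 0.2            # avoid division blow-up near broadside
    values = np.full_like(range_rates, np.nan)
    values[usable] = -range_rates[usable] / c[usable]
    median = np.nanmedian(values)
    return usable & (np.abs(values - median) <= threshold)
```

The median is used rather than the mean because it stays anchored to the stationary majority even when several moving targets are present in one acquisition.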

    A Review of Radio Frequency Based Localization for Aerial and Ground Robots with 5G Future Perspectives

    Efficient localization plays a vital role in many modern applications of Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs), contributing to improved control, safety, power economy, etc. The ubiquitous 5G NR (New Radio) cellular network will provide new opportunities for enhancing the localization of UAVs and UGVs. In this paper, we review radio frequency (RF) based approaches to localization. We review the RF features that can be utilized for localization and investigate the current methods suitable for unmanned vehicles under two general categories: range-based and fingerprinting. The existing state-of-the-art literature on RF-based localization for both UAVs and UGVs is examined, and the envisioned role of 5G NR in localization enhancement and future research directions are explored.

    Reliable Navigation for SUAS in Complex Indoor Environments

    Indoor environments are a particular challenge for Unmanned Aerial Vehicles (UAVs). Effective navigation through these GPS-denied environments requires alternative localization systems, as well as methods of sensing and avoiding obstacles while remaining on-task. Additionally, the relatively small clearances and human presence characteristic of indoor spaces necessitate a higher level of precision and adaptability than is common in traditional UAV flight planning and execution. This research blends the optimization of individual technologies, such as state estimation and environmental sensing, with system integration and high-level operational planning. The combination of AprilTag visual markers, multi-camera visual odometry, and IMU data can be used to create a robust state estimator that describes the position, velocity, and rotation of a multicopter within an indoor environment. However, these data sources have unique, nonlinear characteristics that should be understood to plan effectively for their usage in an automated environment. The research described herein begins by analyzing the unique characteristics of these data streams in order to create a highly accurate, fault-tolerant state estimator. Upon this foundation, the system built, tested, and described herein uses visual markers as navigation anchors, visual odometry for motion estimation and control, and depth sensors to maintain an up-to-date map of the UAV's immediate surroundings. It develops and continually refines navigable routes through a novel combination of pre-defined and sensory environmental data. Emphasis is put on the real-world development and testing of the system, through discussion of computational resource management and risk reduction.

    Towards Precise Positioning and Movement of UAVs for Near-Wall Tasks in GNSS-Denied Environments

    UAVs often perform tasks that require flying close to walls or structures and in environments where satellite-based localization is not possible. Flying close to solid bodies implies a higher risk of collisions, thus requiring greater precision in the measurement and control of the UAV's position. The aerodynamic distortions generated by nearby walls or other objects are also relevant, making the control more complex and placing further demands on the positioning system. Performing wall-related tasks implies flying very close to the wall and, in some cases, even touching it. This work presents a Near-Wall Positioning System (NWPS) based on the combination of an ultra-wideband (UWB) solution and LIDAR-based range finders. The NWPS has been developed and tested to allow precise positioning and orientation of a multirotor UAV relative to a wall when performing tasks near it. Specific position and orientation control hardware based on horizontal thrusters has also been designed, allowing the UAV to move smoothly and safely near walls. Funding: Ministerio de Ciencia, Innovación y Universidades (RTI2018-101114-B-I00); Xunta de Galicia (ED431C 2017/12).

    Long-term localization of unmanned aerial vehicles based on 3D environment perception

    Unmanned Aerial Vehicles (UAVs) are currently used in countless civil and commercial applications, and the trend is rising. Outdoor obstacle-free operation based on the Global Positioning System (GPS) can be generally assumed thanks to the availability of mature commercial products. However, some applications require their use in confined spaces or indoors, where GPS signals are not available. In order to allow for the safe introduction of autonomous aerial robots in GPS-denied areas, there is still a need for greater reliability in several key technologies, such as localization, obstacle avoidance and planning, to procure a robust operation. Existing approaches for autonomous navigation in GPS-denied areas are not robust enough when it comes to aerial robots, or fail in long-term operation. This dissertation handles the localization problem, proposing a methodology suitable for aerial robots moving in a three-dimensional (3D) environment using a combination of measurements from a variety of on-board sensors.
We have focused on fusing three types of sensor data: images and 3D point clouds acquired from stereo or structured light cameras, inertial information from an on-board Inertial Measurement Unit (IMU), and distance measurements to several Ultra Wide-Band (UWB) radio beacons installed in the environment. The overall approach makes use of a 3D map of the environment, for which a mapping method that exploits the synergies between point clouds and radio-based sensing is also presented, so that the whole methodology can be used in any given scenario. The main contributions of this dissertation focus on a thoughtful combination of technologies in order to achieve robust, reliable and computationally efficient long-term localization of UAVs in indoor environments. This work has been validated and demonstrated over the past four years in the context of different research projects related to the localization and state estimation of aerial robots in GPS-denied areas, in particular the European Robotics Challenges (EuRoC) project, in which the author participated in the competition among top research institutions in Europe. Experimental results demonstrate the feasibility of our full approach, both in accuracy and computational efficiency, tested through real indoor flights and validated with data from a motion capture system.

    VIR-SLAM: Visual, Inertial, and Ranging SLAM for single and multi-robot systems

    Monocular cameras coupled with inertial measurements generally yield high-performance visual-inertial odometry. However, drift can be significant over long trajectories, especially when the environment is visually challenging. In this paper, we propose a system that leverages ultra-wideband ranging with one static anchor placed in the environment to correct the accumulated error whenever the anchor is visible. We also use this setup for collaborative SLAM: different robots use mutual ranging (when available) and the common anchor to estimate the transformation between each other, facilitating map fusion. Our system consists of two modules: a double-layer ranging, visual, and inertial odometry for single robots, and a transformation estimation module for collaborative SLAM. We test our system on public datasets by simulating an ultra-wideband sensor, as well as on real robots. Experiments show our method can outperform state-of-the-art visual-inertial odometry by more than 20%. For visually challenging environments, our method works even when the visual-inertial odometry has significant drift. Furthermore, we can compute the collaborative SLAM transformation matrix at almost no extra computation cost.
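
The core idea of correcting accumulated odometry error with a single fixed anchor can be illustrated with a toy update step. This is not VIR-SLAM's double-layer estimator, only a hedged sketch of the principle, where `weight` is a hypothetical gain.

```python
import numpy as np

def range_correct(p_vio, anchor, measured_range, weight=0.3):
    """Nudge a drifting VIO position estimate toward consistency with a
    UWB range measurement to a single fixed anchor.

    The estimate is moved a fraction `weight` of the way along the
    anchor-to-estimate direction so that its distance to the anchor
    approaches the measured range. A single range cannot fix the full
    3-D error at one instant, but applied repeatedly along a trajectory
    it bounds the accumulated drift.
    """
    d = p_vio - anchor
    dist = np.linalg.norm(d)
    if dist < 1e-9:                       # degenerate: estimate on the anchor
        return p_vio
    residual = measured_range - dist      # positive if VIO is too close
    return p_vio + weight * residual * (d / dist)
```

A real system would instead fold the range residual into a factor-graph or filter update together with the visual and inertial terms; the geometric intuition is the same.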

    Collaborative autonomy in heterogeneous multi-robot systems

    As autonomous mobile robots become increasingly connected and widely deployed in different domains, managing multiple robots and their interaction is key to the future of ubiquitous autonomous systems. Indeed, robots are no longer individual entities; many robots today are deployed as part of larger fleets or in teams. The benefits of multi-robot collaboration, especially in heterogeneous groups, are manifold. Significantly higher degrees of situational awareness and understanding of the environment can be achieved when robots with different operational capabilities are deployed together. Examples of this include the Perseverance rover and the Ingenuity helicopter that NASA has deployed on Mars, or the highly heterogeneous robot teams that explored caves and other complex environments during the last DARPA Sub-T competition. This thesis delves into the wide topic of collaborative autonomy in multi-robot systems, encompassing some of the key elements required for achieving robust collaboration: solving collaborative decision-making problems; securing their operation, management and interaction; providing means for autonomous coordination in space and accurate global or relative state estimation; and achieving collaborative situational awareness through distributed perception and cooperative planning. The thesis covers novel formation control algorithms, and new ways to achieve accurate absolute or relative localization within multi-robot systems. It also explores the potential of distributed ledger technologies as an underlying framework to achieve collaborative decision-making in distributed robotic systems. Throughout the thesis, I introduce novel approaches to utilizing cryptographic elements and blockchain technology for securing the operation of autonomous robots, showing that sensor data and mission instructions can be validated in an end-to-end manner.
I then shift the focus to localization and coordination, studying ultra-wideband (UWB) radios and their potential. I show how UWB-based ranging and localization can enable aerial robots to operate in GNSS-denied environments, with a study of the constraints and limitations. I also study the potential of UWB-based relative localization between aerial and ground robots for more accurate positioning in areas where GNSS signals degrade. In terms of coordination, I introduce two new algorithms for formation control that require zero to minimal communication, provided that a sufficient degree of awareness of neighboring robots is available. These algorithms are validated in simulation and real-world experiments. The thesis concludes with the integration of a new approach to cooperative path planning algorithms and UWB-based relative localization for dense scene reconstruction using lidar and vision sensors on ground and aerial robots.

    Indoor Positioning and Navigation

    In recent years, rapid development in robotics, mobile, and communication technologies has encouraged many studies in the field of localization and navigation in indoor environments. An accurate localization system that can operate in an indoor environment has considerable practical value, because it can be built into autonomous mobile systems or a personal navigation system on a smartphone for guiding people through airports, shopping malls, museums and other public institutions. Such a system would be particularly useful for blind people. Modern smartphones are equipped with numerous sensors (such as inertial sensors, cameras, and barometers) and communication modules (such as WiFi, Bluetooth, NFC, LTE/5G, and UWB capabilities), which enable the implementation of various localization algorithms, namely visual localization, inertial navigation systems, and radio localization. For the mapping of indoor environments and the localization of autonomous mobile systems, LIDAR sensors are also frequently used in addition to smartphone sensors. Visual localization and inertial navigation systems are sensitive to external disturbances; therefore, sensor fusion approaches can be used to implement robust localization algorithms. These have to be optimized in order to be computationally efficient, which is essential for real-time processing and low energy consumption on a smartphone or robot.
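
As a minimal illustration of such sensor fusion, a one-dimensional complementary filter blends a smooth inertial prediction with an absolute (e.g. radio-based) position fix. The function and its `alpha` gain below are illustrative assumptions, not taken from any system cited here.

```python
def complementary_fuse(prev, velocity, radio_pos, dt, alpha=0.98):
    """One step of a 1-D complementary filter.

    Dead-reckons from the previous estimate using an inertial velocity,
    then blends in an absolute position fix. A high `alpha` trusts the
    smooth inertial prediction for short-term motion, while the small
    (1 - alpha) share of the radio fix slowly removes accumulated drift.
    """
    predicted = prev + velocity * dt            # inertial dead reckoning
    return alpha * predicted + (1.0 - alpha) * radio_pos
```

A Kalman filter generalizes this by choosing the blending gain from the noise statistics of each sensor instead of a fixed `alpha`, at a higher computational cost.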