16 research outputs found

    Active Collaborative Localization in Heterogeneous Robot Teams

    Accurate and robust state estimation is critical for the autonomous navigation of robot teams. This task is especially challenging for large groups of size-, weight-, and power-constrained (SWAP-constrained) aerial robots operating in perceptually degraded, GPS-denied environments. We can, however, actively increase the amount of perceptual information available to such robots by augmenting them with a small number of more expensive, but less resource-constrained, agents; specifically, the latter can serve as sources of perceptual information themselves. In this paper, we study the problem of optimally positioning (and potentially navigating) a small number of more capable agents to enhance the perceptual environment for their lightweight, inexpensive teammates, which rely only on cameras and IMUs. We propose a numerically robust, computationally efficient approach to solving this problem via nonlinear optimization. Our method outperforms the standard greedy-algorithm approach while matching the accuracy of a heuristic evolutionary scheme for global optimization at a fraction of its running time. Finally, we validate our solution in both photorealistic simulations and real-world experiments. In these experiments, we use lidar-based autonomous ground vehicles as the more capable agents and vision-based aerial robots as their SWAP-constrained teammates. Our method reduces drift in visual-inertial odometry by as much as 90%, and it outperforms random positioning of lidar-equipped agents by a significant margin. Furthermore, our method generalizes to different types of robot teams with heterogeneous perception capabilities. It has a wide range of applications, such as surveying and mapping challenging dynamic environments, and enabling resilience to large-scale perturbations caused by earthquakes or storms.
    Comment: To appear in Robotics: Science and Systems (RSS) 202
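    The optimal-positioning idea can be illustrated with a toy sketch (this is not the paper's formulation; the objective and the gradient-descent solver are assumptions for illustration): place one capable agent at the 2D point minimizing the summed squared distance to its SWAP-constrained teammates. That objective is convex and its minimizer is the team centroid, which makes the result easy to check.

```python
# Toy positioning sketch (illustrative; not the paper's optimizer):
# minimize f(p) = sum_i ||p - x_i||^2 over 2D positions p by gradient descent.

def best_position(teammates, lr=0.1, iters=500):
    """Gradient descent on the summed squared distance to teammate positions."""
    px, py = 0.0, 0.0
    n = len(teammates)
    for _ in range(iters):
        gx = sum(2.0 * (px - x) for x, _ in teammates)  # df/dpx
        gy = sum(2.0 * (py - y) for _, y in teammates)  # df/dpy
        px -= lr * gx / n
        py -= lr * gy / n
    return px, py

if __name__ == "__main__":
    team = [(0.0, 0.0), (4.0, 0.0), (2.0, 6.0)]
    p = best_position(team)  # converges to the team centroid (2, 2)
    print(p)
```

For this convex objective the answer is the centroid; the real problem in the paper is harder because the objective models perceptual information rather than plain distance.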

    A Survey on Aerial Swarm Robotics

    The use of aerial swarms to solve real-world problems has been increasing steadily, accompanied by falling prices and improving performance of communication, sensing, and processing hardware. The commoditization of hardware has reduced unit costs, thereby lowering the barriers to entry into the field of aerial swarm robotics. A key enabling technology for swarms is the family of algorithms that allow the individual members of the swarm to communicate and allocate tasks amongst themselves, plan their trajectories, and coordinate their flight so that the overall objectives of the swarm are achieved efficiently. These algorithms, often organized in a hierarchical fashion, endow the swarm with autonomy at every level, and the role of a human operator can be reduced, in principle, to interactions at a higher level without direct intervention. This technology depends on the clever and innovative application of theoretical tools from control and estimation. This paper reviews the state of the art of these theoretical tools, specifically focusing on how they have been developed for, and applied to, aerial swarms. Aerial swarms differ from swarms of ground-based vehicles in two respects: they operate in a three-dimensional space, and the dynamics of individual vehicles add an extra layer of complexity. We review dynamic modeling and conditions for stability and controllability that are essential to achieving cooperative flight and distributed sensing. The main sections of this paper focus on major results covering trajectory generation, task allocation, adversarial control, distributed sensing, monitoring, and mapping. Wherever possible, we indicate how the physics and subsystem technologies of aerial robots are brought to bear on these individual areas.

    Visual-Inertial State Estimation With Information Deficiency

    State estimation is an essential part of intelligent navigation and mapping systems that track the location of a smartphone, car, robot, or human-worn device. For autonomous systems such as micro aerial vehicles and self-driving cars, it is a prerequisite for control and motion planning. For AR/VR applications, it is the first step in image rendering. Visual-inertial odometry (VIO) is the de facto standard algorithm for embedded platforms because it lends itself to lightweight sensors and processors and benefits from mature research and industrial development. Various approaches have been proposed to achieve accurate real-time tracking, and numerous open-source software packages and datasets are available. However, errors and outliers are common due to the complexity of visual measurement processes and environmental changes, and in practice, estimation drift is inevitable. In this thesis, we introduce the concept of information deficiency in state estimation and show how to use it to develop and improve VIO systems. We examine the information deficiencies in visual-inertial state estimation, which are often present yet ignored, causing system failures and drift. In particular, we investigate three critical cases of information deficiency in visual-inertial odometry: a low-texture environment with limited computation, monocular visual odometry, and inertial odometry. We consider these systems under three specific application settings: a lightweight quadrotor platform in autonomous flight, driving scenarios, and an AR/VR headset for pedestrians. We address the challenges in each application setting and explore how the tight fusion of deep learning and model-based VIO can improve state-of-the-art system performance and compensate for the lack of information in real time. We identify deep learning as a key technology for tackling information deficiencies in state estimation. We argue that developing hybrid frameworks that leverage its advantages and enable supervision for performance guarantees provides the most accurate and robust solution to state estimation.
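    The role the thesis assigns to learned priors, compensating a drifting estimate when visual information is deficient, can be caricatured in one dimension (all names and numbers here are illustrative, not the thesis's method): a biased odometry integrates into unbounded drift, while sparse absolute corrections keep the error bounded.

```python
# 1D caricature of drift compensation (illustrative only): a biased
# dead-reckoned position drifts without bound; an occasional absolute
# correction, playing the role of a learned prior, bounds the error.

def fuse(dead_reckoned, absolute_fix, gain=0.5):
    """Blend the integrated estimate toward an absolute measurement."""
    return dead_reckoned + gain * (absolute_fix - dead_reckoned)

def run(true_positions, bias=0.1, fix_every=5):
    """Integrate biased increments; apply a sparse correction every few steps."""
    est, prev, out = 0.0, 0.0, []
    for k, x in enumerate(true_positions):
        est += (x - prev) + bias       # odometry increment with constant bias
        prev = x
        if k % fix_every == 0:
            est = fuse(est, x)         # sparse absolute fix bounds the drift
        out.append(est)
    return out
```

With a 0.1 bias per step, the uncorrected estimate drifts linearly; with a correction every five steps the error stays below one unit in this toy run.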

    Autonomous Navigation in Complex Indoor and Outdoor Environments with Micro Aerial Vehicles

    Micro aerial vehicles (MAVs) are ideal platforms for surveillance and search and rescue in confined indoor and outdoor environments due to their small size, superior mobility, and hover capability. In such missions, it is essential that the MAV be capable of autonomous flight to minimize operator workload. Despite recent successes in the commercialization of GPS-based autonomous MAVs, autonomous navigation in complex and possibly GPS-denied environments gives rise to challenging engineering problems that require an integrated approach to perception, estimation, planning, control, and high-level situational awareness. Among these, state estimation is the first and most critical component for autonomous flight, especially because of the inherently fast dynamics of MAVs and possibly unknown environmental conditions. In this thesis, we present methodologies and system designs, with a focus on state estimation, that enable a lightweight off-the-shelf quadrotor MAV to autonomously navigate complex unknown indoor and outdoor environments using only onboard sensing and computation. We start by developing laser- and vision-based state estimation methodologies for indoor autonomous flight. We then investigate fusion of heterogeneous sensors to improve robustness and enable operation in complex indoor and outdoor environments. We further propose estimation algorithms for on-the-fly initialization and online failure recovery. Finally, we present planning, control, and environment coverage strategies for integrated high-level autonomy behaviors. Extensive online experimental results are presented throughout the thesis. We conclude by proposing future research opportunities.
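    The heterogeneous-sensor fusion step can be sketched in its simplest scalar form (an illustrative inverse-variance weighting, not the thesis's actual estimator): each sensor, e.g. laser or vision, reports a position estimate with a variance, and the fused estimate weights each report by its information content.

```python
# Scalar inverse-variance fusion sketch (illustrative; sensor values and
# variances are made up). A low-variance sensor pulls the fused estimate
# toward itself, and the fused variance is smaller than any single one.

def fuse_estimates(estimates):
    """estimates: list of (value, variance); returns (fused, fused_variance)."""
    total_info = sum(1.0 / var for _, var in estimates)   # information adds
    fused = sum(val / var for val, var in estimates) / total_info
    return fused, 1.0 / total_info

if __name__ == "__main__":
    # e.g. laser says 0.0 m (var 1.0), vision says 10.0 m (var 4.0)
    print(fuse_estimates([(0.0, 1.0), (10.0, 4.0)]))
```

This is the one-dimensional core of a Kalman update; the thesis's fusion operates on full vehicle states rather than scalars.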

    A review of UAV autonomous navigation in GPS-denied environments

    Unmanned aerial vehicles (UAVs) have drawn increased research interest in recent years, leading to a vast number of applications such as terrain exploration, disaster assistance, and industrial inspection. Unlike UAV navigation in outdoor environments, which relies on GPS (Global Positioning System) for localization, indoor navigation cannot rely on GPS due to poor signal quality or the lack of a signal. Although some review papers have summarized particular indoor navigation strategies (e.g., visual-based navigation) or their specific sub-components (e.g., localization and path planning) in detail, a comprehensive survey of complete navigation strategies covering the different technologies is still lacking. This paper proposes a taxonomy that first classifies navigation strategies into Mapless and Map-based ones according to map usage and then categorizes Mapless navigation into Integrated, Direct, and Indirect approaches via common characteristics. Map-based navigation is in turn split into Known Map/Spaces and Map-building according to prior knowledge. To analyze these navigation strategies, the paper uses three evaluation metrics (Path Length, Deviation Rate, and Exploration Efficiency), chosen according to the common purposes of navigation, to show how well the strategies can perform. Furthermore, three representative strategies were selected, and 120 flying experiments were conducted in two reality-like simulated indoor environments to measure their performance with the evaluation metrics proposed in this paper, i.e., the ratio of Successful Flight, the Mean Time of Successful Flight, the Mean Length of Successful Flight, the Mean Time of Flight, and the Mean Length of Flight.
    In comparison to CNN-based Supervised Learning (which directly maps visual observations to UAV controls) and Frontier-based navigation (which necessitates continuous global map generation), the experiments show that CNN-based Distance Estimation for navigation trades off the ratio of Successful Flight against the required time and path length. Moreover, this paper identifies the current challenges and opportunities that will drive UAV navigation research in GPS-denied environments.
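    The five reported statistics can be computed from per-trial flight records as follows (field names are hypothetical; the metric definitions follow the list above):

```python
# Compute the survey's five flight statistics from trial records.
# Each trial is a dict with hypothetical keys: success, time, length.

def summarize(trials):
    """Return the success ratio plus mean time/length over successes and overall."""
    succ = [t for t in trials if t["success"]]

    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0

    return {
        "success_ratio": len(succ) / len(trials),
        "mean_time_success": mean([t["time"] for t in succ]),
        "mean_length_success": mean([t["length"] for t in succ]),
        "mean_time": mean([t["time"] for t in trials]),
        "mean_length": mean([t["length"] for t in trials]),
    }
```

Separating the "successful only" means from the overall means matters because a strategy that crashes early can show deceptively short mean flight times if failures are averaged in.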

    Multi-robot Collaborative Visual Navigation with Micro Aerial Vehicles

    Micro Aerial Vehicles (MAVs), particularly multi-rotor MAVs, have gained significant popularity in the autonomous robotics research field. The small size and agility of these aircraft make them safe to use in contained environments. As such, MAVs have numerous applications in both commercial and research settings, such as Search and Rescue (SaR), surveillance, inspection, and aerial mapping. For an autonomous MAV to safely and reliably navigate within a given environment, the control system must be able to determine the state of the aircraft at any given moment. The state consists of a number of extrinsic variables such as the position, velocity, and attitude of the MAV. The most common approach for outdoor operations is the Global Positioning System (GPS). While GPS has been widely used for long-range navigation in open environments, its performance degrades significantly in constrained environments and is unusable indoors. As a result, state estimation for MAVs in such constrained environments is a popular and exciting research area. Many successful solutions have been developed using laser range-finder sensors. These sensors provide very accurate measurements at the cost of increased power and weight requirements. Cameras offer an attractive alternative state estimation sensor: they provide high information content per image coupled with light weight and low power consumption. As a result, much recent work has focused on state estimation for MAVs where a camera is the only exteroceptive sensor. Much of this work considers single MAVs; however, it is the author's belief that the full potential and benefits of the MAV platform can only be realised when teams of MAVs are able to cooperatively perform tasks such as SaR or mapping. Therefore, the work presented in this thesis focuses on the problem of vision-based navigation for MAVs from a multi-robot perspective.
    Multi-robot visual navigation presents a number of challenges: not only must the MAVs be able to estimate their state from visual observations of the environment, but they must also be able to share the information they gain about their environment with other members of the team in a meaningful fashion. Meaningful sharing of observations is achieved when the MAVs have a common frame of reference for both positioning and observations, and such sharing is key to achieving cooperative multi-robot navigation. In this thesis, two main ideas are explored to address these issues. First, appearance-based (re-)localisation is explored as a means of establishing a common reference frame for multiple MAVs. This approach allows a team of MAVs to very easily establish a common frame of reference prior to starting their mission; that common frame then allows all subsequent operations, such as surveillance or mapping, to proceed with direct cooperation between all MAVs. The second idea focuses on the structure and nature of inter-robot communication with respect to visual navigation: the thesis explores how a partially distributed architecture can vastly improve the scalability and robustness of a multi-MAV visual navigation framework. A navigation framework would not be complete without a means of control. In the multi-robot setting, the control problem is complicated by the need for inter-robot collision avoidance. This thesis presents a MAV trajectory controller based on a combination of classical control theory and distributed Velocity Obstacle (VO) based collision avoidance. Once a means of control is established, an autonomous multi-MAV team requires a mission. One such mission is exploration: exploring a previously unknown environment in order to produce a map and/or search for objects of interest.
    This thesis also addresses the problem of multi-robot exploration using only the sparse interest-point data collected from the visual navigation system. In a multi-MAV exploration scenario, the problem of task allocation, i.e., assigning areas for each MAV to explore, can be a challenging one; an auction-based protocol is considered to address it. The two applications discussed, VO-based trajectory control and auction-based environment exploration, form two case studies which serve as the partial basis for the evaluation of the navigation solutions presented in this thesis. In summary, the visual navigation systems presented in this thesis allow large teams of MAVs to cooperatively perform tasks such as collision avoidance and environment exploration in a robust and efficient manner. The work presented is a step in the direction of fully autonomous teams of MAVs performing complex, dangerous, and useful tasks in the real world.
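    The core geometric test behind Velocity Obstacle collision avoidance can be sketched as follows (a minimal 2D version; the function name and exact cone construction are illustrative rather than the thesis's implementation): a relative velocity leads to a collision when it points into the cone of bearings toward the other robot, widened by the combined safety radius.

```python
import math

# Minimal 2D velocity-obstacle test (illustrative). A collision is predicted
# when the relative velocity lies inside the cone of directions toward the
# other robot, with half-angle asin(combined_radius / distance).

def leads_to_collision(rel_pos, rel_vel, combined_radius):
    """rel_pos: other robot minus self; rel_vel: self velocity minus other's."""
    dist = math.hypot(rel_pos[0], rel_pos[1])
    if dist <= combined_radius:          # already overlapping
        return True
    speed = math.hypot(rel_vel[0], rel_vel[1])
    if speed == 0.0:                     # no relative motion, no collision
        return False
    # Angle between the relative velocity and the line of sight.
    dot = rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]
    angle = math.acos(max(-1.0, min(1.0, dot / (dist * speed))))
    half_cone = math.asin(combined_radius / dist)
    return angle <= half_cone
```

A VO-based controller would sample candidate velocities and keep only those for which this test is false, then pick the admissible velocity closest to the preferred one.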

    Proceedings of the International Micro Air Vehicles Conference and Flight Competition 2017 (IMAV 2017)

    The IMAV 2017 conference was held at ISAE-SUPAERO, Toulouse, France, from Sept. 18 to Sept. 21, 2017. More than 250 participants from 30 different countries worldwide presented their latest research activities in the field of drones. 38 papers were presented during the conference, covering topics such as Aerodynamics, Aeroacoustics, Propulsion, Autopilots, Sensors, Communication systems, Mission planning techniques, Artificial Intelligence, and Human-machine cooperation as applied to drones.

    Control and visual navigation for unmanned underwater vehicles

    Ph.D. Thesis. Control and navigation systems are key for any autonomous robot. Due to environmental disturbances, model uncertainties, and nonlinear dynamics, reliable control is essential, and improvements in controller design can significantly benefit the overall performance of Unmanned Underwater Vehicles (UUVs). Analogously, due to electromagnetic attenuation in underwater environments, the navigation of UUVs is always a challenging problem. In this thesis, control and navigation systems for UUVs are investigated. In the control field, four different control strategies have been considered: Proportional-Integral-Derivative (PID) control, an improved Sliding Mode Control (SMC), Backstepping Control (BC), and a customised Fuzzy Logic Control (FLC). The performance of these four controllers was initially simulated and subsequently evaluated through practical experiments under different conditions using an underwater vehicle in a tank. The results show that the improved SMC is more robust than the others, with small settling time, overshoot, and error. In the navigation field, three underwater visual navigation systems have been developed in the thesis: an ArUco underwater navigation system, a novel Integrated Visual Odometry with a Monocular camera (IVO-M), and a novel Integrated Visual Odometry with a Stereo camera (IVO-S). Compared with conventional underwater navigation, these methods are relatively low-cost solutions, and unlike other visual or inertial-visual navigation methods, they work well in underwater sparse-feature environments. The results show the following: the ArUco underwater navigation system does not suffer from cumulative error, but some segments of the estimated trajectory are not consistent; IVO-M suffers from cumulative error (an error ratio of about 3-4%) and is limited by the assumption that the “seabed is locally flat”; IVO-S suffers from only small cumulative errors (an error ratio of less than 2%).
    Overall, this thesis contributes to the control and navigation systems of UUVs, presenting a comparison between controllers, the improved SMC, and low-cost underwater visual navigation methods.
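    The sliding-mode idea behind the improved SMC can be illustrated with a minimal first-order design (the gains, disturbance, and error dynamics here are illustrative, not the thesis's vehicle model): the control switches on the sign of a sliding surface s = de + lam * e, which drives the tracking error to zero despite a bounded disturbance.

```python
import math

# Minimal sliding mode control sketch (illustrative, not the thesis's design).
# Error dynamics: double integrator dde = u + disturbance, |disturbance| <= 0.5.
# Control: u = -k * sign(s) with sliding surface s = de + lam * e, k > |d|.

def simulate(e0=1.0, de0=0.0, k=5.0, lam=2.0, dt=0.001, steps=5000):
    """Integrate the error dynamics; returns the final tracking error."""
    e, de = e0, de0
    for i in range(steps):
        s = de + lam * e
        u = -k * (1.0 if s > 0 else -1.0 if s < 0 else 0.0)
        disturbance = 0.5 * math.sin(0.01 * i)     # bounded, unknown to the law
        de += (u + disturbance) * dt
        e += de * dt
    return e
```

After a short reaching phase the state chatters along s = 0, where the error decays at rate lam regardless of the disturbance; this robustness to bounded disturbances is why the thesis finds SMC outperforming the other controllers underwater.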

    Vision-Based Navigation System for Unmanned Aerial Vehicles

    International Mention in the doctoral degree. The main objective of this dissertation is to provide Unmanned Aerial Vehicles (UAVs) with a robust navigation system that allows them to perform complex tasks autonomously and in real time. The proposed algorithms solve the navigation problem for outdoor as well as indoor environments, based mainly on visual information captured by monocular cameras. In addition, this dissertation presents the advantages of using visual sensors as the main source of data, or as a complement to other sensors, in order to improve the accuracy and robustness of sensing. The dissertation covers several research topics based on computer vision techniques: (I) Pose Estimation, which provides a solution for estimating the 6D pose of the UAV. This algorithm is based on the combination of the SIFT detector and the FREAK descriptor, which maintains feature-matching performance while decreasing computation time. The pose estimation problem is then solved based on the decomposition of the world-to-frame and frame-to-frame homographies. (II) Obstacle Detection and Collision Avoidance, in which the UAV senses and detects frontal obstacles situated in its path. The detection algorithm mimics human behavior in detecting approaching obstacles by analyzing the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around those feature points in consecutive frames. By comparing the obstacle's area ratio with the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, the algorithm extracts the collision-free zones around the obstacle and, combining these with the tracked waypoints, the UAV performs the avoidance maneuver.
    (III) Navigation Guidance, which generates the waypoints that determine the flight path based on the environment and the situated obstacles, and then provides a strategy to follow the path segments efficiently and perform the flight maneuver smoothly. (IV) Visual Servoing, which offers different control solutions (Fuzzy Logic Control (FLC) and PID) based on the obtained visual information, in order to achieve flight stability, perform the correct maneuver, avoid possible collisions, and track the waypoints. All the proposed algorithms have been verified with real flights in both indoor and outdoor environments, taking into consideration visual conditions such as illumination and texture. The obtained results have been validated against other systems, such as the VICON motion capture system and DGPS in the case of the pose estimation algorithm. In addition, the proposed algorithms have been compared with several previous works in the state of the art, and the results prove the improvement in the accuracy and robustness of the proposed algorithms. Finally, this dissertation concludes that visual sensors are lightweight, consume little power, and provide reliable information, making them a powerful tool in navigation systems for increasing the autonomy of UAVs in real-world applications.
    Official Doctoral Programme in Electrical, Electronic and Automatic Engineering. President: Carlo Regazzoni. Secretary: Fernando García Fernández. Committee member: Pascual Campoy Cerver
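    The expansion-cue obstacle test in topic (II) above can be sketched as follows (hull vertices are assumed given in order, and the threshold is an illustrative value, not the dissertation's tuned parameter): the convex hull of tracked feature points is compared across consecutive frames, and a large area ratio flags an approaching obstacle.

```python
# Sketch of the hull-expansion obstacle cue (illustrative threshold).
# Hull vertices are assumed ordered (as returned by a convex hull routine).

def polygon_area(pts):
    """Shoelace area of a polygon given ordered (x, y) vertices."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def approaching(hull_prev, hull_curr, threshold=1.2):
    """True if the feature-point hull expanded beyond the threshold ratio."""
    return polygon_area(hull_curr) / polygon_area(hull_prev) > threshold
```

An expanding hull mimics the looming cue humans use: an object on a collision course grows in the image while its bearing stays roughly constant, so the area ratio between frames serves as a cheap time-to-contact proxy.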