64 research outputs found

    UAV/UGV Autonomous Cooperation: UAV Assists UGV to Climb a Cliff by Attaching a Tether

    Full text link
    This paper proposes a novel cooperative system for an Unmanned Aerial Vehicle (UAV) and an Unmanned Ground Vehicle (UGV) in which the UAV serves not only as a flying sensor but also as a tether attachment device. The two robots are connected by a tether, allowing the UAV to anchor the tether to a structure at the top of steep terrain that is unreachable for UGVs. This enhances the poor traversability of the UGV, both by providing a wider range of scanning and mapping from the air and by allowing the UGV to climb steep terrain by winding the tether. In addition, we present an autonomous framework for collaborative navigation and tether attachment in an unknown environment. The UAV employs visual-inertial navigation with 3D voxel mapping and obstacle avoidance planning. The UGV uses the voxel map to generate an elevation map, from which it executes path planning based on a traversability analysis. Furthermore, we compare the pros and cons of possible tether anchoring methods from multiple points of view. To increase the probability of successful anchoring, we evaluated the anchoring strategy in an experiment. Finally, the feasibility and capability of the proposed system were demonstrated in an autonomous field mission experiment involving an obstacle and a cliff.
    Comment: 7 pages, 8 figures, accepted to the 2019 International Conference on Robotics & Automation. Video: https://youtu.be/UzTT8Ckjz1
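The traversability analysis mentioned in the abstract can be illustrated with a toy sketch (not the paper's actual implementation): given an elevation grid, a cell is marked traversable when the steepest slope to any 4-neighbour stays under a threshold. The function name, grid layout and slope threshold are all illustrative assumptions.

```python
import math

def traversability(elev, cell_size, max_slope_deg=30.0):
    """Mark each interior cell of an elevation grid (list of rows of heights)
    as traversable if the steepest slope to any 4-neighbour is below the
    threshold. Border cells are conservatively left non-traversable."""
    rows, cols = len(elev), len(elev[0])
    out = [[False] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # Largest height difference to any of the four neighbours.
            rise = max(abs(elev[r][c] - elev[nr][nc])
                       for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)))
            slope = math.degrees(math.atan2(rise, cell_size))
            out[r][c] = slope <= max_slope_deg
    return out
```

On a flat grid every interior cell passes; a cliff-like height jump of several metres over one cell fails the 30-degree test, which is exactly the situation where the paper's tether winding takes over.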

    Development of a ground robot for indoor SLAM using Low‐Cost LiDAR and remote LabVIEW HMI

    Get PDF
    The simultaneous localization and mapping (SLAM) problem is crucial to autonomous navigation and robot mapping. The main purpose of this thesis is to develop a ground robot that implements SLAM in order to test the performance of the low-cost RPLiDAR A1M8 by DFRobot. The HectorSLAM package, available in ROS, was used with a Raspberry Pi to implement SLAM and build maps. These maps are sent to a remote desktop via TCP/IP communication to be displayed on a LabVIEW HMI, where the user can also control the robot. The LabVIEW HMI, and the project in its entirety, is intended to be as easy as possible for a layman to use, with many processes automated to make this possible. The quality of the maps created by HectorSLAM and the RPLiDAR was evaluated both qualitatively and quantitatively to determine how useful the low-cost LiDAR can be for this application. It is hoped that the apparatus developed in this project will be used with drones in the future for 3D mapping.
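As an illustration of the map-transfer step, here is a minimal sketch of a binary message for shipping an occupancy grid over TCP to a remote HMI. The message layout (big-endian width/height header followed by raw cell bytes) is a hypothetical assumption, not the thesis's actual protocol.

```python
import struct

def pack_map(width, height, cells):
    """Pack an occupancy grid (row-major list of 0-255 cell values) into a
    binary message: [width:u16][height:u16][cells:bytes], big-endian."""
    assert len(cells) == width * height
    return struct.pack(">HH", width, height) + bytes(cells)

def unpack_map(msg):
    """Inverse of pack_map: recover (width, height, cells) from a message."""
    width, height = struct.unpack(">HH", msg[:4])
    return width, height, list(msg[4 : 4 + width * height])
```

A fixed binary layout like this is straightforward to parse on the LabVIEW side, where unflattening a known byte structure is simpler than parsing a text format.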

    Real-Time Accurate Visual SLAM with Place Recognition

    Get PDF
    The Simultaneous Localization and Mapping (SLAM) problem consists of localizing a sensor in a map that is built online. SLAM technology makes it possible for a robot to localize itself in an environment that is unknown to it, processing the information from its on-board sensors and thus without depending on external infrastructure. A map allows localization at all times without accumulating drift, unlike odometry, where incremental motions are integrated. This kind of technology is critical for the navigation of service robots and autonomous vehicles, and for user localization in augmented and virtual reality applications. The main contribution of this thesis is ORB-SLAM, a feature-based monocular SLAM system that works in real time in environments both small and large, indoors and outdoors. The system is robust to dynamic elements in the scene, can close loops and relocalize the camera even when the viewpoint has changed significantly, and includes a fully automatic initialization method. ORB-SLAM is currently the most complete, accurate and reliable monocular SLAM solution using a camera as the only sensor. The system, based on features and bundle adjustment, has demonstrated unprecedented accuracy and robustness on standard public sequences. Additionally, ORB-SLAM has been extended to reconstruct the environment semi-densely. Our solution decouples the semi-dense reconstruction from the estimation of the camera trajectory, resulting in a system that combines the accuracy and robustness of feature-based SLAM with the more complete reconstructions of direct methods. The monocular solution has also been extended to exploit information from stereo cameras, RGB-D cameras and inertial sensors, obtaining higher accuracy than other state-of-the-art solutions.
As a contribution to the scientific community, we have released an open-source implementation of our SLAM solution for monocular, stereo and RGB-D cameras, the first open-source solution able to work with all three camera types.
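ORB-SLAM's relocalization and loop closing rest on place recognition over a bag-of-words image representation. As a rough illustration of the idea only (not the system's actual DBoW2-based implementation), the sketch below scores a query image's visual-word histogram against stored keyframes by cosine similarity; the threshold is a toy assumption.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors (Counters)."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_loop_candidate(query, keyframes, min_score=0.3):
    """Return (index, score) of the most similar keyframe, or None if no
    keyframe clears the minimum similarity score."""
    scored = [(i, cosine(query, kf)) for i, kf in enumerate(keyframes)]
    best = max(scored, key=lambda t: t[1], default=None)
    return best if best and best[1] >= min_score else None
```

In a real system the candidate returned here would still have to pass a geometric verification step before a loop is accepted.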

    Lidar SLAM-Mapping As A Potential Powerline Maintenance Tool

    Get PDF
    The purpose of this master's thesis is to determine whether it is possible to utilize non-conventional sensory systems for power line inspection during times when more conventional methods are severely restricted. To achieve this, we utilize lidar-generated point clouds in conjunction with measurement data from inertial measurement units to create a geo-referenced set of point clouds and generate a map of the area of interest. This is achieved by combining raw data from a lidar sensor with the data provided by the inertial measurement unit to merge millions of points into one cohesive map. We use the Robot Operating System for the evaluation, fusion and integration of the different streams of data, and Google Cartographer to aid in the SLAM mapping of the different sensory data sources. Once a SLAM-mapped point cloud is generated, we can evaluate the accuracy of the data and the possibility of using it as a maintenance tool to assist in detecting and solving various problems that many electrical companies in rural Finland face during their daily business, such as snapped power lines or excess objects blocking the power lines. We want to determine whether a system like this could be used when it is impossible for cameras or the naked human eye to detect such faults, for example at night or during a storm.
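The core of stitching pose-tagged lidar scans into one cohesive map is transforming each scan from the sensor frame into a common world frame using the pose estimated for it. A minimal 2D sketch of that step follows (the thesis itself works in 3D via ROS and Cartographer; names and the 2D simplification are illustrative assumptions):

```python
import math

def transform_scan(scan, pose):
    """Transform local 2D lidar points into the world frame given an
    (x, y, yaw) pose: rotate by yaw, then translate by (x, y)."""
    x, y, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    return [(x + c * px - s * py, y + s * px + c * py) for px, py in scan]

def build_map(scans_with_poses):
    """Accumulate a list of (scan, pose) pairs into one merged point cloud."""
    cloud = []
    for scan, pose in scans_with_poses:
        cloud.extend(transform_scan(scan, pose))
    return cloud
```

In the full pipeline the poses come from the IMU-aided SLAM solution rather than being given, but the map-assembly step reduces to exactly this transform-and-accumulate loop.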

    Use of Unmanned Aerial Systems in Civil Applications

    Get PDF
    Interest in drones has grown exponentially over the last ten years, and these machines are often presented as the optimal solution in a huge number of civil applications (monitoring, agriculture, emergency management, etc.). However, the promise still does not match the data coming from the consumer market, suggesting that the only big field in which the use of small unmanned aerial vehicles is actually profitable is video making. This may be explained partly by the strong limits imposed by existing (and often obsolete) national regulations, but also, and perhaps mainly, by the lack of real autonomy. The vast majority of vehicles on the market today are in fact autonomous only in the sense that they can follow a pre-determined list of latitude-longitude-altitude coordinates. The aim of this thesis is to demonstrate that complete autonomy for UAVs can be achieved only with performant control, reliable and flexible planning platforms, and strong perception capabilities. These topics are introduced and discussed by presenting the results of the main research activities performed by the candidate over the last three years, which have resulted in: 1) the design, integration and control of a test bed for validating and benchmarking vision-based algorithms for space applications; 2) the implementation of a cloud-based platform for multi-agent mission planning; and 3) the on-board use of a multi-sensor fusion framework based on an Extended Kalman Filter architecture.
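As a toy illustration of the Kalman-filter fusion mentioned in point 3, the sketch below implements a one-dimensional linear Kalman filter that fuses noisy measurements of a single state; the real framework is an Extended Kalman Filter over a full vehicle state with multiple sensors, so this shows only the predict/update skeleton.

```python
class Kalman1D:
    """Minimal 1-D Kalman filter: a toy stand-in for the EKF-based
    multi-sensor fusion described in the thesis."""

    def __init__(self, x0, p0, q):
        self.x, self.p, self.q = x0, p0, q  # state, variance, process noise

    def predict(self):
        # State assumed constant; prediction only inflates the uncertainty.
        self.p += self.q

    def update(self, z, r):
        # Fuse a measurement z with sensor noise variance r.
        k = self.p / (self.p + r)          # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Different sensors are fused simply by calling `update` with each sensor's own noise variance `r`; a precise sensor (small `r`) pulls the estimate harder than a noisy one.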

    Long-term localization of unmanned aerial vehicles based on 3D environment perception

    Get PDF
    Unmanned Aerial Vehicles (UAVs) are currently used in countless civil and commercial applications, and the trend is rising. Outdoor obstacle-free operation based on the Global Positioning System (GPS) can generally be considered solved thanks to the availability of mature commercial products. However, some applications require their use in confined spaces or indoors, where GPS signals are not available. In order to allow the safe introduction of autonomous aerial robots in GPS-denied areas, there is still a need for greater reliability in several key technologies, such as localization, obstacle avoidance and planning, to achieve robust operation. Existing approaches for autonomous navigation in GPS-denied areas are not robust enough when it comes to aerial robots, or fail in long-term operation. This dissertation addresses the localization problem, proposing a methodology suitable for aerial robots moving in a three-dimensional (3D) environment using a combination of measurements from a variety of on-board sensors.
    We have focused on fusing three types of sensor data: images and 3D point clouds acquired from stereo or structured-light (RGB-D) cameras, inertial information from an on-board Inertial Measurement Unit (IMU), and distance measurements to several Ultra Wide-Band (UWB) radio beacons installed in the environment. The overall approach makes use of a 3D map of the environment, for which a mapping method that exploits the synergies between point clouds and radio-based sensing is also presented, so that the whole methodology can be used in any given scenario. The main contributions of this dissertation focus on a thoughtful combination of technologies to achieve robust, reliable and computationally efficient long-term localization of UAVs in indoor environments. This work has been validated and demonstrated over the past four years in the context of different research projects related to the localization and state estimation of aerial robots in GPS-denied areas, in particular the European Robotics Challenges (EuRoC) project, in which the author participated in the competition among top research institutions in Europe. Experimental results demonstrate the feasibility of the full approach, both in accuracy and computational efficiency, tested through real indoor flights and validated with data from a motion capture system.
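As a small illustration of how the UWB range measurements constrain position, the sketch below trilaterates a 2D position from three anchors by linearizing the circle equations into a 2x2 linear system; the dissertation's actual estimator fuses these ranges with visual and inertial data in full 3D, so this is only a toy.

```python
def trilaterate(anchors, ranges):
    """Estimate a 2-D position from three UWB anchor positions and the
    measured ranges to them. Subtracting the first circle equation
    (x - xi)^2 + (y - yi)^2 = di^2 from the other two cancels the quadratic
    terms and leaves a linear 2x2 system, solved here by Cramer's rule."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = ranges
    a11, a12 = 2 * (x0 - x1), 2 * (y0 - y1)
    a21, a22 = 2 * (x0 - x2), 2 * (y0 - y2)
    b1 = d1**2 - d0**2 + x0**2 - x1**2 + y0**2 - y1**2
    b2 = d2**2 - d0**2 + x0**2 - x2**2 + y0**2 - y2**2
    det = a11 * a22 - a12 * a21  # zero iff the anchors are collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

With noisy ranges and more than three anchors this becomes an over-determined least-squares problem, which is the form such range constraints typically take inside a larger state estimator.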