15 research outputs found

    Improving perception and locomotion capabilities of mobile robots in urban search and rescue missions

    Deployment of mobile robots in search and rescue missions is a way to make the job of human rescuers safer and more efficient. Such missions, however, require robots to be resilient to the harsh conditions of natural disasters or human-inflicted accidents. They have to operate on unstable rough terrain, in confined spaces, or in sensory-deprived environments filled with smoke or dust. Localization, a common task in mobile robotics that involves determining position and orientation with respect to a given coordinate frame, faces these conditions as well. In this thesis, we describe the development of a localization system for a tracked mobile robot intended for search and rescue missions. We first present a proprioceptive six-degrees-of-freedom localization system, which arose from an experimental comparison of several possible sensor fusion architectures. The system was then modified to incorporate exteroceptive velocity measurements, which significantly improve accuracy by reducing localization drift. Special attention was given to potential sensor outages and failures, to the track slippage that inevitably occurs with this type of robot, to the computational demands of the system, and to the different sampling rates at which sensory data arrive. Additionally, we addressed the problem of kinematic models for tracked odometry on rough terrain containing vertical obstacles. Thanks to the research projects the robot was designed for, we had access to training facilities used by the fire brigades of Italy, Germany, and the Netherlands. The accuracy and robustness of the proposed localization systems were therefore tested in conditions closely resembling real deployments in earthquake aftermaths and industrial accidents. The datasets of sensor measurements and reference poses that we created for evaluating localization accuracy are publicly available, and we consider them one of the contributions of this thesis.
    This thesis takes the form of a compilation of three journal publications and one paper that is under review at the time of submission.
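The drift-reduction idea above (proprioceptive odometry that accumulates error, corrected at a lower rate by exteroceptive velocity measurements) can be sketched with a scalar Kalman filter. This is a minimal illustration rather than the thesis's actual fusion architecture; the noise values, sampling rates, and slip bias are invented for the example.

```python
# Minimal 1-D sketch: track-encoder velocity (biased by slip) drives a
# high-rate update, while an exteroceptive velocity fix (e.g. from scan
# matching) arrives at a tenth of the rate and pulls the estimate back.
# All tuning values below are illustrative assumptions.

def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update of velocity estimate x."""
    K = P / (P + R)                  # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 1.0                      # velocity estimate and its variance
Q = 0.01                             # process noise: velocity may drift
R_enc, R_ext = 0.4, 0.05             # slip makes encoders less trustworthy
position, dt = 0.0, 0.1
for step in range(100):
    P += Q                                      # predict (constant velocity)
    x, P = kalman_update(x, P, 1.2, R_enc)      # encoders read 1.2 m/s (slip)
    if step % 10 == 0:                          # exteroceptive fix, 1/10 rate
        x, P = kalman_update(x, P, 1.0, R_ext)  # true speed is 1.0 m/s
    position += x * dt                          # integrate to get pose
```

Without the low-rate exteroceptive updates, the integrated position would drift at the full 20 % slip bias; with them, the error stays bounded between fixes.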

    LiDAR-Based Place Recognition For Autonomous Driving: A Survey

    LiDAR-based place recognition (LPR) plays a pivotal role in autonomous driving, where it assists Simultaneous Localization and Mapping (SLAM) systems in reducing accumulated errors and achieving reliable localization. However, existing reviews predominantly concentrate on visual place recognition (VPR) methods. Despite the recent remarkable progress in LPR, to the best of our knowledge, there is no dedicated systematic review of this area. This paper bridges the gap by providing a comprehensive review of place recognition methods employing LiDAR sensors, thus facilitating and encouraging further research. We commence by delving into the problem formulation of place recognition, exploring existing challenges, and describing relations to previous surveys. Subsequently, we conduct an in-depth review of related research, offering detailed classifications, strengths and weaknesses, and architectures. Finally, we summarize existing datasets, commonly used evaluation metrics, and comprehensive evaluation results of various methods on public datasets. This paper can serve as a valuable tutorial for newcomers entering the field of place recognition and for researchers interested in long-term robot localization. We pledge to maintain an up-to-date project on our website https://github.com/ShiPC-AI/LPR-Survey. (26 pages, 13 figures, 5 tables)
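The pipeline that LPR methods share (compress each scan into a global descriptor, then match a query scan against a database of descriptors) can be illustrated with a deliberately simple descriptor. The range histogram below is a toy stand-in for the far richer descriptors the survey classifies; it only demonstrates the retrieval mechanics.

```python
import numpy as np

# Toy place-recognition pipeline: each LiDAR scan is reduced to a
# normalized histogram of point ranges (rotation-invariant about the
# sensor origin), and a query is matched by L1 distance. This is an
# illustrative sketch, not any specific published descriptor.

def range_histogram(points, n_bins=32, max_range=50.0):
    """points: (N, 3) array of LiDAR returns -> (n_bins,) descriptor."""
    ranges = np.linalg.norm(points, axis=1)
    hist, _ = np.histogram(ranges, bins=n_bins, range=(0.0, max_range))
    return hist / max(hist.sum(), 1)     # normalize for density invariance

def match(query_desc, database):
    """Index of the most similar stored descriptor (L1 distance)."""
    dists = [np.abs(query_desc - d).sum() for d in database]
    return int(np.argmin(dists))

# Revisiting a place from a rotated viewpoint leaves the descriptor
# nearly unchanged, because point ranges are preserved by rotation.
rng = np.random.default_rng(0)
scan_a = rng.uniform(-20, 20, size=(1000, 3))     # place A
scan_b = rng.uniform(-5, 5, size=(1000, 3))       # a different place B
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0.0, 0.0, 1.0]])
scan_a_rot = scan_a @ R.T                          # place A, new heading

db = [range_histogram(scan_b), range_histogram(scan_a)]
best = match(range_histogram(scan_a_rot), db)      # -> 1 (the revisit)
```

Real LPR descriptors additionally encode spatial structure so that similar-looking but distinct places are separable, which a pure range histogram cannot do.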

    Systems engineering approach to develop guidance, navigation and control algorithms for unmanned ground vehicle

    Despite the growing popularity of unmanned systems deployed in the military domain, limited research effort has been dedicated to the progress of ground system development. Dedicated efforts for unmanned ground vehicles (UGVs) have focused largely on operations in continental environments, where vegetation is relatively sparse compared to the tropical jungles or plantation estates commonly found in Asia. This research explores methods for the development of a UGV capable of operating autonomously in such densely cluttered environments. The thesis adopts a systems engineering approach to understand the pertinent parameters affecting the performance of the UGV in order to evaluate, design, and develop the necessary guidance, navigation, and control algorithms. It uses the pure pursuit method for path following and the vector field histogram method for obstacle avoidance as the main guidance and control algorithms governing the movement of the UGV. It then considers the use of a feature-recognition method from image processing to form the basis of the target identification and tracking algorithm.
    http://archive.org/details/systemsengineeri1094550579. Outstanding Thesis. Major, Republic of Singapore Army. Approved for public release; distribution is unlimited.
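For reference, the pure pursuit rule mentioned above reduces to a short computation: given a goal point on the path one lookahead distance ahead, expressed in the vehicle frame, the commanded arc curvature is kappa = 2*y / L^2. The sketch below assumes a bicycle steering model; names and parameter values are illustrative, not taken from the thesis.

```python
import math

# Pure pursuit sketch: the goal point (x, y) lies on the path one
# lookahead distance L ahead, in the vehicle frame (x forward, y left).
# The vehicle steers along the circular arc through that point, whose
# curvature is kappa = 2*y / L**2.

def pure_pursuit_curvature(goal_y, lookahead):
    """Arc curvature toward a goal point at lateral offset goal_y."""
    return 2.0 * goal_y / (lookahead ** 2)

def steering_angle(curvature, wheelbase):
    """Bicycle-model steering angle realizing the commanded curvature."""
    return math.atan(wheelbase * curvature)

# Goal point 4 m ahead and 1 m to the left: a gentle left turn.
lookahead = math.hypot(4.0, 1.0)
kappa = pure_pursuit_curvature(1.0, lookahead)
delta = steering_angle(kappa, wheelbase=2.0)   # radians, positive = left
```

The lookahead distance trades off stability against cutting corners: larger values smooth the trajectory, smaller values track the path more tightly.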

    EFFECTIVE NAVIGATION AND MAPPING OF A CLUTTERED ENVIRONMENT USING A MOBILE ROBOT

    Today, the as-is three-dimensional point cloud acquisition process for understanding scenes of interest, monitoring construction progress, and detecting safety hazards uses a laser scanning system mounted on mobile robots, which makes it faster and more automated, but there is still room for improvement. The main disadvantage of data collection using laser scanners is that point cloud data are only collected in a scanner's line of sight, so regions of three-dimensional space that are occluded by objects are not observable. To solve this problem and obtain a complete reconstruction of sites without information loss, scans must be taken from multiple viewpoints. This thesis describes how such a solution can be integrated into a fully autonomous mobile robot capable of generating a high-resolution three-dimensional point cloud of a cluttered and unknown environment without a prior map. First, the mobile platform estimates the unevenness of the terrain and surrounding environment. Second, it finds the occluded regions in the currently built map and determines the most effective next scan location. Then, it moves to that location using a grid-based path planner and the unevenness estimation results. Finally, it performs a high-resolution scan of that area to fill out the point cloud map. This process repeats until the designated scan region is filled with scanned point cloud data. The mobile platform also keeps scanning for navigation and obstacle avoidance purposes, calculates its relative location, and builds the surrounding map while moving and scanning, a process known as simultaneous localization and mapping. The proposed approaches and the system were tested and validated in an outdoor construction site and a simulated disaster environment with promising results.
    Ph.D.
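The "find the occluded region, choose the next scan location" step can be sketched as a next-best-view selection on a 2-D occupancy grid: score each reachable cell by how many unknown cells it would see, minus a travel cost. The grid encoding and scoring rule below are illustrative assumptions, not the thesis's planner.

```python
import numpy as np

# Next-best-view sketch on an occupancy grid where 0 = free,
# 1 = occupied, -1 = unknown (occluded). The candidate scan pose is a
# free cell; its score is the count of unknown cells within sensor
# range minus a weighted travel distance. Values are illustrative.

def next_best_scan(grid, robot, sensor_range, travel_weight=0.1):
    """Pick the free cell seeing the most unknown cells, minus travel cost."""
    free = np.argwhere(grid == 0)
    unknown = np.argwhere(grid == -1)
    best_score, best_cell = -np.inf, None
    for cell in free:
        d_unknown = np.linalg.norm(unknown - cell, axis=1)
        gain = int((d_unknown <= sensor_range).sum())  # unknown cells in view
        cost = float(np.linalg.norm(cell - robot))     # straight-line travel
        score = gain - travel_weight * cost
        if score > best_score:
            best_score, best_cell = score, (int(cell[0]), int(cell[1]))
    return best_cell

# An unseen block in one corner pulls the next scan pose toward it.
grid = np.zeros((10, 10), dtype=int)   # all free, fully observed
grid[0:4, 6:10] = -1                   # occluded / unscanned region
grid[5, 5] = 1                         # an obstacle
target = next_best_scan(grid, robot=np.array([9, 0]), sensor_range=3.0)
```

A full planner would also check line-of-sight through obstacles and reachability via the path planner; this sketch scores visibility by distance alone.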

    Vision-Based navigation system for unmanned aerial vehicles

    International Mention in the doctoral degree.
    The main objective of this dissertation is to provide Unmanned Aerial Vehicles (UAVs) with a robust navigation system that allows them to perform complex tasks autonomously and in real time. The proposed algorithms solve the navigation problem in outdoor as well as indoor environments, mainly based on visual information captured by monocular cameras. In addition, this dissertation presents the advantages of using visual sensors as the main source of data, or as a complement to other sensors, in order to improve the accuracy and robustness of sensing. The dissertation covers several research topics based on computer vision techniques: (I) Pose Estimation, which provides a solution for estimating the 6D pose of the UAV. This algorithm is based on the combination of the SIFT detector and the FREAK descriptor, which maintains the performance of feature-point matching while decreasing computational time. The pose estimation problem is then solved based on the decomposition of the world-to-frame and frame-to-frame homographies. (II) Obstacle Detection and Collision Avoidance, in which the UAV senses and detects frontal obstacles situated in its path. The detection algorithm mimics human behavior in detecting approaching obstacles by analyzing the size changes of the detected feature points, combined with the expansion ratios of the convex hulls constructed around those points in consecutive frames. Then, by comparing the area ratio of the obstacle with the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, the algorithm extracts the collision-free zones around the obstacle and, in combination with the tracked waypoints, the UAV performs the avoidance maneuver.
    (III) Navigation Guidance, which generates the waypoints that determine the flight path based on the environment and the situated obstacles, and provides a strategy for following the path segments efficiently and performing the flight maneuver smoothly. (IV) Visual Servoing, which offers different control solutions (Fuzzy Logic Control (FLC) and PID) based on the obtained visual information, in order to achieve flight stability, perform the correct maneuver, avoid possible collisions, and track the waypoints. All the proposed algorithms have been verified in real flights in both indoor and outdoor environments, taking into consideration visual conditions such as illumination and texture. The obtained results have been validated against other systems, such as the VICON motion capture system and DGPS in the case of the pose estimation algorithm. In addition, the proposed algorithms have been compared with several previous works in the state of the art, and the results prove the improvement in accuracy and robustness of the proposed algorithms. Finally, this dissertation concludes that visual sensors have the advantages of light weight and low power consumption while providing reliable information, which makes them a powerful tool in navigation systems for increasing the autonomy of UAVs in real-world applications.
    Official Doctoral Program in Electrical, Electronic and Automatic Engineering. President: Carlo Regazzoni. Secretary: Fernando García Fernández. Member: Pascual Campoy Cerver
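The expansion cue used for obstacle detection in topic (II) can be sketched directly: build the convex hull around tracked feature points in two consecutive frames and flag an approaching obstacle when the hull area grows past a threshold. The hull routine is the standard monotone-chain algorithm; the threshold and point sets are invented for the example.

```python
import numpy as np

# Expansion-cue sketch: as an object approaches the camera, its image
# projection scales up, so the convex hull of tracked feature points
# grows. A hull-area ratio above a threshold flags a potential
# collision. Threshold and geometry are illustrative assumptions.

def convex_hull_area(points):
    """Area of the 2-D convex hull of (N, 2) points (monotone chain)."""
    pts = sorted(set(map(tuple, points)))

    def cross(o, a, b):                      # z of (a - o) x (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):                           # one hull chain
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain[:-1]

    hull = half(pts) + half(pts[::-1])       # full hull, no repeats
    x = [p[0] for p in hull]
    y = [p[1] for p in hull]                 # shoelace formula
    return 0.5 * abs(sum(x[i] * y[i - 1] - y[i] * x[i - 1]
                         for i in range(len(hull))))

def is_approaching(prev_pts, curr_pts, ratio_threshold=1.2):
    """True if the tracked hull expanded enough to signal an obstacle."""
    return convex_hull_area(curr_pts) / convex_hull_area(prev_pts) > ratio_threshold

# Feature points of an object seen one frame later, 30 % closer:
# the projected hull area grows by roughly 1 / 0.7**2, about 2x.
prev_pts = np.array([[0, 0], [4, 0], [4, 3], [0, 3], [2, 4]], float)
curr_pts = prev_pts * (1 / 0.7)
approaching = bool(is_approaching(prev_pts, curr_pts))   # -> True
```

The dissertation combines this area ratio with per-feature size changes; using both cues makes the decision robust to individual feature-tracking failures.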

    UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments

    The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity for ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include:
    UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging;
    UAV sensor applications: spatial ecology; pest detection; reefs; forestry; volcanology; precision agriculture; wildlife species tracking; search and rescue; target tracking; atmosphere monitoring; chemical, biological, and natural disaster phenomena; fire prevention; flood prevention; volcanic monitoring; pollution monitoring; microclimates; and land use;
    Wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques;
    UAV-based change detection