
    RadarSLAM: Radar based Large-Scale SLAM in All Weathers

    Numerous Simultaneous Localization and Mapping (SLAM) algorithms have been presented in the last decade using different sensor modalities. However, robust SLAM in extreme weather conditions is still an open research problem. In this paper, RadarSLAM, a full radar-based graph SLAM system, is proposed for reliable localization and mapping in large-scale environments. It is composed of pose tracking, local mapping, loop closure detection and pose graph optimization, enhanced by novel feature matching and probabilistic point cloud generation on radar images. Extensive experiments are conducted on a public radar dataset and several self-collected radar sequences, demonstrating state-of-the-art reliability and localization accuracy in various adverse weather conditions, such as dark night, dense fog and heavy snowfall.
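
    The abstract names the standard graph-SLAM building blocks (pose tracking, local mapping, loop closure detection, pose graph optimization). As a rough illustration of the last step only, the sketch below optimizes a toy planar pose graph with SciPy; the 2D pose parametrization, the solver, and all numbers are assumptions for illustration, not the RadarSLAM implementation.

```python
# Minimal pose-graph optimization sketch (hypothetical; not the RadarSLAM code).
# Poses are planar (x, y, theta). Each edge stores a relative-pose measurement
# between two poses, as produced by odometry/pose tracking or loop-closure
# detection; the first pose is held fixed and the rest are refined.
import numpy as np
from scipy.optimize import least_squares

def relative_pose(xi, xj):
    """Pose of j expressed in the frame of pose i."""
    c, s = np.cos(xi[2]), np.sin(xi[2])
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    return np.array([c * dx + s * dy, -s * dx + c * dy, xj[2] - xi[2]])

def residuals(free, anchor, edges):
    poses = np.vstack([anchor, free.reshape(-1, 3)])
    res = []
    for i, j, meas in edges:
        err = relative_pose(poses[i], poses[j]) - meas
        err[2] = (err[2] + np.pi) % (2 * np.pi) - np.pi   # wrap the angle error
        res.extend(err)
    return np.array(res)

# Three poses drifting along x, plus one loop-closure edge that contradicts the
# accumulated odometry and pulls the trajectory back into shape.
edges = [(0, 1, np.array([1.0, 0.0, 0.0])),   # odometry
         (1, 2, np.array([1.0, 0.0, 0.0])),   # odometry
         (0, 2, np.array([1.8, 0.1, 0.0]))]   # loop closure
anchor = np.zeros(3)                          # first pose fixed at the origin
initial = np.array([[1.1, 0.0, 0.0], [2.3, 0.0, 0.0]]).ravel()
sol = least_squares(residuals, initial, args=(anchor, edges))
print(np.vstack([anchor, sol.x.reshape(-1, 3)]))
```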

    Influence of complex environments on LiDAR-based robot navigation

    To ensure safe and efficient navigation, mobile robots rely heavily on their on-board sensors. One such sensor, increasingly used for robot navigation, is the Light Detection And Ranging (LiDAR) sensor. Although recent research has shown improvements in LiDAR-based navigation, dealing with complex unstructured environments or difficult weather conditions remains problematic. In this thesis, we present an analysis of the influence of such challenging conditions on LiDAR-based navigation. Our first contribution is to evaluate how LiDARs are affected by snowflakes during snowstorms. To this end, we create a novel dataset by acquiring data during six snowfalls using four sensors simultaneously. Based on a statistical analysis of this dataset, we characterize the sensitivity of each device and show that sensor measurements can be modelled in a probabilistic manner. We also show that falling snow has little impact beyond a range of 10 m. Our second contribution is to evaluate the impact of the complex three-dimensional structures present in forests on the performance of a LiDAR-based place recognition algorithm. We acquired data in a structured outdoor environment and in a forest, which allowed us to evaluate the influence of the environment on place recognition performance. Our hypothesis was that the closer two scans are acquired to each other, the higher the belief that they originate from the same place, modulated by the level of complexity of the environment. Our experiments confirmed that forests, with their intricate network of branches and foliage, produce more outliers and cause recognition performance to decrease more quickly with distance than in the structured outdoor environment. Our conclusion is that falling snow and forest environments negatively impact LiDAR-based navigation performance, which should be taken into account when developing robust navigation algorithms.
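
    The kind of statistical characterization described (how often falling snow intercepts a beam, as a function of range) could be sketched as below, assuming a snowfall scan and a clear-weather reference scan of the same scene are available; the function name, the 0.2 m tolerance and the synthetic numbers are hypothetical, not values from the thesis.

```python
# Illustrative sketch (assumptions, not the thesis' actual analysis): given LiDAR
# ranges recorded during a snowfall and reference ranges from clear weather,
# estimate how often a beam is intercepted by a snowflake as a function of range.
import numpy as np

def snow_hit_fraction(snow_ranges, clear_ranges, tol=0.2, bins=np.arange(0, 31, 1.0)):
    """Fraction of returns that land well short of the clear-weather range,
    grouped by the expected (clear-weather) range of each beam."""
    snow_ranges = np.asarray(snow_ranges, float)
    clear_ranges = np.asarray(clear_ranges, float)
    intercepted = snow_ranges < clear_ranges - tol      # return came back early
    frac = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (clear_ranges >= lo) & (clear_ranges < hi)
        frac.append(intercepted[mask].mean() if mask.any() else np.nan)
    return bins[:-1], np.array(frac)

# Hypothetical usage with synthetic numbers, only to show the interface:
rng = np.random.default_rng(0)
clear = rng.uniform(1, 30, 10000)                       # true ranges to the scene
snow = np.where(rng.random(10000) < 0.05,               # 5% of beams hit a flake
                rng.uniform(0.5, 8, 10000), clear)      # flakes appear close by
centers, frac = snow_hit_fraction(snow, clear)
print(np.round(frac, 3))
```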

    Sonar sensor interpretation for ectogeneous robots

    We have developed four generations of sonar scanning systems to automatically interpret the surrounding environment. The first two are stationary 3D air-coupled ultrasound scanning systems and the last two are packaged as sensor heads for mobile robots. Template matching analysis, which compares each test echo with reference echoes, is applied to distinguish simple indoor objects. Important features are then extracted and plotted in a phase plane, which the computer analyzes to automatically select the best match for each test echo. For cylindrical objects outdoors, an algorithm is presented to distinguish trees from smooth circular poles based on analysis of backscattered sonar echoes. The echo data are acquired by a mobile robot whose sensor head is a 3D air-coupled ultrasound scanning system. Four major steps are conducted. The final Average Asymmetry vs. Average Squared Euclidean Distance phase plane is segmented to tell a tree from a pole by the location of the data points for the objects of interest. For extended objects outdoors, we successfully distinguished seven objects on campus by taking a sequence of scans along each object, obtaining the corresponding backscatter vs. scan angle plots, performing deformable template matching, extracting feature vectors of interest and then categorizing them with a hyperplane. We have also successfully taught the robot to distinguish three pairs of outdoor objects. Multiple scans are conducted at different distances. A two-step feature extraction is conducted based on the amplitude vs. scan angle plots. The final Slope1 vs. Slope2 phase plane separates the rectangular objects from the corresponding cylindrical ones.
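
    A minimal sketch of the echo template matching idea, under assumptions: test and reference echoes are 1-D arrays, similarity is scored by the peak of the normalized cross-correlation, and the signals and labels below are synthetic placeholders. The thesis' actual features (Average Asymmetry, Slope1 vs. Slope2 phase planes) are not reproduced here.

```python
# Minimal echo template matching sketch (an illustration, not the thesis' code):
# compare a test echo against stored reference echoes by normalized
# cross-correlation and pick the best-matching reference.
import numpy as np

def normalized_xcorr_peak(test, ref):
    """Peak of the normalized cross-correlation between two 1-D echo signals."""
    test = (test - test.mean()) / (test.std() + 1e-12)
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    return (np.correlate(test, ref, mode="full") / len(test)).max()

def classify_echo(test_echo, reference_echoes):
    """Return the label of the reference echo that best matches the test echo."""
    scores = {label: normalized_xcorr_peak(test_echo, ref)
              for label, ref in reference_echoes.items()}
    return max(scores, key=scores.get), scores

# Hypothetical usage with synthetic echoes:
t = np.linspace(0, 1, 500)
references = {"pole": np.sin(40 * t) * np.exp(-5 * t),
              "tree": np.sin(40 * t) * np.exp(-5 * t) * (1 + 0.3 * np.sin(7 * t))}
test = references["tree"] + 0.05 * np.random.default_rng(1).standard_normal(500)
print(classify_echo(test, references)[0])
```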

    Advances in Human-Robot Interaction

    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.

    Investigation on the mobile robot navigation in an unknown environment

    Mobile robots could be used to search, find, and relocate objects in many types of manufacturing operations and environments. In this scenario, the target objects might reside with equal probability at any location in the environment; therefore, the robot must navigate and search the whole area autonomously and be equipped with specific sensors to detect objects. Novel challenges exist in developing a control system that helps a mobile robot achieve such tasks, including constructing enhanced systems for navigation and vision-based object recognition. The latter is important because the exploration task requires an effective object recognition technique. In this thesis, these challenges, for an indoor environment, were divided into three sub-problems. In the first, the navigation task involved discovering an appropriate exploration path for the entire environment with minimal sensing requirements. The Bug algorithm strategies were adapted for modelling the environment and implementing the exploration path. The second was a visual-search process, which consisted of employing appropriate image-processing techniques and choosing a suitable viewpoint field for the camera. This study placed particular emphasis on colour segmentation, template matching and Speeded-Up Robust Features (SURF) for object detection. The third problem was the relocating process, which involved using the robot's gripper to grasp the detected target object and move it to the assigned final location. This also included approaching both the target and the delivery site using a visual tracking technique. All code was developed in C and C++, and libraries including OpenCV and OpenSURF were used for image processing. Each control system function was tested separately and then in combination as a whole control program. The system performance was evaluated using two types of mobile robots: legged and wheeled. In this study, it was necessary to develop a wheeled search robot with a high-performance processor. The experimental results demonstrated that the methodology used for the search robots was highly efficient provided the processor was adequate. It was concluded that it is possible to implement a navigation system with a minimum number of sensors if they are located and used effectively on the robot's body. The main challenge in the visual-search process is that environmental conditions are difficult to control, because the search robot executes its tasks in dynamic environments. The additional challenges of scaling these small robots up to useful industrial capabilities were also explored.
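
    The colour segmentation step mentioned above might look like the following Python/OpenCV sketch (the thesis implemented it in C++ with OpenCV); the HSV bounds and the synthetic test frame are hypothetical and would be tuned for the real target object.

```python
# Minimal colour-based object segmentation sketch with OpenCV.
# The HSV bounds below are hypothetical and correspond roughly to blue objects.
import cv2
import numpy as np

def find_coloured_object(bgr_image, hsv_lower=(100, 120, 70), hsv_upper=(130, 255, 255)):
    """Return the bounding box (x, y, w, h) of the largest blob inside the HSV
    range, or None if no blob is found."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lower), np.array(hsv_upper))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)

# Hypothetical usage on a synthetic frame containing a blue square:
frame = np.zeros((240, 320, 3), np.uint8)
frame[80:160, 120:200] = (255, 0, 0)           # pure blue in BGR
print(find_coloured_object(frame))             # e.g. (120, 80, 80, 80)
```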

    Use of Pattern Classification Algorithms to Interpret Passive and Active Data Streams from a Walking-Speed Robotic Sensor Platform

    In order to perform useful tasks for us, robots must have the ability to notice, recognize, and respond to objects and events in their environment. This requires the acquisition and synthesis of information from a variety of sensors. Here we investigate the performance of a number of sensor modalities in an unstructured outdoor environment, including the Microsoft Kinect, a thermal infrared camera, and a coffee-can radar. Special attention is given to acoustic echolocation measurements of approaching vehicles, where an acoustic parametric array propagates an audible signal to the oncoming target and the Kinect microphone array records the reflected backscattered signal. Although useful information about the target is hidden inside the noisy time-domain measurements, the Dynamic Wavelet Fingerprint (DWFP) process is used to create a time-frequency representation of the data. A small-dimensional feature vector is created for each measurement using an intelligent feature selection process, for use in statistical pattern classification routines. Using our experimentally measured data from real vehicles at 50 m, this process is able to correctly classify vehicles into one of five classes with 94% accuracy. Fully three-dimensional simulations allow us to study the nonlinear beam propagation and interaction with real-world targets to improve classification results.
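
    A generic stand-in for the pipeline described, under stated assumptions: a Morlet-wavelet scalogram replaces the Dynamic Wavelet Fingerprint, per-scale energies form the small feature vector, and a nearest-centroid rule replaces the statistical classifier; all signals and labels are synthetic.

```python
# Generic time-frequency feature sketch (a stand-in, not the DWFP implementation):
# measure the echo's energy at several Morlet-wavelet scales, normalize the
# resulting profile, and classify it with a nearest-centroid rule.
import numpy as np

def morlet(length, scale):
    """Complex Morlet wavelet sampled over `length` points at the given scale."""
    x = (np.arange(length) - length / 2) / scale
    return np.exp(1j * 5 * x) * np.exp(-x ** 2 / 2)

def feature_vector(signal, scales=(8, 16, 32, 64)):
    """Relative energy of the signal at each wavelet scale."""
    energies = []
    for s in scales:
        w = morlet(min(int(10 * s), len(signal)), s)
        energies.append(np.abs(np.convolve(signal, w, mode="same")).mean())
    energies = np.array(energies)
    return energies / energies.sum()

def nearest_centroid(features, centroids):
    """Label of the class centroid closest to the feature vector."""
    return min(centroids, key=lambda k: np.linalg.norm(features - centroids[k]))

# Hypothetical usage: two synthetic "vehicle" echoes differing in frequency content.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
classes = {"car": np.sin(2 * np.pi * 30 * t), "truck": np.sin(2 * np.pi * 10 * t)}
centroids = {k: feature_vector(v + 0.1 * rng.standard_normal(t.size))
             for k, v in classes.items()}
test = classes["truck"] + 0.3 * rng.standard_normal(t.size)
print(nearest_centroid(feature_vector(test), centroids))
```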