
    From Monocular SLAM to Autonomous Drone Exploration

    Micro aerial vehicles (MAVs) are strongly limited in their payload and power capacity. To achieve autonomous navigation, algorithms that use sensing equipment which is as small, lightweight, and power-efficient as possible are therefore desirable. In this paper, we propose a method for autonomous MAV navigation and exploration using a low-cost consumer-grade quadrocopter equipped with a monocular camera. Our vision-based navigation system builds on LSD-SLAM, which estimates the MAV trajectory and a semi-dense reconstruction of the environment in real time. Since LSD-SLAM only determines depth at high-gradient pixels, texture-less areas are not directly observed, so previous exploration methods that assume dense map information cannot be applied directly. We propose an obstacle mapping and exploration approach that takes the properties of our semi-dense monocular SLAM system into account. In experiments, we demonstrate our vision-based autonomous navigation and exploration system with a Parrot Bebop MAV.
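    The key idea above – that semi-dense SLAM leaves texture-less regions unobserved, so the obstacle map must distinguish free, occupied, and unknown space – can be sketched with a toy 2D occupancy grid. This is an illustrative simplification, not the paper's actual mapping pipeline; `update_grid`, the cell size, and the dictionary-based grid are all assumptions made for the sketch.

```python
import math

FREE, OCC = 0, 1  # cells absent from the dict remain unknown

def update_grid(grid, origin, points, cell=0.5):
    """Ray-cast from the camera origin to each semi-dense depth point:
    cells along the ray become FREE, the endpoint cell becomes OCC.
    Texture-less regions yield no points, so their cells stay unknown."""
    ox, oy = origin
    for px, py in points:
        steps = int(math.hypot(px - ox, py - oy) / cell)
        for s in range(steps):
            t = s / max(steps, 1)
            cx = int((ox + t * (px - ox)) / cell)
            cy = int((oy + t * (py - oy)) / cell)
            grid[(cx, cy)] = FREE
        grid[(int(px / cell), int(py / cell))] = OCC
    return grid
```

    A planner over such a grid can then treat unknown cells conservatively, which is the behaviour a semi-dense map requires.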

    A multisensor SLAM for dense maps of large scale environments under poor lighting conditions

    This thesis describes the development and implementation of a multisensor large-scale autonomous mapping system for surveying tasks in underground mines. The hazardous nature of the underground mining industry has resulted in a push towards autonomous solutions for the most dangerous operations, including surveying tasks. Many existing autonomous mapping techniques rely on approaches to the Simultaneous Localization and Mapping (SLAM) problem which are not suited to the extreme characteristics of active underground mining environments. Our proposed multisensor system has been designed from the outset to address the unique challenges associated with underground SLAM. The robustness, self-containment, and portability of the system maximize its potential applications.
    The multisensor mapping solution proposed as a result of this work is based on a fusion of omnidirectional bearing-only vision-based localization and 3D laser point cloud registration. By combining these two SLAM techniques it is possible to achieve some of the advantages of both approaches – the real-time attributes of vision-based SLAM and the dense, high-precision maps obtained through 3D lasers. The result is a viable autonomous mapping solution suitable for application in challenging underground mining environments.
    A further improvement to the robustness of the proposed multisensor SLAM system comes from incorporating colour information into vision-based localization. Underground mining environments are often dominated by dynamic sources of illumination which can cause inconsistent feature motion during localization. Colour information is used to identify and remove features resulting from illumination artefacts and to improve the monochrome-based feature matching between frames.
    Finally, the proposed multisensor mapping system is implemented and evaluated in both above-ground and underground scenarios. The resulting large-scale maps contained a maximum offset error of ±30 mm for mapping tasks over lengths of more than 100 m.
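    As a hedged illustration of the point-cloud-registration side of such a fusion: once correspondences between two scans are known, the least-squares translation aligning them is simply the mean per-pair offset. The thesis's actual registration is far richer (full 3D laser scan matching with rotation); `align_translation` is a hypothetical helper for intuition only.

```python
def align_translation(src, dst):
    """Least-squares translation aligning matched 3D point pairs:
    the mean of the per-pair differences dst - src."""
    n = len(src)
    return tuple(
        sum(d[k] - s[k] for s, d in zip(src, dst)) / n
        for k in range(3)
    )
```

    In a full registration pipeline this translation step would be interleaved with rotation estimation and correspondence re-matching, as in ICP.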

    LookUP: Vision-Only Real-Time Precise Underground Localisation for Autonomous Mining Vehicles

    A key capability for autonomous underground mining vehicles is real-time accurate localisation. While significant progress has been made, currently deployed systems have several limitations, ranging from dependence on costly additional infrastructure to failure of both visual and range-sensor-based techniques in highly aliased or visually challenging environments. In our previous work, we presented a lightweight coarse vision-based localisation system that could map and then localise to within a few metres in an underground mining environment. However, this level of precision is insufficient for providing a cheaper, more reliable vision-based automation alternative to current range-sensor-based systems. Here we present a new precision localisation system dubbed "LookUP", which learns a neural-network-based pixel sampling strategy for estimating homographies from ceiling-facing cameras without requiring any manual labelling. The system runs in real time on limited computational resources and is demonstrated on two different underground mine sites, achieving real-time performance at ~5 frames per second and a much improved average localisation error of ~1.2 metres.
    Comment: 7 pages, 7 figures, accepted for IEEE ICRA 201
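    The robust geometric estimation from matched ceiling features described above can be illustrated, in heavily simplified form, with a RANSAC loop over a translation-only motion model (the real system estimates full homographies with a learned pixel-sampling strategy). `ransac_translation`, its tolerance, and its iteration count are assumptions made for this sketch.

```python
import random

def ransac_translation(src, dst, iters=100, tol=1.0, seed=0):
    """Estimate a 2D translation between matched keypoint lists,
    robust to outlier matches: hypothesise from one pair, count inliers."""
    rng = random.Random(seed)
    best, best_inliers = (0.0, 0.0), -1
    n = len(src)
    for _ in range(iters):
        i = rng.randrange(n)
        dx = dst[i][0] - src[i][0]
        dy = dst[i][1] - src[i][1]
        inliers = sum(
            1 for (sx, sy), (tx, ty) in zip(src, dst)
            if abs(sx + dx - tx) <= tol and abs(sy + dy - ty) <= tol
        )
        if inliers > best_inliers:
            best, best_inliers = (dx, dy), inliers
    return best, best_inliers
```

    A homography has eight degrees of freedom rather than two, so the real estimator samples four correspondences per hypothesis instead of one; the consensus-counting structure is the same.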

    End-to-End Tracking and Semantic Segmentation Using Recurrent Neural Networks

    In this work we present a novel end-to-end framework for tracking and classifying a robot's surroundings in complex, dynamic, and only partially observable real-world environments. The approach deploys a recurrent neural network to filter an input stream of raw laser measurements in order to directly infer object locations, along with their identity, in both visible and occluded areas. To achieve this we first train the network using unsupervised Deep Tracking, a recently proposed theoretical framework for end-to-end space occupancy prediction. We show that by learning to track on a large amount of unsupervised data, the network creates a rich internal representation of its environment, which we in turn exploit through the principle of inductive knowledge transfer to perform the task of semantic classification. As a result, we show that only a small amount of labelled data suffices to steer the network towards mastering this additional task. Furthermore, we propose a novel recurrent neural network architecture specifically tailored to tracking and semantic classification in real-world robotics applications. We demonstrate the tracking and classification performance of the method on real-world data collected at a busy road junction. Our evaluation shows that the proposed end-to-end framework compares favourably to a state-of-the-art, model-free tracking solution and that it outperforms a conventional one-shot training scheme for semantic classification.

    Exploitation of time-of-flight (ToF) cameras

    This technical report reviews the state of the art in the field of ToF cameras: their advantages, their limitations, and their present-day applications, sometimes in combination with other sensors. Even though ToF cameras provide neither higher resolution nor a larger ambiguity-free range compared to other range map estimation systems, advantages such as registered depth and intensity data at a high frame rate, compact design, low weight, and reduced power consumption have motivated their use in numerous areas of research. In robotics, these areas range from mobile robot navigation and map building to vision-based human motion capture and gesture recognition, showing particularly great potential in object modeling and recognition.

    Pedestrian Detection using Triple Laser Range Finders

    Pedestrian detection is an important capability for autonomous ground vehicles (AGVs), as it ensures safe navigation in urban environments. Detection accuracy is therefore crucial, motivating the use of Laser Range Finders (LRFs) for better data representation. In this study, an improved laser configuration and fusion technique is introduced, implementing triple LRFs in two layers with Pedestrian Data Analysis (PDA) to recognise multiple pedestrians. The PDA integrates features from the feature extraction process for all clusters and fuses the multiple layers for better recognition. Experiments were conducted in various occlusion scenarios, including intersection, closed-pedestrian, and combined scenarios. Analysis of the laser fusion and PDA across all scenarios showed improved detection: pedestrians were represented by various detection categories, which resolves occlusion issues when only a small number of laser points is obtained.
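    A standard first step in LRF pedestrian pipelines like the one described – segmenting an ordered scan into candidate clusters before feature extraction – can be sketched as jump-distance clustering. The `gap` threshold and the `cluster_scan` helper are illustrative assumptions, not taken from the study.

```python
import math

def cluster_scan(points, gap=0.3):
    """Split an ordered 2D laser scan into clusters wherever two
    consecutive points are farther apart than `gap` (in metres)."""
    clusters, cur = [], [points[0]]
    for p, q in zip(points, points[1:]):
        if math.dist(p, q) > gap:
            clusters.append(cur)
            cur = []
        cur.append(q)
    clusters.append(cur)
    return clusters
```

    Each resulting cluster would then be passed to feature extraction (size, shape, point count) and, in a multi-layer setup, matched against clusters from the other scan layers.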

    Environment perception based on LIDAR sensors for real road applications

    The recent developments in applications that have been designed to increase road safety require reliable and trustworthy sensors. With this in mind, the most up-to-date research in the field of automotive technologies has shown that LIDARs are a very reliable sensor family. In this paper, a new approach to road obstacle classification is proposed and tested. Two different LIDAR sensors are compared by focusing on their main characteristics with respect to road applications. The viability of these sensors in real applications has been tested, and the results of this analysis are presented.
    The work reported in this paper has been partly funded by the Spanish Ministry of Science and Innovation (TRA2007-67786-C02-01, TRA2007-67786-C02-02, and TRA2009-07505) and the CAM project SEGVAUTO-II.
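    A toy version of rule-based road obstacle classification from LIDAR clusters might simply threshold on cluster extent, as sketched below. The thresholds and the `classify_cluster` helper are invented for illustration and do not reflect the paper's classifier.

```python
def classify_cluster(points):
    """Toy size-based classifier for a 2D LIDAR cluster.
    Thresholds (in metres) are illustrative assumptions only."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    width = max(max(xs) - min(xs), max(ys) - min(ys))
    if width < 0.8:
        return "pedestrian"
    if width < 6.0:
        return "vehicle"
    return "other"
```

    Real classifiers add many more cues (height profile, reflectivity, motion over time), but extent is usually among the features considered.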

    Object Detection, Classification, and Tracking for Autonomous Vehicle

    The detection and tracking of objects around an autonomous vehicle are essential for safe operation. This paper presents an algorithm to detect, classify, and track objects. All objects are classified as moving or stationary as well as by type (e.g. vehicle, pedestrian, or other). The proposed approach uses the state-of-the-art deep-learning network YOLO (You Only Look Once), combined with data from a laser scanner, to detect and classify objects and to estimate their positions around the car. The Oriented FAST and Rotated BRIEF (ORB) feature descriptor is used to match the same object from one image frame to the next. This information is fused with measurements from a coupled GPS/INS using an Extended Kalman Filter. The resulting solution aids the localization of the car itself and of the objects within its environment so that it can safely navigate the roads autonomously. The algorithm was developed and tested using the dataset collected by the Oxford Robotcar, which is equipped with cameras, LiDAR, GPS, and INS; the data were collected while traversing a route through the crowded urban environment of central Oxford.
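    The fusion step described above – combining vision-derived object positions with GPS/INS via an Extended Kalman Filter – reduces, in its simplest scalar form, to the standard Kalman measurement update sketched here. This is a one-dimensional illustration with assumed variances, not the paper's full EKF.

```python
def kf_update(x, P, z, R):
    """Scalar Kalman measurement update: fuse estimate x (variance P)
    with measurement z (variance R)."""
    K = P / (P + R)          # Kalman gain: how much to trust the measurement
    x_new = x + K * (z - x)  # pull the estimate toward the measurement
    P_new = (1.0 - K) * P    # fused variance is never larger than P
    return x_new, P_new
```

    In the full filter, x and P become a state vector and covariance matrix, and the nonlinear measurement models for the camera and GPS/INS are linearised at each step.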