
    An Effective Multi-Cue Positioning System for Agricultural Robotics

    The self-localization capability is a crucial component for Unmanned Ground Vehicles (UGVs) in farming applications. Approaches based solely on visual cues or on low-cost GPS are prone to failure in such scenarios. In this paper, we present a robust and accurate 3D global pose estimation framework designed to take full advantage of heterogeneous sensory data. By modeling the pose estimation problem as a pose graph optimization, our approach simultaneously mitigates the cumulative drift introduced by motion estimation systems (e.g., wheel odometry, visual odometry) and the noise introduced by raw GPS readings. Along with a suitable motion model, our system also integrates two additional types of constraints: (i) a Digital Elevation Model and (ii) a Markov Random Field assumption. We demonstrate how using these additional cues substantially reduces the error along the altitude axis and, moreover, how this benefit spreads to the other components of the state. We report exhaustive experiments combining several sensor setups, showing accuracy improvements ranging from 37% to 76% with respect to the exclusive use of a GPS sensor. We show that our approach provides accurate results even if the GPS unexpectedly changes positioning mode. The code of our system, along with the acquired datasets, is released with this paper.
    Comment: Accepted for publication in IEEE Robotics and Automation Letters, 201
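    The core fusion idea, a pose graph that balances drifting relative odometry constraints against noisy absolute GPS measurements, can be illustrated with a minimal 1D sketch. This is purely a toy stand-in, not the authors' implementation: the paper optimizes full 3D poses and adds DEM and MRF constraints, and all noise levels and values below are assumptions.

```python
# Toy 1D pose-graph fusion: odometry gives accurate but drifting relative
# constraints; GPS gives noisy but absolute constraints. Weighted least
# squares over all factors recovers a trajectory better than either cue.
import numpy as np
from scipy.optimize import least_squares

np.random.seed(0)
true_x = np.cumsum(np.ones(50))                            # ground truth
odom = np.diff(true_x) + np.random.normal(0, 0.02, 49)     # relative, low noise
gps = true_x + np.random.normal(0, 1.0, 50)                # absolute, high noise

def residuals(x):
    r_odom = (x[1:] - x[:-1] - odom) / 0.02   # odometry factors (relative)
    r_gps = (x - gps) / 1.0                   # GPS factors (absolute)
    return np.concatenate([r_odom, r_gps])

x0 = np.concatenate([[gps[0]], gps[0] + np.cumsum(odom)])  # dead-reckoned init
sol = least_squares(residuals, x0)
print("RMSE, GPS only:", np.sqrt(np.mean((gps - true_x) ** 2)))
print("RMSE, fused   :", np.sqrt(np.mean((sol.x - true_x) ** 2)))
```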

    Multi-Camera Visual-Inertial Simultaneous Localization and Mapping for Autonomous Valet Parking

    Localization and mapping are key capabilities for self-driving vehicles. In this paper, we build on Kimera and extend it to use multiple cameras as well as external (e.g., wheel) odometry sensors, to obtain accurate and robust odometry estimates in real-world problems. Additionally, we propose an effective scheme for closing loops that circumvents the drawbacks of common alternatives based on the Perspective-n-Point method and also works with a single monocular camera. Finally, we develop a method for dense 3D mapping of the free space that combines a segmentation network for free-space detection with a homography-based dense mapping technique. We test our system on photo-realistic simulations and on several real datasets collected on a car prototype developed by the Ford Motor Company, spanning both indoor and outdoor parking scenarios. Our multi-camera system is shown to outperform state-of-the-art open-source visual-inertial SLAM pipelines (VINS-Fusion, ORB-SLAM3), and exhibits an average trajectory error under 1% of the trajectory length across more than 8 km of distance traveled (combined across all datasets). A video showcasing the system is available at: youtu.be/H8CpzDpXOI8
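    The flavor of the homography-based dense mapping step can be sketched with inverse perspective mapping: pixels that a segmentation network labels as free space are back-projected onto a flat ground plane through known camera geometry. The intrinsics, camera height, flat-ground assumption, and the mask below are invented placeholders, not the paper's actual pipeline.

```python
# Back-project "free space" pixels onto the ground plane via the
# ground-plane homography implied by a calibrated, unrotated camera
# (x right, y down, z forward; ground at y = cam_height).
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])       # assumed pinhole intrinsics
K_INV = np.linalg.inv(K)
cam_height = 1.2                      # assumed metres above ground

def pixel_to_ground(u, v):
    """Return (lateral, forward) ground coordinates in metres, or None."""
    ray = K_INV @ np.array([u, v, 1.0])
    if ray[1] <= 1e-6:                # ray at or above the horizon
        return None
    p = (cam_height / ray[1]) * ray   # scale ray until it hits the ground
    return p[0], p[2]

# Hypothetical network output: free space in the lower middle of the image.
free_mask = np.zeros((480, 640), dtype=bool)
free_mask[300:, 200:440] = True
cells = set()
for v, u in zip(*np.nonzero(free_mask)):
    hit = pixel_to_ground(float(u), float(v))
    if hit is not None:
        cells.add((round(hit[0] / 0.2), round(hit[1] / 0.2)))  # 0.2 m grid
print(len(cells), "free-space cells mapped")
```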

    Present and Future of SLAM in Extreme Underground Environments

    This paper reports on the state of the art in underground SLAM by discussing different SLAM strategies and results across six teams that participated in the three-year-long SubT competition. In particular, the paper has four main goals. First, we review the algorithms, architectures, and systems adopted by the teams; particular emphasis is put on lidar-centric SLAM solutions (the go-to approach for virtually all teams in the competition), heterogeneous multi-robot operation (including both aerial and ground robots), and real-world underground operation (from the presence of obscurants to the need to handle tight computational constraints). We do not shy away from discussing the dirty details behind the different SubT SLAM systems, which are often omitted from technical papers. Second, we discuss the maturity of the field by highlighting what is possible with current SLAM systems and what we believe is within reach with some good systems engineering. Third, we outline what we believe are fundamental open problems that are likely to require further research to solve. Finally, we provide a list of open-source SLAM implementations and datasets that have been produced during the SubT challenge and related efforts, and that constitute a useful resource for researchers and practitioners.
    Comment: 21 pages including references. This survey paper is submitted to IEEE Transactions on Robotics for pre-approval

    LIDAR obstacle warning and avoidance system for unmanned aerial vehicle sense-and-avoid

    The demand for reliable obstacle warning and avoidance capabilities to ensure safe low-level flight operations has led to the development of various practical systems suitable for fixed- and rotary-wing aircraft. State-of-the-art Light Detection and Ranging (LIDAR) technology employing eye-safe laser sources, advanced electro-optics, and mechanical beam-steering components delivers the highest angular resolution and accuracy performance in a wide range of operational conditions. The LIDAR Obstacle Warning and Avoidance System (LOWAS) is thus becoming a mature technology with several potential applications to manned and unmanned aircraft. This paper specifically addresses its employment in Unmanned Aircraft Systems (UAS) Sense-and-Avoid (SAA). Small-to-medium-size Unmanned Aerial Vehicles (UAVs) are particularly targeted since they are very frequently operated in close proximity to the ground, and the risk of collision is further aggravated by the very limited see-and-avoid capabilities of the remote pilot. After a brief description of the system architecture, mathematical models and algorithms for avoidance trajectory generation are provided. Key aspects of the Human Machine Interface and Interaction (HMI2) design for the UAS obstacle avoidance system are also addressed. Additionally, a comprehensive simulation case study of the avoidance trajectory generation algorithms is presented. It is concluded that the LOWAS obstacle detection and trajectory optimisation algorithms can ensure safe avoidance of all classes of obstacles (i.e., wire, extended, and point objects) in a wide range of weather and geometric conditions, providing a pathway for possible integration of this technology into future UAS SAA architectures.
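    As a deliberately simplified illustration of the avoidance-trajectory idea (not the paper's algorithms, which generate optimised 3D trajectories against wire, extended, and point obstacles), the toy 2D sketch below checks whether the projected track violates a required separation from a detected obstacle and, if so, emits a laterally offset avoidance waypoint. The geometry and the 5 m buffer are assumptions.

```python
# Toy 2D conflict check and avoidance waypoint generation.
import math

def avoidance_waypoint(pos, heading, obstacle, separation):
    """Return None if the track is clear, else a 2D avoidance waypoint.

    pos, obstacle: (x, y) in metres; heading in radians; separation in m."""
    dx, dy = obstacle[0] - pos[0], obstacle[1] - pos[1]
    along = dx * math.cos(heading) + dy * math.sin(heading)   # ahead of us?
    cross = -dx * math.sin(heading) + dy * math.cos(heading)  # signed miss dist
    if along <= 0 or abs(cross) >= separation:
        return None                      # obstacle behind, or track already clear
    side = -1.0 if cross >= 0 else 1.0   # dodge to the opposite side
    offset = side * (separation + 5.0)   # 5 m extra buffer (illustrative)
    return (obstacle[0] - offset * math.sin(heading),
            obstacle[1] + offset * math.cos(heading))

# Obstacle 200 m ahead, 10 m left of track, 30 m separation required:
print(avoidance_waypoint((0, 0), 0.0, (200, 10), 30))  # -> (200.0, -25.0)
```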

    Perception architecture exploration for automotive cyber-physical systems

    In emerging autonomous and semi-autonomous vehicles, accurate environmental perception by automotive cyber-physical platforms is critical for achieving safety and driving performance goals. An efficient perception solution capable of high-fidelity environment modeling can improve Advanced Driver Assistance System (ADAS) performance and reduce the number of lives lost to traffic accidents caused by human driving errors. Enabling robust perception for vehicles with ADAS requires solving multiple complex problems related to the selection and placement of sensors, object detection, and sensor fusion. Current methods address these problems in isolation, which leads to inefficient solutions. For instance, there is an inherent accuracy-versus-latency trade-off between one-stage and two-stage object detectors, which makes selecting a suitable object detector from a diverse range of choices difficult. Further, even if a perception architecture were equipped with an ideal object detector performing high-accuracy, low-latency inference, the relative position and orientation of the selected sensors (e.g., cameras, radars, lidars) determine whether static or dynamic targets are inside the field of view of each sensor or in the combined field of view of the sensor configuration. If the combined field of view is too small or contains redundant overlap between individual sensors, important events and obstacles can go undetected. Conversely, if the combined field of view is too large, the number of false positive detections at runtime will be high, and appropriate sensor fusion algorithms are required for filtering. Sensor fusion algorithms also enable tracking of non-ego vehicles in situations where traffic is highly dynamic or there are many obstacles on the road. Position and velocity estimation using sensor fusion algorithms has a lower margin for error when the trajectories of other vehicles in traffic are in the vicinity of the ego vehicle, as incorrect measurements can cause accidents. Due to the various complex interdependencies between design decisions, constraints, and optimization goals, building a framework capable of synthesizing perception solutions for automotive cyber-physical platforms is not trivial. We present a novel perception architecture exploration framework for automotive cyber-physical platforms capable of global co-optimization of deep learning and sensing infrastructure. The framework is capable of exploring the synthesis of heterogeneous sensor configurations towards achieving vehicle autonomy goals. As our first contribution, we propose a novel optimization framework called VESPA that explores the design space of sensor placement locations and orientations to find the optimal sensor configuration for a vehicle. We demonstrate how our framework can obtain optimal sensor configurations for heterogeneous sensors deployed across two contemporary real vehicles. We then utilize VESPA to create a comprehensive perception architecture synthesis framework called PASTA. This framework enables robust perception for vehicles with ADAS by jointly addressing not only the selection and placement of sensors but also object detection and sensor fusion. Experimental results with the Audi TT and BMW Mini Cooper vehicles show how PASTA can intelligently traverse the perception design space to find robust, vehicle-specific solutions.
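    The field-of-view reasoning above can be made concrete with a small sketch: score a candidate sensor configuration by the fraction of sampled target points that fall inside at least one sensor's field of view. This is a simplified stand-in for VESPA's design-space exploration; the sensor parameters and sampling below are invented.

```python
# Score a 2D sensor configuration by combined field-of-view coverage.
import math, random

def in_fov(sensor, point):
    """sensor: (x, y, yaw, half_angle, max_range); point: (x, y)."""
    sx, sy, yaw, half, rng = sensor
    dx, dy = point[0] - sx, point[1] - sy
    if math.hypot(dx, dy) > rng:
        return False
    bearing = math.atan2(dy, dx) - yaw
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    return abs(bearing) <= half

def coverage(sensors, points):
    return sum(any(in_fov(s, p) for s in sensors) for p in points) / len(points)

random.seed(1)
targets = [(random.uniform(-30, 30), random.uniform(-30, 30)) for _ in range(500)]
candidate = [(0.0, 0.0, 0.0, math.radians(60), 40.0),        # front camera
             (0.0, 0.0, math.pi, math.radians(75), 20.0)]    # rear radar
print(f"coverage: {coverage(candidate, targets):.1%}")
```

A placement optimizer would wrap an objective like this (plus redundancy and cost terms) in a search over positions and orientations.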

    A Multi-Sensor Fusion-Based Underwater Slam System

    This dissertation addresses the problem of real-time Simultaneous Localization and Mapping (SLAM) in challenging environments. SLAM is one of the key enabling technologies for autonomous robots to navigate in unknown environments by processing information on their on-board computational units. In particular, we study the exploration of challenging GPS-denied underwater environments to enable a wide range of robotic applications, including historical studies, health monitoring of coral reefs, and inspection of underwater infrastructure such as bridges, hydroelectric dams, water supply systems, and oil rigs. Mapping underwater structures is important in several fields, such as marine archaeology, Search and Rescue (SaR), resource management, hydrogeology, and speleology. However, due to the highly unstructured nature of such environments, navigation by human divers can be extremely dangerous, tedious, and labor intensive. Hence, an underwater robot is an excellent fit to build the map of the environment while simultaneously localizing itself in the map. The main contribution of this dissertation is the design and development of a real-time robust SLAM algorithm for small- and large-scale underwater environments. SVIn, a novel tightly-coupled keyframe-based non-linear optimization framework fusing Sonar, Visual, Inertial, and water-depth information with robust initialization, loop-closing, and relocalization capabilities, is presented. Introducing acoustic range information to aid the visual data yields improved reconstruction and localization. The availability of depth information from water pressure enables a robust initialization and refines the scale factor, and it also helps reduce the drift in the tightly-coupled integration. The complementary characteristics of these sensing modalities provide accurate and robust localization in unstructured environments with low visibility and few visual features, making them an ideal choice for underwater navigation. The proposed system has been successfully tested and validated on both benchmark datasets and in numerous real-world scenarios. It has also been used for planning for an underwater robot in the presence of obstacles. Experimental results on datasets collected with a custom-made underwater sensor suite and the autonomous underwater vehicle (AUV) Aqua2 in challenging underwater environments with poor visibility demonstrate performance never achieved before in terms of accuracy and robustness. To aid the sparse reconstruction, a contour-based reconstruction approach has been developed that utilizes the well-defined edges between the well-lit area and darkness. In particular, low-lighting conditions, or even the complete absence of natural light inside caves, result in strong lighting variations, e.g., the cone of the artificial video light intersecting underwater structures and the shadow contours. The proposed method utilizes these contours to provide additional features, resulting in a denser 3D point cloud than the usual point clouds from a visual odometry system. Experimental results in an underwater cave demonstrate the performance of our system. This enables more robust navigation of autonomous underwater vehicles, using the denser 3D point cloud to detect obstacles and achieve higher-resolution reconstructions.
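    One concrete piece of the abstract, using water pressure for an absolute depth measurement and for refining the metric scale factor, can be sketched as follows. The hydrostatic relation depth = (P - P_atm) / (rho * g) is standard physics; the constants and readings below are assumptions for illustration, not SVIn's actual parameters.

```python
# Convert pressure readings to depth, then estimate the metric scale of a
# scale-ambiguous estimator trajectory from the ratio of measured depth
# change to estimated vertical displacement.
RHO_SEAWATER = 1025.0   # kg/m^3 (assumed)
G = 9.81                # m/s^2
P_ATM = 101_325.0       # Pa

def pressure_to_depth(p_pa):
    return (p_pa - P_ATM) / (RHO_SEAWATER * G)

# Hypothetical samples: (pressure in Pa, estimator z in arbitrary units).
samples = [(121_500.0, 0.00), (131_600.0, 0.49), (141_700.0, 0.98)]
depths = [pressure_to_depth(p) for p, _ in samples]
scale = (depths[-1] - depths[0]) / (samples[-1][1] - samples[0][1])
print(f"depth at start: {depths[0]:.2f} m, metric scale: {scale:.2f} m/unit")
```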