
    OREOS: Oriented Recognition of 3D Point Clouds in Outdoor Scenarios

    We introduce a novel method for oriented place recognition with 3D LiDAR scans. A Convolutional Neural Network is trained to extract compact descriptors from single 3D LiDAR scans. These can be used both to retrieve nearby place candidates from a map and to estimate the yaw discrepancy needed for bootstrapping local registration methods. We employ a triplet loss function for training and use a hard-negative mining strategy to further increase the performance of our descriptor extractor. In an evaluation on the NCLT and KITTI datasets, we demonstrate that our method outperforms related state-of-the-art approaches based on both data-driven and handcrafted data representations in challenging long-term outdoor conditions.
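
    The triplet objective with batch-hard negative mining that the abstract mentions can be sketched in a few lines. The following is a minimal PyTorch-style illustration under assumed descriptor shapes and an assumed margin value, not the authors' implementation:

        import torch
        import torch.nn.functional as F

        def triplet_loss_hard_negative(anchors, positives, negatives, margin=0.5):
            # anchors/positives: (B, D) descriptors from matching places;
            # negatives: (B, D) descriptors from non-matching places.
            # Hard-negative mining: for each anchor, pick the closest
            # (most confusable) negative in the batch.
            d_neg = torch.cdist(anchors, negatives)          # (B, B)
            hardest_neg, _ = d_neg.min(dim=1)                # (B,)
            d_pos = F.pairwise_distance(anchors, positives)  # (B,)
            # Hinge: positives must be closer than negatives by `margin`.
            return F.relu(d_pos - hardest_neg + margin).mean()

    Mining the hardest negative per anchor concentrates the gradient on the examples the descriptor currently confuses, which is what makes the retrieved place candidates discriminative.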

    Massive Arteriovenous Malformation with Stroke-Like Presentation

    We report on a 75-year-old patient with a stroke-like presentation, in whom cerebral imaging led to the diagnosis of a massive arteriovenous malformation (AVM) of the entire left hemisphere. We suggest considering AVM as a differential diagnosis in patients with symptoms of acute stroke regardless of age and, in the absence of contraindications, obtaining MRI or CT angiography of the brain in this setting.

    Efficient Visual Localization for Ground Vehicles in Outdoor Environments

    Visual (self-)localization enables Autonomous Ground Vehicles (AGVs) to assess their position and orientation within an environment with centimeter-level accuracy, using only cost-effective camera sensors. Especially for high-precision maneuvering in GNSS-denied environments, cameras may be the best-suited localization option for budget- or weight-constrained platforms. However, particularly in outdoor environments, camera images are subject to various forms of appearance change. This makes it challenging to reliably localize a vehicle against a map previously built from sensor data recorded under different appearance conditions. A powerful approach to dealing with these appearance changes is to enhance the map with visual data from several recordings, each collected under a different appearance condition. The amount of data generated this way, however, scales with the number of recordings collected over time, creating a need for smart algorithms that manage this data and ensure efficient use of computation, storage, and network bandwidth. The contributions of this thesis center on the research questions arising from this need for a resource-efficient and reliable visual localization system for AGVs in outdoor environments.

    In Part A, we propose an algorithm that dynamically selects small amounts of map data matching the current appearance condition, thereby lowering network bandwidth consumption and reducing computational demands on the vehicle platforms. We show that exploiting co-observability statistics allows this appearance-based map data selection to be performed highly effectively, without the need to explicitly model or enumerate the different appearance conditions.

    Part B is devoted to the development of a practical map management process for a visual localization system targeted at long-term use. Our experiments have revealed that multi-session maps converge to a relatively stable state after several months of collecting recordings under varying appearance conditions. Furthermore, through a tight integration of appearance-based map data selection with offline map summarization, we arrive at a completely scalable visual localization and mapping framework that can be used for indefinite periods of time.

    In Part C, we present the visual localization system developed within the UP-Drive project for autonomous cars in urban outdoor environments. A special focus has been placed on robustness against outdoor and long-term appearance change, and on a careful evaluation of the localization accuracy. We demonstrate that reliable and accurate visual localization is feasible in structured outdoor environments, even over long time spans, across vastly different seasonal, weather, and lighting conditions including at night, using local point features with binary descriptors on a CPU-only computer architecture.
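
    As a rough illustration of the co-observability idea in Part A, the sketch below ranks map landmarks by how often they were co-observed with landmarks already matched in the current session, keeping only a fixed budget. The matrix layout, scoring rule, and `budget` parameter are illustrative assumptions, not the thesis' actual algorithm:

        import numpy as np

        def select_map_landmarks(co_obs, matched_ids, budget):
            # co_obs[i, j]: number of past sessions in which landmarks
            # i and j were observed together.
            # matched_ids: landmarks matched so far under the current
            # (unknown) appearance condition.
            scores = co_obs[:, matched_ids].sum(axis=1).astype(float)
            # Landmarks frequently co-observed with the current matches
            # are likely visible under the same condition; exclude the
            # ones already in use, then keep the `budget` best.
            scores[matched_ids] = -np.inf
            return np.argsort(scores)[::-1][:budget]

    Note that the appearance condition itself never has to be modeled or named: the co-observation counts implicitly group landmarks that tend to be visible together.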

    MOZARD: Multi-Modal Localization for Autonomous Vehicles in Urban Outdoor Environments

    Visually poor scenarios are one of the main sources of failure for visual localization systems in outdoor environments. To address this challenge, we present MOZARD, a multi-modal localization system for urban outdoor environments that uses vision and LiDAR. By fusing keypoint-based visual multi-session information with semantic data, an improved localization recall can be achieved across vastly different appearance conditions. In particular, we focus on curbstones because of their broad distribution and reliability within urban environments. We present thorough experimental evaluations over several kilometers of driving in challenging urban outdoor environments, analyze the recall and accuracy of our localization system, and demonstrate possible failure cases of each subsystem in a case study. We demonstrate that MOZARD is able to bridge scenarios where our previous keypoint-based visual approach, VIZARD, fails, yielding increased recall while achieving a similar localization accuracy of 0.2 m. © 2020 IEEE
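
    One way such a multi-modal failover could be wired is sketched below, assuming the visual pipeline reports an inlier count; the fallback rule and the `min_inliers` threshold are illustrative assumptions rather than MOZARD's actual fusion logic:

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class PoseEstimate:
            x: float
            y: float
            yaw: float
            n_inliers: int  # feature matches supporting the estimate

        def fuse_localization(visual: Optional[PoseEstimate],
                              curb: Optional[PoseEstimate],
                              min_inliers: int = 30) -> Optional[PoseEstimate]:
            # Prefer the keypoint-based visual estimate when it is well
            # supported; in visually poor scenes fall back to curbstones,
            # which stay detectable across strong appearance change.
            if visual is not None and visual.n_inliers >= min_inliers:
                return visual
            return curb if curb is not None else visual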