
    Radars for Autonomous Driving: A Review of Deep Learning Methods and Challenges

    Radar is a key component of the suite of perception sensors used for safe and reliable navigation of autonomous vehicles. Its unique capabilities include high-resolution velocity imaging, detection of agents in occlusion and over long ranges, and robust performance in adverse weather conditions. However, the usage of radar data presents some challenges: it is characterized by low resolution, sparsity, clutter, high uncertainty, and lack of good datasets. These challenges have limited radar deep learning research. As a result, current radar models are often influenced by lidar and vision models, which are focused on optical features that are relatively weak in radar data, thus resulting in under-utilization of radar's capabilities and diminishing its contribution to autonomous perception. This review seeks to encourage further deep learning research on autonomous radar data by 1) identifying key research themes, and 2) offering a comprehensive overview of current opportunities and challenges in the field. Topics covered include early and late fusion, occupancy flow estimation, uncertainty modeling, and multipath detection. The paper also discusses radar fundamentals and data representation, presents a curated list of recent radar datasets, and reviews state-of-the-art lidar and vision models relevant for radar research. For a summary of the paper and more results, visit the website: autonomous-radars.github.io
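    Among the review's themes, the early-versus-late-fusion distinction lends itself to a short illustration. The sketch below is an assumption-laden toy (the function names, the shared bird's-eye-view grid, and the weighted-average merge rule are illustrative choices, not taken from the paper): early fusion merges low-level sensor features before a shared network, while late fusion merges per-sensor detector outputs.

```python
import numpy as np

def early_fusion(radar_feat: np.ndarray, camera_feat: np.ndarray) -> np.ndarray:
    # Early fusion: combine low-level feature maps from both sensors before
    # any task head sees them; assumes both have already been projected into
    # a common (H, W, C) bird's-eye-view grid.
    return np.concatenate([radar_feat, camera_feat], axis=-1)

def late_fusion(radar_scores: np.ndarray, camera_scores: np.ndarray,
                w_radar: float = 0.5) -> np.ndarray:
    # Late fusion: each sensor runs its own detector end to end; only the
    # per-cell confidence scores are merged, here by a weighted average.
    return w_radar * radar_scores + (1.0 - w_radar) * camera_scores
```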

    Sensormodelle zur Simulation der Umfelderfassung für Systeme des automatisierten Fahrens

    The use of sensor models allows the simulation of environmental perception in automated driving systems, aiding development and testing efforts. This work systematically discusses the different types of sensor models and introduces an architecture for statistics-based as well as physically motivated sensor models. Each approach is grounded in real-world observations of sensor measurements and is designed for portability and ease of further extension across different application areas.
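    The statistics-based approach can be sketched as follows. Everything concrete here is an assumption for illustration (the Gaussian noise, the linear falloff of detection probability with range, and all parameter values), not the models from the thesis; the idea is only that simulated measurements are derived from ground truth via distributions fitted to real sensor data.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def statistical_sensor_model(true_ranges: np.ndarray,
                             noise_std: float = 0.1,
                             p_detect_at_0m: float = 0.99,
                             falloff_per_m: float = 0.005) -> np.ndarray:
    # Perturb ground-truth ranges from the simulation with Gaussian noise
    # (std would be fitted to real measurements) and drop each detection
    # with a range-dependent probability; dropped detections become NaN.
    p_detect = np.clip(p_detect_at_0m - falloff_per_m * true_ranges, 0.0, 1.0)
    detected = rng.random(true_ranges.shape) < p_detect
    noisy = true_ranges + rng.normal(0.0, noise_std, true_ranges.shape)
    return np.where(detected, noisy, np.nan)
```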

    Context Exploitation in Data Fusion

    Complex and dynamic environments constitute a challenge for existing tracking algorithms. For this reason, modern solutions try to utilize any available information that could help constrain, improve, or explain the measurements. So-called Context Information (CI) is understood as information that surrounds an element of interest, knowledge of which may help in understanding the (estimated) situation and in reacting to it. However, context discovery and exploitation are still largely unexplored research topics. Until now, context has been exploited extensively as a parameter in system and measurement models, which has led to numerous approaches for linear or non-linear constrained estimation and target tracking. More specifically, spatial or static context is the most common source of ambient information, i.e. features, utilized for recursive enhancement of the state variables in either the prediction or the measurement update of the filters. In the case of multiple-model estimators, context can be related not only to the state but also to a certain mode of the filter. Common practice for multiple-model scenarios is to represent states and context as a joint distribution of Gaussian mixtures; these approaches are commonly referred to as joint tracking and classification. Alternatively, the usefulness of context has also been demonstrated in aiding measurement data association. The process of formulating a hypothesis that assigns a particular measurement to a track is traditionally governed by empirical knowledge of the noise characteristics of the sensors and the operating environment, i.e. probability of detection, false alarms, and clutter noise, which can be further enhanced by conditioning on context. We believe that interactions between the environment and the object can be classified into actions, activities, and intents, and formed into structured graphs with contextual links translated into arcs. By learning the environment model, we can predict the target's future actions based on its past observations. The probability of a target's future action can be utilized in the fusion process to adjust the tracker's confidence in measurements. By incorporating contextual knowledge of the environment, in the form of a likelihood function, into the filter measurement update step, we have been able to reduce the uncertainty of the tracking solution and improve the consistency of the track. The promising results demonstrate that the fusion of CI brings a significant performance improvement compared to regular tracking approaches.
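    The core mechanism here, a context likelihood folded into the filter measurement update, is easiest to sketch with a particle filter. The paper works with Gaussian-mixture multiple-model filters, so this simplified form, and every name and value in it, should be read as an assumed illustration of the principle rather than the authors' implementation.

```python
import numpy as np

def context_weighted_update(particles, weights, z,
                            sensor_likelihood, context_likelihood):
    # Standard particle-filter measurement update, except the sensor
    # likelihood p(z | x) is multiplied by a context likelihood, e.g. a
    # road-map prior that down-weights off-road target hypotheses.
    w = weights * sensor_likelihood(z, particles) * context_likelihood(particles)
    s = w.sum()
    # Guard against total weight collapse (all hypotheses contradicted).
    return w / s if s > 0 else np.full_like(w, 1.0 / len(w))

# Example: 1D positions; context says the target stays near a road at x ~ 5.
particles = np.linspace(0.0, 10.0, 101)
weights = np.full(101, 1.0 / 101)
sensor = lambda z, x: np.exp(-0.5 * ((z - x) / 1.0) ** 2)
context = lambda x: np.exp(-0.5 * ((x - 5.0) / 0.5) ** 2)
weights = context_weighted_update(particles, weights, z=6.0,
                                  sensor_likelihood=sensor,
                                  context_likelihood=context)
```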

    TractorEYE: Vision-based Real-time Detection for Autonomous Vehicles in Agriculture

    Agricultural vehicles such as tractors and harvesters have for decades been able to navigate automatically and more efficiently using commercially available products such as auto-steering and tractor-guidance systems. However, a human operator is still required inside the vehicle to ensure the safety of the vehicle and especially of its surroundings, such as humans and animals. To get fully autonomous vehicles certified for farming, computer vision algorithms and sensor technologies must detect obstacles with performance equivalent to or better than human level. Furthermore, detection must run in real time to allow vehicles to actuate and avoid collision. This thesis proposes a detection system (TractorEYE), a dataset (FieldSAFE), and procedures to fuse information from multiple sensor technologies to improve obstacle detection and to generate a map. TractorEYE is a multi-sensor detection system for autonomous vehicles in agriculture. The multi-sensor system consists of three hardware-synchronized and registered sensors (stereo camera, thermal camera, and multi-beam lidar) mounted on/in a ruggedized and water-resistant casing. Algorithms have been developed to run a total of six detection algorithms (four for the RGB camera, one for the thermal camera, and one for the multi-beam lidar) and to fuse detection information in a common format using either 3D positions or inverse sensor models. A GPU-powered computational platform runs the detection algorithms online. For the RGB camera, a deep learning algorithm, DeepAnomaly, is proposed to perform real-time anomaly detection of distant, heavily occluded, and unknown obstacles in agriculture. Compared to the state-of-the-art object detector Faster R-CNN on an agricultural use case, DeepAnomaly detects humans better and at longer ranges (45-90 m) with a smaller memory footprint and 7.3-times faster processing. The low memory footprint and fast processing make DeepAnomaly suitable for real-time applications running on an embedded GPU. FieldSAFE is a multi-modal dataset for detection of static and moving obstacles in agriculture. The dataset includes synchronized recordings from an RGB camera, stereo camera, thermal camera, 360-degree camera, lidar, and radar. Precise localization and pose are provided using IMU and GPS. Ground truth for static and moving obstacles (humans, mannequin dolls, barrels, buildings, vehicles, and vegetation) is available as an annotated orthophoto and as GPS coordinates for moving obstacles. Detection information from multiple detection algorithms and sensors is fused into a map using inverse sensor models and occupancy grid maps. This thesis presents several scientific contributions to the state of the art in perception for autonomous tractors, including a dataset, a sensor platform, detection algorithms, and procedures for multi-sensor fusion. Furthermore, important engineering contributions to autonomous farming vehicles are presented, such as easily applicable, open-source software packages and algorithms that have been demonstrated in an end-to-end real-time detection system. The contributions of this thesis have demonstrated, addressed, and solved critical issues in utilizing camera-based perception systems, which are essential to making autonomous vehicles in agriculture a reality.
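    The fusion step, combining multiple detectors into one map via inverse sensor models and occupancy grids, follows the standard log-odds formulation, which the sketch below illustrates. The 0.7/0.3 occupancy probabilities and the function names are assumed for illustration, not values from the thesis.

```python
import numpy as np

# Log-odds increments for the inverse sensor model; the 0.7 / 0.3
# probabilities are assumed values, not those used in the thesis.
L_OCC = np.log(0.7 / 0.3)
L_FREE = np.log(0.3 / 0.7)

def fuse_into_grid(grid_log_odds, occupied_idx, free_idx):
    # Each detector reports which grid cells its inverse sensor model calls
    # occupied or free; per-cell log-odds sums let any number of sensors
    # and algorithms write into the same map independently.
    grid_log_odds[occupied_idx] += L_OCC
    grid_log_odds[free_idx] += L_FREE
    return grid_log_odds

def occupancy_prob(grid_log_odds):
    # Recover P(occupied) from log-odds for thresholding or display.
    return 1.0 / (1.0 + np.exp(-grid_log_odds))
```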

    Optimization of a Simultaneous Localization and Mapping (SLAM) System for an Autonomous Vehicle Using a 2-Dimensional Light Detection and Ranging Sensor (LiDAR) by Sensor Fusion

    Fully autonomous vehicles must accurately estimate the extent of their environment as well as their relative location within it. A popular approach to organizing such information is to create a map of a given physical environment and define a point in this map representing the vehicle's location. Simultaneous Localization and Mapping (SLAM) is a computing algorithm that takes inputs from a Light Detection and Ranging (LiDAR) sensor to simultaneously construct a map of the vehicle's physical environment and determine its location in this map based on feature recognition. Two fundamental requirements enable an accurate SLAM method: accurate distance measurements and an accurate assessment of location. This work researches methods in which a 2D LiDAR sensor system with laser range finders, ultrasonic sensors, and stereo camera vision is optimized for distance-measurement accuracy, in particular a method using recurrent neural networks. Sensor fusion techniques with infrared, camera, and ultrasonic sensors are implemented to investigate their effects on distance-measurement accuracy. It was found that using a recurrent neural network to fuse data from a 2D LiDAR with laser range finders and ultrasonic sensors outperforms raw sensor data in accuracy (46.6% error reduced to 3.0% error) and precision (0.62 m standard deviation reduced to 0.0015 m standard deviation). These results demonstrate the effectiveness of machine-learning-based fusion algorithms for noise reduction, measurement-accuracy improvement, and outlier removal, giving SLAM vehicles more robust performance.
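    The reported accuracy gain comes from a recurrent network that maps the raw multi-sensor distance streams to a corrected distance. The abstract does not spell out the architecture, so the GRU regressor below (its hidden size, depth, and names included) is an assumed stand-in for the idea rather than the thesis's actual network.

```python
import torch
import torch.nn as nn

class FusionRNN(nn.Module):
    # At each time step the network sees one noisy reading per sensor
    # (2D LiDAR, laser range finder, ultrasonic) and regresses a single
    # corrected distance; hidden size and depth are assumptions.
    def __init__(self, n_sensors: int = 3, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_sensors, hidden_size=hidden,
                          batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, readings: torch.Tensor) -> torch.Tensor:
        # readings: (batch, time, n_sensors) -> fused distance (batch, time, 1)
        out, _ = self.rnn(readings)
        return self.head(out)

# Training would minimize e.g. nn.MSELoss() between the fused output and
# reference distances from a ground-truth measurement rig.
```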

    Real-time performance-focused localisation techniques for autonomous vehicles: a review

    Evaluating Risk to People and Property for Aircraft Emergency Landing Planning

    Peer Reviewed: https://deepblue.lib.umich.edu/bitstream/2027.42/143122/1/1.I010513.pd
