    Fully Convolutional Neural Networks for Dynamic Object Detection in Grid Maps

    Grid maps are widely used in robotics to represent obstacles in the environment, and differentiating dynamic objects from static infrastructure is essential for many practical applications. In this work, we present a method that uses a deep convolutional neural network (CNN) to infer whether grid cells cover a moving object or not. Compared to tracking approaches that use, e.g., a particle filter to estimate grid cell velocities and then decide for each cell based on this estimate, our approach uses the entire grid map as the input image for a CNN that inspects a larger area around each cell and thus takes the structural appearance in the grid map into account when making a decision. Compared to our reference method, our concept yields a performance increase from 83.9% to 97.2%. A runtime-optimized version of our approach yields similar improvements with an execution time of just 10 milliseconds. Comment: This is a shorter version of the master's thesis of Florian Piewak and it was accepted at IV 201
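
    As a minimal sketch of this idea (not the authors' implementation), a fully convolutional network can consume a whole occupancy grid as an image and emit a per-cell probability of being dynamic; the layer sizes, channel count, and grid dimensions below are illustrative assumptions:

        # Minimal sketch: per-cell dynamic/static classification on a grid map.
        # All hyperparameters are illustrative assumptions.
        import torch
        import torch.nn as nn

        class DynamicCellFCN(nn.Module):
            def __init__(self, in_channels: int = 1):
                super().__init__()
                # Stacked 3x3 convolutions give each output cell a receptive
                # field over a neighborhood, so the structural appearance
                # around a cell informs its dynamic/static decision.
                self.body = nn.Sequential(
                    nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 1),  # 1x1 conv -> per-cell logit
                )

            def forward(self, grid: torch.Tensor) -> torch.Tensor:
                return torch.sigmoid(self.body(grid))  # (B, 1, H, W) probabilities

        model = DynamicCellFCN()
        grid = torch.rand(1, 1, 256, 256)  # hypothetical 256x256 occupancy grid
        print(model(grid).shape)           # torch.Size([1, 1, 256, 256])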

    Radars for Autonomous Driving: A Review of Deep Learning Methods and Challenges

    Radar is a key component of the suite of perception sensors used for safe and reliable navigation of autonomous vehicles. Its unique capabilities include high-resolution velocity imaging, detection of agents in occlusion and over long ranges, and robust performance in adverse weather conditions. However, the usage of radar data presents some challenges: it is characterized by low resolution, sparsity, clutter, high uncertainty, and lack of good datasets. These challenges have limited radar deep learning research. As a result, current radar models are often influenced by lidar and vision models, which are focused on optical features that are relatively weak in radar data, thus resulting in under-utilization of radar's capabilities and diminishing its contribution to autonomous perception. This review seeks to encourage further deep learning research on autonomous radar data by 1) identifying key research themes, and 2) offering a comprehensive overview of current opportunities and challenges in the field. Topics covered include early and late fusion, occupancy flow estimation, uncertainty modeling, and multipath detection. The paper also discusses radar fundamentals and data representation, presents a curated list of recent radar datasets, and reviews state-of-the-art lidar and vision models relevant for radar research. For a summary of the paper and more results, visit the website: autonomous-radars.github.io
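
    To make one of the review's themes concrete, late fusion, the sketch below averages the confidence scores of radar and camera detections matched by box overlap; the detector outputs, score weights, and matching rule are invented for illustration and are not taken from the review:

        # Minimal sketch of score-level late fusion across two sensors.
        # Boxes are [x1, y1, x2, y2]; all thresholds and weights are assumptions.
        import numpy as np

        def iou(a, b):
            """Intersection-over-union of two axis-aligned boxes."""
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
            return inter / (area(a) + area(b) - inter + 1e-9)

        def late_fuse(radar_dets, camera_dets, iou_thr=0.5, w_radar=0.4):
            """Match detections across sensors and blend their confidences."""
            fused = []
            for rbox, rscore in radar_dets:
                best = max(camera_dets, key=lambda c: iou(rbox, c[0]), default=None)
                if best is not None and iou(rbox, best[0]) >= iou_thr:
                    cbox, cscore = best
                    score = w_radar * rscore + (1 - w_radar) * cscore
                    fused.append((np.mean([rbox, cbox], axis=0), score))
                else:
                    fused.append((np.asarray(rbox, float), rscore))  # radar-only
            return fused

        radar = [([10, 10, 50, 50], 0.6)]
        camera = [([12, 8, 52, 48], 0.9)]
        print(late_fuse(radar, camera))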

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems.

    In this thesis, lidar-based obstacle detection and recognition in agricultural environments has been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles.

    For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen, as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal.

    The proposed methods serve as a first step towards full autonomy for agricultural vehicles. The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain, when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE.
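
    As a brief illustration of the global mapping step, the sketch below uses standard log-odds occupancy grid fusion (the textbook technique, not the thesis code): repeated obstacle detections in a cell accumulate evidence that converts back to an occupancy probability. The sensor-model probabilities and grid size are assumptions:

        # Minimal sketch of log-odds occupancy grid fusion.
        import numpy as np

        L_OCC = np.log(0.7 / 0.3)   # log-odds increment for an obstacle hit
        L_FREE = np.log(0.3 / 0.7)  # log-odds decrement for a free observation

        def update_grid(log_odds, detections):
            """Accumulate per-cell evidence; detections = [(row, col, is_obstacle)]."""
            for r, c, is_obstacle in detections:
                log_odds[r, c] += L_OCC if is_obstacle else L_FREE
            return log_odds

        def occupancy(log_odds):
            """Convert accumulated log-odds back to occupancy probabilities."""
            return 1.0 - 1.0 / (1.0 + np.exp(log_odds))

        grid = np.zeros((100, 100))                    # log-odds 0 = unknown (p = 0.5)
        grid = update_grid(grid, [(40, 60, True)] * 3) # three agreeing obstacle hits
        print(occupancy(grid)[40, 60])                 # ~0.93: confidence grows with evidence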

    Human Being Detection from UWB NLOS Signals: Accuracy and Generality of Advanced Machine Learning Models

    This paper studies the problem of detecting human beings in non-line-of-sight (NLOS) conditions using an ultra-wideband radar. We perform an extensive measurement campaign in realistic environments, considering different body orientations, the obstacles' materials, and radar–obstacle distances. We examine two main scenarios according to the radar position: (i) placed on top of a mobile cart; (ii) handheld at different heights. We empirically analyze and compare several input representations and machine learning (ML) methods (supervised and unsupervised, symbolic and non-symbolic) according to both their accuracy in detecting NLOS human beings and their adaptability to unseen cases. Our study proves the effectiveness and flexibility of modern ML techniques, avoiding environment-specific configurations and benefiting from knowledge transfer. Unlike traditional TLC approaches, ML allows for generalization, overcoming limits due to unknown or only partially known observation models and insufficient labeled data, which usually occur in emergencies or in the presence of time/cost constraints.
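
    To make the supervised/unsupervised contrast concrete, the sketch below trains a labeled classifier and an anomaly detector on synthetic radar-like feature vectors; the features and model choices are illustrative assumptions, not the paper's models:

        # Minimal sketch: supervised vs. unsupervised detection on feature vectors.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, IsolationForest

        rng = np.random.default_rng(0)
        # Hypothetical features (e.g., energy/variance of range bins);
        # class 1 = human present behind the obstacle.
        X_empty = rng.normal(0.0, 1.0, size=(200, 8))
        X_human = rng.normal(1.5, 1.0, size=(200, 8))
        X = np.vstack([X_empty, X_human])
        y = np.array([0] * 200 + [1] * 200)

        # Supervised: learns the decision boundary from labeled examples.
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        print("supervised accuracy:", clf.score(X, y))

        # Unsupervised: fit on empty-room data only; a human appears as an anomaly.
        iso = IsolationForest(random_state=0).fit(X_empty)
        print("anomaly rate on human data:", (iso.predict(X_human) == -1).mean())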

    Personnel Recognition and Gait Classification Based on Multistatic Micro-Doppler Signatures Using Deep Convolutional Neural Networks

    In this letter, we propose two methods for personnel recognition and gait classification using deep convolutional neural networks (DCNNs) based on multistatic radar micro-Doppler signatures. Previous DCNN-based schemes have mainly focused on monostatic scenarios, whereas the directional diversity offered by multistatic radar is exploited in this letter to improve classification accuracy. We first propose the voted monostatic DCNN (VMo-DCNN) method, which trains DCNNs on each receiver node separately and fuses the results by binary voting. By merging the fusion step into the network architecture, we further propose the multistatic DCNN (Mul-DCNN) method, which performs slightly better than VMo-DCNN. These methods are validated on real data measured with a 2.4-GHz multistatic radar system. Experimental results show that Mul-DCNN achieves over 99% accuracy in armed/unarmed gait classification using only 20% of the data for training, and similar performance in two-class personnel recognition using 50% of the data for training, both higher than the accuracy obtained with a DCNN on a single radar node.
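
    The binary-voting fusion behind VMo-DCNN can be sketched as a majority vote over per-node class predictions; the predictions below are placeholders rather than trained network outputs:

        # Minimal sketch: majority-vote fusion of per-receiver classifier outputs.
        import numpy as np

        def vote_fusion(node_predictions):
            """Majority vote over per-node class labels (one array per receiver)."""
            votes = np.stack(node_predictions)  # shape (n_nodes, n_samples)
            # For each sample, pick the most frequent class across receiver nodes.
            return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

        # Three receiver nodes classifying five gait samples (0 = unarmed, 1 = armed).
        node_a = np.array([1, 0, 1, 1, 0])
        node_b = np.array([1, 0, 0, 1, 0])
        node_c = np.array([0, 0, 1, 1, 1])
        print(vote_fusion([node_a, node_b, node_c]))  # [1 0 1 1 0]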