2,312 research outputs found

    Pedestrian Validation in Infrared Images by Means of Active Contours and Neural Networks

    This paper presents two different modules for validating the presence of a human shape in far-infrared images. These modules are part of a larger system aimed at detecting pedestrians through the simultaneous use of two stereo vision systems, in the far-infrared and daylight domains. The first module checks a list of areas of attention for a human shape, using active contours to extract the object outline and a neural network to evaluate the result. The second validation subsystem directly applies a neural network to each area of attention in the far-infrared images and produces a list of votes
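
    The following is a minimal, illustrative sketch of the validation idea described above: fit an active contour (snake) around the warm region of a candidate area, derive simple shape features from the resulting contour, and score them with a small neural network. The initialization, parameters, and feature choice are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the active-contour + neural-network validation idea
# (illustrative only, not the authors' implementation; parameters and
# features are assumptions). Requires scikit-image and scikit-learn.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour
from sklearn.neural_network import MLPClassifier

def fit_contour(roi):
    """Fit a closed snake around the warm blob in a far-infrared ROI."""
    h, w = roi.shape
    t = np.linspace(0, 2 * np.pi, 100)
    # Initialize the snake as an ellipse covering most of the ROI.
    init = np.column_stack([h / 2 + 0.4 * h * np.sin(t),
                            w / 2 + 0.4 * w * np.cos(t)])
    return active_contour(gaussian(roi, sigma=2), init,
                          alpha=0.015, beta=10.0, gamma=0.001)

def shape_features(snake):
    """Crude shape descriptor: normalized radial profile of the contour."""
    radii = np.linalg.norm(snake - snake.mean(axis=0), axis=1)
    return radii / (radii.max() + 1e-6)

# The classifier would be trained offline on labeled pedestrian and
# non-pedestrian areas of attention, then queried per candidate area:
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
# is_pedestrian = clf.predict([shape_features(fit_contour(roi))])
```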

    Uncertainty Estimation in One-Stage Object Detection

    Environment perception is the task for intelligent vehicles on which all subsequent steps rely. A key part of perception is to safely detect other road users such as vehicles, pedestrians, and cyclists. With modern deep learning techniques, huge progress has been made in this field over the last years. However, such deep-learning-based object detection models cannot predict how certain they are in their predictions, potentially hampering the performance of later steps such as tracking or sensor fusion. We present viable approaches to estimate uncertainty in a one-stage object detector while improving the detection performance of the baseline approach. The proposed model is evaluated on a large-scale automotive pedestrian dataset. Experimental results show that the uncertainty output by our system is coupled with detection accuracy and the occlusion level of pedestrians
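
    The abstract does not spell out the estimation mechanism; one common way to attach uncertainty to a one-stage detector's scores is Monte-Carlo dropout, sketched below under that assumption. `detector` is a hypothetical model whose head contains dropout layers and returns per-anchor class scores.

```python
# Hedged sketch: Monte-Carlo dropout as one way to obtain predictive
# uncertainty from a one-stage detector; the paper's exact mechanism
# may differ. `detector` is a hypothetical torch.nn.Module whose head
# uses Dropout and returns a tensor of per-anchor class scores.
import torch

@torch.no_grad()
def mc_dropout_predict(detector, image, n_samples=10):
    detector.eval()
    # Re-enable only the dropout layers; batch norm stays in eval mode.
    for m in detector.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()
    samples = torch.stack([detector(image) for _ in range(n_samples)])
    mean = samples.mean(dim=0)  # averaged detection scores
    var = samples.var(dim=0)    # sample spread as an uncertainty proxy
    return mean, var
```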

    RC-BEVFusion: A Plug-In Module for Radar-Camera Bird's Eye View Feature Fusion

    Radars and cameras are among the most frequently used sensors for advanced driver assistance systems and automated driving research. However, there has been surprisingly little research on radar-camera fusion with neural networks. One reason is a lack of large-scale automotive datasets with radar and unmasked camera data, with the exception of the nuScenes dataset. Another reason is the difficulty of effectively fusing the sparse radar point cloud on the bird's eye view (BEV) plane with the dense images on the perspective plane. The recent trend of camera-based 3D object detection using BEV features has enabled a new type of fusion that is better suited for radars. In this work, we present RC-BEVFusion, a modular radar-camera fusion network on the BEV plane. We propose BEVFeatureNet, a novel radar encoder branch, and show that it can be incorporated into several state-of-the-art camera-based architectures. We show significant performance gains of up to a 28% increase in the nuScenes detection score, which is an important step in radar-camera fusion research. Without tuning our model for the nuScenes benchmark, we achieve the best result among all published methods in the radar-camera fusion category
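
    As a rough illustration of fusion on a shared BEV grid (not the paper's BEVFeatureNet, which encodes the radar points themselves), the sketch below concatenates radar and camera BEV feature maps and mixes them with a 1x1 convolution; channel counts are arbitrary assumptions.

```python
# Illustrative BEV-plane fusion block: concatenate radar and camera
# feature maps that share one BEV grid and mix them with a 1x1 conv.
# This mirrors the plug-in idea only at a high level; channel counts
# are arbitrary assumptions.
import torch
import torch.nn as nn

class NaiveBEVFusion(nn.Module):
    def __init__(self, cam_ch=80, radar_ch=32, out_ch=80):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_ch + radar_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev, radar_bev):
        # Both inputs: (B, C, H, W) tensors on the same BEV grid.
        return self.fuse(torch.cat([cam_bev, radar_bev], dim=1))
```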

    A Simulation Environment with Reduced Reality Gap for Testing Autonomous Vehicles

    In order to facilitate acceptance and ensure safety, autonomous vehicles must be tested not only in typical and relatively safe scenarios but also in dangerous and less frequent ones. Recent pedestrian fatalities caused by test vehicles of front-running giants like Google and Tesla underscore the fact that autonomous vehicle technology is not yet mature and still needs rigorous exposure to a wide range of traffic, landscape, and natural conditions so that autonomous vehicles can be trained to perform as expected in real traffic. Simulation environments have been considered an efficient, safe, flexible, and cost-effective option for the training, testing, and validation of autonomous vehicle technology. While ad-hoc, task-specific use of simulation in autonomous driving research is widespread, simulation platforms that bridge the gap between simulation and reality are limited. This research proposes a highly realistic simulation environment (using the CARLA driving simulator) to generate realistic data for autonomous driving research. Our system is able to recreate original traffic scenarios based on prior information about the traffic scene. Furthermore, the system allows changes to the original scenarios, creating various desired testing scenarios by varying the parameters of traffic actors, such as location, trajectory, speed, and motion state, and hence collecting more data with ease
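
    A sketch of how such parameterized scenario recreation can look with the CARLA Python API is given below; the CARLA calls themselves are standard, but the `scenario` dict and blueprint IDs are hypothetical stand-ins for the paper's scenario description.

```python
# Sketch of parameterized scenario spawning with the CARLA Python API.
# The CARLA calls are standard, but the `scenario` dict and blueprint
# IDs are hypothetical stand-ins for the paper's scenario format.
import carla

def spawn_scenario(scenario, host='localhost', port=2000):
    client = carla.Client(host, port)
    client.set_timeout(10.0)
    world = client.get_world()
    blueprints = world.get_blueprint_library()
    actors = []
    for cfg in scenario['actors']:
        bp = blueprints.find(cfg['blueprint'])  # e.g. 'vehicle.tesla.model3'
        tf = carla.Transform(
            carla.Location(x=cfg['x'], y=cfg['y'], z=cfg['z']),
            carla.Rotation(yaw=cfg['yaw']))
        actor = world.try_spawn_actor(bp, tf)
        if actor is None:
            continue  # spawn point occupied
        if cfg.get('autopilot'):  # vehicles only
            actor.set_autopilot(True)
        actors.append(actor)
    return actors
```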