
    Object Perception for Intelligent Vehicle Applications: A Multi-Sensor Fusion Approach

    The paper addresses the problem of object perception for intelligent vehicle applications, with the main tasks of detection, tracking and classification of obstacles using multiple sensors (lidar, camera and radar). New algorithms for raw sensor data processing and sensor data fusion are introduced that make the most of the information from all sensors in order to provide more reliable and accurate information about objects in the vehicle environment. The proposed object perception module is implemented and tested on a demonstrator car in real-life traffic, and evaluation results are presented.
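    As an illustration only (the abstract does not detail the fusion equations), below is a minimal sketch of one standard late-fusion step: combining position estimates of the same object from several sensors by inverse-covariance weighting. The function name and the 2-D position state are assumptions, not the paper's method.

```python
import numpy as np

def fuse_positions(estimates):
    """Inverse-covariance-weighted fusion of position estimates of the
    same object from several sensors (e.g. lidar, camera, radar).
    estimates: list of (mean 2-vector, 2x2 covariance) pairs."""
    info = np.zeros((2, 2))          # accumulated information matrix
    info_mean = np.zeros(2)          # accumulated information vector
    for mean, cov in estimates:
        inv = np.linalg.inv(cov)
        info += inv
        info_mean += inv @ np.asarray(mean, dtype=float)
    fused_cov = np.linalg.inv(info)
    return fused_cov @ info_mean, fused_cov

# Example: a precise lidar fix and a noisier radar fix on one obstacle
fused, cov = fuse_positions([
    (np.array([10.0, 2.0]), np.diag([0.1, 0.1])),   # lidar
    (np.array([10.4, 1.8]), np.diag([0.5, 0.5])),   # radar
])
```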

    Fusion Framework for Moving-Object Classification

    Perceiving the environment is a fundamental task for Advanced Driver Assistance Systems. While simultaneous localization and mapping represents the static part of the environment, detection and tracking of moving objects aims at identifying the dynamic part. Knowing the class of the moving objects surrounding the vehicle is very useful information for reasoning, deciding and acting correctly according to each class of object, e.g. car, truck, pedestrian, bike, etc. Active and passive sensors provide useful information to classify certain kinds of objects, but perform poorly for others. In this paper we present a generic fusion framework based on Dempster-Shafer theory to represent and combine evidence from several sources. We apply the proposed method to the problem of moving-object classification. The method combines information from several lists of moving objects provided by different sensor-based object detectors. The fusion approach includes uncertainty from the reliability of the sensors and their precision in classifying specific types of objects. The proposed approach takes the instantaneous information at the current time and combines it with fused information from previous times. Several experiments were conducted in highway and urban scenarios using a vehicle demonstrator from the interactIVe European project. The obtained results show improvements in the combined classification compared with the individual class hypotheses from the individual detector modules.
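    For readers unfamiliar with Dempster-Shafer theory, here is a minimal sketch of Dempster's rule of combination, which the abstract names as the basis of the framework. The frame of discernment and the example mass values are illustrative assumptions, not values from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses
    to belief mass) with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb           # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Sources are in total conflict")
    # Normalize by the non-conflicting mass
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Example: camera strongly suggests 'car'; radar cannot separate car/truck.
CAR, TRUCK, PED = "car", "truck", "pedestrian"
camera = {frozenset([CAR]): 0.7, frozenset([CAR, TRUCK, PED]): 0.3}
radar  = {frozenset([CAR, TRUCK]): 0.6, frozenset([CAR, TRUCK, PED]): 0.4}
print(dempster_combine(camera, radar))    # belief concentrates on {car}
```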

    GM-PHD Filter Based Sensor Data Fusion for Automotive Frontal Perception System

    Advanced driver assistance systems and highly automated driving functions require an enhanced frontal perception system. The requirements of a frontal environment perception system cannot be satisfied by any single existing automotive sensor. A commonly used sensor cluster for these functions consists of a mono-vision smart camera and an automotive radar. Sensor fusion is intended to combine the data of these sensors to perform robust environment perception. Multi-object tracking algorithms have a suitable software architecture for sensor data fusion. Several multi-object tracking algorithms, such as JPDAF or MHT, have good tracking performance; however, their computational requirements are significant owing to their combinatorial complexity. The GM-PHD filter is a straightforward algorithm with favorable runtime characteristics that can track an unknown and time-varying number of objects. However, the conventional GM-PHD filter performs poorly in object cardinality estimation. This paper proposes a method that extends the GM-PHD filter with an object birth model that relies on the sensor detections and a robust object extraction module, including Bayesian estimation of the objects' existence probability, to compensate for the drawbacks of the conventional algorithm.
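    A minimal sketch of a detection-driven birth model of the kind the abstract describes: spawning low-weight Gaussian components at the current sensor detections so new objects can enter the GM-PHD filter. The state layout, birth weight and variances below are assumptions, not the paper's tuning.

```python
import numpy as np

def birth_components(detections, birth_weight=0.05,
                     pos_var=1.0, vel_var=4.0):
    """Spawn low-weight Gaussian birth components at the current sensor
    detections (a detection-driven birth model for a GM-PHD filter).
    State is [x, y, vx, vy]; detections provide position only, so the
    velocity prior is zero-mean with a broad variance."""
    births = []
    for z in detections:                        # z = (x, y) measurement
        mean = np.array([z[0], z[1], 0.0, 0.0])
        cov = np.diag([pos_var, pos_var, vel_var, vel_var])
        births.append({"w": birth_weight, "m": mean, "P": cov})
    return births

# Example: two radar detections become two candidate birth components
births = birth_components([(25.0, -1.5), (60.0, 3.2)])
```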

    Radar Guided Dynamic Visual Attention for Resource-Efficient RGB Object Detection

    An autonomous system's perception engine must provide an accurate understanding of the environment for it to make decisions. Deep learning based object detection networks experience degraded performance and robustness for small and far-away objects, because the object's feature map shrinks as we move to higher layers of the network. In this work, we propose a novel radar-guided spatial attention for RGB images to improve the perception quality of autonomous vehicles operating in a dynamic environment. In particular, our method improves the perception of small and long-range objects, which are often not detected by object detectors in RGB mode. The proposed method consists of two RGB object detectors, namely a Primary detector and a lightweight Secondary detector. The primary detector takes a full RGB image and generates primary detections. Next, the radar proposal framework creates regions of interest (ROIs) for object proposals by projecting the radar point cloud onto the 2D RGB image. These ROIs are cropped and fed to the secondary detector to generate secondary detections, which are then fused with the primary detections via non-maximum suppression. This method helps recover small objects by preserving their spatial features through an increase in their receptive field. We evaluate our fusion method on the challenging nuScenes dataset and show that our fusion method with SSD-lite as the primary and secondary detector improves the baseline primary yolov3 detector's recall by 14% while requiring three times fewer computational resources.

    Comment: Accepted at the International Joint Conference on Neural Networks (IJCNN) 2021
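    A minimal sketch of the radar proposal step as described: projecting radar points into the image with a pinhole model and cutting fixed-size ROIs around the hits. The intrinsics K, ROI size and image shape are assumptions; the paper's exact projection and ROI sizing may differ.

```python
import numpy as np

def radar_rois(points_3d, K, roi_size=96, img_shape=(900, 1600)):
    """Project radar points (iterable of (X, Y, Z) in camera coordinates)
    into the image with intrinsics K, returning fixed-size ROI boxes
    (x1, y1, x2, y2) around each projected hit."""
    h, w = img_shape
    rois = []
    for X, Y, Z in points_3d:
        if Z <= 0:                     # point is behind the camera
            continue
        u = K[0, 0] * X / Z + K[0, 2]  # pinhole projection, u axis
        v = K[1, 1] * Y / Z + K[1, 2]  # pinhole projection, v axis
        half = roi_size // 2
        x1, y1 = max(0, int(u) - half), max(0, int(v) - half)
        x2, y2 = min(w, int(u) + half), min(h, int(v) + half)
        if x2 > x1 and y2 > y1:        # keep only ROIs inside the image
            rois.append((x1, y1, x2, y2))
    return rois
```

    Each ROI crop would then be resized and passed to the secondary detector, with the resulting boxes mapped back to full-image coordinates before the non-maximum-suppression fusion.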

    Monovision-based vehicle detection, distance and relative speed measurement in urban traffic

    This study presents a monovision-based system for on-road vehicle detection and computation of distance and relative speed in urban traffic. Many works have dealt with monovision vehicle detection, but only a few of them provide the distance to the vehicle, which is essential for the control of an intelligent transportation system. The proposed system uses a single camera, reducing the monetary cost of stereovision and radar-based technologies. The algorithm is divided into three major stages. For vehicle detection, the authors use a combination of two features: the shadow underneath the vehicle and horizontal edges. They propose a new method for shadow thresholding based on assessing the grey-scale histogram of a region of interest on the road. In the second and third stages, the vehicle hypothesis is verified and the distance is obtained by means of the vehicle's number plate, whose dimensions and shape are standardised in each country. The analysis of consecutive frames is employed to calculate the relative speed of the detected vehicle. Experimental results showed excellent performance in both vehicle and number plate detection and in the distance measurement, in terms of accuracy and robustness, in complex traffic scenarios and under different lighting conditions.
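    Because the number-plate width is standardised, the range stage reduces to a pinhole-camera relation, and relative speed follows from the range change between frames. A worked sketch is below; the focal length and plate width are assumed example values, not the paper's calibration.

```python
def plate_distance(focal_px, plate_width_m, plate_width_px):
    """Pinhole-camera range estimate from the detected number plate:
    distance = focal_length * real_width / pixel_width."""
    return focal_px * plate_width_m / plate_width_px

def relative_speed(d_prev, d_curr, dt):
    """Relative speed from the range change between consecutive frames
    (positive when the lead vehicle is closing in)."""
    return (d_prev - d_curr) / dt

# Example: an EU plate (0.52 m wide) imaged 40 px wide by a camera with
# a 1000 px focal length lies at 1000 * 0.52 / 40 = 13.0 m; if it is
# 35 px wide 0.1 s later, the gap is shrinking.
d1 = plate_distance(1000.0, 0.52, 40.0)
d2 = plate_distance(1000.0, 0.52, 35.0)
v_rel = relative_speed(d1, d2, 0.1)
```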

    A Review of Sensor Technologies for Perception in Automated Driving

    After more than 20 years of research, ADAS are common in modern vehicles available on the market. Automated Driving systems, still in the research phase and limited in their capabilities, are starting early commercial tests on public roads. These systems rely on the information provided by on-board sensors, which describe the state of the vehicle, its environment and other actors. The selection and arrangement of sensors is a key factor in the design of the system. This survey reviews existing, novel and upcoming sensor technologies applied to common perception tasks for ADAS and Automated Driving. They are put in context through a historical review of the most relevant demonstrations of Automated Driving, focused on their sensing setups. Finally, the article presents a snapshot of the future challenges for sensing technologies and perception, finishing with an overview of the commercial initiatives and manufacturer alliances that indicate future market trends in sensor technologies for Automated Vehicles.

    This work has been partly supported by ECSEL Project ENABLE-S3 (grant agreement number 692455-2) and by the Spanish Government through CICYT projects TRA2015-63708-R and TRA2016-78886-C3-1-R.

    Collision Avoidance Using Deep Learning-Based Monocular Vision

    Autonomous driving technologies, including monocular vision-based approaches, are at the forefront of industrial and research communities, since they are expected to have a significant impact on the economy and society. However, they have limitations in terms of crash avoidance because of the rarity of labeled data for collisions in everyday traffic, as well as the complexity of driving situations. In this work, we propose a simple method based solely on monocular vision to overcome the data scarcity problem and to advance forward collision avoidance systems. We exploit state-of-the-art deep learning-based optical flow and monocular depth estimation methods, as well as object detection, to estimate the speed of the ego-vehicle and to identify the lead vehicle, respectively. The proposed method uses car-stop situations as collision surrogates to obtain data for time-to-collision estimation. We evaluate this approach on our own driving videos, collected using a spherical camera and smart glasses. Our results indicate that similar accuracy can be achieved on both video sources: the external road view from the car's perspective and the egocentric view from the driver's. Additionally, we set forth the possibility of using spherical cameras instead of traditional cameras for vision-based automotive sensing.
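    A minimal sketch of the final time-to-collision computation implied by the abstract, assuming the depth to the lead vehicle and the closing speed have already been estimated (e.g. from monocular depth and optical flow). The helper names are assumptions.

```python
def time_to_collision(depth_m, closing_speed_mps):
    """Time to collision from the estimated depth of the lead vehicle
    and the closing speed (ego speed minus lead-vehicle speed)."""
    if closing_speed_mps <= 0:
        return float("inf")       # not closing in: no collision course
    return depth_m / closing_speed_mps

# Example: lead vehicle 20 m ahead, closing at 4 m/s -> TTC of 5 s
ttc = time_to_collision(20.0, 4.0)
```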

    Fusion of Data from Heterogeneous Sensors with Distributed Fields of View and Situation Evaluation for Advanced Driver Assistance Systems

    In order to develop a driver assistance system for pedestrian protection, pedestrians in the environment of a truck are detected by radars and a camera and are tracked across distributed fields of view using a Joint Integrated Probabilistic Data Association filter. A robust approach for predicting the system vehicle's trajectory is presented. It serves the computation of a probabilistic collision risk based on reachable sets, where different sources of uncertainty are taken into account.
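    As a rough illustration of a probabilistic collision risk between a predicted ego region and a tracked pedestrian, here is a Monte Carlo sketch: sample pedestrian positions from the track's Gaussian and count the fraction landing inside an axis-aligned box standing in for the ego vehicle's reachable set at one prediction step. The paper's reachable-set computation is more involved; everything below is an assumption for illustration.

```python
import numpy as np

def collision_risk(ped_mean, ped_cov, ego_box, n_samples=10000, seed=0):
    """Monte Carlo collision probability at a prediction horizon:
    sample pedestrian positions from the track's Gaussian (mean,
    covariance from the JIPDA filter) and count the fraction inside
    the ego box (x_min, y_min, x_max, y_max)."""
    rng = np.random.default_rng(seed)
    pts = rng.multivariate_normal(ped_mean, ped_cov, n_samples)
    x1, y1, x2, y2 = ego_box
    inside = ((pts[:, 0] >= x1) & (pts[:, 0] <= x2) &
              (pts[:, 1] >= y1) & (pts[:, 1] <= y2))
    return inside.mean()

# Example: pedestrian track near the predicted ego corridor
risk = collision_risk(ped_mean=np.array([8.0, 1.0]),
                      ped_cov=np.diag([0.8, 0.8]),
                      ego_box=(6.0, -1.0, 12.0, 1.0))
```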