
    Multi Sensor Multi Target Perception and Tracking for Informed Decisions in Public Road Scenarios

    Multi-target tracking in public traffic calls for a tracking system with automated track initiation and termination in a randomly evolving driving environment. In addition, the key problem of data association must be handled effectively given the limited computational resources on board an autonomous car. The challenge is compounded by high-resolution automotive sensors that return multiple detections per object. Furthermore, it is customary to use multiple sensors covering different and/or overlapping fields of view and to fuse their detections for robust and reliable tracking. As a consequence, in high-resolution multi-sensor settings, the data association uncertainty and the corresponding tracking complexity increase, calling for a systematic approach to handling and processing sensor detections. In this work, we present a multi-target tracking system that addresses target birth/initiation and death/termination processes with automatic track management features. These tracking functionalities help facilitate perception during common events in public traffic, as participants (suddenly) change lanes, navigate intersections, overtake, and/or brake in emergencies. Various tracking approaches, including those based on the joint integrated probabilistic data association (JIPDA) filter, the linear multi-target integrated probabilistic data association (LMIPDA) filter, and their multi-detection variants, are adapted to specifically include algorithms that handle track initiation and termination, clutter density estimation, and track management. The utility of the filtering module is further demonstrated by integrating it into a trajectory tracking problem based on model predictive control. To cope with tracking complexity in the case of multiple high-resolution sensors, we propose a hybrid scheme that combines data clustering at the local sensor with multiple-detection tracking at the fusion layer. We implement a track-to-track fusion scheme that de-correlates local (sensor) tracks to avoid double counting, and we apply a measurement partitioning scheme to re-purpose the LMIPDA tracking algorithm to multi-detection cases. In addition to the measurement partitioning approach, a joint extent and kinematic state estimation scheme is integrated into the LMIPDA approach to facilitate perception and tracking of individual as well as group targets in multi-lane public traffic. We formulate the tracking problem as a two-layer hierarchy. This arrangement enhances multi-target tracking performance in situations including, but not limited to, target initialization (birth process), target occlusion, missed detections, unresolved measurements, and target maneuvers. Moreover, target groups expose complex individual target interactions that aid situation assessment and are challenging to capture otherwise. The simulation studies are complemented by experimental studies on single and multiple (group) targets. Target detections are collected from a high-resolution radar at a frequency of 20 Hz, and RTK-GPS data is available as ground truth for the trajectory of one of the target vehicles.
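
The abstract does not spell out how the track-to-track fusion de-correlates local tracks to avoid double counting. One standard way to fuse sensor-level tracks whose cross-correlation is unknown is covariance intersection; the sketch below illustrates that idea in plain NumPy/SciPy with made-up state and covariance values, and is not the thesis's actual algorithm.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, P1, x2, P2):
    """Fuse two local track estimates whose cross-correlation is unknown.

    Covariance intersection avoids double counting shared information by
    taking a convex combination of the information matrices; the weight
    omega is chosen to minimise the trace of the fused covariance.
    (Illustrative sketch only -- the thesis's de-correlation scheme may differ.)
    """
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)

    def fused_trace(omega):
        return np.trace(np.linalg.inv(omega * P1_inv + (1.0 - omega) * P2_inv))

    omega = minimize_scalar(fused_trace, bounds=(1e-6, 1 - 1e-6), method="bounded").x
    P = np.linalg.inv(omega * P1_inv + (1.0 - omega) * P2_inv)
    x = P @ (omega * P1_inv @ x1 + (1.0 - omega) * P2_inv @ x2)
    return x, P

# Example: two sensor-level tracks of the same target (2D position/velocity),
# values chosen only for illustration.
x_a = np.array([10.0, 2.0, 5.0, 0.5])
P_a = np.diag([1.0, 0.2, 1.5, 0.3])
x_b = np.array([10.4, 1.9, 4.8, 0.6])
P_b = np.diag([0.8, 0.25, 1.2, 0.35])
x_fused, P_fused = covariance_intersection(x_a, P_a, x_b, P_b)
```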

    Fusion of Data from Heterogeneous Sensors with Distributed Fields of View and Situation Evaluation for Advanced Driver Assistance Systems

    In order to develop a driver assistance system for pedestrian protection, pedestrians in the environment of a truck are detected by radars and a camera and are tracked across distributed fields of view using a Joint Integrated Probabilistic Data Association filter. A robust approach for predicting the system vehicle's trajectory is presented. It serves the computation of a probabilistic collision risk based on reachable sets, where different sources of uncertainty are taken into account.
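
As a rough illustration of the kind of computation described here, the sketch below rolls out the ego trajectory with a generic constant turn rate and velocity (CTRV) model and estimates a collision probability against a pedestrian with Gaussian position uncertainty by Monte Carlo sampling. The paper's method is based on reachable sets, which this simplified sketch does not reproduce; all parameter values and function names are assumptions for the example.

```python
import numpy as np

def predict_ctrv(x, y, heading, speed, yaw_rate, dt, steps):
    """Propagate the ego vehicle with a constant turn rate and velocity model.

    Returns the predicted (x, y) positions over the horizon. This is a
    generic CTRV rollout, not the paper's specific prediction approach.
    """
    traj = []
    for _ in range(steps):
        if abs(yaw_rate) > 1e-6:
            x += speed / yaw_rate * (np.sin(heading + yaw_rate * dt) - np.sin(heading))
            y += speed / yaw_rate * (np.cos(heading) - np.cos(heading + yaw_rate * dt))
        else:
            x += speed * dt * np.cos(heading)
            y += speed * dt * np.sin(heading)
        heading += yaw_rate * dt
        traj.append((x, y))
    return np.array(traj)

def collision_probability(ego_traj, ped_mean, ped_cov, radius=1.5, samples=5000):
    """Monte Carlo estimate of the risk that a pedestrian (Gaussian position
    uncertainty) comes within `radius` metres of any predicted ego position."""
    rng = np.random.default_rng(0)
    ped_samples = rng.multivariate_normal(ped_mean, ped_cov, size=samples)
    dists = np.linalg.norm(ego_traj[:, None, :] - ped_samples[None, :, :], axis=-1)
    return np.mean(dists.min(axis=0) < radius)

# Illustrative numbers: 2 s horizon at 10 Hz, ego at 8 m/s with a slight turn.
ego = predict_ctrv(0.0, 0.0, 0.0, 8.0, 0.1, dt=0.1, steps=20)
risk = collision_probability(ego, ped_mean=np.array([14.0, 1.0]),
                             ped_cov=np.diag([0.5, 0.5]))
```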

    Fusion of Video and Multi-Waveform FMCW Radar for Traffic Surveillance

    Modern frequency modulated continuous wave (FMCW) radar technology provides the ability to modify the system transmission frequency as a function of time, which in turn provides the ability to generate multiple output waveforms from a single radar unit. Current low-power multi-waveform FMCW radar techniques lack the ability to reliably associate measurements from the various waveform sections in the presence of multiple targets and multiple false detections within the field of view. Two approaches are developed here to address this problem. The first approach takes advantage of the relationships between the waveform segments to generate a weighting function for candidate combinations of measurements from the waveform sections. This weighting function is then used to choose the best candidate combinations to form polar-coordinate measurements. Simulations show that this approach provides a ten to twenty percent increase in the probability of correct association over the current approach while reducing the number of false alarms generated in the process, but it still fails to form a measurement if a detection from a waveform section is missing. The second approach models the multi-waveform FMCW radar as a set of independent sensors and uses distributed data fusion to fuse estimates from those individual sensors within a tracking structure. Tracking in this approach is performed directly on the raw frequency and angle measurements from the waveform segments, which removes the need for data association between the measurements from the individual waveform segments. The distributed data fusion model is used again to extend the radar tracking system with a video sensor that provides additional angular and identification information. The combination of the radar and vision sensors, as an end result, provides an enhanced roadside tracking system.
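
For context, the sketch below shows the standard triangular-FMCW relations that turn an up-chirp/down-chirp beat-frequency pair into range and radial velocity, together with a simple consistency weighting of candidate pairings against a Doppler measurement from a constant-frequency section. The dissertation's weighting function is more elaborate; the sign convention, the CW-consistency idea, and all numerical values here are assumptions for illustration.

```python
import itertools
import numpy as np

C = 3e8  # speed of light, m/s

def range_velocity(f_up, f_down, slope, f_carrier):
    """Solve range and radial velocity from an up/down chirp beat-frequency pair.

    Triangular FMCW relations (one common sign convention):
        f_up   = 2*R*slope/C - 2*v*f_carrier/C
        f_down = 2*R*slope/C + 2*v*f_carrier/C
    """
    f_range = 0.5 * (f_up + f_down)
    f_doppler = 0.5 * (f_down - f_up)
    rng = f_range * C / (2.0 * slope)
    vel = f_doppler * C / (2.0 * f_carrier)
    return rng, vel

def best_pairings(ups, downs, slope, f_carrier, f_cw_doppler, sigma=50.0):
    """Weight every candidate up/down pairing by how well its implied Doppler
    matches a Doppler measurement from a constant-frequency (CW) section, and
    return the pairings sorted by weight. Illustrative only -- the
    dissertation's weighting function is more involved."""
    scored = []
    for f_up, f_down in itertools.product(ups, downs):
        rng, vel = range_velocity(f_up, f_down, slope, f_carrier)
        implied_doppler = 2.0 * vel * f_carrier / C
        weight = np.exp(-0.5 * ((implied_doppler - f_cw_doppler) / sigma) ** 2)
        scored.append((weight, f_up, f_down, rng, vel))
    return sorted(scored, reverse=True)

# Illustrative call: 100 MHz sweep over 2 ms (slope 5e10 Hz/s), 77 GHz carrier,
# one target near 40 m approaching at roughly 20 m/s.
pairs = best_pairings(ups=[3.1e3], downs=[2.36e4],
                      slope=5e10, f_carrier=77e9, f_cw_doppler=1.03e4)
```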

    Multi-Lane Perception Using Feature Fusion Based on GraphSLAM

    An extensive, precise, and robust recognition and modeling of the environment is a key factor for the next generation of Advanced Driver Assistance Systems and for the development of autonomous vehicles. In this paper, a real-time approach for the perception of multiple lanes on highways is proposed. Lane markings detected by camera systems and observations of other traffic participants provide the input data for the algorithm. The information is accumulated and fused using GraphSLAM, and the result constitutes the basis for a multi-lane clothoid model. To allow the incorporation of additional information sources, input data is processed in a generic format. The method is evaluated by comparing real data, collected with an experimental vehicle on highways, to a ground truth map. The results show that the ego lane and adjacent lanes are robustly detected with high quality up to a distance of 120 m. In comparison to series-production lane detection, an increased detection range of the ego lane and a continuous perception of neighboring lanes are achieved. The method can potentially be utilized for the longitudinal and lateral control of self-driving vehicles.
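
The clothoid lane model mentioned here is commonly evaluated through its third-order polynomial approximation of the lateral offset. The sketch below illustrates that approximation for the ego lane and one adjacent lane; the curvature parameters and the 3.5 m lane width are assumed for the example and are not taken from the paper.

```python
import numpy as np

def clothoid_lateral_offset(x, y0, heading, c0, c1):
    """Third-order polynomial approximation of a clothoid lane boundary:
    lateral offset at longitudinal distance x, given the initial offset y0,
    heading angle, curvature c0, and curvature rate c1."""
    return y0 + np.tan(heading) * x + 0.5 * c0 * x**2 + (c1 / 6.0) * x**3

# Example: ego-lane boundaries plus one adjacent lane (assumed 3.5 m width),
# evaluated up to the 120 m range reported in the paper.
x = np.linspace(0.0, 120.0, 121)
lane_width = 3.5
params = dict(heading=0.01, c0=1e-4, c1=1e-6)  # illustrative values
boundaries = {
    "ego_left": clothoid_lateral_offset(x, +0.5 * lane_width, **params),
    "ego_right": clothoid_lateral_offset(x, -0.5 * lane_width, **params),
    "left_adjacent": clothoid_lateral_offset(x, +1.5 * lane_width, **params),
}
```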

    Probabilistic Lane Association

    Lane association is the problem of determining in which lane a vehicle is currently driving, which is of interest for automated driving, where the vehicle must understand its surroundings. Restricted to highway scenarios, a method that combines data from different sensors to extract information about the currently associated lane is presented. The suggested method splits the problem into two main parts: lane change identification and road edge detection. The lane change identification mainly uses information from the camera to model the lateral movement on the road and identifies lane changes as a relative position on the road; this part is implemented with a particle filter. The road edge detection feeds radar detections into an iterated Kalman filter and estimates the distances to the road edges. Finally, a combination of the filter outputs makes it possible to compute an absolute position on the road. Comparing the relative and absolute positioning then yields the desired lane association estimate. The results are reliable and encourage continuing to approach the problem in a similar manner, although the current implementation is computationally heavy.
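
As a rough illustration of the lane change identification part, the sketch below runs a minimal bootstrap particle filter over the vehicle's lateral offset and flags a lane change when the estimated offset crosses a lane-width boundary. The thesis's motion and measurement models are richer than this; the noise parameters, the 3.5 m lane width, and the simulated measurements are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_lateral(offsets_measured, n_particles=500,
                            process_std=0.1, meas_std=0.3):
    """Minimal bootstrap particle filter over the vehicle's lateral offset
    (metres, relative to its position at t=0), fed with camera-based lane
    marking offsets. Illustrative sketch only."""
    particles = rng.normal(0.0, 0.5, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in offsets_measured:
        # Predict: random-walk lateral motion
        particles += rng.normal(0.0, process_std, n_particles)
        # Update: Gaussian measurement likelihood
        weights *= np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # Resample and reset weights
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)

# Simulated drift of one lane width (~3.5 m) is flagged as a lane change.
measurements = np.concatenate([rng.normal(0.0, 0.3, 30), rng.normal(3.5, 0.3, 30)])
lateral_track = particle_filter_lateral(measurements)
lane_changes = np.flatnonzero(np.abs(np.diff(np.round(lateral_track / 3.5))) > 0)
```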

    Sensor fusion methodology for vehicle detection

    A novel sensor fusion methodology is presented, which provides intelligent vehicles with augmented environment information and knowledge, enabled by a vision-based system, a laser scanner, and a global positioning system. The presented approach contributes to safer roads through data fusion techniques, especially on single-lane carriageways, where casualties are higher than in other road classes, and focuses on the interplay between vehicle drivers and intelligent vehicles. The system is based on the reliability of the laser scanner for obstacle detection, the use of camera-based identification techniques, and advanced tracking and data association algorithms, i.e., the Unscented Kalman Filter and Joint Probabilistic Data Association. The achieved results foster the implementation of the sensor fusion methodology in forthcoming Intelligent Transportation Systems.
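
To make the data association step concrete, the sketch below computes (J)PDA-style association probabilities for a single track against its gated measurements, taking the gate probability as 1 for simplicity. JPDA, as used in the paper, additionally enumerates joint association events across tracks; the detection probability, clutter density, and example innovations here are assumptions.

```python
import numpy as np

def pda_association_weights(innovations, S, p_detect=0.9, clutter_density=1e-3):
    """Association probabilities for one track and its gated measurements,
    in the style of (J)PDA filters: beta_miss is the probability that none
    of the measurements originates from the track. This single-track sketch
    only illustrates the weighting; JPDA evaluates joint events over tracks."""
    S_inv = np.linalg.inv(S)
    norm = 1.0 / np.sqrt(np.linalg.det(2.0 * np.pi * S))
    likelihoods = np.array([norm * np.exp(-0.5 * v @ S_inv @ v) for v in innovations])
    scores = p_detect * likelihoods / clutter_density
    beta_miss_unnorm = 1.0 - p_detect  # gate probability assumed to be 1
    total = beta_miss_unnorm + scores.sum()
    return scores / total, beta_miss_unnorm / total

# Example: two gated measurements (x/y innovations) with innovation covariance S.
S = np.diag([1.0, 1.0])
betas, beta_miss = pda_association_weights(
    [np.array([0.2, -0.1]), np.array([1.5, 1.0])], S)
```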