    Behavioral pedestrian tracking using a camera and lidar sensors on a moving vehicle

    In this paper, we present a novel 2D–3D pedestrian tracker designed for applications in autonomous vehicles. The system operates on a tracking-by-detection principle and can track multiple pedestrians in complex urban traffic situations. By using a behavioral motion model and a non-parametric distribution as the state model, we are able to accurately track unpredictable pedestrian motion in the presence of heavy occlusion. Tracking is performed independently on the image and ground plane, in global, motion-compensated coordinates. We employ camera and LiDAR data fusion to solve the association problem, where the optimal solution is found by matching 2D and 3D detections to tracks using a joint log-likelihood observation model. Each 2D–3D particle filter then updates its state from associated observations and a behavioral motion model. Each particle moves independently, following pedestrian motion parameters learned offline from an annotated training dataset. Temporal stability of the state variables is achieved by modeling each track as a Markov Decision Process with probabilistic state transition properties. A novel track management system then handles high-level actions such as track creation, deletion and interaction. Using a probabilistic track score, the track manager can cull false and ambiguous detections while updating tracks with detections from actual pedestrians. Our system is implemented on a GPU and exploits the massively parallelizable nature of particle filters. Due to the Markovian nature of our track representation, the system achieves real-time performance with a minimal memory footprint. Exhaustive and independent evaluation of our tracker was performed by the KITTI benchmark server, where it was tested against a wide variety of unknown pedestrian tracking situations. On this realistic benchmark, we outperform all published pedestrian trackers on a multitude of tracking metrics.
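
    The association step described in this abstract can be illustrated with a short sketch: each track's predicted image-plane and ground-plane positions are scored against paired 2D/3D detections with a joint log-likelihood, and the best global matching is solved as an assignment problem. This is a minimal sketch only; the Gaussian observation models, the gating threshold, and the function names below are assumptions for illustration, not the authors' implementation.

        # Minimal sketch of joint 2D-3D log-likelihood data association.
        # The Gaussian models, gating threshold, and names are illustrative
        # assumptions, not the tracker's actual code.
        import numpy as np
        from scipy.optimize import linear_sum_assignment
        from scipy.stats import multivariate_normal

        def associate(tracks, det2d, det3d, cov2d, cov3d, gate=-12.0):
            """Match tracks to paired (2D, 3D) detections by joint log-likelihood.

            tracks : list of dicts with predicted 'mean2d' (image plane, px)
                     and 'mean3d' (ground plane, m) from the particle filters.
            det2d, det3d : arrays of 2D and 3D detections, paired by index.
            Returns a list of (track_index, detection_index) assignments.
            """
            n_t, n_d = len(tracks), len(det2d)
            cost = np.full((n_t, n_d), 1e6)          # large cost = forbidden match
            for i, trk in enumerate(tracks):
                for j in range(n_d):
                    joint = (multivariate_normal.logpdf(det2d[j], trk["mean2d"], cov2d)
                             + multivariate_normal.logpdf(det3d[j], trk["mean3d"], cov3d))
                    if joint > gate:                 # gate out implausible pairs
                        cost[i, j] = -joint          # maximize likelihood = minimize cost
            rows, cols = linear_sum_assignment(cost)
            return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 1e6]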

    People tracking by cooperative fusion of RADAR and camera sensors

    Accurate 3D tracking of objects from a monocular camera poses challenges due to the loss of depth information during projection. Although ranging by RADAR has proven effective in highway environments, people tracking remains beyond the capability of single-sensor systems. In this paper, we propose a cooperative RADAR-camera fusion method for people tracking on the ground plane. Using the average person height, a joint detection likelihood is calculated by back-projecting detections from the camera onto the RADAR Range-Azimuth data. Peaks in the joint likelihood, representing candidate targets, are fed into a Particle Filter tracker. Depending on the association outcome, particles are updated using the associated detections (Tracking by Detection) or by sampling the raw likelihood itself (Tracking Before Detection). Utilizing the raw likelihood data has the advantage that lost targets are continuously tracked even if the camera or RADAR signal falls below the detection threshold. We show that in single-target, uncluttered environments, the proposed method entirely outperforms camera-only tracking. Experiments in a real-world urban environment further confirm that the cooperative fusion tracker produces significantly better estimates, even in difficult and ambiguous situations.
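
    The core fusion idea described here, back-projecting a camera detection onto the RADAR Range-Azimuth grid using an assumed average person height and multiplying the two likelihoods, can be sketched as follows. The pinhole model, sensor parameters, and Gaussian spreads are illustrative assumptions, not the paper's calibration.

        # Sketch of camera-to-RADAR back-projection and joint likelihood.
        # Camera intrinsics, average height, and spreads are assumed values
        # for illustration only.
        import numpy as np

        AVG_PERSON_HEIGHT = 1.7   # m, assumed average pedestrian height
        FOCAL_PX = 1000.0         # px, assumed focal length
        IMG_CX = 640.0            # px, assumed principal point (x)

        def camera_likelihood(bbox, range_bins, azimuth_bins,
                              sigma_r=0.5, sigma_az=np.deg2rad(2.0)):
            """Project one bbox = (u, v, w, h) in pixels onto range-azimuth space."""
            u, v, w, h = bbox
            est_range = AVG_PERSON_HEIGHT * FOCAL_PX / h        # depth from apparent height
            est_azimuth = np.arctan2(u + w / 2.0 - IMG_CX, FOCAL_PX)
            r, az = np.meshgrid(range_bins, azimuth_bins, indexing="ij")
            return np.exp(-0.5 * (((r - est_range) / sigma_r) ** 2
                                  + ((az - est_azimuth) / sigma_az) ** 2))

        def joint_likelihood(radar_map, bbox, range_bins, azimuth_bins):
            """Element-wise product of the RADAR Range-Azimuth map and the camera term."""
            joint = radar_map * camera_likelihood(bbox, range_bins, azimuth_bins)
            peak = np.unravel_index(np.argmax(joint), joint.shape)  # candidate target
            return joint, (range_bins[peak[0]], azimuth_bins[peak[1]])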

    Simultaneous fusion, classification, and traction of moving obstacles by LIDAR and camera using Bayesian algorithm

    In the near future, preventing collisions with fixed or moving, animate and inanimate obstacles will be a severe challenge due to the increased use of Unmanned Ground Vehicles (UGVs). Light Detection and Ranging (LIDAR) sensors and cameras are usually used in UGVs to detect obstacles. The accurate tracking and classification of moving obstacles is a significant dimension of advanced driver assistance systems. It is believed that the perceived model of the situation can be improved by incorporating obstacle classification. The present study presents a multi-hypothesis tracking and classification approach, which resolves ambiguities that arise with previous methods of associating and classifying targets and tracks in a highly volatile vehicular situation. The method was tested on real data from various driving scenarios, focusing on two obstacle classes of interest: vehicles and pedestrians.
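
    As a simple illustration of fusing class evidence for a track in a Bayesian way, the sketch below recursively updates a belief over the two obstacle classes of interest from per-sensor likelihoods. The class set, priors, and likelihood values are assumptions for illustration; the paper's multi-hypothesis model is richer than this.

        # Illustrative recursive Bayesian class update for one track.
        # Classes, priors, and likelihood values are assumed, not the paper's model.
        import numpy as np

        CLASSES = ("vehicle", "pedestrian")

        def update_class_belief(belief, likelihoods):
            """One Bayes update; belief and likelihoods are dicts over CLASSES.

            likelihoods[c] = p(observation | class c), e.g. from the LIDAR shape
            classifier or the camera detector associated with the track.
            """
            posterior = np.array([belief[c] * likelihoods[c] for c in CLASSES])
            posterior /= posterior.sum()              # normalize
            return dict(zip(CLASSES, posterior))

        # Example: start from a uniform prior, then fuse a camera cue and a LIDAR cue.
        belief = {"vehicle": 0.5, "pedestrian": 0.5}
        belief = update_class_belief(belief, {"vehicle": 0.2, "pedestrian": 0.8})  # camera
        belief = update_class_belief(belief, {"vehicle": 0.3, "pedestrian": 0.7})  # LIDAR
        print(belief)  # the belief now favors 'pedestrian'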

    Pedestrian Behavior Study to Advance Pedestrian Safety in Smart Transportation Systems Using Innovative LiDAR Sensors

    Pedestrian safety is critical to improving walkability in cities. Although walking trips have increased in the last decade, pedestrian safety remains a top concern. In 2020, 6,516 pedestrians were killed in traffic crashes, representing the most deaths since 1990 (NHTSA, 2020). Approximately 15% of these deaths occurred at signalized intersections, where a variety of modes converge, leading to an increased propensity for conflicts. Current signal timing and detection technologies are heavily biased towards vehicular traffic, often leading to higher delays and insufficient walk times for pedestrians, which can result in risky behaviors such as noncompliance. Current detection systems for pedestrians at signalized intersections consist primarily of push buttons. Their limitations include the inability to provide feedback to pedestrians that they have been detected, especially with older devices, and the inability to dynamically extend walk times if pedestrians fail to clear the crosswalk. Smart transportation systems play a vital role in enhancing mobility and safety and provide innovative techniques to connect pedestrians, vehicles, and infrastructure. Most research on smart and connected technologies is focused on vehicles; however, there is a critical need to harness the power of these technologies to study pedestrian behavior, as pedestrians are the most vulnerable users of the transportation system. While a few studies have used location technologies to detect pedestrians, their coverage is usually small and favors people with smartphones. However, the transportation system must consider the full spectrum of pedestrians and accommodate everyone. In this research, the investigators first reviewed previous studies on pedestrian behavior data and sensing technologies. The research team then developed a pedestrian behavioral data collection system based on emerging LiDAR sensors. The system was deployed at two signalized intersections, and two studies were conducted: (a) a pedestrian behavior study at signalized intersections, analyzing pedestrian waiting time before crossing, generalized perception-reaction time to the WALK sign, and crossing speed; and (b) a novel dynamic flashing yellow arrow (D-FYA) solution to separate permissive left-turn vehicles from concurrently crossing pedestrians. The results reveal that pedestrian behaviors may have evolved compared with the recommended behaviors in pedestrian facility design guidelines (e.g., AASHTO’s “Green Book”). The D-FYA solution was also evaluated on a cabinet-in-the-loop simulation platform, and the improvements were promising. The findings of this study will advance the body of knowledge on equitable traffic safety, especially pedestrian safety, in the future.
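
    The three behavioral measures analyzed in study (a) above (waiting time, perception-reaction time to the WALK indication, and crossing speed) could be derived from LiDAR pedestrian trajectories roughly as sketched below. The data structures, thresholds, and timestamps used here are hypothetical and are not the study's actual pipeline.

        # Hypothetical derivation of waiting time, reaction time to WALK, and
        # crossing speed from a LiDAR pedestrian trajectory. Structures and
        # thresholds are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class TrackPoint:
            t: float      # timestamp, s
            x: float      # position along the crosswalk, m
            y: float      # lateral position, m
            speed: float  # instantaneous speed, m/s

        def behavior_metrics(track, walk_onset, curb_arrival, crosswalk_length,
                             move_threshold=0.3):
            """Compute waiting time, reaction time to WALK, and crossing speed.

            track            : list[TrackPoint] ordered by time
            walk_onset       : timestamp when the WALK indication started, s
            curb_arrival     : timestamp the pedestrian reached the curb, s
            crosswalk_length : crosswalk length, m
            move_threshold   : speed (m/s) above which the pedestrian counts as moving
            """
            start_cross = next(p.t for p in track
                               if p.t >= walk_onset and p.speed > move_threshold)
            end_cross = track[-1].t
            return {
                "waiting_time_s": start_cross - curb_arrival,
                "reaction_time_s": start_cross - walk_onset,
                "crossing_speed_mps": crosswalk_length / (end_cross - start_cross),
            }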

    A Review of Sensor Technologies for Perception in Automated Driving

    After more than 20 years of research, ADAS are common in modern vehicles available on the market. Automated Driving systems, still in the research phase and limited in their capabilities, are starting early commercial tests on public roads. These systems rely on the information provided by on-board sensors, which describe the state of the vehicle, its environment, and other actors. The selection and arrangement of sensors represent a key factor in the design of the system. This survey reviews existing, novel, and upcoming sensor technologies applied to common perception tasks for ADAS and Automated Driving. They are put in context through a historical review of the most relevant demonstrations of Automated Driving, focused on their sensing setup. Finally, the article presents a snapshot of the future challenges for sensing technologies and perception, finishing with an overview of the commercial initiatives and manufacturer alliances that will show future market trends in sensor technologies for Automated Vehicles. This work has been partly supported by the ECSEL project ENABLE-S3 (grant agreement number 692455-2) and by the Spanish Government through CICYT projects (TRA2015-63708-R and TRA2016-78886-C3-1-R).