Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.

Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
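A common reference point across the surveyed prediction methods is the constant-velocity baseline, which simply extrapolates the agent's last observed velocity into the future. The sketch below is illustrative only; the function name, horizon, and sampling interval are assumptions, not taken from the survey:

```python
# Constant-velocity baseline: extrapolate the last observed velocity.
def predict_constant_velocity(track, horizon, dt=1.0):
    """Predict `horizon` future (x, y) positions from an observed track.

    track: list of (x, y) positions sampled every `dt` seconds.
    """
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt  # last observed velocity
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, horizon + 1)]

# A pedestrian moving 1 m/s along x is assumed to keep that velocity:
future = predict_constant_velocity([(0.0, 0.0), (1.0, 0.0)], horizon=3)
# future == [(2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
```

Learning-based methods in the survey's taxonomy are typically evaluated against exactly this kind of physics-based baseline.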
Vision-Based Lane-Changing Behavior Detection Using Deep Residual Neural Network
Accurate lane localization and lane change detection are crucial in advanced
driver assistance systems and autonomous driving systems for safer and more
efficient trajectory planning. Conventional localization devices such as the
Global Positioning System provide only road-level resolution for car
navigation, which is insufficient for lane-level decision making. The
state-of-the-art technique for lane localization uses Light Detection and
Ranging (LiDAR) sensors to correct the global localization error and achieve
centimeter-level accuracy, but real-time deployment and widespread adoption of
LiDAR remain limited by its computational burden and current cost. As a
cost-effective alternative,
vision-based lane change detection has been highly regarded for affordable
autonomous vehicles to support lane-level localization. A deep learning-based
computer vision system is developed to detect the lane change behavior using
the images captured by a front-view camera mounted on the vehicle and data from
the inertial measurement unit for highway driving. Testing results on
real-world driving data show that the proposed method is robust, runs in real
time, and achieves around 87% lane change detection accuracy. Compared to the
average human reaction to visual stimuli, the proposed computer vision system
works 9 times faster, making it capable of helping make life-saving decisions
in time.
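The detector itself is a deep residual network over camera frames, which cannot be reproduced in a few lines; as a self-contained stand-in for the IMU side of the pipeline, the sketch below flags a lane change when the lateral displacement integrated over a sliding window exceeds half a lane width. The window length, sampling rate, and lane width are illustrative assumptions, not the paper's parameters:

```python
# Flag a lane change when windowed lateral displacement exceeds half a lane.
def detect_lane_change(lateral_velocity, dt=0.1, window=30, half_lane=1.75):
    """Return sample indices at which a lane change is flagged.

    lateral_velocity: lateral speed samples (m/s) from the IMU, `dt` apart.
    window: number of samples integrated (here 3 s at 10 Hz).
    half_lane: half of a standard 3.5 m lane width, in meters.
    """
    events = []
    for i in range(window, len(lateral_velocity) + 1):
        displacement = sum(lateral_velocity[i - window:i]) * dt  # integrate v
        if abs(displacement) >= half_lane:
            events.append(i - 1)
    return events

# A 3 s drift at 0.8 m/s (2.4 m total) crosses the 1.75 m threshold:
signal = [0.0] * 30 + [0.8] * 30 + [0.0] * 30
events = detect_lane_change(signal)
```

A learned classifier replaces this hand-set threshold precisely because real lateral motion is noisier than this idealized signal.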
The State-of-the-art of Coordinated Ramp Control with Mixed Traffic Conditions
Ramp metering, a traditional traffic control strategy for conventional
vehicles, has been widely deployed around the world since the 1960s. On the
other hand, the last decade has witnessed significant advances in connected and
automated vehicle (CAV) technology and its great potential for improving
safety, mobility and environmental sustainability. Therefore, a large amount of
research has been conducted on cooperative ramp merging for CAVs only. However,
it is expected that the phase of mixed traffic, namely the coexistence of both
human-driven vehicles and CAVs, would last for a long time. Since there is
little research on the system-wide ramp control with mixed traffic conditions,
the paper aims to close this gap by proposing an innovative system architecture
and reviewing the state-of-the-art studies on the key components of the
proposed system. These components include traffic state estimation, ramp
metering, driving behavior modeling, and coordination of CAVs. Together, the
reviewed literature paints an extensive landscape for the proposed system-wide
coordinated ramp control under mixed traffic conditions.

Comment: 8 pages, 1 figure, IEEE Intelligent Transportation Systems Conference - ITSC 201
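The ramp-metering component such a system builds on is classically realized with the ALINEA feedback law (a standard algorithm in the field, not specific to this paper), which nudges the metering rate toward the value that holds downstream occupancy at a set-point. A minimal sketch, with illustrative gain, set-point, and bounds:

```python
# One step of the ALINEA feedback ramp-metering law:
#   r(k) = r(k-1) + K_R * (o_hat - o(k)), clamped to physical limits.
def alinea_step(prev_rate, occupancy, setpoint=18.0, gain=70.0,
                r_min=200.0, r_max=1800.0):
    """Update the ramp metering rate (veh/h).

    prev_rate: previous metering rate (veh/h).
    occupancy: measured downstream occupancy (%); setpoint is the target.
    gain: feedback gain K_R (veh/h per percentage point of occupancy).
    """
    rate = prev_rate + gain * (setpoint - occupancy)
    return max(r_min, min(r_max, rate))

# Occupancy above the set-point (congestion) -> the rate is reduced:
rate = alinea_step(900.0, occupancy=20.0)  # 900 + 70 * (18 - 20) = 760.0
```

Under mixed traffic, the occupancy measurement would come from the traffic state estimation module the paper reviews, with CAVs acting as additional mobile sensors.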
Lost in Time: Temporal Analytics for Long-Term Video Surveillance
Video surveillance is a well researched area of study with substantial work
done in the aspects of object detection, tracking and behavior analysis. With
the abundance of video data captured over a long period of time, we can
understand patterns in human behavior and scene dynamics through data-driven
temporal analytics. In this work, we propose two schemes to perform descriptive
and predictive analytics on long-term video surveillance data. We generate
heatmap and footmap visualizations to describe spatially pooled trajectory
patterns with respect to time and location. We also present two approaches for
anomaly prediction at the day-level granularity: a trajectory-based statistical
approach, and a time-series based approach. Experimentation with one year data
from a single camera demonstrates the ability to uncover interesting insights
about the scene and to predict anomalies reasonably well.Comment: To Appear in Springer LNE
Drunk Driving Legislation and Traffic Fatalities: What Works and What Doesn’t?
This paper re-examines the effectiveness of Blood Alcohol Content (BAC) and Administrative License Revocation (ALR) laws in reducing traffic fatalities. Using difference-in-differences estimators on U.S. state-level data with standard errors corrected for autocorrelation, we find no evidence that lowering BAC limits to 0.08 grams per deciliter has reduced fatality rates, either in total or in alcohol-related crashes. On the other hand, ALR is found to be effective in reducing fatalities in all specifications. Endogeneity tests using event analyses indicate temporal causality of ALR laws.
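The paper's core estimator is difference-in-differences: the change in fatality rates in states that adopted a law, minus the change in states that did not. The toy 2x2 case below uses made-up fatality rates, not the paper's data:

```python
# 2x2 difference-in-differences: treated change minus control change.
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Return the DiD estimate of the treatment effect."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Fatalities per 100k fall by 2.0 in adopting states vs 1.5 elsewhere,
# so the estimated law effect is a further -0.5:
effect = diff_in_diff(treat_pre=15.0, treat_post=13.0,
                      ctrl_pre=14.0, ctrl_post=12.5)
# effect == -0.5
```

The full study generalizes this to many states and years via regression, which is what allows autocorrelation-corrected standard errors and event-study endogeneity tests.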
SOTIF Entropy: Online SOTIF Risk Quantification and Mitigation for Autonomous Driving
Autonomous driving confronts great challenges in complex traffic scenarios,
where the risk of Safety of the Intended Functionality (SOTIF) can be triggered
by the dynamic operational environment and system insufficiencies. The SOTIF
risk is reflected not only intuitively in the collision risk with objects
outside the autonomous vehicles (AVs), but also inherently in the performance
limitation risk of the implemented algorithms themselves. How to minimize the
SOTIF risk for autonomous driving is currently a critical, difficult, and
unresolved issue. Therefore, this paper proposes the "Self-Surveillance and
Self-Adaption System" as a systematic approach to minimizing the SOTIF risk
online, providing an integrated solution for the monitoring, quantification,
and mitigation of inherent and external risks. The core of this
system is the risk monitoring of the implemented artificial intelligence
algorithms within the AV. As a demonstration of the Self-Surveillance and
Self-Adaption System, the risk monitoring of the perception algorithm, i.e.,
YOLOv5 is highlighted. Moreover, the inherent perception algorithm risk and
external collision risk are jointly quantified via SOTIF entropy, which is then
propagated downstream to the decision-making module and mitigated. Finally,
several challenging scenarios are demonstrated, and the Hardware-in-the-Loop
experiments are conducted to verify the efficiency and effectiveness of the
system. The results demonstrate that the Self-Surveillance and Self-Adaption
System enables dependable online monitoring, quantification, and mitigation of
SOTIF risk in real-time critical traffic environments.

Comment: 16 pages, 10 figures, 2 tables, submitted to IEEE TIT
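The paper quantifies perception risk via "SOTIF entropy"; as an illustrative stand-in rather than the paper's exact formulation, the sketch below computes the Shannon entropy of a detector's normalized class-confidence distribution, so a flat (uncertain) distribution scores higher risk than a peaked (confident) one:

```python
import math

# Shannon entropy (bits) of a normalized confidence distribution:
# a proxy for how uncertain the perception module is about a detection.
def confidence_entropy(probs):
    """Return -sum(p * log2(p)) over nonzero probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

confident = confidence_entropy([0.97, 0.01, 0.01, 0.01])  # near 0 bits
uncertain = confidence_entropy([0.25, 0.25, 0.25, 0.25])  # 2 bits
```

In a monitoring loop like the paper's, such a score could be thresholded per frame to decide when the downstream decision-making module should fall back to conservative behavior.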