Spatiotemporal Learning of Multivehicle Interaction Patterns in Lane-Change Scenarios
Interpreting common yet challenging interaction scenarios can support
well-founded decision-making for autonomous vehicles. Previous research
achieved this by encoding prior knowledge of specific scenarios into
predefined models, which limits adaptability. This paper describes a Bayesian
nonparametric approach that leverages continuous (i.e., Gaussian processes) and
discrete (i.e., Dirichlet processes) stochastic processes to reveal underlying
interaction patterns of the ego vehicle with other nearby vehicles. Our model
relaxes dependency on the number of surrounding vehicles by developing an
acceleration-sensitive velocity field based on Gaussian processes. The
experimental results demonstrate that the velocity field can represent the
spatial interactions between the ego vehicle and its surroundings. Then, a
discrete Bayesian nonparametric model, integrating Dirichlet processes and
hidden Markov models, is developed to learn the interaction patterns over the
temporal space by segmenting and clustering the sequential interaction data
into interpretable granular patterns automatically. We then evaluate our
approach on highway lane-change scenarios using the highD dataset collected
in real-world settings. Results demonstrate that the proposed Bayesian
nonparametric approach provides insight into the complicated lane-change
interactions of the ego vehicle with multiple surrounding traffic participants
through interpretable interaction patterns and their temporal transition
properties. The approach also sheds light on efficiently
analyzing other kinds of multi-agent interactions, such as vehicle-pedestrian
interactions. View the demos via https://youtu.be/z_vf9UHtdAM.
Comment: for the supplements, see https://chengyuan-zhang.github.io/Multivehicle-Interaction
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.
Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
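A useful baseline against which surveyed predictors are commonly compared is constant-velocity extrapolation of the observed track. This sketch assumes fixed-rate 2-D position observations; the sampling interval is an illustrative choice:

```python
import numpy as np

def constant_velocity_predict(track, horizon, dt=0.4):
    """Extrapolate the last observed velocity over `horizon` future steps.

    track: (T, 2) array of (x, y) positions sampled every `dt` seconds.
    Returns an (horizon, 2) array of predicted positions.
    """
    v = (track[-1] - track[-2]) / dt                 # last-step velocity
    steps = np.arange(1, horizon + 1)[:, None] * dt  # (horizon, 1) offsets
    return track[-1] + steps * v

obs = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
pred = constant_velocity_predict(obs, horizon=3, dt=1.0)
# pred: [[3., 1.5], [4., 2.], [5., 2.5]]
```

Despite its simplicity, this baseline is hard to beat at short horizons, which is one reason the survey emphasizes contextual information for longer-term prediction.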
Imitating Driver Behavior with Generative Adversarial Networks
The ability to accurately predict and simulate human driving behavior is
critical for the development of intelligent transportation systems. Traditional
modeling methods have employed simple parametric models and behavioral cloning.
This paper adopts a method for overcoming the problem of cascading errors
inherent in prior approaches, resulting in realistic behavior that is robust to
trajectory perturbations. We extend Generative Adversarial Imitation Learning
to the training of recurrent policies, and we demonstrate that our model
outperforms rule-based controllers and maximum likelihood models in realistic
highway simulations. Our model reproduces emergent behaviors of human
drivers, such as lane-change rate, while maintaining realistic control over
long time horizons.
Comment: 8 pages, 6 figures
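The adversarial setup can be caricatured as a discriminator scoring state-action pairs while the policy is rewarded for fooling it. This NumPy sketch uses a plain logistic discriminator on synthetic features and one common reward convention; it is purely illustrative and not the paper's actual training pipeline:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_step(w, expert_sa, policy_sa, lr=0.1):
    """One logistic step: push expert pairs toward 1, policy pairs toward 0."""
    grad = (expert_sa.T @ (sigmoid(expert_sa @ w) - 1.0)
            + policy_sa.T @ sigmoid(policy_sa @ w)) / len(expert_sa)
    return w - lr * grad

def surrogate_reward(w, sa):
    """GAIL-style surrogate reward: high where the discriminator is fooled."""
    return -np.log(1.0 - sigmoid(sa @ w) + 1e-8)

# synthetic state-action features for expert and early-training policy
rng = np.random.default_rng(0)
expert = rng.normal(1.0, 0.3, size=(256, 4))
policy = rng.normal(-1.0, 0.3, size=(256, 4))
w = np.zeros(4)
for _ in range(200):
    w = discriminator_step(w, expert, policy)
```

In the full method, the surrogate reward trains a recurrent policy with a reinforcement-learning step, which is what avoids the cascading errors of behavioral cloning.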
MSS-DepthNet: Depth Prediction with Multi-Step Spiking Neural Network
Event cameras are considered to have great potential for computer vision and
robotics applications because of their high temporal resolution and low power
consumption characteristics. However, the event stream output from event
cameras has asynchronous, sparse characteristics that existing computer vision
algorithms cannot handle. Spiking neural networks (SNNs) are a novel
event-based computational paradigm considered well suited to processing
event-camera tasks. However, direct training of deep SNNs suffers from
degradation problems. This work addresses these problems by proposing a
spiking neural network architecture that combines a novel residual block with
multi-dimensional attention modules, focusing on the problem of depth
prediction. In addition, a novel event-stream representation method is
proposed explicitly for SNNs. The model outperforms previous ANNs of the same
size on the MVSEC dataset and shows great computational efficiency.
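At the core of such architectures is the leaky integrate-and-fire (LIF) neuron, which accumulates input over timesteps and emits a spike when its membrane potential crosses a threshold. This minimal simulation (decay constant and threshold are illustrative choices) shows the dynamics, leaving out the surrogate-gradient machinery used for training:

```python
import numpy as np

def lif_forward(inputs, tau=2.0, v_th=1.0):
    """Simulate one leaky integrate-and-fire neuron over T timesteps.

    inputs: (T,) input currents. Returns (spike train, membrane trace).
    The membrane leaks by a factor (1 - 1/tau) per step and resets to 0
    after a spike.
    """
    v, spikes, trace = 0.0, [], []
    for x in inputs:
        v = v * (1.0 - 1.0 / tau) + x   # leak, then integrate input
        s = float(v >= v_th)            # hard threshold -> binary spike
        v = v * (1.0 - s)               # reset membrane on spike
        spikes.append(s)
        trace.append(v)
    return np.array(spikes), np.array(trace)

# constant sub-threshold input: the neuron charges up, fires once, resets
spikes, trace = lif_forward(np.array([0.6, 0.6, 0.6, 0.0, 0.0]))
```

The sparse, binary spike train is what makes SNNs a natural match for the asynchronous, sparse output of event cameras.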
A new integrated collision risk assessment methodology for autonomous vehicles
Real-time risk assessment of autonomous driving at the tactical and operational levels is extremely challenging, since both contextual and circumferential factors should be considered concurrently. Recent methods have started to treat the context of the traffic environment simultaneously with vehicle dynamics. In particular, interaction-aware motion models that take inter-vehicle dependencies into account by utilizing Bayesian inference are employed to mutually control multiple factors. However, communication between vehicles is often assumed, and the developed models require many parameters to be tuned; consequently, they are computationally very demanding. Even where these desiderata are fulfilled, current approaches cannot cope with a large volume of sequential data from organically changing traffic scenarios, especially in highly complex operational environments such as dense urban areas with heterogeneous road users. To overcome these limitations, this paper develops a new risk assessment methodology that integrates a network-level collision estimate with a vehicle-based risk estimate in real time, under the joint framework of interaction-aware motion models and Dynamic Bayesian Networks (DBN). Following the formulation and explanation of the required functions, machine learning classifiers were utilized for real-time network-level collision prediction, and the results were then incorporated into the integrated DBN model for predicting collision probabilities in real time. Results indicated an enhancement of the interaction-aware model by up to 10% when traffic conditions are deemed collision-prone. Hence, it was concluded that a well-calibrated collision prediction classifier provides a crucial hint for better risk perception by autonomous vehicles.
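The integration step can be caricatured as a Bayesian odds update: the network-level classifier supplies a prior collision probability, and the vehicle-level interaction-aware model supplies a likelihood ratio for the observed dynamics. The numbers below are illustrative, not taken from the paper:

```python
def fuse_collision_risk(p_network, lr_vehicle):
    """Combine a network-level collision probability with a vehicle-level
    likelihood ratio via a Bayesian odds update.

    p_network:  prior P(collision) from the traffic-level classifier.
    lr_vehicle: P(evidence | collision) / P(evidence | no collision)
                from the interaction-aware motion model.
    """
    prior_odds = p_network / (1.0 - p_network)
    post_odds = prior_odds * lr_vehicle     # Bayes' rule in odds form
    return post_odds / (1.0 + post_odds)    # back to a probability

# a collision-prone network state (0.3) plus risky vehicle dynamics (LR = 4)
risk = fuse_collision_risk(0.3, 4.0)
```

The point of the fusion is directional: a collision-prone network-level prediction raises the posterior risk even when the vehicle-level evidence alone is ambiguous.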
Understanding Vehicular Traffic Behavior from Video: A Survey of Unsupervised Approaches
Recent emerging trends for automatic behavior analysis and understanding from infrastructure video are reviewed. Research has shifted away from high-resolution estimation of vehicle state and toward machine learning approaches that extract meaningful patterns in aggregate in an unsupervised fashion. These patterns represent priors on observable motion, which can be utilized to describe a scene, answer behavioral questions such as where a vehicle is going and how many vehicles are performing the same action, and detect abnormal events. The review focuses on two main methods for scene description: trajectory clustering and topic modeling. Example applications that utilize these behavioral modeling techniques are presented, along with the most popular public datasets for behavioral analysis. Discussion and comments on future directions in the field are also provided.
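Trajectory clustering, one of the two reviewed scene-description methods, is often reduced to resampling each track to a fixed length and clustering the flattened coordinates. This sketch uses toy straight-through and turning tracks and a small k-means with deterministic initialization; both the resampling scheme and the clusterer are illustrative choices among several used in the literature:

```python
import numpy as np

def resample(traj, n=8):
    """Linearly resample a (T, 2) trajectory to n points (by index)."""
    idx = np.linspace(0, len(traj) - 1, n)
    lo, hi = np.floor(idx).astype(int), np.ceil(idx).astype(int)
    frac = (idx - lo)[:, None]
    return traj[lo] * (1 - frac) + traj[hi] * frac

def kmeans(X, k=2, iters=20):
    """Tiny k-means; centers seeded from evenly spaced samples."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

# two synthetic motion patterns: straight-through vs. turning tracks
t = np.linspace(0, 1, 20)[:, None]
straight = [np.hstack([t * 10, np.zeros_like(t)])
            + np.random.default_rng(i).normal(0, 0.1, (20, 2)) for i in range(5)]
turns = [np.hstack([t * 5, t**2 * 8])
         + np.random.default_rng(i).normal(0, 0.1, (20, 2)) for i in range(5)]
X = np.stack([resample(tr).ravel() for tr in straight + turns])
labels = kmeans(X, k=2)
```

Each recovered cluster acts as a prior on observable motion: new tracks can be labeled by their nearest cluster, and poorly matching tracks flagged as abnormal.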
Dynamic Switching State Systems for Visual Tracking
This work addresses the problem of how to capture the dynamics of maneuvering objects for visual tracking. Toward this end, the perspectives of recursive Bayesian filters and of deep learning approaches for state estimation are considered, and their functional viewpoints are brought together.
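On the recursive-Bayesian-filter side, the canonical instance is the Kalman filter, which alternates a motion-model prediction with a measurement update. This 1-D constant-velocity sketch (all matrices are illustrative) shows one predict-update cycle:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict-update cycle of a linear Kalman filter."""
    # predict: propagate state and covariance through the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # update: correct with measurement z
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# constant-velocity model in 1-D: state = [position, velocity]
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])                # we observe position only
Q = 0.01 * np.eye(2)                      # process noise
R = np.array([[0.25]])                    # measurement noise
x, P = np.zeros(2), np.eye(2)
for z in [1.0, 2.0, 3.0, 4.0]:            # object moving ~1 unit/step
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
```

The fixed linear motion model F is exactly what breaks down for maneuvering objects, which motivates switching between multiple dynamic models or learning them, as the work investigates.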