A Tempt to Unify Heterogeneous Driving Databases using Traffic Primitives
A multitude of publicly available driving datasets and data platforms have
been released for autonomous vehicles (AVs). However, the heterogeneity of
these databases in size, structure, and driving context makes existing
datasets practically ineffective, owing to the lack of uniform frameworks and
searchable indexes. To overcome these limitations of existing public datasets,
this paper proposes a data unification framework based on traffic primitives
with the ability to automatically unify and label heterogeneous traffic data.
This is achieved in two steps: 1) carefully arranging raw multidimensional
time-series driving data into a relational database, and then 2) automatically
extracting labeled and indexed traffic primitives from the traffic data
through a Bayesian nonparametric learning method. Finally, we evaluate the
effectiveness of the developed framework using real vehicle data.
Comment: 6 pages, 7 figures, 1 table, ITSC 201
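The two-step pipeline above can be sketched in a few lines. Everything here is an illustrative assumption: the schema, the synthetic trip, and the simple acceleration-threshold rule that stands in for the paper's Bayesian nonparametric primitive extraction.

```python
import sqlite3
import numpy as np

# Step 1: arrange multidimensional time-series driving data into a
# relational table (trip id, timestamp, speed, yaw rate). Schema is
# hypothetical, not the paper's.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE driving_data (
    trip_id INTEGER, t REAL, speed REAL, yaw_rate REAL)""")

rng = np.random.default_rng(0)
t = np.arange(0.0, 20.0, 0.1)                      # 200 samples at 10 Hz
# Synthetic trip: cruise, brake, cruise again (three regimes).
speed = np.concatenate([np.full(80, 15.0),
                        np.linspace(15.0, 5.0, 40),
                        np.full(80, 5.0)])
speed += rng.normal(0.0, 0.02, speed.size)
yaw_rate = rng.normal(0.0, 0.01, t.size)
conn.executemany("INSERT INTO driving_data VALUES (1, ?, ?, ?)",
                 zip(t.tolist(), speed.tolist(), yaw_rate.tolist()))

# Step 2: read the series back and cut it into labeled "primitives".
# Stand-in rule: a sample belongs to a braking/accelerating primitive
# when |accel| is large; the paper's Bayesian nonparametric model
# replaces this heuristic with learned, indexed primitives.
rows = conn.execute("SELECT t, speed FROM driving_data ORDER BY t").fetchall()
s = np.array([r[1] for r in rows])
accel = np.gradient(s, 0.1)
labels = np.where(np.abs(accel) > 1.0, 1, 0)       # 1 = decel/accel primitive
boundaries = np.flatnonzero(np.diff(labels)) + 1
print("segment boundaries (sample index):", boundaries.tolist())
```

Because the labels live alongside the raw rows in one relational store, the resulting primitive index is searchable with ordinary SQL, which is the practical payoff the abstract describes.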
A General Framework of Learning Multi-Vehicle Interaction Patterns from Videos
Semantic learning and understanding of multi-vehicle interaction patterns in
a cluttered driving environment are essential but challenging for autonomous
vehicles to make proper decisions. This paper presents a general framework to
gain insights into intricate multi-vehicle interaction patterns from bird's-eye
view traffic videos. We adopt a Gaussian velocity field to describe the
time-varying multi-vehicle interaction behaviors and then use deep autoencoders
to learn associated latent representations for each temporal frame. Then, we
utilize a hidden semi-Markov model with a hierarchical Dirichlet process as a
prior to segment these sequential representations into granular components,
also called traffic primitives, corresponding to interaction patterns.
Experimental results demonstrate that our proposed framework can extract
traffic primitives from videos, thus providing a semantic way to analyze
multi-vehicle interaction patterns, even in cluttered driving scenarios far
messier than those human beings can cope with.
Comment: 2019 IEEE Intelligent Transportation Systems Conference (ITSC)
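The Gaussian velocity field mentioned above can be illustrated with a minimal numpy sketch: each vehicle's velocity is spread over a bird's-eye grid with a Gaussian kernel, giving a fixed-size frame representation regardless of how many vehicles are present (the deep-autoencoder stage that follows in the paper is omitted; positions, velocities, and the kernel width are made-up values).

```python
import numpy as np

def gaussian_velocity_field(pos, vel, grid_x, grid_y, sigma=5.0):
    """pos, vel: (n_vehicles, 2) arrays; returns an (H, W, 2) field."""
    gx, gy = np.meshgrid(grid_x, grid_y)           # (H, W) grid coordinates
    field = np.zeros(gx.shape + (2,))
    weight = np.zeros(gx.shape)
    for p, v in zip(pos, vel):
        # Gaussian kernel centered on this vehicle's position.
        w = np.exp(-((gx - p[0])**2 + (gy - p[1])**2) / (2 * sigma**2))
        field += w[..., None] * v                  # kernel-weighted velocity
        weight += w
    # Normalize; the floor avoids division by zero far from all vehicles.
    return field / np.maximum(weight, 1e-9)[..., None]

# Two vehicles in one bird's-eye frame: one eastbound, one westbound.
pos = np.array([[10.0, 20.0], [40.0, 25.0]])
vel = np.array([[12.0, 0.0], [-10.0, 0.0]])
F = gaussian_velocity_field(pos, vel,
                            np.arange(0, 50, 2.0), np.arange(0, 40, 2.0))
print(F.shape)   # (20, 25, 2): same shape no matter how many vehicles
```

The fixed output shape is the point: a frame with two vehicles and a frame with ten both map to a (20, 25, 2) tensor, which is what makes a single autoencoder applicable to every frame of a cluttered video.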
Spatiotemporal Learning of Multivehicle Interaction Patterns in Lane-Change Scenarios
Interpretation of common-yet-challenging interaction scenarios can benefit
well-founded decisions for autonomous vehicles. Previous research achieved this
using their prior knowledge of specific scenarios with predefined models,
limiting their adaptive capabilities. This paper describes a Bayesian
nonparametric approach that leverages continuous (i.e., Gaussian processes) and
discrete (i.e., Dirichlet processes) stochastic processes to reveal underlying
interaction patterns of the ego vehicle with other nearby vehicles. Our model
relaxes dependency on the number of surrounding vehicles by developing an
acceleration-sensitive velocity field based on Gaussian processes. The
experimental results demonstrate that the velocity field can represent the
spatial interactions between the ego vehicle and its surroundings. Then, a
discrete Bayesian nonparametric model, integrating Dirichlet processes and
hidden Markov models, is developed to learn the interaction patterns over the
temporal space by segmenting and clustering the sequential interaction data
into interpretable granular patterns automatically. We then evaluate our
approach on highway lane-change scenarios using the highD dataset collected
in real-world settings. Results demonstrate that our proposed Bayesian
nonparametric approach provides insight into the complicated lane-change
interactions of the ego vehicle with multiple surrounding traffic
participants, based on the interpretable interaction patterns and their
temporal transition properties. Our proposed approach also sheds light on
efficiently analyzing other kinds of multi-agent interactions, such as
vehicle-pedestrian interactions. View the demos via
https://youtu.be/z_vf9UHtdAM.
Comment: for the supplements, see
https://chengyuan-zhang.github.io/Multivehicle-Interaction
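The continuous half of the model above, the Gaussian-process velocity field, can be roughed out as plain GP regression: observe velocities of surrounding vehicles at a few relative positions and query the posterior mean elsewhere. The RBF kernel, its hyperparameters, and the toy data are assumptions for illustration, not the paper's acceleration-sensitive formulation.

```python
import numpy as np

def rbf(A, B, ell=4.0, sf=1.0):
    """Squared-exponential kernel between two sets of 2-D positions."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

# Observations: (relative position -> longitudinal velocity) pairs,
# e.g. two vehicles in the ego lane and two in the adjacent lane.
X = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 3.5], [30.0, 3.5]])
y = np.array([10.0, 11.0, 14.0, 15.0])

noise = 0.1
K = rbf(X, X) + noise**2 * np.eye(len(X))
alpha = np.linalg.solve(K, y)          # K^{-1} y, reused for every query

# GP posterior mean of velocity at unobserved query positions.
Xq = np.array([[5.0, 0.0], [25.0, 3.5]])
mean = rbf(Xq, X) @ alpha
print(mean)
```

Note the shrinkage toward the zero prior mean between observations; in practice one would subtract a mean speed first. The appeal for interaction modeling is that the field is defined everywhere, so it does not depend on the number of surrounding vehicles.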
Learning to Segment and Represent Motion Primitives from Driving Data for Motion Planning Applications
Developing an intelligent vehicle that can perform human-like actions
requires the ability to learn basic driving skills from a large amount of
naturalistic driving data. Such learning algorithms become more efficient if
we can decompose complex driving tasks into motion primitives that represent
the elementary compositions of driving skills. Therefore, the purpose of this
paper
is to segment unlabeled trajectory data into a library of motion primitives. By
applying probabilistic inference based on an iterative
expectation-maximization (EM) algorithm, our method segments the collected
trajectories while learning a set of motion primitives represented by dynamic
movement primitives. The proposed method exploits the mutual dependency
between the segmentation and the representation of motion primitives,
together with a driving-specific initial segmentation; leveraging both, it
enhances the performance of the segmentation and of the motion primitive
library establishment. We also evaluate the applicability of the primitive
representation method to imitation learning and motion planning algorithms. The
model is trained and validated by using the driving data collected from the
Beijing Institute of Technology intelligent vehicle platform. The results show
that the proposed approach can find a proper segmentation and establish the
motion primitive library simultaneously.
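A dynamic movement primitive of the kind used above can be sketched in one dimension: fit Gaussian-basis weights for a forcing term from a demonstrated trajectory segment, then reproduce the segment by integrating the DMP forward. Gains, basis count, and the minimum-jerk demonstration are illustrative choices, not the paper's settings, and the EM segmentation loop is omitted.

```python
import numpy as np

dt, T = 0.01, 1.0
n = int(T / dt)
ay, by, ax = 25.0, 25.0 / 4.0, 1.0     # critically damped gains, phase decay

# Demonstration segment: a smooth minimum-jerk reach from 0 to ~1.
t = np.arange(n) * dt
s_ = t / T
y_demo = 10 * s_**3 - 15 * s_**4 + 6 * s_**5
yd = np.gradient(y_demo, dt)
ydd = np.gradient(yd, dt)

# Canonical phase variable and the forcing term the demo implies (tau = 1).
x = np.exp(-ax * t)
g, y0 = y_demo[-1], y_demo[0]
f_target = ydd - ay * (by * (g - y_demo) - yd)

# Fit Gaussian basis weights for the forcing term by least squares.
c = np.exp(-ax * np.linspace(0, T, 10))            # centers in phase space
h = 1.0 / np.diff(c, append=c[-1] / 2) ** 2        # widths from spacings
psi = np.exp(-h * (x[:, None] - c) ** 2)
scale = x * (g - y0)
M = (psi / psi.sum(1, keepdims=True)) * scale[:, None]
w = np.linalg.lstsq(M, f_target, rcond=None)[0]

# Reproduce the primitive by integrating the DMP forward (Euler).
y, v, traj = y0, 0.0, []
for k in range(n):
    f = (psi[k] @ w) / psi[k].sum() * scale[k]
    v += dt * (ay * (by * (g - y) - v) + f)        # transformation system
    y += dt * v
    traj.append(y)
```

Because the learned weights, start, and goal fully determine the rollout, each segmented trajectory piece compresses to a small parameter vector, which is what makes a library of such primitives practical for imitation learning and motion planning.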