Estimating adaptive cruise control model parameters from on-board radar units
Two new methods are presented for estimating car-following model parameters
using data collected from Adaptive Cruise Control (ACC)-enabled vehicles.
The vehicle is assumed to follow a constant time headway relative velocity
model in which the parameters are unknown and must be determined. The first
technique is a batch method that uses a least-squares approach to estimate the
parameters from time series data of the vehicle speed, space gap, and relative
velocity of a lead vehicle. The second method is an online approach that uses a
particle filter to simultaneously estimate both the state of the system and the
model parameters. Numerical experiments demonstrate the accuracy and
computational performance of the methods relative to a commonly used
simulation-based optimization approach. The methods are also assessed on
empirical data collected from a 2019 model year ACC vehicle driven in a highway
environment. Speed, space gap, and relative velocity data are recorded directly
from the factory-installed radar unit via the vehicle's CAN bus. All three
methods return similar mean absolute error values in speed and spacing compared
to the recorded data. The least-squares method has the fastest run time and is
up to three orders of magnitude faster than the other methods. The
particle filter runs faster than real time and is therefore suitable for
streaming applications in which the datasets can grow arbitrarily large.

Comment: Accepted for poster presentation at the Transportation Research Board
2020 Annual Meeting, Washington, D.C.
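The batch step can be sketched as follows. A common form of the constant time headway relative velocity (CTH-RV) model is v'(t) = k1*(s - tau*v) + k2*(u - v), which is linear in the reparameterization (k1, k1*tau, k2), so ordinary least squares applies directly to the discretized time series. The function below is an illustrative sketch under that assumed model form, not the paper's exact implementation:

```python
import numpy as np

def fit_cth_rv(v, s, dv, dt):
    """Batch least-squares fit of a CTH-RV car-following model,
    v'(t) = k1*(s - tau*v) + k2*dv, which is linear in
    theta = (k1, k1*tau, k2).

    v:  follower speed series
    s:  space gap series
    dv: relative velocity series (lead minus follower)
    dt: sampling period
    """
    # forward-difference acceleration as the regression target
    a = np.diff(v) / dt
    # regressor columns multiply k1, k1*tau, and k2 respectively
    X = np.column_stack([s[:-1], -v[:-1], dv[:-1]])
    theta, *_ = np.linalg.lstsq(X, a, rcond=None)
    k1, k1tau, k2 = theta
    return k1, k1tau / k1, k2  # (k1, tau, k2)
```

On data generated by the same discrete-time model, the fit recovers the parameters exactly; on noisy radar data it returns the least-squares optimum over the reparameterized model.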
Polygon Intersection-over-Union Loss for Viewpoint-Agnostic Monocular 3D Vehicle Detection
Monocular 3D object detection is a challenging task because depth information
is difficult to obtain from 2D images. A subset of viewpoint-agnostic monocular
3D detection methods also do not explicitly leverage scene homography or
geometry during training, meaning that a model trained in this way can detect
objects in images from arbitrary viewpoints. Such works predict the projections
of the 3D bounding boxes on the image plane to estimate the location of the 3D
boxes, but these projections are not rectangular, so the calculation of IoU
between these projected polygons is not straightforward. This work proposes an
efficient, fully differentiable algorithm for the calculation of IoU between
two convex polygons, which can be utilized to compute the IoU between two 3D
bounding box footprints viewed from an arbitrary angle. We test the performance
of the proposed polygon IoU loss (PIoU loss) on three state-of-the-art
viewpoint-agnostic 3D detection models. Experiments demonstrate that the
proposed PIoU loss converges faster than L1 loss and that in 3D detection
models, a combination of PIoU loss and L1 loss gives better results than L1
loss alone (+1.64% AP70 for MonoCon on cars, +0.18% AP70 for RTM3D on cars, and
+0.83%/+2.46% AP50/AP25 for MonoRCNN on cyclists).
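The underlying computation can be sketched in plain NumPy (without the differentiability machinery the paper develops): clip one convex polygon against the other with Sutherland-Hodgman, take the shoelace area of the intersection, and divide by the union. Function names here are illustrative, not the paper's API:

```python
import numpy as np

def _side(a, b, p):
    """Signed area term: > 0 when p lies left of the directed edge a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def _clip(poly, a, b):
    """One Sutherland-Hodgman step: keep the part of poly left of edge a -> b."""
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        sp, sq = _side(a, b, p), _side(a, b, q)
        if sp >= 0:
            out.append(p)
        if sp * sq < 0:  # the edge p -> q crosses the clipping line
            t = sp / (sp - sq)
            out.append(p + t * (q - p))
    return out

def _area(poly):
    """Shoelace area of a polygon given in counter-clockwise order."""
    x = np.array([p[0] for p in poly])
    y = np.array([p[1] for p in poly])
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def polygon_iou(p1, p2):
    """IoU of two convex polygons, each a list of CCW (x, y) vertices."""
    p1 = [np.asarray(v, dtype=float) for v in p1]
    p2 = [np.asarray(v, dtype=float) for v in p2]
    inter = p1
    for i in range(len(p2)):
        if not inter:
            break
        inter = _clip(inter, p2[i], p2[(i + 1) % len(p2)])
    ai = _area(inter) if len(inter) >= 3 else 0.0
    return ai / (_area(p1) + _area(p2) - ai)
```

For example, two unit squares offset by half a side share an intersection of 0.5 and a union of 1.5, giving an IoU of 1/3. Making each step differentiable (as the paper's PIoU loss requires) means expressing the clipping and area operations with smooth tensor operations rather than Python branching.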
The Interstate-24 3D Dataset: a new benchmark for 3D multi-camera vehicle tracking
This work presents a novel video dataset recorded from overlapping highway
traffic cameras along an urban interstate, enabling multi-camera 3D object
tracking in a traffic monitoring context. Data is released from 3 scenes
containing video from at least 16 cameras each, totaling 57 minutes in length.
A total of 877,000 3D bounding boxes and corresponding object tracklets are fully and
accurately annotated for each camera field of view and are combined into a
spatially and temporally continuous set of vehicle trajectories for each scene.
Lastly, existing algorithms are combined to benchmark a number of 3D
multi-camera tracking pipelines on the dataset, with results indicating that
the dataset is challenging due to the difficulty of matching objects traveling
at high speeds across cameras and heavy object occlusion, potentially for
hundreds of frames, during congested traffic. This work aims to enable the
development of accurate and automatic vehicle trajectory extraction algorithms,
which will play a vital role in understanding the impacts of autonomous vehicle
technologies on the safety and efficiency of traffic.
Virtual trajectories for I-24 MOTION: data and tools
This article introduces a new virtual trajectory dataset derived from the
I-24 MOTION INCEPTION v1.0.0 dataset to address challenges in analyzing large
but noisy trajectory datasets. Building on the concept of virtual trajectories,
we provide a Python implementation to generate virtual trajectories from large
raw datasets that are typically challenging to process due to their size. We
demonstrate the practical utility of these trajectories in assessing speed
variability and travel times across different lanes within the INCEPTION
dataset. The virtual trajectory dataset opens future research on traffic waves
and their impact on energy.
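The core idea of a virtual trajectory can be sketched as integrating a probe vehicle through a macroscopic speed field estimated from the raw data. The grid layout, Euler integration, and function name below are illustrative assumptions, not the released tool's API:

```python
import numpy as np

def virtual_trajectory(speed_field, t_grid, x_grid, x0, t_start):
    """Integrate a virtual probe vehicle through a macroscopic speed field.

    speed_field[i, j] ~ mean speed (m/s) at time t_grid[i], position x_grid[j].
    Forward-Euler integration of dx/dt = v(t, x); returns the probe's
    time stamps and positions until it exits the grid in time or space.
    """
    dt = t_grid[1] - t_grid[0]
    ts, xs = [t_start], [x0]
    i = int(np.searchsorted(t_grid, t_start))
    x = x0
    while i < len(t_grid) - 1 and x < x_grid[-1]:
        # look up the local mean speed at the probe's current position
        j = min(int(np.searchsorted(x_grid, x)), len(x_grid) - 1)
        x = x + speed_field[i, j] * dt
        i += 1
        ts.append(t_grid[i])
        xs.append(x)
    return np.array(ts), np.array(xs)
```

Per-lane travel times then follow directly: launch one probe per lane's speed field at the same start time and compare the times at which each crosses the segment boundary.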
Dissipation of stop-and-go waves via control of autonomous vehicles: Field experiments
Traffic waves are phenomena that emerge when the vehicular density exceeds a
critical threshold. Considering the presence of increasingly automated vehicles
in the traffic stream, a number of research activities have focused on the
influence of automated vehicles on the bulk traffic flow. In the present
article, we demonstrate experimentally that intelligent control of an
autonomous vehicle is able to dampen stop-and-go waves that can arise even in
the absence of geometric or lane-changing triggers. Specifically, our experiments
on a circular track with more than 20 vehicles show that traffic waves emerge
consistently, and that they can be dampened by controlling the velocity of a
single vehicle in the flow. We compare metrics for velocity, braking events,
and fuel economy across experiments. These experimental findings suggest a
paradigm shift in traffic management: flow control will be possible via a few
mobile actuators (less than 5%) long before a majority of vehicles have
autonomous capabilities.
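The control idea can be illustrated with a minimal sketch: instead of mirroring its leader's oscillations, the AV tracks a smoothed average of recently observed speeds, capped by a gap-dependent safe speed so it cannot close on its leader unsafely. This is a simplified stand-in for intuition only, not the controller used in the experiments:

```python
def smoothing_command(v_history, gap, v_lead, horizon=50,
                      max_decel=1.5, min_gap=5.0):
    """Illustrative wave-dampening velocity command (simplified sketch,
    not the experimental controller).

    v_history: recently observed speeds (m/s), most recent last
    gap:       current space gap to the leader (m)
    v_lead:    current leader speed (m/s)
    """
    # target the running average of recent speeds to absorb oscillations
    window = v_history[-horizon:]
    v_target = sum(window) / len(window)
    # safe speed: the AV could still stop within the usable gap at
    # max_decel even if the leader were moving at v_lead
    usable = max(gap - min_gap, 0.0)
    v_safe = v_lead + (2.0 * max_decel * usable) ** 0.5
    return min(v_target, v_safe)
```

When traffic is smooth the average tracks the flow and the command is nearly a passthrough; when a wave arrives, the average lags the oscillation, so the AV decelerates earlier and more gently than a human follower, leaving a buffer that absorbs the wave.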
So you think you can track?
This work introduces a multi-camera tracking dataset consisting of 234 hours
of video data recorded concurrently from 234 overlapping HD cameras covering a
4.2 mile stretch of 8-10 lane interstate highway near Nashville, TN. The video
is recorded during a period of high traffic density with 500+ objects typically
visible within the scene and typical object longevities of 3-15 minutes. GPS
trajectories from 270 vehicle passes through the scene are manually corrected
in the video data to provide a set of ground-truth trajectories for
recall-oriented tracking metrics, and object detections are provided for each
camera in the scene (159 million total before cross-camera fusion). Initial
benchmarking of tracking-by-detection algorithms is performed against the GPS
trajectories, and a best HOTA of only 9.5% is obtained (best recall 75.9% at
IoU 0.1, 47.9 average IDs per ground truth object), indicating the benchmarked
trackers do not perform sufficiently well at the long temporal and spatial
durations required for traffic scene understanding.
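The recall-oriented side of such an evaluation can be illustrated with a minimal per-frame matcher (a simplified sketch, not the benchmark's actual metric code): a ground-truth box counts as recalled if some not-yet-used detection overlaps it at or above the IoU threshold.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(ix2 - ix1, 0.0) * max(iy2 - iy1, 0.0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def detection_recall(gt_boxes, det_boxes, iou_thresh=0.1):
    """Greedy recall: fraction of ground-truth boxes matched by some
    detection with IoU >= iou_thresh, each detection used at most once."""
    used, hits = set(), 0
    for g in gt_boxes:
        best_j, best_iou = -1, iou_thresh
        for j, d in enumerate(det_boxes):
            if j in used:
                continue
            iou = box_iou(g, d)
            if iou >= best_iou:
                best_j, best_iou = j, iou
        if best_j >= 0:
            used.add(best_j)
            hits += 1
    return hits / len(gt_boxes) if gt_boxes else 1.0
```

Metrics such as HOTA additionally penalize identity errors over time, which is why a tracker can score high per-frame recall (75.9% here) while its HOTA remains very low when objects change identity dozens of times across cameras.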