Improving Autonomous Vehicle Mapping and Navigation in Work Zones Using Crowdsourcing Vehicle Trajectories
Prevalent solutions for Connected and Autonomous Vehicle (CAV) mapping include high-definition maps (HD maps) and real-time Simultaneous Localization and Mapping (SLAM). Both methods rely only on the vehicle itself (onboard sensors or embedded maps) and cannot adapt well to temporarily changed drivable areas such as work zones. Navigating CAVs in such areas depends heavily on how the vehicle defines drivable areas from perception information, and improving perception accuracy and ensuring correct interpretation of perception results remain challenging in these situations. This paper presents a prototype that introduces crowdsourced trajectory information into the mapping process to enhance the CAV's understanding of the drivable area and traffic rules. A Gaussian Mixture Model (GMM) is applied to construct the temporarily changed drivable area and an occupancy grid map (OGM) based on crowdsourced trajectories. The proposed method is compared with SLAM without any human driving information. Our method integrated well with the downstream path-planning and vehicle-control modules, and the CAV did not violate driving rules, which a pure SLAM method failed to achieve. Comment: Presented at TRBAM. Journal version in progress.
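The GMM-based mapping step described above can be sketched as follows. This is a minimal illustration, assuming scikit-learn's GaussianMixture; the grid size, extent, component count, and density threshold are arbitrary illustrative choices, not values from the paper.

```python
# Hedged sketch: fit a Gaussian Mixture Model to crowdsourced (x, y)
# trajectory points and mark grid cells with high trajectory density as
# drivable. All parameters below are illustrative, not from the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

def drivable_ogm(trajectory_points, grid_size=20, extent=10.0,
                 n_components=3, density_threshold=0.01):
    """Build a binary occupancy grid: True = drivable (high trajectory density)."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(trajectory_points)
    # Evaluate the fitted density at each grid-cell center.
    xs = np.linspace(0.0, extent, grid_size)
    ys = np.linspace(0.0, extent, grid_size)
    xx, yy = np.meshgrid(xs, ys)
    cells = np.column_stack([xx.ravel(), yy.ravel()])
    density = np.exp(gmm.score_samples(cells)).reshape(grid_size, grid_size)
    return density > density_threshold

# Synthetic "crowdsourced" trajectories along a shifted lane through a work zone.
rng = np.random.default_rng(0)
lane = np.column_stack([np.linspace(0.0, 10.0, 200),
                        5.0 + 0.3 * rng.standard_normal(200)])
grid = drivable_ogm(lane)
```

Cells along the observed trajectories come out drivable while the rest of the grid stays blocked, which is the behavior the paper relies on for temporarily shifted lanes.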
Freeway travel time estimation based on the General Motors model: a genetic algorithm calibration framework
Estimating Freeway Travel Times using the General Motors Model
Travel time is a key transportation performance measure because of its diverse applications. Various modeling approaches to estimating freeway travel time have been well developed due to the widespread installation of intelligent transportation system sensors. However, estimating accurate travel time using existing freeway travel time models is still challenging under congested conditions. Therefore, this study aimed to develop an innovative freeway travel time estimation model based on the General Motors (GM) car-following model. Since the GM model is usually used in a microsimulation environment, the concepts of virtual leading and virtual following vehicles are proposed to allow the GM model to be used in macroscale environments using aggregated traffic sensor data. Travel time data collected from three study corridors on I-270 in Saint Louis, Missouri, were used to verify the estimated travel times produced by the proposed General Motors travel time estimation (GMTTE) model against two existing models, the instantaneous model and the time-slice model. The results showed that the GMTTE model outperformed the two existing models, with lower mean absolute percentage errors of 1.62% in free-flow conditions and 6.66% in two congested conditions. Overall, the GMTTE model demonstrated its robustness and accuracy for estimating freeway travel times.
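The virtual-leader idea can be sketched as follows: a virtual following vehicle is stepped through the corridor with the GM car-following model, where the leader's speed and spacing for each segment come from aggregated sensor data. The parameter values (alpha, m, l), segment data, and function names are illustrative assumptions; the paper calibrates the model parameters (e.g., with a genetic algorithm).

```python
# Hedged sketch of GM-model-based travel time estimation with a virtual
# leader per segment. Parameters and segment data are illustrative only.
def gm_acceleration(v_follow, v_lead, spacing, alpha=13.0, m=1.0, l=2.0):
    """General Motors model: a = alpha * v_f^m * (v_l - v_f) / spacing^l."""
    return alpha * (v_follow ** m) * (v_lead - v_follow) / (spacing ** l)

def travel_time(segments, v0=10.0, dt=1.0, max_steps=10_000):
    """Integrate a virtual follower through segments of
    (length_m, leader_speed_mps, spacing_m) and return total time in seconds."""
    t, v = 0.0, v0
    for length, v_lead, spacing in segments:
        x, steps = 0.0, 0
        while x < length and steps < max_steps:
            a = gm_acceleration(v, v_lead, spacing)
            v = max(v + a * dt, 0.1)  # keep speed positive
            x += v * dt
            t += dt
            steps += 1
    return t

# One free-flow and one congested segment (lengths in meters, speeds in m/s).
tt_free = travel_time([(500.0, 25.0, 50.0)])
tt_cong = travel_time([(500.0, 5.0, 20.0)])
```

The follower accelerates toward a fast, distant virtual leader in free flow and decelerates behind a slow, close one in congestion, so the congested segment yields the longer travel time, as expected.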
Development of an Augmented Reality Environment for Connected and Automated Vehicle Testing
Technical Report

Currently closed Connected and Automated Vehicle (CAV) testing facilities, such as Mcity, merely provide empty roadways, in which test CAVs can only interact with each other and the infrastructure (e.g., traffic signals). However, a complete testing environment should also include background traffic for the test CAVs to interact with. Involving real vehicles as background traffic is not only costly but also difficult to coordinate and control. To address this limitation, in this project we develop an augmented reality testing environment in which background traffic is generated in microscopic simulation and provided to test CAVs to augment the functionality of the test facility. The augmented reality environment combines the real-world testing facility and a simulation platform: movements of test CAVs and traffic signals in the real world are synchronized in simulation, while simulated traffic information is provided to the test CAVs' communication systems. Test CAVs "think" they are surrounded by other vehicles and adjust their behavior accordingly. This technology provides a realistic traffic environment to the CAVs, so that test scenarios that require interactions with other vehicles or pedestrians can be performed. Compared to using real vehicles, simulated vehicles can be easily controlled and manipulated to generate different scenarios at much lower cost in a safe environment.

https://deepblue.lib.umich.edu/bitstream/2027.42/149453/3/CCAT Project 2 Final Report_rvsd_2019_06_05.pdf
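The synchronization loop at the core of the augmented reality environment can be sketched as follows. The class, function names, and toy simulator below are illustrative stand-ins, not the project's actual API (which couples a real test facility with a microscopic simulator).

```python
# Hedged sketch of one augmented-reality co-simulation step: real CAV poses
# are mirrored into the simulation, background traffic is advanced, and the
# simulated vehicles are returned as messages for the CAVs' comm systems.
from dataclasses import dataclass

@dataclass
class VehicleState:
    vid: str
    x: float
    y: float
    speed: float

def ar_step(real_cav_states, sim_background, sim_step):
    """1) Mirror real CAV poses into the simulation.
    2) Advance simulated background traffic.
    3) Broadcast simulated vehicles back to the test CAVs (stand-in for V2V)."""
    sim_background = sim_step(real_cav_states, sim_background)
    return [{"id": v.vid, "x": v.x, "y": v.y, "speed": v.speed}
            for v in sim_background]

# Trivial stand-in simulator: background cars move forward at constant speed.
def toy_sim(real_cavs, background, dt=0.1):
    return [VehicleState(v.vid, v.x + v.speed * dt, v.y, v.speed)
            for v in background]

cav = [VehicleState("cav1", 0.0, 0.0, 5.0)]
bg = [VehicleState("bg1", 10.0, 0.0, 8.0)]
msgs = ar_step(cav, bg, toy_sim)
```

Running this loop at the facility's control frequency is what lets the test CAVs "see" simulated background traffic as if it were broadcast by real surrounding vehicles.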
A Cooperative Perception Environment for Traffic Operations and Control
Existing data collection methods for traffic operations and control usually rely on infrastructure-based loop detectors or probe vehicle trajectories. Connected and automated vehicles (CAVs) can not only report data about themselves but also provide the status of all detected surrounding vehicles. Integrating perception data from multiple CAVs as well as infrastructure sensors (e.g., LiDAR) can provide richer information even under a very low penetration rate. This paper aims to develop a cooperative data collection system that integrates LiDAR point cloud data from both infrastructure and CAVs to create a cooperative perception environment for various transportation applications. State-of-the-art 3D detection models are applied to detect vehicles in the merged point cloud. We test the proposed cooperative perception environment with the max-pressure adaptive signal control model in a co-simulation platform with CARLA and SUMO. Results show that a very low penetration rate of CAVs plus an infrastructure sensor is sufficient to achieve performance comparable to 30% or higher penetration rates of connected vehicles (CVs). We also report the equivalent CV penetration rate (E-CVPR) under different CAV penetration rates to demonstrate the data collection efficiency of the cooperative perception environment.
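The max-pressure policy used to evaluate the cooperative perception environment can be sketched as follows. In the paper's setting, the queue counts would come from vehicles detected in the merged LiDAR point cloud; the phase and movement names below are illustrative assumptions.

```python
# Hedged sketch of max-pressure adaptive signal control: serve the phase
# whose movements have the largest (upstream - downstream) queue imbalance.
def phase_pressure(movements, upstream_queue, downstream_queue):
    """Pressure of a phase: sum of (upstream - downstream) queues over its movements."""
    return sum(upstream_queue[m] - downstream_queue.get(m, 0) for m in movements)

def max_pressure_phase(phases, upstream_queue, downstream_queue):
    """Pick the phase with the highest pressure to serve next."""
    return max(phases,
               key=lambda p: phase_pressure(phases[p], upstream_queue, downstream_queue))

# Illustrative two-phase intersection with through movements only.
phases = {"NS_through": ["N_T", "S_T"], "EW_through": ["E_T", "W_T"]}
upstream = {"N_T": 8, "S_T": 6, "E_T": 2, "W_T": 3}      # detected queued vehicles
downstream = {"N_T": 1, "S_T": 0, "E_T": 2, "W_T": 1}
chosen = max_pressure_phase(phases, upstream, downstream)
# NS pressure = (8-1)+(6-0) = 13; EW pressure = (2-2)+(3-1) = 2
```

Because the policy needs only queue counts per movement, its performance degrades gracefully as detection coverage drops, which is why it is a natural benchmark for comparing cooperative perception against plain CV penetration rates.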