
    A Cooperative Perception Environment for Traffic Operations and Control

    Existing data collection methods for traffic operations and control usually rely on infrastructure-based loop detectors or probe vehicle trajectories. Connected and automated vehicles (CAVs) can report not only data about themselves but also the status of all detected surrounding vehicles. Integrating perception data from multiple CAVs as well as infrastructure sensors (e.g., LiDAR) can provide richer information even under a very low penetration rate. This paper develops a cooperative data collection system that integrates LiDAR point cloud data from both infrastructure and CAVs to create a cooperative perception environment for various transportation applications. State-of-the-art 3D detection models are applied to detect vehicles in the merged point cloud. We test the proposed cooperative perception environment with the max pressure adaptive signal control model in a co-simulation platform based on CARLA and SUMO. Results show that a very low CAV penetration rate plus an infrastructure sensor is sufficient to achieve performance comparable to a 30% or higher connected vehicle (CV) penetration rate. We also report the equivalent CV penetration rate (E-CVPR) under different CAV penetration rates to demonstrate the data collection efficiency of the cooperative perception environment.
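The max pressure control policy mentioned in this abstract can be illustrated with a minimal sketch: each signal phase serves a set of movements, and the controller activates the phase whose movements have the largest total "pressure" (upstream queue minus downstream queue). The phase layout, lane names, and queue values below are illustrative assumptions, not data from the paper.

```python
# Hedged sketch of max-pressure adaptive signal control.
# Phase/lane naming and queue counts are hypothetical examples.

def max_pressure_phase(phases, upstream, downstream):
    """Return the phase id whose served movements maximize total pressure.

    phases: dict mapping phase id -> list of (inbound_lane, outbound_lane)
    upstream/downstream: dict mapping lane -> queue length (vehicles)
    """
    def pressure(movements):
        # Pressure of a movement = upstream queue minus downstream queue.
        return sum(upstream[i] - downstream[o] for i, o in movements)
    return max(phases, key=lambda p: pressure(phases[p]))

# Example: a two-phase intersection (north-south vs. east-west).
phases = {
    "NS": [("n_in", "s_out"), ("s_in", "n_out")],
    "EW": [("e_in", "w_out"), ("w_in", "e_out")],
}
queues_up = {"n_in": 8, "s_in": 5, "e_in": 2, "w_in": 3}
queues_down = {"s_out": 1, "n_out": 0, "w_out": 4, "e_out": 2}
print(max_pressure_phase(phases, queues_up, queues_down))  # -> NS (pressure 12 vs -1)
```

In a cooperative perception setting, the queue estimates would come from vehicles detected in the merged point cloud rather than loop detectors.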

    Sensor Fusion for Object Detection and Tracking in Autonomous Vehicles

    Autonomous driving vehicles depend on their perception system to understand the environment and identify all static and dynamic obstacles surrounding the vehicle. The perception system in an autonomous vehicle uses the sensory data obtained from different sensor modalities to understand the environment and perform a variety of tasks such as object detection and object tracking. Combining the outputs of different sensors to obtain a more reliable and robust outcome is called sensor fusion. This dissertation studies the problem of sensor fusion for object detection and object tracking in autonomous driving vehicles and explores different approaches for utilizing deep neural networks to accurately and efficiently fuse sensory data from different sensing modalities. In particular, this dissertation focuses on fusing radar and camera data for 2D and 3D object detection and object tracking tasks. First, the effectiveness of radar and camera fusion for 2D object detection is investigated by introducing a radar region proposal algorithm for generating object proposals in a two-stage object detection network. The evaluation results show significant improvement in speed and accuracy compared to a vision-based proposal generation method. Next, radar and camera fusion is used for the task of joint object detection and depth estimation, where the radar data is used in conjunction with image features not only to generate object proposals but also to provide accurate depth estimates for the detected objects in the scene. A fusion algorithm is also proposed for 3D object detection, where the depth and velocity data obtained from the radar is fused with the camera images to detect objects in 3D and accurately estimate their velocities without requiring any temporal information. Finally, radar and camera sensor fusion is used for 3D multi-object tracking by introducing an end-to-end trainable and online network capable of tracking objects in real time.
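The radar region proposal idea described here can be sketched simply: each radar return is projected into the image plane with a pinhole camera model, and candidate boxes are centered at the projected point, scaled inversely with range (farther objects appear smaller). The camera intrinsics, box sizing, and scale set below are illustrative assumptions, not the dissertation's actual algorithm.

```python
# Hedged sketch of radar-based region proposals for a two-stage detector.
# Intrinsics (fx, cx, fy, cy) and the range-based box scaling are hypothetical.

def radar_proposals(radar_points, fx=800.0, cx=640.0, fy=800.0, cy=360.0,
                    base_size=400.0, scales=(0.5, 1.0, 2.0)):
    """radar_points: list of (x, y, z) in camera coordinates, z = forward range.

    Returns boxes as (u_min, v_min, u_max, v_max) in pixel coordinates.
    """
    boxes = []
    for x, y, z in radar_points:
        if z <= 0:
            continue  # skip returns behind the camera
        u, v = fx * x / z + cx, fy * y / z + cy   # pinhole projection
        for s in scales:
            half = s * base_size / z              # farther target -> smaller box
            boxes.append((u - half, v - half, u + half, v + half))
    return boxes

# A single return 10 m ahead on the optical axis yields 3 centered proposals.
print(radar_proposals([(0.0, 0.0, 10.0)]))
```

Compared with exhaustive anchor grids, proposals anchored on radar returns drastically cut the candidate count, which is consistent with the speed improvement the abstract reports.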

    Cooperative Perception for Social Driving in Connected Vehicle Traffic

    The development of autonomous vehicle technology has moved to the center of automotive research in recent decades. In the foreseeable future, road vehicles at all levels of automation and connectivity will be required to operate safely in a hybrid traffic where human-operated vehicles (HOVs) and fully and semi-autonomous vehicles (AVs) coexist. Having an accurate and reliable perception of the road is an important requirement for achieving this objective. This dissertation addresses some of the associated challenges by developing a human-like social driver model and devising a decentralized cooperative perception framework. A human-like driver model can aid the development of AVs by building an understanding of interactions among human drivers and AVs in hybrid traffic, thereby facilitating an efficient and safe integration. The presented social driver model categorizes and defines the driver's psychological decision factors in mathematical representations (target force, object force, and lane force). A model predictive controller (MPC) is then employed for motion planning by evaluating the prevailing social forces and considering the kinematics of the controlled vehicle as well as other operating constraints to ensure a safe maneuver in a way that mimics the predictive nature of the human driver's decision-making process. A hierarchical model predictive control structure is also proposed, where an additional upper-level controller aggregates the social forces over a longer prediction horizon when an extended perception of the upcoming traffic is available via vehicular networking. Based on the prediction of the upper-level controller, a sequence of reference lanes is passed to a lower-level controller to track while avoiding local obstacles. This hierarchical scheme helps reduce unnecessary lane changes, resulting in smoother maneuvers.
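The three decision factors named in this abstract (target force, object force, lane force) can be illustrated as terms of a per-lane cost that a planner minimizes. The weights, the exponential repulsion form, and the lane indexing below are illustrative assumptions, not the dissertation's calibrated model, and the full method wraps such terms in an MPC rather than a one-shot lane choice.

```python
# Hedged sketch of combining social-force-style decision factors into a
# lane selection cost. All weights and functional forms are hypothetical.
import math

def best_lane(current, target, obstacles, lanes=(0, 1, 2),
              w_t=1.0, w_o=4.0, w_l=0.5):
    """obstacles: list of (lane, gap_m); smaller gaps repel more strongly."""
    def cost(lane):
        target_f = w_t * abs(lane - target)        # target force: pull to goal lane
        object_f = w_o * sum(math.exp(-g / 20.0)   # object force: obstacle repulsion
                             for l, g in obstacles if l == lane)
        lane_f = w_l * abs(lane - current)         # lane force: penalize changes
        return target_f + object_f + lane_f
    return min(lanes, key=cost)

# An obstacle 5 m ahead in the goal lane makes the middle lane preferable.
print(best_lane(current=0, target=2, obstacles=[(2, 5.0)]))  # -> 1
```

An MPC formulation would evaluate such costs over a prediction horizon subject to vehicle kinematics, rather than greedily per step as here.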
The dynamic vehicular communication environment requires a robust framework that consistently evaluates and exploits the set of communicated information to improve a participating vehicle's perception beyond the limitations of its onboard sensors. This dissertation presents a decentralized cooperative perception framework that accounts for uncertainties in traffic measurements and allows scalability (for various settings of traffic density, participation rate, etc.). The framework utilizes a Bhattacharyya distance filter (BDF) for data association and a fast covariance intersection (FCI) scheme for data fusion. The conservatism of the covariance intersection fusion scheme is investigated in comparison to the traditional Kalman filter (KF), and two different fusion architectures, sensor-to-sensor and sensor-to-system track fusion, are evaluated. The performance of the overall proposed framework is demonstrated via Monte Carlo simulations with a set of empirical communication models and traffic microsimulations, where each connected vehicle asynchronously broadcasts its local perception, consisting of estimates of the motion states of itself and neighboring vehicles along with the corresponding uncertainty measures. The evaluated framework includes a vehicle-to-vehicle (V2V) communication model that considers intermittent communications, as well as a model that accounts for dynamic changes in an individual vehicle's sensor field of view (FoV) in accordance with the prevailing traffic conditions. The results show that an optimal participation rate exists: increasing the participation rate beyond a certain level worsens packet delivery delay and increases the computational complexity of the data association and fusion processes without significantly improving the accuracy achieved via cooperative perception.
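The BDF-plus-FCI pipeline described in this paragraph can be sketched in a minimal form: a Bhattacharyya distance between two Gaussian track estimates gates the association, and covariance intersection with a fast trace-based weight fuses the associated pair. The diagonal-covariance simplification, the trace-based weight, and the gate threshold are illustrative assumptions; the dissertation's framework is more general.

```python
# Hedged sketch of Bhattacharyya-distance gating and fast covariance
# intersection (FCI) for diagonal-Gaussian track estimates.
import math

def bhattacharyya_gate(x1, p1, x2, p2, thresh=1.0):
    """Gate association of two tracks (lists of means and diagonal variances)."""
    d = 0.0
    for a, pa, b, pb in zip(x1, p1, x2, p2):
        s = 0.5 * (pa + pb)
        d += 0.125 * (a - b) ** 2 / s + 0.5 * math.log(s / math.sqrt(pa * pb))
    return d < thresh

def fast_ci(x1, p1, x2, p2):
    """Fuse two estimates with covariance intersection, fast trace-based weight.

    CI stays consistent under unknown cross-correlation, at the cost of being
    more conservative than a Kalman update that assumes independence.
    """
    w = sum(p2) / (sum(p1) + sum(p2))                  # fast weight from traces
    p_inv = [w / a + (1 - w) / b for a, b in zip(p1, p2)]
    p = [1.0 / v for v in p_inv]
    x = [pi * (w * a / pa + (1 - w) * b / pb)
         for pi, a, pa, b, pb in zip(p, x1, p1, x2, p2)]
    return x, p

# Fusing a precise local track with a coarser received one.
if bhattacharyya_gate([0.0], [1.0], [2.0], [4.0], thresh=2.0):
    print(fast_ci([0.0], [1.0], [2.0], [4.0]))
```

Note that, unlike a Kalman update, CI of two identical estimates does not shrink the covariance, which is exactly the conservatism the abstract compares against the KF.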
In a highly dense traffic environment, the vehicular network can often be congested, leaving limited bandwidth available at high participation rates in the cooperative perception scheme. To alleviate these bandwidth utilization issues, an information-value discriminating networking scheme is proposed, in which each sender broadcasts selectively chosen perception data based on the novelty value of the information. The potential benefits of this approach include, but are not limited to, reduced bandwidth bottlenecking and a lower computational cost of data association and fusion post-processing of the shared perception data at receiving nodes. It is argued that the proposed information-value discriminating communication scheme can alleviate these adverse effects without sacrificing the fidelity of the perception.

    Limited Visibility and Uncertainty Aware Motion Planning for Automated Driving

    Adverse weather conditions and occlusions in urban environments result in impaired perception. The resulting uncertainties are handled in different modules of an automated vehicle, ranging from the sensor level through situation prediction to motion planning. This paper focuses on motion planning given an uncertain environment model with occlusions. We present a method that remains collision-free for the worst-case evolution of the given scene. We define criteria that measure the available margins to a collision while considering visibility and interactions, and integrate conditions that apply these criteria into an optimization-based motion planner. We show the generality of our method by validating it in several distinct urban scenarios.
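The worst-case reasoning described above can be illustrated with a simple occlusion check: assume a hidden road user appears at the visibility edge travelling at a bounding speed, and accept the ego speed only if the ego can either stop before the conflict point or clear it with margin. All parameters (hidden-agent speed, decelerations, the 1 s margin) are illustrative assumptions, not the paper's criteria, which are embedded in an optimization-based planner.

```python
# Hedged sketch of a worst-case occlusion-aware safety check.
# v_hidden = 13.9 m/s (~50 km/h) and a_ego = 6 m/s^2 are hypothetical bounds.

def is_safe_speed(v_ego, d_ego, d_occ, v_hidden=13.9, a_ego=6.0):
    """d_ego: ego distance to the conflict point [m];
    d_occ: hidden agent's distance from the visibility edge to it [m]."""
    d_stop = v_ego ** 2 / (2.0 * a_ego)     # ego braking distance
    if d_stop < d_ego:
        return True                          # can still stop short of the conflict
    t_hidden = d_occ / v_hidden              # earliest worst-case arrival
    t_ego = d_ego / max(v_ego, 1e-6)
    return t_ego + 1.0 < t_hidden            # otherwise clear it with 1 s margin

# 10 m/s with 20 m to the conflict point: braking distance ~8.3 m, safe.
print(is_safe_speed(10.0, 20.0, 30.0))  # -> True
```

A planner would apply such a margin criterion along each candidate trajectory and fall back to a braking maneuver whenever it fails, which mirrors the "remain collision-free for the worst case" guarantee of the abstract.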