
    A revised video vision transformer for traffic estimation with fleet trajectories

    Real-time traffic monitoring is a key component of transportation management. The increasing penetration rate of connected vehicles equipped with positioning devices encourages the use of trajectory data for real-time traffic monitoring, and commercial fleet trajectory data can be seen as a first step towards mobile sensing networks. The main objective of this research is to estimate the space occupancy of a single road segment from partially observed trajectories (commercial fleet trajectories in our case). We first formulate trajectory-based traffic estimation as a video computing problem. We then reconstruct the trajectory series into video-like data by performing spatial discretization, after which the video input is embedded using a tubelet embedding strategy. Finally, a Revised Video Vision Transformer (RViViT) is proposed to estimate the traffic state from the video embeddings. The proposed RViViT is tested on a public dataset of naturalistic vehicle trajectories collected on German highways around Cologne in 2017 and 2018. The results demonstrate the effectiveness of the proposed method for traffic estimation with partially observed trajectories.
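
    To make the described pipeline concrete, the following is a minimal sketch, not the authors' RViViT implementation: trajectories discretized onto a space-time grid form a video-like tensor, a tubelet embedding turns that tensor into tokens, and a transformer encoder regresses road-segment occupancy. All layer sizes, module names, and the occupancy head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TubeletTrafficEstimator(nn.Module):
    """Hypothetical tubelet-embedding transformer for occupancy estimation."""
    def __init__(self, frames=16, cells=64, tubelet=(2, 8), dim=128, depth=4, heads=4):
        super().__init__()
        # Tubelet embedding: a 3D convolution whose kernel and stride equal the
        # tubelet size maps non-overlapping space-time patches to tokens.
        # The "video" here is 1 channel x frames x cells x 1 (single road segment).
        self.embed = nn.Conv3d(1, dim, kernel_size=(tubelet[0], tubelet[1], 1),
                               stride=(tubelet[0], tubelet[1], 1))
        n_tokens = (frames // tubelet[0]) * (cells // tubelet[1])
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, 1)  # regress space occupancy in [0, 1]

    def forward(self, video):                        # video: (B, 1, frames, cells, 1)
        tokens = self.embed(video)                   # (B, dim, T', C', 1)
        tokens = tokens.flatten(2).transpose(1, 2)   # (B, n_tokens, dim)
        tokens = self.encoder(tokens + self.pos)
        return torch.sigmoid(self.head(tokens.mean(dim=1)))  # (B, 1)

# Example: a batch of 4 trajectory "videos" (16 time steps x 64 spatial cells).
occupancy = TubeletTrafficEstimator()(torch.rand(4, 1, 16, 64, 1))
```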

    Vision-Based Lane-Changing Behavior Detection Using Deep Residual Neural Network

    Accurate lane localization and lane-change detection are crucial in advanced driver assistance systems and autonomous driving systems for safer and more efficient trajectory planning. Conventional localization devices such as the Global Positioning System provide only road-level resolution for car navigation, which is insufficient for lane-level decision making. The state-of-the-art technique for lane localization uses Light Detection and Ranging (LiDAR) sensors to correct the global localization error and achieve centimeter-level accuracy, but real-time implementation and widespread adoption of LiDAR are still limited by its computational burden and current cost. As a cost-effective alternative, vision-based lane-change detection has been highly regarded as a way for affordable autonomous vehicles to support lane-level localization. A deep learning-based computer vision system is developed to detect lane-change behavior in highway driving using images captured by a front-view camera mounted on the vehicle and data from the inertial measurement unit. Testing on real-world driving data shows that the proposed method is robust, runs in real time, and achieves around 87% lane-change detection accuracy. Compared to the average human reaction to visual stimuli, the proposed computer vision system works 9 times faster, making it capable of helping make life-saving decisions in time.
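
    The sketch below illustrates one plausible realization of such a system, not the paper's exact model: a deep residual backbone extracts features from each front-camera frame, inertial measurement unit readings are concatenated with those features, and a small classifier predicts the lane-change label. The three-class output (keep lane / change left / change right) and the fusion strategy are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class LaneChangeDetector(nn.Module):
    """Hypothetical vision + IMU lane-change classifier with a residual backbone."""
    def __init__(self, imu_dim=6, n_classes=3):
        super().__init__()
        backbone = models.resnet18(weights=None)   # deep residual image backbone
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep the 512-d image features
        self.backbone = backbone
        self.classifier = nn.Sequential(           # fuse image and IMU features
            nn.Linear(feat_dim + imu_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, image, imu):                 # image: (B, 3, 224, 224), imu: (B, imu_dim)
        feats = self.backbone(image)
        return self.classifier(torch.cat([feats, imu], dim=1))

# Example inference on one dummy frame and IMU reading.
logits = LaneChangeDetector()(torch.rand(1, 3, 224, 224), torch.rand(1, 6))
label = logits.argmax(dim=1)  # 0: keep lane, 1: change left, 2: change right (assumed labels)
```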