
    Floating car data augmentation based on infrastructure sensors and neural networks

    The development of new-generation intelligent vehicle technologies will lead to a better level of road safety and to CO2 emission reductions. However, the weak point of all these systems is their need for comprehensive and reliable data. For traffic data acquisition, two sources are currently available: 1) infrastructure sensors and 2) floating vehicles. The former consists of a set of fixed-point detectors installed in the roads, and the latter consists of the use of probe vehicles as mobile sensors. However, both systems still have some deficiencies. The infrastructure sensors retrieve information from static points of the road, which are spaced, in some cases, kilometers apart, so the picture they give of the actual traffic situation is incomplete. This deficiency is corrected by floating cars, which retrieve dynamic information on the traffic situation. Unfortunately, the number of floating data vehicles currently available is too small to give a complete picture of the road traffic. In this paper, we present a floating car data (FCD) augmentation system that combines information from floating data vehicles and infrastructure sensors and that, by using neural networks, is capable of increasing the amount of FCD with virtual information. This system has been implemented and tested on actual roads, and the results show little difference between the data supplied by the floating vehicles and by the virtual vehicles.
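The idea of generating virtual FCD from detector readings can be sketched in miniature. This is a hedged illustration, not the paper's actual network: each training sample pairs the (normalized) speeds reported by two neighbouring infrastructure detectors with the speed a real floating car measured between them, and a single linear neuron trained by SGD stands in for the neural network, producing "virtual" speeds where no probe vehicle is present. All names, weights, and the synthetic ground-truth relation are assumptions.

```python
import random

def make_samples(n=50, seed=0):
    """Synthetic training set: detector speed pair -> mid-segment probe speed.

    Speeds are normalized to [0, 1]; the linear ground-truth relation below
    is invented purely so the sketch has something learnable.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        v_up, v_down = rng.random(), rng.random()
        v_mid = 0.6 * v_up + 0.35 * v_down + 0.02  # assumed ground truth
        samples.append(((v_up, v_down), v_mid))
    return samples

def train(samples, lr=0.1, epochs=500):
    """Fit a single linear neuron by stochastic gradient descent."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (v_up, v_down), v_mid in samples:
            err = w1 * v_up + w2 * v_down + b - v_mid
            w1 -= lr * err * v_up
            w2 -= lr * err * v_down
            b -= lr * err
    return w1, w2, b

# Once trained, the model emits a "virtual FCD" speed for any detector pair.
w1, w2, b = train(make_samples())
virtual_speed = w1 * 0.8 + w2 * 0.4 + b
```

The paper's system presumably uses a richer network and real road data; the point here is only the data-flow: detector inputs in, virtual probe-vehicle speed out.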

    Application of 2D Homography for High Resolution Traffic Data Collection using CCTV Cameras

    Traffic cameras remain the primary data source for surveillance activities such as congestion and incident monitoring. To date, state agencies continue to rely on manual effort to extract data from networked cameras because of limitations of current automatic vision systems, including requirements for complex camera calibration and an inability to generate high-resolution data. This study implements a three-stage video analytics framework for extracting high-resolution traffic data, such as vehicle counts, speed, and acceleration, from infrastructure-mounted CCTV cameras. The key components of the framework are object recognition, perspective transformation, and vehicle trajectory reconstruction for traffic data collection. First, a state-of-the-art vehicle recognition model is implemented to detect and classify vehicles. Next, to correct for camera distortion and reduce partial occlusion, an algorithm inspired by two-point linear perspective is used to extract the region of interest (ROI) automatically, while a 2D homography technique transforms the CCTV view to a bird's-eye view (BEV). Cameras are calibrated with a two-layer matrix system to enable the extraction of speed and acceleration by converting image coordinates to real-world measurements. Individual vehicle trajectories are constructed and compared in the BEV using two time-space-feature-based object trackers, namely Motpy and BYTETrack. The results show an error rate of about +/- 4.5% for directional traffic counts and less than 10% MSE for speed bias between camera estimates and estimates from probe data sources.
Extracting high-resolution data from traffic cameras has several implications, ranging from improved traffic management to the identification of dangerous driving behavior, high-risk accident areas, and other safety concerns, enabling proactive measures to reduce accidents and fatalities.
Comment: 25 pages, 9 figures; this paper was submitted for consideration for presentation at the 102nd Annual Meeting of the Transportation Research Board, January 202
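The homography step described above can be illustrated with a minimal sketch: four image-plane points on the road surface are mapped to a BEV rectangle by solving the direct linear transform (DLT) system. The corner coordinates and BEV dimensions below are illustrative, not taken from the study; a production pipeline would typically use an optimized library routine rather than this hand-rolled solver.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """DLT from 4 point correspondences; h33 is fixed at 1 (8 unknowns)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp(H, x, y):
    """Apply homography H to an image point, returning BEV coordinates."""
    d = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / d,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / d)

# Illustrative ROI: a trapezoidal road region in the camera view mapped to a
# 100 x 400 BEV rectangle; BEV units could then be scaled to metres so that
# frame-to-frame displacements yield speed and acceleration.
src = [(300, 200), (500, 200), (700, 600), (100, 600)]
dst = [(0, 0), (100, 0), (100, 400), (0, 400)]
H = homography(src, dst)
```

By construction the homography maps each source corner exactly to its BEV corner, and any other road-surface point lands at a perspective-corrected position between them.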

    Sense, Predict, Adapt, Repeat: A Blueprint for Design of New Adaptive AI-Centric Sensing Systems

    As Moore's Law loses momentum, improving the size, performance, and efficiency of processors has become increasingly challenging, ending the era of predictable improvements in hardware performance. Meanwhile, the widespread incorporation of high-definition sensors in consumer devices and autonomous technologies has fueled a significant upsurge in sensory data. Current global trends reveal that the volume of generated data already exceeds human consumption capacity, making AI algorithms the primary consumers of data worldwide. To address this, a novel approach to designing AI-centric sensing systems is needed, one that can bridge the gap between the increasing capabilities of high-definition sensors and the limitations of AI processors. This paper provides an overview of efficient sensing and perception methods in both the AI and sensing domains, emphasizing the necessity of co-designing AI algorithms and sensing systems for dynamic perception. The proposed approach involves a framework for designing and analyzing dynamic AI-in-the-loop sensing systems, suggesting a fundamentally new method for designing adaptive sensing systems through inference-time AI-to-sensor feedback and end-to-end efficiency and performance optimization.
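The inference-time AI-to-sensor feedback idea can be made concrete with a toy control loop. This is a hedged sketch, not the paper's framework: the function name, thresholds, and rate bounds are all invented for illustration. The model's confidence on the latest frame is fed back to the sensor, which samples more densely when the model is uncertain and backs off when the scene is easy.

```python
def adapt_rate(confidence, rate_hz, lo=1, hi=30):
    """Return the next sensor sampling rate given model confidence in [0, 1]."""
    if confidence < 0.5:            # uncertain: sense more densely
        return min(hi, rate_hz * 2)
    if confidence > 0.9:            # confident: save bandwidth and energy
        return max(lo, rate_hz // 2)
    return rate_hz                  # otherwise hold the current rate

# Closed-loop trace: the sampling rate reacts to a changing scene, dropping
# while the model is confident and ramping back up when confidence falls.
rate = 10
trace = []
for conf in [0.95, 0.95, 0.6, 0.3, 0.3, 0.8]:
    rate = adapt_rate(conf, rate)
    trace.append(rate)
```

A real system would likely adapt resolution, region of interest, or bit depth jointly with rate, and optimize the whole loop end to end, but the sense-predict-adapt-repeat structure is the same.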

    Comparing the Performance of Deep Learning Algorithms for Vehicle Detection and Classification

    The rapid pace of developments in Artificial Intelligence (AI) provides unprecedented opportunities to enhance the performance of Intelligent Transportation Systems. Automating vehicle detection and classification using computer vision methods can complement traditional sensors or serve as a cost-effective and environmentally friendly substitute for them. This study investigates the robustness of existing deep learning models for vehicle detection and classification using a heterogeneous dataset. The dataset is grouped into six distinct classes based on the Federal Highway Administration (FHWA) vehicle classification scheme. The study uses three versions of the You Only Look Once (YOLO) single-stage object detection model, namely YOLOv7, YOLOv5m, and YOLOv5s. The comparative evaluation relies on four performance metrics: recall, precision, F1-score, and mean average precision (mAP). The results show that, for this case study, YOLOv7 outperformed the other models with 84.7% precision, 89.4% recall, 86.1% F1-score, 93% mAP at 0.5, and 82.4% mAP at 0.95.
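The three non-mAP metrics reported above are related by standard identities, sketched here with hypothetical detection counts (90 true positives, 10 false positives, 30 missed vehicles, not numbers from the study):

```python
def precision(tp, fp):
    return tp / (tp + fp)          # fraction of detections that are correct

def recall(tp, fn):
    return tp / (tp + fn)          # fraction of ground-truth vehicles found

def f1(p, r):
    return 2 * p * r / (p + r)     # harmonic mean of precision and recall

# Hypothetical counts for one class of the FHWA scheme.
p, r = precision(90, 10), recall(90, 30)
score = f1(p, r)
```

Note that when precision and recall are averaged over classes, the averaged F1 need not satisfy the identity exactly: plugging the reported 84.7% precision and 89.4% recall into `f1` gives roughly 87.0%, close to but not identical to the reported 86.1%.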