
    Visual Counting of Traffic Flow from a Car via Vehicle Detection and Motion Analysis

    Visual traffic counting has so far been carried out by static cameras at street locations or from aerial images taken from the sky. This work initiates a new approach that counts traffic flow using widely deployed vehicle driving recorders. Vehicles are counted mainly by a camera that moves along a route and observes the opposite lane. Vehicle detection is first performed in the video frames with the deep learning detector YOLO3, and vehicle trajectories are then counted in a spatial-temporal space called the motion profile. Motion continuity, direction, and missed detections are taken into account to avoid counting oncoming vehicles multiple times. The method has been tested on naturalistic driving videos lasting for hours. The counted vehicle numbers can be interpolated to obtain the flow of the opposite lanes from a patrol vehicle for traffic control. Such mobile counting of traffic is more flexible than traffic monitoring by cameras fixed at street corners.
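    To make the trajectory-counting step concrete, the sketch below (not the authors' implementation; the Track structure, the greedy nearest-neighbour matching, and all thresholds are illustrative assumptions) links per-frame detection centers into trajectories using motion continuity, direction consistency, and a tolerance for missed detections, so that each oncoming vehicle is counted once:

    from dataclasses import dataclass, field

    @dataclass
    class Track:
        last_x: float          # last observed horizontal box center (pixels)
        last_frame: int        # frame index of the last match
        history: list = field(default_factory=list)   # earlier (x, frame) pairs

    def count_oncoming(detections_per_frame, max_jump=60, max_gap=5, min_len=3):
        """detections_per_frame: per-frame lists of (x, y) box centers, e.g. from a detector.
        Returns the number of distinct oncoming-vehicle trajectories."""
        active, finished = [], []
        for frame_idx, centers in enumerate(detections_per_frame):
            unmatched = list(centers)
            for track in active:
                # Continuity: greedily match the nearest detection within max_jump pixels.
                best = min(unmatched, key=lambda c: abs(c[0] - track.last_x), default=None)
                if best is not None and abs(best[0] - track.last_x) < max_jump:
                    # Direction: oncoming vehicles drift consistently toward one image side,
                    # so reject matches that reverse the horizontal motion (illustrative check).
                    if not track.history or \
                            (best[0] - track.last_x) * (track.last_x - track.history[-1][0]) >= 0:
                        track.history.append((track.last_x, track.last_frame))
                        track.last_x, track.last_frame = best[0], frame_idx
                        unmatched.remove(best)
            # Missed detections: keep a track alive for up to max_gap unmatched frames.
            still_active = []
            for track in active:
                (finished if frame_idx - track.last_frame > max_gap else still_active).append(track)
            active = still_active
            # Unmatched detections start new candidate trajectories.
            for x, _y in unmatched:
                active.append(Track(last_x=x, last_frame=frame_idx))
        finished.extend(active)
        # Only trajectories seen in enough frames are counted as vehicles.
        return sum(1 for t in finished if len(t.history) + 1 >= min_len)

    # Toy usage: two oncoming vehicles drifting left across five frames.
    frames = [[(300, 120)], [(280, 121)], [(255, 122), (310, 118)],
              [(230, 123), (285, 119)], [(205, 124), (260, 120)]]
    print(count_oncoming(frames))   # -> 2

    Requiring a minimum trajectory length before counting is one simple way to suppress spurious detections, while the max_gap tolerance bridges frames where the detector misses a vehicle.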

    Understanding Traffic Density from Large-Scale Web Camera Data

    Understanding traffic density from large-scale web camera (webcam) videos is a challenging problem because such videos have low spatial and temporal resolution, heavy occlusion, and large perspective variation. To understand traffic density in depth, we explore both deep-learning-based and optimization-based methods. To avoid individual vehicle detection and tracking, both methods map the image to a vehicle density map, one based on rank-constrained regression and the other on fully convolutional networks (FCN). The regression-based method learns different weights for different blocks of the image to increase the degrees of freedom of the weights and to embed perspective information. The FCN-based method jointly estimates the vehicle density map and the vehicle count with a residual learning framework to perform end-to-end dense prediction, allowing arbitrary image resolution and adapting to different vehicle scales and perspectives. We analyze and compare both methods, and draw insights from the optimization-based method to improve the deep model. Since existing datasets do not cover all the challenges in our work, we collected and labelled a large-scale traffic video dataset containing 60 million frames from 212 webcams. Both methods are extensively evaluated and compared on different counting tasks and datasets. The FCN-based method significantly reduces the mean absolute error from 10.99 to 5.31 on the public TRANCOS dataset compared with the state-of-the-art baseline.
    Comment: Accepted by CVPR 2017. The preprint version was uploaded to http://welcome.isr.tecnico.ulisboa.pt/publications/understanding-traffic-density-from-large-scale-web-camera-data
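    As an illustration of the density-map formulation, the sketch below (a toy architecture assumed for exposition, not the paper's FCN; it also assumes PyTorch) maps a frame of arbitrary resolution to a one-channel density map and reads the vehicle count off as the map's spatial sum:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyDensityFCN(nn.Module):
        """Fully convolutional: accepts any input resolution, outputs a density map."""
        def __init__(self):
            super().__init__()
            self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.lat = nn.Conv2d(16, 32, 1)    # lateral 1x1 conv for a residual-style skip
            self.head = nn.Conv2d(32, 1, 1)    # 1-channel density output

        def forward(self, x):
            f1 = self.enc1(x)                  # 1/2 resolution features
            f2 = self.enc2(f1)                 # 1/4 resolution features
            up = F.interpolate(f2, size=f1.shape[-2:], mode='bilinear', align_corners=False)
            fused = up + self.lat(f1)          # residual-style fusion of coarse and fine features
            return F.relu(self.head(fused))    # non-negative density map

    model = ToyDensityFCN()
    frame = torch.rand(1, 3, 240, 352)         # a low-resolution webcam-like frame
    density = model(frame)
    count = density.sum().item()               # count = integral of the density map
    print(f"density map {tuple(density.shape)}, estimated count {count:.1f}")

    In training, such a network is typically regressed against ground-truth density maps (for example with a pixel-wise MSE loss, optionally combined with a count loss), so that the spatial integral of the prediction matches the labelled vehicle count.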