16,949 research outputs found

    Multiple Object Tracking in Urban Traffic Scenes with a Multiclass Object Detector

    Full text link
    Multiple object tracking (MOT) in urban traffic aims to produce the trajectories of the different road users that move across the field of view in different directions and at different speeds, and that can vary in appearance and size. Occlusions and interactions among the different objects are common due to the nature of urban road traffic. In this work, a tracking framework uses the classification labels produced by a deep-learning detection approach, in addition to object positions and appearances, to associate the different objects. We investigate the performance of a modern multiclass object detector for the MOT task in traffic scenes. Results show that the object labels improve tracking performance, but that the outputs of object detectors are not always reliable. Comment: 13th International Symposium on Visual Computing (ISVC)
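    As a rough illustration of how class labels can enter the association step, the Python sketch below combines position overlap, appearance distance, and label agreement into a single assignment cost solved with the Hungarian algorithm. It is a minimal sketch under assumed data structures and weights, not the paper's implementation.

```python
# Illustrative label-aware data association for MOT; the track/detection
# dict layout and the cost weights are assumptions, not the paper's method.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, w_pos=0.5, w_app=0.3, w_lbl=0.2, gate=0.7):
    """Match detections to tracks by minimizing a cost that mixes position
    overlap, appearance distance, and class-label disagreement."""
    cost = np.zeros((len(tracks), len(detections)))
    for i, t in enumerate(tracks):
        for j, d in enumerate(detections):
            pos = 1.0 - iou(t["box"], d["box"])
            # Appearance features assumed L2-normalized, so distance <= 2.
            app = np.linalg.norm(t["feat"] - d["feat"]) / 2.0
            lbl = 0.0 if t["label"] == d["label"] else 1.0
            cost[i, j] = w_pos * pos + w_app * app + w_lbl * lbl
    rows, cols = linear_sum_assignment(cost)
    # Reject matches whose combined cost exceeds the gating threshold.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]
```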

    Leveraging Traffic and Surveillance Video Cameras for Urban Traffic

    Get PDF
    The objective of this project was to investigate the use of existing video resources, such as traffic cameras, police cameras, red light cameras, and security cameras, for the long-term, real-time collection of traffic statistics. An additional objective was to gather similar statistics for pedestrians and bicyclists. Throughout the course of the project, we investigated several methods for tracking vehicles under challenging conditions. The initial plan called for tracking based on optical flow. However, current optical-flow-estimating algorithms proved not well suited to low-quality video; hence, developing optical flow methods for low-quality video became one aspect of this project. The method eventually adopted combines basic optical flow tracking with a learning detector for each tracked object: the object is tracked both by its apparent movement and by its appearance, should it temporarily disappear from or be obscured in the frame. We have produced prototype software that allows the user to specify the vehicle trajectories of interest by drawing their shapes superimposed on a video frame. The software then tracks each vehicle as it travels through the frame, matches the vehicle's movements to the most closely matching trajectory, and increments the vehicle count for that trajectory. In terms of pedestrian and bicycle counting, the system is capable of tracking these "objects" as well, though at present it cannot distinguish between the three classes automatically. Continuing research by the principal investigator under a different grant will establish this capability as well. Illinois Department of Transportation, R27-131
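    A minimal sketch of two pieces of this pipeline, assuming OpenCV is available: sparse points are advanced frame to frame with pyramidal Lucas-Kanade optical flow, and a completed track is matched to the nearest user-drawn template trajectory to increment its count. The resampling-based distance and the thresholds are illustrative stand-ins, not the project's implementation.

```python
# Hypothetical flow-tracking and trajectory-counting helpers; all names,
# distances, and thresholds are assumptions for illustration only.
import cv2
import numpy as np

def advance_points(prev_gray, next_gray, pts):
    """Move sparse points one frame forward with LK optical flow,
    dropping points the tracker loses."""
    pts = pts.reshape(-1, 1, 2).astype(np.float32)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None, winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return nxt[ok].reshape(-1, 2), ok

def resample(path, n=20):
    """Resample a polyline of (x, y) points to n points by index."""
    idx = np.linspace(0, len(path) - 1, n)
    return np.array([path[int(round(i))] for i in idx], dtype=float)

def trajectory_distance(track, template, n=20):
    """Mean point-to-point distance between two resampled paths;
    a crude stand-in for proper curve matching."""
    return float(np.mean(np.linalg.norm(
        resample(track, n) - resample(template, n), axis=1)))

def count_track(track, templates, counts, max_dist=50.0):
    """Increment the count of the template trajectory closest to the track."""
    dists = [trajectory_distance(track, t) for t in templates]
    best = int(np.argmin(dists))
    if dists[best] < max_dist:
        counts[best] += 1
```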

    Egocentric Vision-based Future Vehicle Localization for Intelligent Driving Assistance Systems

    Full text link
    Predicting the future location of vehicles is essential for safety-critical applications such as advanced driver assistance systems (ADAS) and autonomous driving. This paper introduces a novel approach to simultaneously predict both the location and scale of target vehicles in the first-person (egocentric) view of an ego-vehicle. We present a multi-stream recurrent neural network (RNN) encoder-decoder model that captures object location and scale in one stream and pixel-level observations in another for future vehicle localization. We show that incorporating dense optical flow improves prediction results significantly, since it captures information about motion as well as appearance change. We also find that explicitly modeling the future motion of the ego-vehicle improves prediction accuracy, which could be especially beneficial for intelligent and automated vehicles that have motion planning capability. To evaluate the performance of our approach, we present a new dataset of first-person videos collected from a variety of scenarios at road intersections, which are particularly challenging moments for prediction because vehicle trajectories are diverse and dynamic. Comment: To appear at ICRA 201
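    A minimal PyTorch sketch in the spirit of a two-stream encoder-decoder: one GRU encodes past bounding boxes (location and scale), another encodes pooled optical-flow features, and a GRU decoder rolls out future boxes. All dimensions, the fusion scheme, and the residual decoding step are assumptions, not the authors' architecture.

```python
# Illustrative two-stream RNN encoder-decoder for future box prediction;
# sizes and fusion are assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class TwoStreamPredictor(nn.Module):
    def __init__(self, flow_dim=64, hidden=128, horizon=10):
        super().__init__()
        self.horizon = horizon
        self.box_enc = nn.GRU(4, hidden, batch_first=True)   # (cx, cy, w, h)
        self.flow_enc = nn.GRU(flow_dim, hidden, batch_first=True)
        self.fuse = nn.Linear(2 * hidden, hidden)
        self.dec = nn.GRUCell(4, hidden)
        self.out = nn.Linear(hidden, 4)   # per-step box offset

    def forward(self, boxes, flow):
        # boxes: (B, T, 4) past boxes; flow: (B, T, flow_dim) pooled flow.
        _, h_box = self.box_enc(boxes)
        _, h_flow = self.flow_enc(flow)
        h = torch.tanh(self.fuse(torch.cat([h_box[-1], h_flow[-1]], dim=1)))
        step = boxes[:, -1]               # start from the last observed box
        preds = []
        for _ in range(self.horizon):
            h = self.dec(step, h)
            step = step + self.out(h)     # residual update of the box
            preds.append(step)
        return torch.stack(preds, dim=1)  # (B, horizon, 4)
```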