
    Traffic Danger Recognition With Surveillance Cameras Without Training Data

    We propose a traffic danger recognition model that works with arbitrary traffic surveillance cameras to identify and predict car crashes. Because there are far too many cameras to monitor manually, we developed a model that predicts and identifies car crashes from surveillance footage based on a 3D reconstruction of the road plane and prediction of vehicle trajectories. For normal traffic, it supports real-time proactive safety checks of speeds and inter-vehicle distances to provide insights about possible high-risk areas. We achieve good prediction and recognition of car crashes without using any labeled training data of crashes. Experiments on the BrnoCompSpeed dataset show that our model can accurately monitor the road, with mean errors of 1.80% for distance measurement, 2.77 km/h for speed measurement, 0.24 m for car position prediction, and 2.53 km/h for speed prediction.
    Comment: To be published in Proceedings of the 15th IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2018), pp. 378-383, IEEE.
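
    As a concrete illustration of the proactive safety check this abstract describes, here is a minimal Python sketch that flags vehicle pairs whose gap falls below a speed-dependent safe distance. The Track inputs, the 1.5 s reaction-time heuristic, and all function names are hypothetical stand-ins for quantities the paper derives from its 3D road-plane reconstruction; this is not the authors' code.

```python
from dataclasses import dataclass
from itertools import combinations
import math

@dataclass
class Track:
    vehicle_id: int
    x: float          # position on the reconstructed road plane, metres (assumed input)
    y: float
    speed_kmh: float  # estimated speed, km/h (assumed input)

def safe_distance_m(speed_kmh: float, reaction_s: float = 1.5) -> float:
    """Distance covered during a driver's reaction time -- a common heuristic,
    not necessarily the paper's criterion."""
    return speed_kmh / 3.6 * reaction_s

def high_risk_pairs(tracks: list[Track]) -> list[tuple[int, int]]:
    """Return id pairs of vehicles closer than the faster car's safe distance."""
    risky = []
    for a, b in combinations(tracks, 2):
        gap = math.hypot(a.x - b.x, a.y - b.y)
        if gap < safe_distance_m(max(a.speed_kmh, b.speed_kmh)):
            risky.append((a.vehicle_id, b.vehicle_id))
    return risky

# Two vehicles 15 m apart, the faster at 90 km/h -> safe gap 37.5 m -> flagged.
print(high_risk_pairs([Track(1, 0.0, 0.0, 90.0), Track(2, 15.0, 0.0, 80.0)]))
```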

    Automatic Vehicle Detection, Tracking and Recognition of License Plate in Real Time Videos

    Automatic video analysis from traffic surveillance cameras is a fast-emerging field based on computer vision techniques. It is a key technology for public safety, intelligent transport systems (ITS) and the efficient management of traffic. In recent years, there has been an increased scope for the automatic analysis of traffic activity. We define video analytics as computer-vision-based surveillance algorithms and systems that extract contextual information from video. In traffic scenarios, several monitoring objectives can be supported by the application of computer vision and pattern recognition techniques, including the detection of traffic violations (e.g., illegal turns and wrong-way driving on one-way streets) and the identification of road users (e.g., vehicles, motorbikes, and pedestrians). Currently, the most reliable approach is the recognition of number plates, i.e., automatic number plate recognition (ANPR), also known as automatic license plate recognition (ALPR), or the use of radio frequency transponders. Here, a full-featured automatic system for vehicle detection, tracking and license plate recognition is presented. Such a system has many applications in pattern recognition and machine vision, ranging from complex security systems to common areas, and from parking admission to urban traffic control. The task is complicated by diverse effects such as fog, rain, shadows, uneven illumination, occlusion, variable distances, vehicle velocity, camera angle, plate rotation, and the number of vehicles in the scene. The main objective of this work is to present a system that solves the practical problem of car identification in real scenes. All steps of the process, from video acquisition to optical character recognition, are considered to achieve automatic identification of plates.
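
    The plate-localisation and OCR stage lends itself to a short sketch. The following is one classical approach using OpenCV contour analysis and Tesseract, offered as a minimal illustration rather than the pipeline described in the paper; the filter parameters, the rectangularity test, and the aspect-ratio bounds are all assumptions.

```python
import cv2
import pytesseract

def read_plates(frame):
    """Return OCR'd text from rectangular, plate-shaped regions in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.bilateralFilter(gray, 11, 17, 17)        # smooth while keeping edges
    edges = cv2.Canny(gray, 30, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    plates = []
    # Examine the ten largest contours for roughly rectangular candidates.
    for c in sorted(contours, key=cv2.contourArea, reverse=True)[:10]:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:                            # four corners: rectangle-like
            x, y, w, h = cv2.boundingRect(approx)
            if 2.0 < w / h < 6.0:                       # plausible plate aspect ratio
                crop = gray[y:y + h, x:x + w]
                text = pytesseract.image_to_string(crop, config="--psm 7").strip()
                if text:
                    plates.append(text)
    return plates
```

    In practice this edge-based localisation is exactly where the conditions listed above (fog, shadows, plate rotation) break a fixed-threshold pipeline, which motivates the more robust system the abstract describes.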

    SINet: A Scale-insensitive Convolutional Neural Network for Fast Vehicle Detection

    Vision-based vehicle detection approaches have achieved remarkable success in recent years with the development of deep convolutional neural networks (CNNs). However, existing CNN-based algorithms suffer from the problem that their convolutional features are scale-sensitive in the object detection task, while traffic images and videos commonly contain vehicles with a large variance of scales. In this paper, we delve into the source of scale sensitivity and reveal two key issues: 1) existing RoI pooling destroys the structure of small-scale objects, and 2) the large intra-class distance caused by a large variance of scales exceeds the representation capability of a single network. Based on these findings, we present a scale-insensitive convolutional neural network (SINet) for the fast detection of vehicles with a large variance of scales. First, we present a context-aware RoI pooling that maintains the contextual information and original structure of small-scale objects. Second, we present a multi-branch decision network to minimize the intra-class distance of features. These lightweight techniques add zero extra time complexity while yielding a prominent improvement in detection accuracy. The proposed techniques can be equipped to any deep network architecture and keep it trainable end-to-end. Our SINet achieves state-of-the-art performance in terms of accuracy and speed (up to 37 FPS) on the KITTI benchmark and a new highway dataset, which contains a large variance of scales and extremely small objects.
    Comment: Accepted by IEEE Transactions on Intelligent Transportation Systems (T-ITS).
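
    To make the context-aware RoI pooling idea concrete, here is a minimal PyTorch sketch: when an RoI's feature patch is smaller than the output grid, it is enlarged rather than max-pooled, so the structure of a small object survives. The paper uses a learned deconvolution for the enlargement; bilinear interpolation and the context_aware_roi_pool helper below are simplifying assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def context_aware_roi_pool(features, roi, out_size=7):
    """features: (C, H, W) feature map; roi: (x1, y1, x2, y2) in feature coordinates."""
    x1, y1, x2, y2 = [int(v) for v in roi]
    patch = features[:, y1:y2 + 1, x1:x2 + 1].unsqueeze(0)  # (1, C, h, w)
    _, _, h, w = patch.shape
    if h < out_size or w < out_size:
        # Small object: upsample to preserve structure instead of pooling it away.
        pooled = F.interpolate(patch, size=(out_size, out_size),
                               mode="bilinear", align_corners=False)
    else:
        # Large object: ordinary adaptive max pooling suffices.
        pooled = F.adaptive_max_pool2d(patch, out_size)
    return pooled.squeeze(0)  # (C, out_size, out_size)

# Example: a 3x4 RoI on a 512-channel map is enlarged to 7x7 rather than pooled.
fmap = torch.randn(512, 64, 64)
print(context_aware_roi_pool(fmap, (10, 10, 13, 12)).shape)  # torch.Size([512, 7, 7])
```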

    FCN-rLSTM: Deep Spatio-Temporal Neural Networks for Vehicle Counting in City Cameras

    In this paper, we develop deep spatio-temporal neural networks to sequentially count vehicles from low-quality videos captured by city cameras (citycams). Citycam videos have low resolution, low frame rate, high occlusion and large perspective, making most existing methods lose their efficacy. To overcome the limitations of existing methods and incorporate the temporal information of traffic video, we design a novel FCN-rLSTM network to jointly estimate vehicle density and vehicle count by connecting fully convolutional networks (FCN) with long short-term memory networks (LSTM) in a residual learning fashion. Such a design leverages the strengths of FCN for pixel-level prediction and the strengths of LSTM for learning complex temporal dynamics. The residual learning connection reformulates vehicle count regression as learning residual functions with reference to the sum of densities in each frame, which significantly accelerates the training of the networks. To preserve feature map resolution, we propose a Hyper-Atrous combination that integrates atrous convolution in the FCN and combines feature maps of different convolution layers. FCN-rLSTM enables refined feature representation and a novel end-to-end trainable mapping from pixels to vehicle count. We extensively evaluated the proposed method on different counting tasks with three datasets, with experimental results demonstrating its effectiveness and robustness. In particular, FCN-rLSTM reduces the mean absolute error (MAE) from 5.31 to 4.21 on TRANCOS, and from 2.74 to 1.53 on WebCamT. The training process is accelerated by a factor of five on average.
    Comment: Accepted by the International Conference on Computer Vision (ICCV), 2017.
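
    The residual-learning connection can be sketched in a few lines of PyTorch. In the toy model below, the per-frame density sum serves as the reference count and an LSTM over the frame sequence predicts only a residual correction to it; the layer sizes and the FCNrLSTMSketch class are illustrative assumptions, not the paper's actual FCN-rLSTM architecture.

```python
import torch
import torch.nn as nn

class FCNrLSTMSketch(nn.Module):
    def __init__(self, hidden=100):
        super().__init__()
        self.fcn = nn.Sequential(                    # toy fully convolutional trunk
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),                     # 1-channel density map
        )
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)             # residual count per frame

    def forward(self, frames):                       # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        density = self.fcn(frames.flatten(0, 1))             # (B*T, 1, H, W)
        base = density.sum(dim=(1, 2, 3)).view(b, t, 1)      # count = density sum
        residual, _ = self.lstm(base)                        # temporal context
        return base.squeeze(-1) + self.head(residual).squeeze(-1)  # (B, T) counts

model = FCNrLSTMSketch()
counts = model(torch.randn(2, 5, 3, 60, 80))   # 2 clips of 5 frames each
print(counts.shape)  # torch.Size([2, 5])
```

    Because the LSTM only has to learn the residual around the density sum, its regression target stays near zero, which is consistent with the training speed-up the abstract reports.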