
    Traffic Danger Recognition With Surveillance Cameras Without Training Data

    We propose a traffic danger recognition model that works with arbitrary traffic surveillance cameras to identify and predict car crashes. Because there are far too many cameras to monitor manually, the model predicts and identifies crashes automatically, based on a 3D reconstruction of the road plane and prediction of vehicle trajectories. For normal traffic, it supports real-time proactive safety checks of speeds and inter-vehicle distances to highlight possible high-risk areas. Good crash prediction and recognition are achieved without any labeled training data of crashes. Experiments on the BrnoCompSpeed dataset show that the model can accurately monitor the road, with mean errors of 1.80% for distance measurement, 2.77 km/h for speed measurement, 0.24 m for car position prediction, and 2.53 km/h for speed prediction. Comment: To be published in Proceedings of the 15th IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2018), pp. 378-383, IEEE.
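
    As a rough illustration of the road-plane geometry this abstract describes, the sketch below maps pixel positions onto a metric ground plane with a homography and derives speeds and inter-vehicle gaps from them; the homography values and helper names are hypothetical, not taken from the paper.

        # Minimal sketch (not the authors' code): once image points are mapped onto a
        # reconstructed road plane via a homography H, speeds and inter-vehicle
        # distances reduce to Euclidean geometry in metres. H and the pixel
        # trajectories are hypothetical placeholders.
        import numpy as np

        H = np.array([[0.05, 0.00, -12.0],
                      [0.00, 0.07, -30.0],
                      [0.00, 0.001,  1.0]])   # assumed image -> road-plane homography

        def to_road_plane(pts_px):
            """Map Nx2 pixel coordinates to metric road-plane coordinates."""
            pts = np.hstack([pts_px, np.ones((len(pts_px), 1))])
            proj = (H @ pts.T).T
            return proj[:, :2] / proj[:, 2:3]

        def speed_kmh(track_px, fps):
            """Average speed of one vehicle from its consecutive pixel positions."""
            pts = to_road_plane(np.asarray(track_px, dtype=float))
            step = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # metres per frame
            return step.mean() * fps * 3.6

        def gap_m(p_px, q_px):
            """Road-plane distance between two vehicles observed in the same frame."""
            a, b = to_road_plane(np.array([p_px, q_px], dtype=float))
            return float(np.linalg.norm(a - b))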

    Monocular visual traffic surveillance: a review

    To facilitate the monitoring and management of modern transportation systems, monocular visual traffic surveillance systems have been widely adopted for speed measurement, accident detection, and accident prediction. Thanks to recent innovations in computer vision and deep learning research, the performance of visual traffic surveillance systems has improved significantly. Despite this success, however, there is a lack of survey papers that systematically review these new methods. We therefore conduct a systematic review of relevant studies to fill this gap and provide guidance for future work. The paper is structured along the visual information processing pipeline, which includes object detection, object tracking, and camera calibration. We also cover important applications of visual traffic surveillance systems, such as speed measurement, behavior learning, and accident detection and prediction. Finally, future research directions for visual traffic surveillance systems are outlined.
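
    The pipeline the survey is organised around (detection, tracking, calibration, then applications such as speed measurement) can be summarised schematically as below; the interfaces are illustrative only and do not correspond to any surveyed system's API.

        # Schematic of the processing pipeline structure; class and method names are
        # illustrative assumptions, not an API from the surveyed systems.
        from dataclasses import dataclass
        from typing import List, Protocol

        @dataclass
        class Detection:
            frame: int
            box: tuple            # (x, y, w, h) in pixels
            score: float

        class Detector(Protocol):
            def detect(self, frame) -> List[Detection]: ...

        class Tracker(Protocol):
            def update(self, detections: List[Detection]) -> List[List[Detection]]: ...

        class Calibration(Protocol):
            def pixel_box_to_metres(self, box: tuple) -> tuple: ...

        def surveillance_pipeline(frames, detector: Detector, tracker: Tracker,
                                  calib: Calibration):
            """Yield per-frame lists of tracks in metric ground-plane coordinates,
            the input that speed measurement, behavior learning, and accident
            detection/prediction all build on."""
            for frame in frames:
                tracks = tracker.update(detector.detect(frame))
                yield [[calib.pixel_box_to_metres(d.box) for d in track] for track in tracks]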

    Automated Automotive Radar Calibration With Intelligent Vehicles

    While automotive radar sensors are widely adopted and have been used for automatic cruise control and collision avoidance, their application outside of vehicles is still limited. As they can resolve multiple targets in 3D space, radars can also be used to improve environment perception. This application, however, requires precise calibration, which is usually a time-consuming and labor-intensive task. We therefore present an approach for automated and geo-referenced extrinsic calibration of automotive radar sensors based on a novel hypothesis filtering scheme. Our method does not require external modifications of a vehicle; instead, it uses location data obtained from automated vehicles, which is combined with filtered sensor data to create calibration hypotheses. Subsequent filtering and optimization recovers the correct calibration. Our evaluation on data from a real testing site shows that the method can correctly calibrate infrastructure sensors in an automated manner, thus enabling cooperative driving scenarios. Comment: 5 pages, 4 figures, accepted for presentation at the 31st European Signal Processing Conference (EUSIPCO), September 4-8, 2023, Helsinki, Finland.
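
    For context, the closed-form alignment step that typically concludes such an extrinsic calibration, fitting matched radar detections (sensor frame) to the vehicle's geo-referenced positions (world frame), might look like the 2D rigid-fit sketch below; the paper's hypothesis filtering, which produces the correspondences, is not reproduced here.

        # Minimal sketch of a least-squares 2D rigid alignment (Kabsch-style); the
        # correspondences are assumed to come from the hypothesis filtering stage.
        import numpy as np

        def fit_rigid_2d(radar_xy, world_xy):
            """Rotation R and translation t such that world ~ R @ radar + t."""
            radar_xy = np.asarray(radar_xy, float)
            world_xy = np.asarray(world_xy, float)
            mu_r, mu_w = radar_xy.mean(axis=0), world_xy.mean(axis=0)
            U, _, Vt = np.linalg.svd((radar_xy - mu_r).T @ (world_xy - mu_w))
            D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # keep a proper rotation
            R = Vt.T @ D @ U.T
            t = mu_w - R @ mu_r
            return R, t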

    Wide area detection system: Conceptual design study

    An integrated sensor for traffic surveillance on mainline sections of urban freeways is described. Applicable imaging and processor technology is surveyed, and the functional requirements for the sensors and the conceptual design of the breadboard sensors are given. Parameters measured by the sensors include lane density, speed, and volume; the freeway image is also used for incident diagnosis.
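
    For orientation only, a minimal sketch of how the three measured quantities could be derived for one lane over one measurement interval; the inputs are hypothetical and the formulas are standard traffic-flow bookkeeping, not taken from the study.

        def lane_state(n_in_section, section_len_m, n_passed, interval_s, speeds_ms):
            """Density (veh/km), volume (veh/h) and mean speed (km/h) for one lane."""
            density = n_in_section / (section_len_m / 1000.0)
            volume = n_passed * 3600.0 / interval_s
            mean_speed = sum(speeds_ms) / len(speeds_ms) * 3.6
            return density, volume, mean_speed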

    Vision-based traffic surveys in urban environments

    This paper presents a state-of-the-art, vision-based vehicle detection and type classification system for performing traffic surveys from a roadside closed-circuit television camera. Vehicles are detected using background subtraction based on a Gaussian mixture model that can cope with vehicles that become stationary for a significant period of time. Vehicle silhouettes are described by a combination of shape and appearance features using an intensity-based pyramid histogram of orientation gradients (HOG). Classification is performed by a support vector machine trained on a small set of hand-labeled silhouette exemplars. These exemplars are identified using a model-based preclassifier that utilizes calibrated images mapped by Google Earth to provide accurately surveyed scene geometry matched to visible image landmarks. Kalman filters track the vehicles to enable classification by majority voting over several consecutive frames. The system counts vehicles and separates them into four categories: car, van, bus, and motorcycle (including bicycles). Experiments with real-world data have been undertaken to evaluate system performance; a vehicle detection rate of 96.45% and a classification accuracy of 95.70% have been achieved on these data. The authors gratefully acknowledge the Royal Borough of Kingston for providing the video data. S.A. Velastin is grateful for funding received from the Universidad Carlos III de Madrid, the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement nº 600371, the Ministerio de Economía y Competitividad (COFUND2013-51509), and Banco Santander.
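
    A simplified stand-in for the detection and classification stages (not the authors' exact intensity-based pyramid-HOG pipeline) could combine OpenCV's MOG2 background subtraction, plain HOG features, and a pre-trained SVM, as sketched below; the patch size, blob threshold, and classifier are assumptions.

        # Sketch only: background subtraction isolates moving vehicles, HOG describes
        # each silhouette patch, and a pre-trained SVM assigns one of four classes.
        import cv2
        from skimage.feature import hog

        CLASSES = ["car", "van", "bus", "motorcycle"]
        bg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

        def classify_frame(frame_bgr, clf):
            """clf: an already-trained sklearn SVM over HOG features (assumed)."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            mask = bg.apply(frame_bgr)
            mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            results = []
            for c in contours:
                x, y, w, h = cv2.boundingRect(c)
                if w * h < 500:                                          # ignore small blobs
                    continue
                patch = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
                feat = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                           cells_per_block=(2, 2))
                results.append((x, y, w, h, CLASSES[int(clf.predict([feat])[0])]))
            return results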

    Camera-Based System for the Automatic Detection of Vehicle Axle Count and Speed Using Convolutional Neural Networks

    This paper outlines the development of a nonintrusive alternative to current intelligent transportation systems using roadside video cameras. The use of video to determine the axle count and speed of vehicles traveling on major roads was investigated. Two instances of a convolutional neural network, YOLOv3, were trained to perform object detection for axle detection and speed measurement, achieving accuracies of 95% and 98% mAP, respectively. Outputs from the axle detection were processed to produce axle counts for each vehicle with 93% accuracy across all vehicles whose axles were all visible. A simple Kalman filter was used to track the vehicles across the video frame; it worked well but struggled with longer periods of occlusion. The camera was calibrated for speed measurement using road markings in place of a reference object. The calibration method proved to be accurate; however, a constant error was introduced if the road markings were not consistent with the government specifications. The average vehicle speeds calculated were within the expected range, and both models achieved real-time processing speeds.
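
    The axle-count post-processing step described above can be illustrated with a small sketch that assigns each detected axle box to the vehicle box containing its centre; the (x1, y1, x2, y2) box format is an assumption, and the YOLOv3 detectors themselves are taken as given.

        def count_axles(vehicle_boxes, axle_boxes):
            """Count axles per vehicle by assigning each axle detection to the first
            vehicle box that contains its centre. Boxes are (x1, y1, x2, y2)."""
            counts = [0] * len(vehicle_boxes)
            for ax1, ay1, ax2, ay2 in axle_boxes:
                cx, cy = (ax1 + ax2) / 2.0, (ay1 + ay2) / 2.0
                for i, (vx1, vy1, vx2, vy2) in enumerate(vehicle_boxes):
                    if vx1 <= cx <= vx2 and vy1 <= cy <= vy2:
                        counts[i] += 1
                        break
            return counts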

    Portable Multi-Sensor System for Intersection Safety Performance Assessment, July 2018

    State departments of transportation (DOTs) and city municipal agencies install a large number of roadside cameras on freeways and arterials for surveillance tasks; it has been estimated that there would be approximately one billion cameras worldwide by 2020. However, most of these cameras are used for manual surveillance only. The main objective of this study was to investigate the use of these cameras as sensors for traffic state estimation. The scope of the project involved detecting vehicles, tracking them, and estimating their speeds. The research team adopted a tracking-by-detection framework: object detection was performed with the you only look once version 3 (YOLOv3) architecture, and tracking with the simple online and realtime tracking (SORT) algorithm. The framework was tested on videos collected from three intersections in Ames, Iowa. Combined detection and tracking ran at approximately 40 frames per second (fps) on a GeForce GTX 1080 GPU, making online deployment straightforward. Camera calibration was performed by finding the edges of moving vehicles to automatically detect the vanishing points, while the scale factor was determined manually from a known fixed distance in the image and the real world. Although the vanishing points were determined automatically without any manual intervention, the speed estimation error was quite high (~13 mph). The error can be reduced significantly by performing both calibration and scale factor determination fully manually; however, since that requires full manual intervention, it is difficult to scale the algorithm across multiple cameras. In the future, the detection task can be improved by training the model on a larger dataset, and speed estimation can be improved by extending automatic camera calibration to automatic scale estimation, which would also improve accuracy.
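
    A minimal sketch of the speed-estimation step under the setup described above: ground-plane displacements of a tracked vehicle are converted to metres with the manually determined scale factor and then to speed via the frame rate. The function name and inputs are illustrative, not taken from the report.

        import math

        MS_TO_MPH = 2.23694

        def estimate_speed_mph(track_xy, fps, metres_per_unit):
            """Average speed of one tracked vehicle; track_xy holds its per-frame
            (x, y) positions in the rectified ground plane (at least two of them)."""
            dist_units = sum(math.dist(a, b) for a, b in zip(track_xy, track_xy[1:]))
            seconds = (len(track_xy) - 1) / fps
            return dist_units * metres_per_unit / seconds * MS_TO_MPH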

    Traffic Analysis from Video

    A system for traffic analysis from video was designed and implemented as part of this thesis. The system detects, tracks, and classifies vehicles; it can also infer lanes from the motion of passing cars and determine whether a vehicle is travelling in the wrong direction. The speed of observed vehicles is measured as well. No manual input or camera calibration is required, because the camera is calibrated fully automatically from detected vanishing points. The system detects, tracks, and classifies vehicles with high accuracy and measures their speed with a low error. It runs in real time and is currently used for continuous online traffic monitoring. The main contribution of the thesis is the fully automatic speed measurement of passing vehicles.
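
    One standard building block of such vanishing-point self-calibration (a well-known result, not code from the thesis) is recovering the focal length from two orthogonal vanishing points with the principal point assumed at the image centre, as sketched below.

        import math

        def focal_from_orthogonal_vps(vp1, vp2, principal_point):
            """f^2 = -(vp1 - p) . (vp2 - p); returns None for a degenerate layout."""
            px, py = principal_point
            dot = (vp1[0] - px) * (vp2[0] - px) + (vp1[1] - py) * (vp2[1] - py)
            return math.sqrt(-dot) if dot < 0 else None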