FARSEC: A Reproducible Framework for Automatic Real-Time Vehicle Speed Estimation Using Traffic Cameras
Estimating the speed of vehicles from traffic cameras is a crucial task for
traffic surveillance and management, enabling smoother traffic flow,
improved road safety, and lower environmental impact. Transportation-dependent
systems, such as those for navigation and logistics, stand to benefit greatly
from reliable speed estimation. While prior research in this area reports
competitive accuracy levels, the proposed solutions lack reproducibility and
robustness across different datasets. To address this, we provide a novel
framework for automatic real-time vehicle speed calculation, which copes with
more diverse data from publicly available traffic cameras to achieve greater
robustness. Our model employs novel techniques to estimate the length of road
segments via depth map prediction. Additionally, our framework is capable of
handling realistic conditions such as camera movements and different video
stream inputs automatically. We compare our model to three well-known models in
the field using their benchmark datasets. While our model does not set a new
state of the art regarding prediction performance, the results are competitive
on realistic CCTV videos. At the same time, our end-to-end pipeline offers more
consistent results, an easier implementation, and better compatibility. Its
modular structure facilitates reproducibility and future improvements.
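The displacement-to-speed conversion at the heart of such a pipeline can be sketched as follows. This is a minimal illustration, not the FARSEC implementation: the `estimate_speed_kmh` helper is hypothetical, and it assumes a metres-per-pixel scale has already been recovered (e.g. from the depth-derived road-segment length).

```python
def estimate_speed_kmh(track_px, fps, metres_per_pixel):
    """Estimate vehicle speed from per-frame centroid positions.

    track_px: list of (x, y) image positions of the tracked vehicle.
    fps: frames per second of the video stream.
    metres_per_pixel: scale factor, e.g. recovered from a known
        road-segment length divided by its pixel length.
    """
    if len(track_px) < 2:
        return 0.0
    # Total pixel distance travelled along consecutive track points.
    dist_px = sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(track_px, track_px[1:])
    )
    elapsed_s = (len(track_px) - 1) / fps
    speed_ms = dist_px * metres_per_pixel / elapsed_s
    return speed_ms * 3.6  # m/s -> km/h
```

A vehicle moving 10 pixels per frame at 25 fps with a 0.1 m/px scale would be reported at 90 km/h; in practice the scale varies along the lane, which is exactly what the depth-map step is meant to account for.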
Vision-based traffic surveys in urban environments
This paper presents a state-of-the-art, vision-based vehicle detection and type classification system for performing traffic surveys from a roadside closed-circuit television camera. Vehicles are detected using background subtraction based on a Gaussian mixture model that can cope with vehicles that become stationary over a significant period of time. Vehicle silhouettes are described using a combination of shape and appearance features based on an intensity-based pyramid histogram of orientation gradients (HOG). Classification is performed using a support vector machine trained on a small set of hand-labeled silhouette exemplars. These exemplars are identified using a model-based preclassifier that utilizes calibrated images mapped by Google Earth to provide accurately surveyed scene geometry matched to visible image landmarks. Kalman filters track the vehicles to enable classification by majority voting over several consecutive frames. The system counts vehicles and separates them into four categories: car, van, bus, and motorcycle (including bicycles). Experiments with real-world data have been undertaken to evaluate system performance; vehicle detection rates of 96.45% and classification accuracy of 95.70% have been achieved on this data. The authors gratefully acknowledge the Royal Borough of Kingston for providing the video data. S.A. Velastin is grateful for funding received from the Universidad Carlos III de Madrid, the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement nº 600371, the Ministerio de Economía y Competitividad (COFUND2013-51509), and Banco Santander.
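The per-track majority-voting step described above can be sketched as follows. This is a minimal illustration with hypothetical per-frame labels, not the paper's implementation: each tracked vehicle accumulates one SVM label per frame, and its final category is the most frequent label over the track.

```python
from collections import Counter

def vote_category(per_frame_labels):
    """Return the majority class over consecutive per-frame predictions.

    per_frame_labels: list of class names, one per tracked frame,
        e.g. drawn from {"car", "van", "bus", "motorcycle"}.
    """
    counts = Counter(per_frame_labels)
    label, _ = counts.most_common(1)[0]
    return label

# A single misclassified frame is outvoted by the rest of the track.
vote_category(["car", "van", "car", "car"])  # -> "car"
```

Voting over several consecutive frames is what makes the per-frame SVM errors tolerable: a classifier that is right most of the time per frame becomes far more reliable per track.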
UA-DETRAC: A New Benchmark and Protocol for Multi-Object Detection and Tracking
In recent years, numerous effective multi-object tracking (MOT) methods have
been developed, driven by a wide range of applications. Existing performance
evaluations of MOT methods usually separate the object tracking step from the
object detection step by using the same fixed object detection results for
comparison. In this work, we perform a comprehensive quantitative study of the
effects of object detection accuracy on the overall MOT performance, using the
new large-scale University at Albany DETection and tRACking (UA-DETRAC)
benchmark dataset. The UA-DETRAC benchmark dataset consists of 100 challenging
video sequences captured from real-world traffic scenes (over 140,000 frames
with rich annotations, including occlusion, weather, vehicle category,
truncation, and vehicle bounding boxes) for object detection, object tracking
and MOT systems. We evaluate complete MOT systems constructed from combinations
of state-of-the-art object detection and object tracking methods. Our analysis
shows the complex effects of object detection accuracy on MOT system
performance. Based on these observations, we propose new evaluation tools and
metrics for MOT systems that consider both object detection and object tracking
for comprehensive analysis.
Comment: 18 pages, 11 figures, accepted by CVI
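Scoring detection quality in such evaluations typically rests on intersection-over-union (IoU) overlap between predicted and ground-truth boxes. A minimal sketch of the standard computation follows — not the benchmark's official toolkit:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)
```

A detection is usually counted as a true positive when its IoU with some unmatched ground-truth box exceeds a threshold (0.7 is a common choice for vehicles); sweeping the detector's confidence threshold then traces out the precision-recall behaviour whose interaction with tracking the study quantifies.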
Survey on video anomaly detection in dynamic scenes with moving cameras
The increasing popularity of compact and inexpensive cameras, e.g. dash
cameras, body cameras, and cameras equipped on robots, has sparked a growing
interest in detecting anomalies within dynamic scenes recorded by moving
cameras. However, existing reviews primarily concentrate on Video Anomaly
Detection (VAD) methods assuming static cameras. The VAD literature with moving
cameras remains fragmented, lacking comprehensive reviews to date. To address
this gap, we endeavor to present the first comprehensive survey on Moving
Camera Video Anomaly Detection (MC-VAD). We delve into the research papers
related to MC-VAD, critically assessing their limitations and highlighting
associated challenges. Our exploration encompasses three application domains:
security, urban transportation, and marine environments, which in turn cover
six specific tasks. We compile an extensive list of 25 publicly-available
datasets spanning four distinct environments: underwater, water surface,
ground, and aerial. We summarize the types of anomalies these datasets
correspond to or contain, and present five main categories of approaches for
detecting such anomalies. Lastly, we identify future research directions and
discuss novel contributions that could advance the field of MC-VAD. With this
survey, we aim to offer a valuable reference for researchers and practitioners
striving to develop and advance state-of-the-art MC-VAD methods.
Comment: Under review
Reconstruction of 3D Information about Vehicles Passing in front of a Surveillance Camera
This master's thesis focuses on the 3D reconstruction of vehicles passing in front of a traffic surveillance camera. The calibration process of a surveillance camera is first introduced, and the relation of automatic calibration to 3D information about the observed traffic is described. Furthermore, the Structure from Motion, SLAM, and optical flow algorithms are presented. A set of experiments with feature matching and the Structure from Motion algorithm is carried out to examine results on images of passing vehicles. The Structure from Motion pipeline is then modified: instead of SIFT features, the DeepMatching algorithm is used to obtain quasi-dense point correspondences for the subsequent reconstruction phase. The reconstructed models are further refined by applying additional constraints specific to the vehicle reconstruction task.
The resultant models are then evaluated. Lastly, observations and acquired information about the process of vehicle reconstruction are utilized to form proposals for prospective design of an entirely custom pipeline that would be specialized for 3D reconstruction of passing vehicles.
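A core step of any Structure from Motion pipeline like the one modified here is triangulating a 3D point from a correspondence seen in two calibrated views. The linear (DLT) sketch below is a textbook illustration under idealized cameras, not the thesis's actual code:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point correspondence.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) image coordinates of the point in each view.
    Returns the Euclidean 3D point.
    """
    # Each view contributes two linear constraints on the homogeneous
    # 3D point X, derived from x ~ P X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the smallest
    # singular value (the null direction of A).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With a canonical first camera and a second camera translated one unit along x, a point at depth 5 projects to (0, 0) and (-0.2, 0) respectively, and the routine recovers it exactly; real pipelines add bundle adjustment on top of many such estimates.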
Webcams for Bird Detection and Monitoring: A Demonstration Study
Better insights into bird migration can serve as a tool for assessing the spread of avian-borne infections or ecological and climatological issues reflected in deviating migration patterns. This paper evaluates whether low-budget permanent cameras such as webcams can offer a valuable contribution to the reporting of migratory birds. An experimental design was set up to study the detection capability using objects of different size, color and velocity. The results of the experiment revealed the minimum size, maximum velocity, and contrast of the objects required for detection by a standard webcam. Furthermore, a modular processing scheme was proposed to track and follow migratory birds in webcam recordings. Techniques such as motion detection by background subtraction, stereo vision, and lens-distortion correction were combined to form the foundation of the bird tracking algorithm. Additional research to integrate webcam networks, however, is needed, and future work should strengthen the processing scheme by exploring and testing alternatives for each individual module or processing step.
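The motion-detection module of such a scheme can be illustrated with simple frame differencing — a simplified stand-in for the background subtraction the paper proposes, with all parameter values chosen for illustration:

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=25):
    """Binary motion mask via frame differencing.

    prev_frame, frame: grayscale images as uint8 arrays of equal shape.
    threshold: minimum intensity change (0-255) counted as motion;
        this relates directly to the contrast limits the experiment
        measured for detectable objects.
    """
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold
```

A full background-subtraction model would maintain a running estimate of the static scene instead of using only the previous frame, but the thresholded-difference core, and its dependence on object size, speed, and contrast, is the same.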
Real-time vehicle speed estimation using Unmanned Aerial Vehicles for traffic surveillance
Drones are an emerging tool for traffic surveillance; however, on their own they cannot measure the speed of vehicles on the road. This Bachelor's thesis presents the design, implementation, and study of a system to detect the position, velocity, and type of vehicles using the video stream obtained from drones. The solution is designed to work with any kind of aerial vehicle but is tailored to the drones in the European project LABYRINTH, of which this thesis has been a part. The tool uses the video feed from a single camera and the telemetry data from the drone to detect, track, and project the objects present on the road from the image into the real world, allowing an estimation of their position and speed. The detection and tracking algorithm implemented is Simple Online and Realtime Tracking (SORT). Once the position has been acquired, another stream is generated that displays the same video with the bounding boxes, velocities, and confidence ratings of all identified vehicles, with an overall per-frame computing time shorter than the frame interval. After implementation, the tool underwent testing in a simulated environment to determine its assets and shortcomings, and was used during the LABYRINTH traffic-monitoring flight tests. The Bachelor's thesis achieves its stated objectives with minimal resource utilization, using readily available logic and open-source software to strike a balance between real-time functionality and precise detection of vehicle position.
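The image-to-ground projection step that telemetry enables can be sketched under simplifying assumptions: a nadir-pointing pinhole camera, flat ground, and altitude known from the drone's telemetry. All parameter names are hypothetical, and a real system would also correct for camera tilt and lens distortion:

```python
def pixel_to_ground(u, v, altitude_m, focal_px, cx, cy):
    """Project pixel (u, v) onto the ground plane.

    Assumes a nadir-pointing pinhole camera over flat ground.
    altitude_m: drone height above ground, from telemetry.
    focal_px: focal length in pixels.
    cx, cy: principal point (image centre) in pixels.
    Returns ground-plane offsets (x, y) in metres from the point
    directly below the camera.
    """
    # Similar triangles: a pixel offset of f maps to one altitude's
    # worth of ground distance.
    x = (u - cx) * altitude_m / focal_px
    y = (v - cy) * altitude_m / focal_px
    return x, y
```

Projecting a tracked vehicle's position in consecutive frames this way and differencing the ground coordinates over the frame interval yields the speed estimate the system reports.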