Measurement of Road Traffic Parameters Based on Multi-Vehicle Tracking
Advances in computing power and cheap video cameras have enabled today's
traffic management systems to include more cameras and computer vision
applications for transportation system monitoring and control. Combined with
image processing algorithms, cameras are used as sensors to measure road
traffic parameters such as flow volume and origin-destination matrices, and to
classify vehicles. In this paper we propose a system for measurement of road
traffic parameters (basic motion model parameters and macroscopic traffic
parameters).
The system is based on Local Binary Pattern (LBP) image features classification
with a cascade of Gentle Adaboost (GAB) classifiers to determine vehicle
existence and its location in an image. Additionally, vehicle tracking and
counting in a road traffic video is performed by using Extended Kalman Filter
(EKF) and virtual markers. The newly proposed system is compared with a system
based on background subtraction. The comparison is performed in terms of
execution time and accuracy.
Comment: Part of the Proceedings of the Croatian Computer Vision Workshop, CCVW 2015, Year
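The virtual-marker counting step described in the abstract can be sketched as follows. This is a minimal illustration, assuming the LBP/GAB detection and EKF tracking stages have already produced per-frame centroid tracks; the `count_line_crossings` function name and the track coordinates are hypothetical, not from the paper:

```python
import numpy as np

def count_line_crossings(track, line_y):
    """Count how many times a tracked centroid crosses a horizontal
    virtual marker at image row ``line_y`` (top-to-bottom only)."""
    ys = np.asarray([p[1] for p in track], dtype=float)
    # A crossing occurs when consecutive y-coordinates straddle the marker.
    before = ys[:-1] < line_y
    after = ys[1:] >= line_y
    return int(np.count_nonzero(before & after))

# Hypothetical centroid track of one vehicle moving down the image.
track = [(100, 40), (102, 80), (104, 130), (106, 190)]
print(count_line_crossings(track, line_y=120))  # → 1
```

Summing this count over all tracks at a marker yields a flow-volume measurement for that location.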
Vision-based vehicle detection and tracking in intelligent transportation system
This thesis aims to realize vision-based vehicle detection and tracking in the Intelligent Transportation System. First, it introduces the methods for vehicle detection and tracking. Next, it establishes the sensor fusion framework of the system, including the dynamic model and sensor model. Then, it simulates the traffic scene at a crossroad by a driving simulator, where the research target is one single car, and the traffic scene is ideal. The YOLO Neural Network is applied to the image sequence for vehicle detection. The Kalman filter method, extended Kalman filter method, and particle filter method are utilized and compared for vehicle tracking. The following part is the practical experiment, where there are multiple vehicles at the same time, and the traffic scene is in real life with various interference factors. The YOLO Neural Network combined with OpenCV is adopted to realize real-time vehicle detection. The Kalman filter and extended Kalman filter are applied for vehicle tracking; an identification algorithm is proposed to solve the occlusion of the vehicles. The effects of process noise as well as measurement noise are analysed using a variable-controlling approach. Additionally, perspective transformation is illustrated and implemented to transfer the coordinates from the image plane to the ground plane. If vision-based vehicle detection and tracking can be realized and popularized in daily life, vehicle information can be shared among infrastructures, vehicles, and users, so as to build interactions inside the Intelligent Transportation System.
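The Kalman filter tracking compared in the thesis can be sketched with a minimal constant-velocity filter over 2-D centroid measurements. This is an illustrative assumption-laden sketch, not the thesis's implementation: the noise magnitudes `q` and `r` and the function name are placeholders.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-2, r=1.0):
    """Constant-velocity Kalman filter over 2-D centroid measurements.
    State: [x, y, vx, vy]; returns the filtered positions."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)
    Q = q * np.eye(4)  # process noise covariance (assumed isotropic)
    R = r * np.eye(2)  # measurement noise covariance
    x = np.array([*measurements[0], 0.0, 0.0])
    P = np.eye(4)
    out = []
    for z in measurements:
        # Predict, then correct with the new measurement.
        x = F @ x
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.asarray(z, float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)
```

Varying `q` and `r` here mirrors the thesis's variable-controlling analysis of process versus measurement noise.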
Traffic signs recognition for detailed digital maps development and driver assistance systems
Digital maps are considered an additional sensor in many of the new ADAS, but these systems usually require a higher level of accuracy and detail in the maps. Among the important information that the maps should contain are the road geometry and traffic signs. For the former, accurate and fast measurement methods are of interest. In the paper, a method based on a datalog vehicle is used: satellite positioning and inertial measurement system data are combined, and the dynamic behavior of the vehicle body is corrected by measuring the movements of the suspension system. On the other hand, the information provided by traffic signs and route-guidance signs is extremely important for safe and successful driving. An automatic system capable of extracting and identifying these signs would help human drivers enormously; navigation would be easier, allowing them to concentrate on driving the vehicle. A computer vision system is used to recognize and classify the different families of traffic signs, combining it with GPS information to develop detailed and accurate digital maps. This sign recognition can also be used for real-time warnings to the driver. Some results of tests carried out in real situations are shown.
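The map-building step, pairing each recognized sign with the GPS fix from the same frame, can be sketched as follows. All names and the per-frame dictionary layout are hypothetical assumptions for illustration, not the paper's data structures:

```python
def geotag_signs(detections, gps_fixes):
    """Attach each recognized traffic sign to the GPS fix captured at the
    same frame to build a detailed-map entry.
    detections: {frame: sign_label}; gps_fixes: {frame: (lat, lon)}.
    Returns a list of map entries for frames that have both."""
    entries = []
    for frame, label in sorted(detections.items()):
        if frame in gps_fixes:
            lat, lon = gps_fixes[frame]
            entries.append({"sign": label, "lat": lat, "lon": lon})
    return entries

# Hypothetical classifier outputs and GPS log, keyed by frame number.
detections = {3: "stop", 7: "yield", 9: "speed-50"}
gps_fixes = {3: (40.0, -3.7), 9: (40.1, -3.6)}
print(geotag_signs(detections, gps_fixes))
```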
The highD Dataset: A Drone Dataset of Naturalistic Vehicle Trajectories on German Highways for Validation of Highly Automated Driving Systems
Scenario-based testing for the safety validation of highly automated vehicles
is a promising approach that is being examined in research and industry. This
approach heavily relies on data from real-world scenarios to derive the
necessary scenario information for testing. Measurement data should be
collected at a reasonable effort, contain naturalistic behavior of road users
and include all data relevant for a description of the identified scenarios in
sufficient quality. However, the current measurement methods fail to meet at
least one of the requirements. Thus, we propose a novel method to measure data
from an aerial perspective for scenario-based validation fulfilling the
mentioned requirements. Furthermore, we provide a large-scale naturalistic
vehicle trajectory dataset from German highways called highD. We evaluate the
data in terms of quantity, variety and contained scenarios. Our dataset
consists of 16.5 hours of measurements from six locations with 110 000
vehicles, a total driven distance of 45 000 km and 5600 recorded complete lane
changes. The highD dataset is available online at: http://www.highD-dataset.com
Comment: IEEE International Conference on Intelligent Transportation Systems (ITSC) 201
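One of the evaluated quantities, the number of complete lane changes, can be counted from a vehicle's per-frame lane assignment. This is a hedged sketch: the highD file format is not assumed here, and `lane_ids` is a hypothetical already-extracted input:

```python
def count_lane_changes(lane_ids):
    """Count lane changes as transitions between consecutive distinct
    lane assignments in one vehicle's trajectory."""
    changes = 0
    for prev, cur in zip(lane_ids, lane_ids[1:]):
        if cur != prev:
            changes += 1
    return changes

# Hypothetical per-frame lane IDs: the vehicle moves 2 -> 3 -> 2.
print(count_lane_changes([2, 2, 3, 3, 2]))  # → 2
```

Summed over all recorded vehicles, this yields dataset-level statistics like the 5600 lane changes reported above.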
Traffic Danger Recognition With Surveillance Cameras Without Training Data
We propose a traffic danger recognition model that works with arbitrary
traffic surveillance cameras to identify and predict car crashes. There are too
many cameras to monitor manually. Therefore, we developed a model to predict
and identify car crashes from surveillance cameras based on a 3D reconstruction
of the road plane and prediction of trajectories. For normal traffic, it
supports real-time proactive safety checks of speeds and distances between
vehicles to provide insights about possible high-risk areas. We achieve good
prediction and recognition of car crashes without using any labeled training
data of crashes. Experiments on the BrnoCompSpeed dataset show that our model
can accurately monitor the road, with mean errors of 1.80% for distance
measurement, 2.77 km/h for speed measurement, 0.24 m for car position
prediction, and 2.53 km/h for speed prediction.
Comment: To be published in proceedings of Advanced Video and Signal-based Surveillance (AVSS), 2018 15th IEEE International Conference on, pp. 378-383, IEE
Homography-based ground plane detection using a single on-board camera
This study presents a robust method for ground plane detection in vision-based systems with a non-stationary camera. The proposed method is based on reliable estimation of the homography between ground planes in successive images. This homography is computed using a feature matching approach, which, in contrast to classical approaches to on-board motion estimation, does not require explicit ego-motion calculation. Instead, a novel homography calculation method based on a linear estimation framework is presented. This framework provides predictions of the ground plane transformation matrix that are dynamically updated with new measurements. The method is especially suited for challenging environments, in particular traffic scenarios, in which the information is scarce and the homography computed from the images is usually inaccurate or erroneous. The proposed estimation framework is able to remove erroneous measurements and to correct those that are inaccurate, hence producing a reliable homography estimate at each instant. It is based on the evaluation of the difference between the predicted and the observed transformations, measured according to the spectral norm of the associated matrix of differences. Moreover, an example is provided of how to use the information extracted from ground plane estimation to achieve object detection and tracking. The method has been successfully demonstrated for the detection of moving vehicles in traffic environments.
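The spectral-norm gating of observed against predicted homographies can be sketched as below. This is an assumption-laden illustration: the normalization, the threshold `tau`, and the function name are placeholders, not the study's actual estimator.

```python
import numpy as np

def gate_homography(H_pred, H_meas, tau=0.1):
    """Accept the measured ground-plane homography only if its spectral-norm
    distance from the prediction is below tau; otherwise keep the prediction."""
    # Normalize (Frobenius) so the comparison is scale-independent,
    # since homographies are defined only up to scale.
    Hp = H_pred / np.linalg.norm(H_pred)
    Hm = H_meas / np.linalg.norm(H_meas)
    dist = np.linalg.norm(Hp - Hm, ord=2)  # spectral norm of the difference
    return (H_meas, True) if dist < tau else (H_pred, False)
```

A small perturbation of the predicted transform passes the gate; a grossly erroneous measurement (e.g. from bad feature matches) is rejected and the prediction is kept, which is the correction behavior described above.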
Interaction-aware Kalman Neural Networks for Trajectory Prediction
Forecasting the motion of surrounding obstacles (vehicles, bicycles,
pedestrians, etc.) benefits on-road motion planning for intelligent and
autonomous vehicles. Complex scenes always yield great challenges in modeling
the patterns of surrounding traffic. For example, one main challenge comes from
the intractable interaction effects in a complex traffic system. In this paper,
we propose a multi-layer architecture Interaction-aware Kalman Neural Networks
(IaKNN) which involves an interaction layer for resolving high-dimensional
traffic environmental observations as interaction-aware accelerations, a motion
layer for transforming the accelerations to interaction-aware trajectories, and
a filter layer for estimating future trajectories with a Kalman filter network.
Owing to the multiple traffic data sources, our end-to-end trainable
approach fuses dynamic and interaction-aware trajectories, boosting the
prediction performance. Experiments on the NGSIM dataset demonstrate that
IaKNN outperforms the state-of-the-art methods in terms of effectiveness for
traffic trajectory prediction.
Comment: 8 pages, 4 figures, Accepted for IEEE Intelligent Vehicles Symposium (IV) 202
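The role of the motion layer, turning interaction-aware accelerations into trajectories, can be sketched as a double integration. This is a simplified sketch under assumed inputs; the learned interaction and filter layers of IaKNN are not reproduced, and all names below are hypothetical:

```python
import numpy as np

def integrate_trajectory(p0, v0, accels, dt=0.1):
    """Roll interaction-aware accelerations out to a 2-D trajectory by
    double (Euler) integration; returns positions including the start."""
    p = np.asarray(p0, float).copy()
    v = np.asarray(v0, float).copy()
    traj = [p.copy()]
    for a in np.asarray(accels, float):
        v = v + a * dt   # update velocity from the predicted acceleration
        p = p + v * dt   # advance position with the new velocity
        traj.append(p.copy())
    return np.array(traj)

# Zero acceleration and unit velocity along x: straight-line motion.
print(integrate_trajectory((0, 0), (1, 0), np.zeros((3, 2)), dt=1.0))
```

In the full architecture, a Kalman filter layer would then fuse such dynamics-based trajectories with the interaction-aware ones to estimate the future path.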