
    Development of Automated Incident Detection System Using Existing ATMS CCTV

    Indiana Department of Transportation (INDOT) has over 300 digital cameras along highways in populated areas of Indiana. These cameras are used to monitor traffic conditions around the clock, all year round. Currently, the videos from these cameras are observed by human operators. The main objective of this research, carried out by a collaborative team from the Transportation Active Safety Institute (TASI) at Indiana University-Purdue University Indianapolis (IUPUI) and the Traffic Management Center (TMC) of INDOT, is to develop an automatic real-time system to monitor traffic conditions using the INDOT CCTV video feeds. In this project, the research team developed the system architecture based on a detailed system requirement analysis, and a first prototype of the major system components has been implemented. Specifically, the team has successfully accomplished the following: an AI-based deep learning detector, YOLOv3, was selected for vehicle detection, as it generates the best results for daytime videos; the tracking information of moving vehicles is used to derive the locations of roads and lanes; a database was designed as the central place to gather and distribute the information generated from all camera videos and provides all the information needed for traffic incident detection; and a web-based Graphical User Interface (GUI) was developed. Automatic traffic incident detection will be implemented once the traffic flow information can be derived accurately. The research team is currently integrating the prototypes of all system components to establish a complete system prototype.
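    The abstract names YOLOv3 as the vehicle detector. The following is a minimal, illustrative sketch of that step using OpenCV's DNN module with standard YOLOv3 weights; the file names, thresholds, and vehicle class set are assumptions for illustration, not the project's actual configuration.

```python
# Minimal sketch: vehicle detection on a single CCTV frame with a YOLOv3 model
# loaded through OpenCV's DNN module. File names and thresholds are assumed.
import cv2
import numpy as np

COCO_VEHICLE_IDS = {2, 3, 5, 7}  # car, motorbike, bus, truck in COCO class order

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
output_layers = net.getUnconnectedOutLayersNames()

def detect_vehicles(frame, conf_threshold=0.5, nms_threshold=0.4):
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(output_layers)

    boxes, confidences = [], []
    for output in outputs:
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if class_id in COCO_VEHICLE_IDS and confidence >= conf_threshold:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(confidence)

    # Non-maximum suppression to drop overlapping duplicate boxes.
    keep = cv2.dnn.NMSBoxes(boxes, confidences, conf_threshold, nms_threshold)
    return [boxes[i] for i in np.array(keep).flatten()] if len(keep) else []

# Usage: read one frame from a camera feed and print the detected vehicle boxes.
# cap = cv2.VideoCapture("camera_feed.mp4")
# ok, frame = cap.read()
# print(detect_vehicles(frame))
```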

    Advanced traffic video analytics for robust traffic accident detection

    Automatic traffic accident detection is an important task in traffic video analysis due to its key applications in developing intelligent transportation systems. Reducing the time delay between the occurrence of an accident and the dispatch of the first responders to the scene may help lower the mortality rate and save lives. Since 1980, many approaches have been presented for the automatic detection of incidents in traffic videos. In this dissertation, some challenging problems for accident detection in traffic videos are discussed and a new framework is presented to automatically detect single-vehicle and intersection traffic accidents in real time. First, a new foreground detection method is applied to detect the moving vehicles and subtract the ever-changing background in traffic video frames captured by static or non-stationary cameras. For traffic videos captured during daytime, cast shadows degrade the performance of foreground detection and road segmentation. A novel cast shadow detection method is therefore presented to detect and remove the shadows cast by moving vehicles as well as the shadows cast by static objects on the road. Second, a new method is presented to detect the region of interest (ROI): it uses the locations of the moving vehicles and the initial road samples and extracts discriminating features to segment the road region. After detecting the ROI, the moving direction of the traffic is estimated, based on the rationale that crashed vehicles often make a rapid change of direction. Lastly, single-vehicle traffic accidents and trajectory conflicts are detected using a first-order logic decision-making system. The experimental results using publicly available videos and a dataset provided by the New Jersey Department of Transportation (NJDOT) demonstrate the feasibility of the proposed methods. Additionally, the main challenges and future directions are discussed regarding (i) improving the performance of the foreground segmentation, (ii) reducing the computational complexity, and (iii) detecting other types of traffic accidents.
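    The dissertation's foreground detection and shadow removal are its own methods; as a baseline illustration of the same step, the sketch below uses OpenCV's standard MOG2 background subtractor, whose built-in shadow marking stands in crudely for cast-shadow removal. All parameter values are assumptions.

```python
# Baseline sketch of foreground/background separation for traffic video using
# OpenCV's MOG2 subtractor (not the dissertation's own foreground model).
import cv2

def moving_vehicle_masks(video_path):
    cap = cv2.VideoCapture(video_path)
    # detectShadows=True marks shadow pixels with value 127 so they can be dropped.
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                    detectShadows=True)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Keep only confident foreground (255), discarding shadow pixels (127).
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
        yield frame, mask
    cap.release()

# Usage: iterate over (frame, foreground_mask) pairs for a traffic clip.
# for frame, mask in moving_vehicle_masks("traffic.mp4"):
#     cv2.imshow("foreground", mask); cv2.waitKey(1)
```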

    Traffic monitoring using image processing : a thesis presented in partial fulfillment of the requirements for the degree of Master of Engineering in Information and Telecommunications Engineering at Massey University, Palmerston North, New Zealand

    Traffic monitoring involves the collection of data describing the characteristics of vehicles and their movements. Such data may be used for automatic tolls, congestion and incident detection, law enforcement, and road capacity planning, among other applications. With the recent advances in computer vision technology, videos can be analysed automatically and relevant information can be extracted for particular applications. Automatic surveillance using video cameras with image processing techniques is becoming a powerful and useful technology for traffic monitoring. In this research project, a video image processing system for traffic monitoring, including vehicle tracking, counting, and classification, is developed with the potential to be extended to real-time applications. A heuristic approach is applied in developing this system. The system is divided into several parts, and several different functional components have been built and tested using traffic video sequences. Evaluations are carried out to show that this system is robust and can be developed towards real-time applications.
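    One common heuristic in such counting systems is to register a vehicle when its tracked centroid crosses a virtual line on the road. The sketch below illustrates that idea only; the track format and line position are hypothetical and assume an upstream detector/tracker has already produced per-frame centroids.

```python
# Illustrative heuristic: count vehicles whose tracked centroids cross a
# virtual line (assumed track format, not the thesis's actual pipeline).
def count_line_crossings(tracks, line_y):
    """tracks: {track_id: [(frame_idx, cx, cy), ...]} sorted by frame.
    Counts tracks whose centroid passes from above to below line_y."""
    count = 0
    for points in tracks.values():
        ys = [cy for _, _, cy in points]
        # A downward crossing happens between two consecutive centroid samples.
        if any(a < line_y <= b for a, b in zip(ys, ys[1:])):
            count += 1
    return count

# Usage with two toy trajectories (only the second crosses y=200 downwards):
tracks = {
    1: [(0, 50, 180), (1, 52, 190), (2, 54, 195)],
    2: [(0, 300, 150), (1, 302, 198), (2, 305, 220)],
}
print(count_line_crossings(tracks, line_y=200))  # -> 1
```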

    Computer supported estimation of input data for transportation models

    Control and management of transportation systems frequently rely on optimization or simulation methods based on a suitable model. Such a model requires optimization or simulation procedures and correct input data. The input data define the transportation infrastructure and the transportation flows. Data acquisition is a costly process, so an efficient approach is highly desirable. The infrastructure can be recognized from drawn maps using segmentation, thinning, and vectorization; the accurate definition of network topology and node positions is the crucial part of the process. Transportation flows can be analyzed through vehicle behavior observed in video sequences of typical traffic situations. The resulting information consists of vehicle position, actual speed, and acceleration along the road section. Data for individual vehicles are statistically processed, and standard vehicle characteristics can be recommended for the vehicle generator in simulation models.
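    As a small illustration of deriving the per-vehicle speed and acceleration mentioned above from video-extracted positions, the sketch below differentiates a pixel trajectory after a simple metric calibration. The frame rate and metres-per-pixel scale are assumed values, not from the paper.

```python
# Sketch: turn per-frame vehicle positions (pixels) into speed and acceleration
# profiles for statistical processing. Calibration values are assumptions.
import numpy as np

def speed_acceleration_profile(positions_px, fps=25.0, metres_per_pixel=0.05):
    """positions_px: (N, 2) sequence of per-frame (x, y) positions of one vehicle."""
    positions_m = np.asarray(positions_px, dtype=float) * metres_per_pixel
    dt = 1.0 / fps
    velocity = np.diff(positions_m, axis=0) / dt   # per-axis velocity, m/s
    speed = np.linalg.norm(velocity, axis=1)       # scalar speed per step
    acceleration = np.diff(speed) / dt             # m/s^2 along the path
    return speed, acceleration

# Usage: summarise one toy straight-line trajectory for a vehicle generator.
positions = [(100 + 3 * i, 240) for i in range(50)]
speed, accel = speed_acceleration_profile(positions)
print(speed.mean(), speed.std(), accel.mean())
```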

    Measuring traffic flow and lane changing from semi-automatic video processing

    Comprehensive databases are needed in order to extend our knowledge of the behavior of vehicular traffic. Nevertheless, data coming from common traffic detectors are incomplete: detectors only provide vehicle count, detector occupancy, and speed at discrete locations. To enrich these databases, additional measurements from other data sources, like video recordings, are used. Extracting data from videos by watching the entire length of the recordings and manually counting is extremely time-consuming. The alternative is to set up an automatic video detection system, which is also costly in terms of money and time and generally does not pay off for sporadic usage on a pilot test. An adaptation of the semi-automatic video processing methodology proposed by Patire (2010) is presented here. It makes it possible to count flow and lane changes 90% faster than counting them by watching the video. The method consists of selecting specific pixel lines in the video and converting them into a set of space-time images; the manual time is only spent counting from these images. The method is adaptive, in the sense that the counting is always done at the maximum speed, not constrained by the video playback speed: counting goes faster when there are few counts and slower when many counts happen. This methodology has been used for measuring off-ramp flows and lane changing at several locations on the B-23 freeway (Soriguera & Sala, 2014). Results show that, as long as the video recordings fulfill some minimum requirements in framing and quality, the method is easy to use, fast, and reliable. This method is intended for research purposes, when a few hours of video recording have to be analyzed, not for long-term use in a Traffic Management Center.
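    The core operation described above, sampling a fixed line of pixels from every frame and stacking the samples into a space-time image for manual counting, can be sketched as follows. The scan-line coordinates and file names are illustrative assumptions, not the paper's setup.

```python
# Sketch: build a space-time image by stacking one horizontal scan line per
# frame; vehicles crossing the line appear as streaks that can be counted by eye.
import cv2
import numpy as np

def space_time_image(video_path, row=240, col_start=0, col_end=640):
    """Return a (frames x width) grayscale image, one video frame per row."""
    cap = cv2.VideoCapture(video_path)
    lines = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        lines.append(gray[row, col_start:col_end])
    cap.release()
    return np.vstack(lines)

# Usage: build and save the image, then count vehicle streaks manually.
# sti = space_time_image("offramp_camera.mp4", row=300)
# cv2.imwrite("space_time.png", sti)
```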

    Driver Drowsiness Detection Based on Face Sensing Using PCA and SVM

    Drowsiness while driving is one of the main causes of traffic accidents, as it affects the driver's level of focus. Therefore, an automatic drowsiness detection mechanism is needed to warn the driver so that an accident can be avoided. In this study, we design and simulate a system to detect drowsiness through the driver's yawning expression. Acquisition is performed by recording the face from two camera positions in the car: the dashboard and the front mirror. From the video recordings, several images of 128x82 pixels are extracted and used as training and testing data. These images are then processed using Principal Component Analysis (PCA) for feature extraction and classified using a Support Vector Machine (SVM). In the tests carried out, the system achieves a highest accuracy of 98%. This best performance is obtained by the SVM with a polynomial kernel using the dashboard camera position. Meanwhile, compression testing shows that images reduced to 25% of their original size can still meet the system requirements. It is hoped that the proposed drowsiness detection method can be applied for real-time drowsiness detection in vehicles.
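    A minimal sketch of the PCA-plus-SVM pipeline described above, written with scikit-learn. The random array stands in for the real 128x82 face crops, and the component count and kernel degree are illustrative choices, not the study's tuned parameters.

```python
# Sketch of a PCA feature extractor feeding a polynomial-kernel SVM classifier,
# as described in the abstract. Data and hyperparameters are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# X: flattened grayscale face images, y: 1 = yawning (drowsy), 0 = alert.
rng = np.random.default_rng(0)
X = rng.random((200, 128 * 82))          # placeholder for the real face crops
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    PCA(n_components=50),                 # eigenface-style feature extraction
    SVC(kernel="poly", degree=3, C=1.0),  # polynomial-kernel classifier
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```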