2,875 research outputs found

    Assessment of Driver's Attention to Traffic Signs through Analysis of Gaze and Driving Sequences

    A driver's behavior is one of the most significant factors in Advanced Driver Assistance Systems (ADAS). One area that has received little study is how observant drivers actually are in seeing and recognizing traffic signs. In this contribution, we present a system that considers the location where a driver is looking (the point of gaze) as a factor in determining whether the driver has seen a sign. Our system detects and classifies traffic signs inside the driver's attentional visual field to identify whether or not the driver has seen them. Based on the quantitative information this stage provides, our system is able to determine how observant of traffic signs a driver is. For detection, we combine the Maximally Stable Extremal Regions (MSER) algorithm with color information, and use a binary linear Support Vector Machine (SVM) classifier with Histogram of Oriented Gradients (HOG) features. In the classification stage, we use a multi-class SVM classifier, again with HOG features. In addition to detecting and recognizing traffic signs, our system determines whether a sign lies inside the driver's attentional visual field: if it does, the driver has kept his or her gaze on the sign and has seen it; if not, the driver did not look at the sign and the sign has been missed.
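    A minimal sketch of the detection stage described above, assuming OpenCV and scikit-learn; the red-channel MSER step, the 64x64 patch size, and the training data are illustrative assumptions rather than the paper's exact setup:

        import cv2
        import numpy as np
        from sklearn.svm import LinearSVC

        hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
        mser = cv2.MSER_create()

        def candidate_boxes(bgr):
            # MSER on the red channel as a stand-in for the color cue
            regions, boxes = mser.detectRegions(bgr[:, :, 2])
            return boxes                      # (x, y, w, h) sign candidates

        def hog_feature(bgr, box):
            x, y, w, h = box
            patch = cv2.resize(bgr[y:y + h, x:x + w], (64, 64))
            return hog.compute(patch).ravel()

        # Training: X = HOG features of sign/non-sign patches, y = {1, 0}
        # clf = LinearSVC().fit(X, y)
        # Detection: keep the boxes whose HOG feature clf labels as a sign.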

    Sparse Coding of Weather and Illuminations for ADAS and Autonomous Driving

    Weather and illumination are critical factors in vision tasks such as road detection, vehicle recognition, and active lighting for autonomous vehicles and ADAS. Understanding the weather and illumination type in a vehicle's driving view can guide visual sensing, control the vehicle's headlights and speed, etc. This paper uses a sparse coding technique to identify weather types in driving video, given a set of bases drawn from video samples covering a full spectrum of weather and illumination conditions. We sample traffic- and architecture-insensitive regions in each video frame for features and obtain clusters of weather and illumination conditions via unsupervised learning. Then, a set of keys is selected carefully according to the visual appearance of road and sky. For an input video, the sparse code of each frame is calculated to represent the vehicle's view robustly under a specific illumination. The linear combination of the bases from these keys yields the weather type, which can serve road recognition, active lighting, intelligent vehicle control, etc.
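    A minimal sketch of the frame-classification step, assuming scikit-learn's SparseCoder; the dictionary files, the per-basis labels, and the OMP sparsity level are hypothetical stand-ins for the paper's learned keys:

        import numpy as np
        from sklearn.decomposition import SparseCoder

        D = np.load("weather_bases.npy")      # (n_bases, n_features), unit-norm rows
        labels = np.load("base_labels.npy")   # weather label of each basis

        coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                            transform_n_nonzero_coefs=5)

        def weather_type(feature):
            # Sparse-code one frame feature, then vote among the active bases
            code = coder.transform(feature.reshape(1, -1))[0]
            votes = {}
            for i in np.flatnonzero(code):
                votes[labels[i]] = votes.get(labels[i], 0.0) + abs(code[i])
            return max(votes, key=votes.get)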

    Advanced traffic video analytics for robust traffic accident detection

    Automatic traffic accident detection is an important task in traffic video analysis due to its key applications in developing intelligent transportation systems. Reducing the time delay between the occurrence of an accident and the dispatch of the first responders to the scene may help lower the mortality rate and save lives. Since 1980, many approaches have been presented for the automatic detection of incidents in traffic videos. In this dissertation, some challenging problems for accident detection in traffic videos are discussed, and a new framework is presented to automatically detect single-vehicle and intersection traffic accidents in real time. First, a new foreground detection method is applied to detect the moving vehicles and subtract the ever-changing background in traffic video frames captured by static or non-stationary cameras. For traffic videos captured during the daytime, cast shadows degrade the performance of foreground detection and road segmentation. A novel cast shadow detection method is therefore presented to detect and remove the shadows cast by moving vehicles, as well as the shadows cast by static objects on the road. Second, a new method is presented to detect the region of interest (ROI); it uses the locations of the moving vehicles and the initial road samples, and extracts discriminating features to segment the road region. After the ROI is detected, the moving direction of the traffic is estimated based on the rationale that crashed vehicles often make a rapid change of direction. Lastly, single-vehicle traffic accidents and trajectory conflicts are detected using a first-order logic decision-making system. Experimental results on publicly available videos and a dataset provided by the New Jersey Department of Transportation (NJDOT) demonstrate the feasibility of the proposed methods. Additionally, the main challenges and future directions are discussed regarding (i) improving the performance of the foreground segmentation, (ii) reducing the computational complexity, and (iii) detecting other types of traffic accidents.
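    A minimal sketch of the foreground/shadow step, assuming OpenCV's stock MOG2 background subtractor in place of the dissertation's own foreground and shadow methods; the input file name is a placeholder:

        import cv2

        cap = cv2.VideoCapture("traffic.mp4")
        bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = bg.apply(frame)     # 255 = foreground, 127 = detected shadow
            # Threshold above the shadow value to keep only moving vehicles
            vehicles = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
            cv2.imshow("moving vehicles, shadows removed", vehicles)
            if cv2.waitKey(1) == 27:   # Esc quits
                break
        cap.release()
        cv2.destroyAllWindows()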

    Semantic Segmentation of Road Profiles for Efficient Sensing in Autonomous Driving

    In vision-based autonomous driving, the spatial layout of road and traffic must be understood at each moment. This involves detecting the road, vehicles, pedestrians, etc. in images. In driving video, the spatial positions of the various patterns are further tracked for their motion. This spatial-to-temporal approach inherently demands large computational resources. In this work, however, we take a temporal-to-spatial approach to cope with fast-moving vehicles in autonomous navigation. We sample a one-pixel line in each frame of the driving video, and the temporal congregation of lines from consecutive frames forms a road profile image. The temporal connection of lines also provides layout information about the road and the surrounding environment. This method reduces the data to be processed to a fraction of the video in order to keep up with the vehicle's moving speed. The key issue is then to identify the different regions in the road profile; the profile is divided in real time into road, roadside, lane mark, vehicle, etc., as well as motion events such as stopping and turning of the ego-vehicle. We show in this paper that the road profile can be learned through semantic segmentation. We use RGB-F images of the road profile to implement semantic segmentation, capturing both individual regions and their spatial relations on the road effectively. We have tested our method on naturalistic driving video, and the results are promising.
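    A minimal sketch of the temporal-to-spatial sampling described above, assuming OpenCV; the fixed scan-line height and the file names are illustrative assumptions:

        import cv2
        import numpy as np

        def build_road_profile(video_path, row_frac=0.75):
            # Stack one pixel line per frame into a (time, width, 3) image
            cap = cv2.VideoCapture(video_path)
            lines = []
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                row = int(frame.shape[0] * row_frac)  # scan line over the road
                lines.append(frame[row])
            cap.release()
            return np.stack(lines)

        profile = build_road_profile("drive.mp4")
        cv2.imwrite("road_profile.png", profile)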

    Detecting Vehicle Interactions in Driving Videos via Motion Profiles

    Identifying the interactions of vehicles on the road is important for accident analysis and driving behavior assessment. The interactions we consider include those with passing/passed, cut-in, crossing, frontal, oncoming, and parallel-driving vehicles, as well as ego-vehicle actions such as changing lane, stopping, turning, and speeding. We use the visual motion recorded in driving video taken by a dashboard camera to identify such interactions. Motion profiles from the videos are filtered at critical positions, which avoids the complexity of object detection, depth sensing, target tracking, and motion estimation. The results are obtained efficiently, and the accuracy is also acceptable. They can be used in driving video mining, traffic analysis, driver behavior understanding, etc.
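    A minimal sketch of how vehicle traces can be read out of a motion profile, assuming a profile built by stacking one grayscale scan line per frame as above; the gradient-based orientation measure is an illustrative assumption, not the paper's filter:

        import cv2
        import numpy as np

        def trace_orientation(profile):
            # profile: float32 (time, width) stack of gray scan lines.
            # A vehicle appears as a slanted streak; the streak's slope sign
            # hints at passing vs. passed relative to the ego-vehicle.
            gx = cv2.Sobel(profile, cv2.CV_32F, 1, 0)  # change across width
            gt = cv2.Sobel(profile, cv2.CV_32F, 0, 1)  # change over time
            return np.arctan2(gt, gx)                  # per-pixel trace slope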