
    Nighttime Driver Behavior Prediction Using Taillight Signal Recognition via CNN-SVM Classifier

    This paper aims to enhance the ability to predict nighttime driving behavior by identifying the taillights of both human-driven and autonomous vehicles. The proposed model incorporates a customized detector designed to accurately detect front-vehicle taillights on the road. At the start of the detector, a learnable pre-processing block extracts deep features from input images and calculates the data rarity for each feature. In the next step, drawing inspiration from soft attention, a weighted binary mask is designed that guides the model to focus on predetermined regions. This research uses Convolutional Neural Networks (CNNs) to extract distinguishing characteristics from these areas, then reduces dimensionality using Principal Component Analysis (PCA). Finally, a Support Vector Machine (SVM) is used to predict the behavior of the vehicles. To train and evaluate the model, a large-scale dataset is collected from two camera types, dash-cams and Insta360 cameras, capturing the rear view of Ford Motor Company vehicles. This dataset includes over 12k frames captured during both daytime and nighttime hours. To address the limited nighttime data, a unique pixel-wise image processing technique is implemented to convert daytime images into realistic night images. The experiments demonstrate that the proposed methodology categorizes vehicle behavior with 92.14% accuracy, 97.38% specificity, 92.09% sensitivity, 92.10% F1-measure, and 0.895 Cohen's Kappa statistic. Further details are available at https://github.com/DeepCar/Taillight_Recognition. Comment: 12 pages, 10 figures
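    The classification stage described above, CNN features reduced with PCA and then fed to an SVM, can be sketched with scikit-learn. The feature dimension, number of components, class labels, and random features below are illustrative stand-ins, not the paper's actual detector output:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Stand-in for deep features extracted from detected taillight regions
    # (the paper's CNN and feature dimensions are not specified here).
    X = rng.normal(size=(600, 256))
    y = rng.integers(0, 3, size=600)  # e.g. brake / turn-left / turn-right

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # PCA for dimensionality reduction, then an SVM classifier, mirroring
    # the CNN-features -> PCA -> SVM pipeline the abstract describes.
    clf = make_pipeline(PCA(n_components=32), SVC(kernel="rbf"))
    clf.fit(X_tr, y_tr)
    preds = clf.predict(X_te)
    ```

    With real data, the random matrix would be replaced by features pulled from the CNN's penultimate layer for each detected taillight crop.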

    Low-light Pedestrian Detection in Visible and Infrared Image Feeds: Issues and Challenges

    Pedestrian detection has become a cornerstone for several high-level tasks, including autonomous driving, intelligent transportation, and traffic surveillance. Several works have focused on pedestrian detection using visible images, mainly in the daytime. However, the task becomes far more challenging when environmental conditions change to poor lighting or nighttime. Recently, new ideas have emerged to use alternative sources, such as Far InfraRed (FIR) temperature sensor feeds, for detecting pedestrians in low-light conditions. This study comprehensively reviews recent developments in low-light pedestrian detection approaches. It systematically categorizes and analyses various algorithms, from region-based to non-region-based and graph-based learning methodologies, highlighting their approaches, implementation issues, and challenges. It also outlines the key benchmark datasets that can be used for research and development of advanced pedestrian detection algorithms, particularly in low-light situations

    Detection of road traffic participants using cost-effective arrayed ultrasonic sensors in low-speed traffic situations

    Effective detection of traffic participants is crucial for driver assistance systems. Traffic safety data reveal that the majority of preventable pedestrian fatalities occur at night. The lack of light at night can impair sensors such as cameras. This paper proposes an alternative approach to detecting traffic participants using cost-effective arrayed ultrasonic sensors. Candidate features were extracted from collected episodes of pedestrians, cyclists, and vehicles. A conditional likelihood maximization method based on mutual information was employed to select an optimized subset of features from the candidates. The probability of belonging to each group over time was determined from the accumulated object-type attributes output by a support vector machine classifier at each time step. Results showed an overall detection accuracy of 86%, with correct detection rates for pedestrians, cyclists, and vehicles of about 85.7%, 76.7%, and 93.1%, respectively. The time needed for detection was about 0.8 s and could be further shortened when the distance between objects and sensors was smaller. The effectiveness of arrayed ultrasonic sensors for object detection would provide around-the-clock assistance in low-speed situations for driving safety
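    The two-stage scheme above, mutual-information-based feature selection feeding an SVM whose per-class probabilities are accumulated over time, can be sketched as follows. The feature counts, class labels, and random data are illustrative stand-ins for the paper's ultrasonic echo features:

    ```python
    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    # Stand-in for candidate features extracted from ultrasonic episodes.
    X = rng.normal(size=(300, 40))
    y = rng.integers(0, 3, size=300)  # pedestrian / cyclist / vehicle

    # Mutual-information-based feature selection followed by an SVM,
    # loosely mirroring the conditional-likelihood-maximization step.
    clf = make_pipeline(
        SelectKBest(mutual_info_classif, k=10),
        SVC(probability=True),
    )
    clf.fit(X, y)

    # Accumulate per-class probabilities over successive time steps and
    # declare the class whose accumulated evidence is highest.
    frames = rng.normal(size=(8, 40))  # one object observed over 8 steps
    accumulated = clf.predict_proba(frames).sum(axis=0)
    decision = int(np.argmax(accumulated))
    ```

    Accumulating evidence over steps is what lets the reported detection time shrink as objects get closer: nearer objects yield more confident per-step probabilities, so the accumulated score separates sooner.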

    THE ACCURACY OF OBSERVERS' ESTIMATES OF THE EFFECT OF GLARE ON NIGHTTIME VISION: DO WE EXAGGERATE THE DISABLING EFFECTS OF GLARE?

    Designing headlights involves balancing two conflicting goals: maximizing visibility for the driver and minimizing the disabling effects of glare for other drivers. Complaints of headlight glare have increased recently. This project explored the relationship between subjective (discomfort and expected visual problems) and objective (actual visual problems) consequences of glare. Two experiments, a lab-based psychophysical study and a field study, quantified the accuracy of observers' estimates of the effects of glare on their acuity. In both experiments, participants over-estimated the extent to which glare degraded their ability to see a small high-contrast target. Observers' estimates of the disabling effects of glare were more tightly linked with subjective reports of glare-induced visual discomfort than with objective measures of glare-induced visual problems

    Switching GAN-based Image Filters to Improve Perception for Autonomous Driving

    Autonomous driving holds the potential to increase human productivity, reduce accidents caused by human error, allow better utilization of roads, ease traffic congestion, free up parking space, and provide many other advantages. Perception for Autonomous Vehicles (AVs) refers to the use of sensors to perceive the world, e.g. using cameras to detect and classify objects. Traffic scene understanding is a key research problem in perception for autonomous driving, and semantic segmentation is a useful method for addressing it. Adverse weather is a reality that AVs must contend with: conditions like rain, snow, and haze can drastically reduce visibility and thus degrade computer vision models. Perception models for AVs are currently designed for, and tested on, predominantly ideal weather conditions under good illumination. The most complete solution might be to train the segmentation networks on all possible adverse conditions, so a dataset for making a segmentation network robust to rain would need adequate data covering those conditions well. Moreover, labeling is an expensive task. It is particularly expensive for semantic segmentation, as each object in a scene must be identified and each pixel annotated with the right class. Adverse weather is therefore a challenging problem for perception models in AVs. This thesis explores the use of Generative Adversarial Networks (GANs) to improve semantic segmentation. We design a framework and a methodology to evaluate the proposed approach. The framework consists of an Adversity Detector and a series of denoising filters. The Adversity Detector is an image classifier that takes clear-weather or adverse-weather scenes as input and attempts to predict whether the given image contains rain, puddles, or other conditions that can adversely affect semantic segmentation.
The filters are denoising generative adversarial networks trained to remove the adverse conditions from images, translating each image into the domain the segmentation network was trained on, i.e. clear-weather images. We use the prediction from the Adversity Detector to choose which GAN filter to apply. The methodology we devise for evaluating our approach uses the trained filters to output sets of images on which we then run segmentation tasks. This, we argue, is a better metric for evaluating the GANs than similarity measures such as SSIM. We also use synthetic data so we can perform systematic evaluation of our technique. We train two kinds of GANs, one that uses paired data (Pix2Pix) and one that does not (CycleGAN). We concluded that GAN architectures that use unpaired data are not sufficiently good models for denoising. We trained the denoising filters using the paired architecture, found them easy to train, and they show good results. While these filters do not outperform a segmentation network trained directly on adverse-weather data, training the segmentation network requires labeled data, which is expensive to collect and annotate, particularly for adverse weather and lighting conditions. We implement our proposed framework and report a 17% increase in segmentation performance over the baseline obtained without our framework
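    The switching idea at the heart of this framework, a classifier routing each frame to a matching denoising filter before segmentation, can be sketched minimally. The detector, filters, threshold, and condition names below are illustrative stand-ins, not the thesis's actual trained models:

    ```python
    from typing import Callable, Dict, List

    Image = List[float]  # placeholder for a real image tensor

    def adversity_detector(img: Image) -> str:
        # Stand-in: the real Adversity Detector is an image classifier
        # predicting which adverse condition, if any, is present.
        return "rain" if sum(img) > 10 else "clear"

    def rain_filter(img: Image) -> Image:
        # Stand-in for a Pix2Pix-style denoising GAN trained on rain.
        return [max(0.0, v - 1.0) for v in img]

    FILTERS: Dict[str, Callable[[Image], Image]] = {
        "rain": rain_filter,
        "clear": lambda img: img,  # clear-weather frames pass through
    }

    def preprocess(img: Image) -> Image:
        # Route the frame through the filter matching the detected
        # condition, then hand the result to the segmentation network.
        condition = adversity_detector(img)
        return FILTERS[condition](img)

    out = preprocess([5.0, 4.0, 3.0])  # sum > 10, routed via rain filter
    ```

    The dictionary dispatch makes the framework extensible: adding a puddle or snow filter is just another trained GAN registered under a new condition label.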

    Tennessee Highway Safety Office Highway Safety Plan FFY 2021


    Computer vision for advanced driver assistance systems
