
    Detecting Distracted Driving with Deep Learning

    © Springer International Publishing AG 2017. Driver distraction is the leading factor in most car crashes and near-crashes. This paper discusses the types, causes and impacts of distracted driving. A deep learning approach is then presented for detecting such driving behaviors from images of the driver, in which an enhancement is made to a standard convolutional neural network (CNN). Experimental results on the Kaggle challenge dataset confirm the capability of a CNN in this complicated computer vision task and illustrate the contribution of the enhancement to better pattern recognition accuracy. Peer reviewed.
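    The paper's specific CNN enhancement is not described in this abstract, so the sketch below is only a minimal baseline: a small convolutional classifier for driver images in PyTorch. The layer layout, the 224x224 RGB input size, and the 10-class output (matching the Kaggle distracted-driver challenge format) are assumptions, not the authors' architecture.

```python
# Minimal baseline sketch (not the paper's enhanced CNN): a small
# convolutional classifier for driver images. The 10 distraction classes
# and 224x224 input size are assumptions based on the Kaggle challenge.
import torch
import torch.nn as nn

class DriverCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # global average pooling keeps the head small
            nn.Flatten(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Forward pass on a dummy batch of 4 images.
model = DriverCNN()
logits = model(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 10])
```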

    Video surveillance for monitoring driver's fatigue and distraction

    Fatigue and distraction in drivers represent a great risk to road safety. For both types of driver behavior problem, image analysis of the eyes, mouth and head movements gives valuable information. We present in this paper a system for monitoring fatigue and distraction in drivers by evaluating their performance using image processing. We extract visual features related to nodding, yawning, eye closure and opening, and mouth movements to detect fatigue as well as to identify diversion of attention from the road. Evaluating four video sequences with different drivers, we achieve an average sensitivity and specificity of 98.3% and 98.8% for detection of driver fatigue, and 97.3% and 99.2% for detection of driver distraction.
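    As a small illustration of the reported metrics, the sketch below shows how sensitivity and specificity can be computed from per-frame binary labels (1 = fatigued/distracted, 0 = alert). The labels are made up for demonstration and are not the paper's data.

```python
# Compute sensitivity (true positive rate) and specificity (true negative
# rate) from ground-truth and predicted binary labels. Example labels are
# invented for illustration only.
def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(sensitivity_specificity(y_true, y_pred))  # (0.75, 0.75)
```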

    Owl and Lizard: Patterns of Head Pose and Eye Pose in Driver Gaze Classification

    Accurate, robust, inexpensive gaze tracking in the car can help keep a driver safe by facilitating the more effective study of how to improve (1) vehicle interfaces and (2) the design of future Advanced Driver Assistance Systems. In this paper, we estimate head pose and eye pose from monocular video using methods developed extensively in prior work and ask two new interesting questions. First, how much better can we classify driver gaze using head and eye pose versus just using head pose? Second, are there individual-specific gaze strategies that strongly correlate with how much gaze classification improves with the addition of eye pose information? We answer these questions by evaluating data drawn from an on-road study of 40 drivers. The main insight of the paper is conveyed through the analogy of an "owl" and "lizard", which describes the degree to which the eyes and the head move when shifting gaze. When the head moves a lot ("owl"), not much classification improvement is attained by estimating eye pose on top of head pose. On the other hand, when the head stays still and only the eyes move ("lizard"), classification accuracy increases significantly from adding in eye pose. We characterize how that accuracy varies between people, gaze strategies, and gaze regions. Comment: Accepted for publication in IET Computer Vision. arXiv admin note: text overlap with arXiv:1507.0476
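    To make the paper's central comparison concrete, the sketch below contrasts gaze classification accuracy from head pose alone against head pose plus eye pose. The synthetic data, the yaw/pitch/roll feature layout and the random-forest classifier are illustrative assumptions, not the authors' pipeline.

```python
# Compare gaze-region classification with head pose only versus head pose
# plus eye pose. All data here is randomly generated for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
head_pose = rng.normal(size=(n, 3))   # head yaw, pitch, roll
eye_pose = rng.normal(size=(n, 2))    # eye yaw, pitch
# Synthetic gaze region driven mostly by eye yaw ("lizard"-like behaviour).
gaze_region = (0.3 * head_pose[:, 0] + eye_pose[:, 0] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc_head = cross_val_score(clf, head_pose, gaze_region, cv=5).mean()
acc_both = cross_val_score(clf, np.hstack([head_pose, eye_pose]), gaze_region, cv=5).mean()
print(f"head pose only: {acc_head:.2f}, head + eye pose: {acc_both:.2f}")
```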

    Multi-tasking uncovers right spatial neglect and extinction in chronic left-hemisphere stroke patients

    Unilateral Spatial Neglect, the most dramatic manifestation of contralesional space unawareness, is a highly heterogeneous syndrome. The presence of neglect is related to core spatially lateralized deficits, but its severity is also modulated by several domain-general factors (such as alertness or sustained attention) and by task demands. We previously showed that a computer-based dual-task paradigm exploiting both lateralized and non-lateralized factors (i.e., attentional load/multitasking) better captures this complex scenario and exacerbates deficits for the contralesional space after right hemisphere damage. Here we asked whether multitasking would reveal contralesional spatial disorders in chronic left hemisphere damaged (LHD) stroke patients, a population in which impaired spatial processing is thought to be uncommon. Ten consecutive LHD patients with no signs of right-sided neglect at standard neuropsychological testing performed a computerized spatial monitoring task with and without concurrent secondary tasks (i.e., multitasking). Severe contralesional (right) space unawareness emerged in most patients under attentional load, in both the visual and auditory modalities. Multitasking affected the detection of contralesional stimuli both when presented concurrently with an ipsilesional one (i.e., extinction for bilateral targets) and when presented in isolation (i.e., left neglect for right-sided targets). No spatial bias emerged in a control group of healthy elderly participants, who performed at ceiling, or in a second control group composed of patients with Mild Cognitive Impairment. We conclude that the pathological spatial asymmetry in LHD patients cannot be attributed to a global reduction of cognitive resources but is the consequence of unilateral brain damage. Clinical and theoretical implications of the load-dependent lack of awareness for contralesional hemispace following LHD are discussed.
    Blini, Elvio; Romeo, Zaira; Spironelli, Chiara; Pitteri, Marco; Meneghello, Francesca; Bonato, Mario; Zorzi, Marco