
    Owl and Lizard: Patterns of Head Pose and Eye Pose in Driver Gaze Classification

    Accurate, robust, inexpensive gaze tracking in the car can help keep a driver safe by facilitating the more effective study of how to improve (1) vehicle interfaces and (2) the design of future Advanced Driver Assistance Systems. In this paper, we estimate head pose and eye pose from monocular video using methods developed extensively in prior work and ask two new questions. First, how much better can we classify driver gaze using head and eye pose versus just using head pose? Second, are there individual-specific gaze strategies that strongly correlate with how much gaze classification improves with the addition of eye pose information? We answer these questions by evaluating data drawn from an on-road study of 40 drivers. The main insight of the paper is conveyed through the analogy of an "owl" and "lizard", which describes the degree to which the eyes and the head move when shifting gaze. When the head moves a lot ("owl"), not much classification improvement is attained by estimating eye pose on top of head pose. On the other hand, when the head stays still and only the eyes move ("lizard"), classification accuracy increases significantly from adding in eye pose. We characterize how that accuracy varies between people, gaze strategies, and gaze regions.
    Comment: Accepted for Publication in IET Computer Vision. arXiv admin note: text overlap with arXiv:1507.0476
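
    A minimal sketch of the comparison the first question implies: train the same classifier on head-pose features alone and on head plus eye pose, then compare accuracy. The feature layout, the random-forest classifier, and the placeholder data are assumptions for illustration, not the authors' pipeline.

        # Minimal sketch (not the authors' code): compare gaze-region classification
        # accuracy from head pose alone vs. head pose + eye pose.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 1000
        head_pose = rng.normal(size=(n, 3))   # yaw, pitch, roll per frame (placeholder data)
        eye_pose = rng.normal(size=(n, 2))    # horizontal/vertical eye angle (placeholder data)
        gaze_region = rng.integers(0, 6, n)   # e.g. road, mirrors, instrument cluster, ...

        clf = RandomForestClassifier(n_estimators=100, random_state=0)

        acc_head = cross_val_score(clf, head_pose, gaze_region, cv=5).mean()
        acc_both = cross_val_score(clf, np.hstack([head_pose, eye_pose]), gaze_region, cv=5).mean()

        # For an "owl" driver (large head movement) acc_both - acc_head tends to be small;
        # for a "lizard" driver (eyes move while the head stays still) the gap is large.
        print(f"head only: {acc_head:.3f}, head+eyes: {acc_both:.3f}")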

    Multimodal Polynomial Fusion for Detecting Driver Distraction

    Distracted driving is deadly, claiming 3,477 lives in the U.S. in 2015 alone. Although there has been a considerable amount of research on modeling the distracted behavior of drivers under various conditions, accurate automatic detection using multiple modalities, and in particular the contribution of the speech modality to detection accuracy, has received little attention. This paper introduces a new multimodal dataset for distracted driving behavior and discusses automatic distraction detection using features from three modalities: facial expression, speech, and car signals. Detailed multimodal feature analysis shows that adding more modalities monotonically increases the predictive accuracy of the model. Finally, a simple and effective multimodal fusion technique using a polynomial fusion layer shows superior distraction detection results compared to the baseline SVM and neural network models.
    Comment: INTERSPEECH 201
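
    The abstract does not specify the polynomial fusion layer itself; the sketch below shows one plausible reading, assuming per-modality linear projections combined through first-order (additive) and second-order (element-wise product) cross-modal terms before classification. The dimensions and the PyTorch implementation are assumptions.

        # Minimal sketch (an assumption, not the paper's exact layer): second-order
        # polynomial fusion of three modality embeddings (face, speech, car signals).
        import torch
        import torch.nn as nn

        class PolynomialFusion(nn.Module):
            def __init__(self, dims, hidden=64, n_classes=2):
                super().__init__()
                # one linear projection per modality
                self.proj = nn.ModuleList(nn.Linear(d, hidden) for d in dims)
                self.classifier = nn.Linear(hidden, n_classes)

            def forward(self, xs):
                zs = [p(x) for p, x in zip(self.proj, xs)]
                fused = sum(zs)                        # first-order (additive) terms
                for i in range(len(zs)):
                    for j in range(i + 1, len(zs)):
                        fused = fused + zs[i] * zs[j]  # second-order cross-modal terms
                return self.classifier(torch.tanh(fused))

        # Example: batch of 8 samples with hypothetical feature sizes per modality.
        face, speech, car = torch.randn(8, 128), torch.randn(8, 40), torch.randn(8, 10)
        model = PolynomialFusion(dims=[128, 40, 10])
        logits = model([face, speech, car])
        print(logits.shape)  # torch.Size([8, 2])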

    Human Automotive Interaction: Affect Recognition for Motor Trend Magazine's Best Driver Car of the Year

    Observation analysis of vehicle operators has the potential to address the growing trend of motor vehicle accidents. Methods are needed to automatically detect heavy cognitive load and distraction in order to warn drivers in a poor psychophysiological state. Existing methods to monitor a driver have included prediction from steering behavior, smartphone warning systems, gaze detection, and electroencephalography. We build upon these approaches by detecting cues that indicate inattention and stress from video. The system is developed and tested on data from Motor Trend Magazine's Best Driver Car of the Year 2014 and 2015. It was found that face detection and facial feature encoding posed the most difficult challenges to automatic facial emotion recognition in practice. The chapter focuses on two important parts of the facial emotion recognition pipeline: (1) face detection and (2) facial appearance features. We propose a face detector, called reference-based face detection, that unifies state-of-the-art approaches and provides quality control for face detection results. We also propose a novel method for facial feature extraction, called local anisotropic-inhibited binary patterns in three orthogonal planes, that compactly encodes the spatiotemporal behavior of the face and removes background texture. Real-world results show promise for the automatic observation of driver inattention and stress.
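
    The proposed descriptor is a variant of LBP-TOP; the sketch below computes plain local binary patterns on the three orthogonal planes of a face video cube and concatenates their histograms. The anisotropic inhibition and background-texture removal that distinguish the authors' method are not reproduced here, and the central-slice simplification is an assumption.

        # Minimal sketch of standard LBP-TOP features; full LBP-TOP aggregates codes
        # over all slices per plane orientation, this version uses only the central ones.
        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_top_histogram(cube, P=8, R=1, bins=59):
            """cube: (T, H, W) uint8 grayscale face video. Returns concatenated histograms."""
            T, H, W = cube.shape
            planes = [
                cube[T // 2, :, :],   # XY plane (appearance) at the middle frame
                cube[:, H // 2, :],   # XT plane (horizontal motion)
                cube[:, :, W // 2],   # YT plane (vertical motion)
            ]
            hists = []
            for plane in planes:
                codes = local_binary_pattern(plane, P, R, method="nri_uniform")
                hist, _ = np.histogram(codes, bins=bins, range=(0, bins), density=True)
                hists.append(hist)
            return np.concatenate(hists)

        # Example with a random 16-frame, 64x64 face crop.
        cube = (np.random.rand(16, 64, 64) * 255).astype(np.uint8)
        feat = lbp_top_histogram(cube)
        print(feat.shape)  # (177,)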

    Efficient and Robust Driver Fatigue Detection Framework Based on the Visual Analysis of Eye States

    Fatigue detection based on vision is widely employed in vehicles due to its real-time and reliable detection results. Since the coronavirus disease (COVID-19) outbreak, many proposed detection systems based on facial characteristics have become unreliable because the face is partly covered by a mask. In this paper, we propose a visual fatigue detection system for monitoring drivers that is robust to mask coverings, changing illumination, and head movement. Our system has three main modules: face key point alignment, fatigue feature extraction, and fatigue measurement based on fused features. The core techniques are as follows: (1) a robust key point alignment algorithm that fuses global face information and regional eye information, (2) dynamic threshold methods to extract fatigue characteristics, and (3) a stable fatigue measurement that fuses the percentage of eyelid closure (PERCLOS) and the proportion of long closure duration blinks (PLCDB). The performance of the proposed algorithm and methods is verified in experiments. The experimental results show that our key point alignment algorithm is robust to different scenes and that our proposed fatigue measurement is more reliable due to the fusion of PERCLOS and PLCDB.
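
    A minimal sketch of how PERCLOS and PLCDB could be computed from a per-frame eye-openness signal and fused into a fatigue decision; the closure threshold, long-blink duration, and fusion rule are illustrative assumptions rather than the paper's calibrated values.

        # Minimal sketch (thresholds and fusion rule are assumptions): PERCLOS and
        # PLCDB from a per-frame eye-openness signal, fused into a fatigue decision.
        import numpy as np

        def fatigue_metrics(openness, fps=30, closed_thr=0.2, long_blink_s=0.5):
            """openness: array in [0, 1], 1 = fully open. Returns (perclos, plcdb)."""
            closed = openness < closed_thr
            perclos = closed.mean()            # fraction of frames with eyes closed

            # Segment consecutive closed frames into blinks and measure their durations.
            edges = np.diff(closed.astype(int))
            starts = np.where(edges == 1)[0] + 1
            ends = np.where(edges == -1)[0] + 1
            if closed[0]:
                starts = np.r_[0, starts]
            if closed[-1]:
                ends = np.r_[ends, closed.size]
            durations = (ends - starts) / fps
            plcdb = 0.0 if durations.size == 0 else np.mean(durations > long_blink_s)
            return perclos, plcdb

        # Example on a synthetic 60 s window (30 fps); cut-offs are illustrative only.
        openness = np.ones(1800)
        openness[300:320] = 0.05   # one long (0.67 s) closure
        openness[900:906] = 0.05   # one normal blink
        perclos, plcdb = fatigue_metrics(openness)
        fatigued = perclos > 0.15 or plcdb > 0.3
        print(f"PERCLOS={perclos:.3f}, PLCDB={plcdb:.3f}, fatigued={fatigued}")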