Owl and Lizard: Patterns of Head Pose and Eye Pose in Driver Gaze Classification
Accurate, robust, inexpensive gaze tracking in the car can help keep a driver
safe by facilitating the more effective study of how to improve (1) vehicle
interfaces and (2) the design of future Advanced Driver Assistance Systems. In
this paper, we estimate head pose and eye pose from monocular video using
methods developed extensively in prior work and ask two new interesting
questions. First, how much better can we classify driver gaze using head and
eye pose versus just using head pose? Second, are there individual-specific
gaze strategies that strongly correlate with how much gaze classification
improves with the addition of eye pose information? We answer these questions
by evaluating data drawn from an on-road study of 40 drivers. The main insight
of the paper is conveyed through the analogy of an "owl" and "lizard" which
describes the degree to which the eyes and the head move when shifting gaze.
When the head moves a lot ("owl"), not much classification improvement is
attained by estimating eye pose on top of head pose. On the other hand, when
the head stays still and only the eyes move ("lizard"), classification accuracy
increases significantly from adding in eye pose. We characterize how that
accuracy varies between people, gaze strategies, and gaze regions.

Comment: Accepted for Publication in IET Computer Vision. arXiv admin note:
text overlap with arXiv:1507.0476
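The owl/lizard comparison above can be sketched with synthetic pose data. Everything below is illustrative assumption, not the paper's method or data: the gaze regions, the per-region angle offsets, the noise levels, and the nearest-centroid classifier are invented for the sketch.

```python
import random

# Hypothetical gaze regions and synthetic pose data; the real study used
# on-road video of 40 drivers, not simulated numbers.
REGIONS = ["road", "left_mirror", "rearview", "center_stack"]

def synth_sample(region, lizard=False):
    # Assumed nominal total gaze offsets per region (degrees, yaw).
    offsets = {"road": 0.0, "left_mirror": -40.0,
               "rearview": 20.0, "center_stack": 15.0}
    total = offsets[region]
    # "Owl" drivers move mostly the head; "lizard" drivers mostly the eyes.
    head_share = 0.2 if lizard else 0.9
    head = total * head_share + random.gauss(0, 2)
    eye = total * (1 - head_share) + random.gauss(0, 2)
    return head, eye

def nearest_region(feats, centroids):
    # Nearest-centroid classification over the chosen feature vector.
    return min(centroids, key=lambda r: sum((a - b) ** 2
               for a, b in zip(feats, centroids[r])))

def accuracy(lizard, use_eye):
    random.seed(0)
    train = {r: [synth_sample(r, lizard) for _ in range(50)] for r in REGIONS}
    dims = (0, 1) if use_eye else (0,)  # head only, or head + eye
    centroids = {r: [sum(s[d] for s in ss) / len(ss) for d in dims]
                 for r, ss in train.items()}
    test = [(r, synth_sample(r, lizard)) for r in REGIONS for _ in range(50)]
    hits = sum(nearest_region([s[d] for d in dims], centroids) == r
               for r, s in test)
    return hits / len(test)

# For an "owl", head pose alone is nearly enough; for a "lizard",
# adding eye pose recovers most of the lost accuracy.
for style in (False, True):
    print("lizard" if style else "owl",
          "head-only: %.2f" % accuracy(style, False),
          "head+eye: %.2f" % accuracy(style, True))
```

Under these toy assumptions, the "lizard" condition shows a much larger gap between head-only and head-plus-eye classification than the "owl" condition, mirroring the abstract's main insight.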
Driver frustration detection from audio and video in the wild
We present a method for detecting driver frustration from both video and audio streams captured during the driver's interaction with an in-vehicle voice-based navigation system. The video is of the driver's face when the machine is speaking, and the audio is of the driver's voice when he or she is speaking. We analyze a dataset of 20 drivers that contains 596 audio epochs (audio clips, with durations from 1 sec to 15 sec) and 615 video epochs (video clips, with durations from 1 sec to 45 sec). The dataset is balanced across 2 age groups, 2 vehicle systems, and both genders. The model was subject-independently trained and tested using 4-fold cross-validation. We achieve an accuracy of 77.4% for detecting frustration from a single audio epoch and 81.2% for detecting frustration from a single video epoch. We then treat the video and audio epochs as a sequence of interactions and use decision fusion to characterize the trade-off between decision time and classification accuracy, improving the prediction accuracy to 88.5% after 9 epochs.
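The decision-time versus accuracy trade-off can be illustrated with a toy fusion model. The paper's actual fusion rule is not given here, so the sketch below assumes the simplest case: each epoch-level decision is independent and correct with a fixed single-epoch probability, and the fused decision is a majority vote over k epochs, which makes the fused accuracy an exact binomial sum.

```python
from math import comb

def fused_accuracy(p, k):
    # Probability that a majority of k independent epoch-level decisions,
    # each correct with probability p, yields the correct fused decision.
    # Ties (possible only for even k) are counted as incorrect, which is
    # a pessimistic convention.
    return sum(comb(k, m) * p**m * (1 - p)**(k - m)
               for m in range(k // 2 + 1, k + 1))

# Fused accuracy grows with the number of epochs, qualitatively matching
# the abstract's improvement from a single epoch toward 9 epochs.
for k in (1, 3, 9):
    print(k, round(fused_accuracy(0.81, k), 3))
```

This is only a qualitative illustration: real consecutive epochs from the same driver are correlated, so actual gains from fusion are smaller than the independence assumption predicts.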
European NCAP Program Developments to Address Driver Distraction, Drowsiness and Sudden Sickness
Driver distraction and drowsiness remain significant contributors to death and serious injury on our roads and are long-standing issues in road safety strategies around the world. With developments in automotive technology, including driver monitoring, there are now more options available for automotive manufacturers to mitigate risks associated with driver state. Such developments in Occupant Status Monitoring (OSM) are being incorporated into the European New Car Assessment Programme (Euro NCAP) Safety Assist protocols. The requirements for OSM technologies are discussed along two dimensions: detection difficulty and behavioral complexity. More capable solutions will be able to provide higher levels of system availability, defined as the proportion of time a system could provide protection to the driver, and will be able to capture a greater proportion of complex real-world driver behavior. The testing approach could initially combine a dossier of evidence provided by the Original Equipment Manufacturer (OEM) with selected use of track testing. More capable systems will not rely only on warning strategies but will also include intervention strategies when a driver is not attentive. The roadmap for future OSM protocol development could consider a range of known and emerging safety risks including driving while intoxicated by alcohol or drugs, cognitive distraction, and the driver engagement requirements for supervision.
An Intelligent Safety System for Human-Centered Semi-Autonomous Vehicles
Nowadays, automobile manufacturers are making efforts to develop fully safe
cars. Monitoring the driver's actions with computer vision techniques to
detect driving mistakes in real time, and then planning autonomous maneuvers
to avoid vehicle collisions, is one of the most important problems
investigated in machine vision and Intelligent Transportation Systems (ITS)
research. The main goal of this study is to prevent accidents caused by fatigue,
drowsiness, and driver distraction. To avoid these incidents, this paper
proposes an integrated safety system that continuously monitors the driver's
attention and vehicle surroundings, and finally decides whether the actual
steering control status is safe or not. For this purpose, we equipped an
ordinary car called FARAZ with a vision system consisting of four mounted
cameras along with a universal car tool for communicating with surrounding
factory-installed sensors and other car systems, and sending commands to
actuators. The proposed system leverages a scene understanding pipeline using
deep convolutional encoder-decoder networks and a driver state detection
pipeline. We have also identified and assessed domestic capabilities for
developing these technologies on ordinary vehicles, in order to manufacture
smart cars and provide an intelligent system that increases safety and
assists the driver in various conditions.

Comment: 15 pages and 5 figures, Submitted to the international conference on
Contemporary issues in Data Science (CiDaS 2019), Learn more about this
project at https://iasbs.ac.ir/~ansari/fara
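The final "safe or not safe" decision stage described in the abstract might be sketched as a simple rule combining the outputs of the two pipelines. The class names, the reaction-time threshold, and the decision rule below are all hypothetical illustrations, not the FARAZ system's actual logic.

```python
from dataclasses import dataclass

# Hypothetical summaries of the two pipelines' outputs: the driver-state
# detection pipeline and the scene-understanding pipeline.
@dataclass
class DriverState:
    drowsy: bool
    distracted: bool

@dataclass
class SceneState:
    nearest_obstacle_m: float  # distance to closest detected hazard
    ego_speed_mps: float       # current vehicle speed

def steering_is_safe(driver: DriverState, scene: SceneState,
                     reaction_time_s: float = 1.5) -> bool:
    # Distance the car covers before an inattentive driver could react
    # (assumed 1.5 s reaction time, an illustrative figure).
    reaction_gap = scene.ego_speed_mps * reaction_time_s
    attentive = not (driver.drowsy or driver.distracted)
    # Flag the steering control status as unsafe only when the driver is
    # inattentive AND the nearest hazard lies within the reaction gap;
    # otherwise the human keeps control.
    return attentive or scene.nearest_obstacle_m > reaction_gap

print(steering_is_safe(DriverState(False, False), SceneState(10.0, 20.0)))  # True
print(steering_is_safe(DriverState(True, False), SceneState(10.0, 20.0)))   # False
```

In a real system this binary rule would be replaced by the learned pipelines the paper describes; the sketch only shows how the two monitoring streams could be fused into the final safety decision.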