A Review of Driver Gaze Estimation and Application in Gaze Behavior Understanding
Driver gaze plays an important role in gaze-based applications such as
driver attentiveness detection, visual distraction detection, gaze behavior
understanding, and building driver assistance systems. The main objective of
this study is to provide a comprehensive summary of driver gaze fundamentals,
methods to estimate driver gaze, and its applications in real-world driving
scenarios. We first discuss the fundamentals of driver gaze, covering
head-mounted and remote-setup-based gaze estimation and the terminology used
for each of these data collection methods. Next, we list the existing
benchmark driver gaze datasets, highlighting the collection methodology and the
equipment used. This is followed by a discussion of the algorithms used for
driver gaze estimation, which primarily involve traditional machine learning
and deep learning techniques. The estimated driver gaze is then used to
understand gaze behavior while maneuvering through intersections, on-ramps,
off-ramps, and lane changes, and to determine the effect of roadside
advertising structures. Finally, we discuss the limitations of the existing
literature, open challenges, and the future scope of driver gaze estimation
and gaze-based applications.
Attention estimation by simultaneous analysis of viewer and view
Abstract — This paper introduces a system for estimating the attention of a driver wearing a first-person-view camera, using salient objects to improve gaze estimation. A challenging dataset of pedestrians crossing intersections was captured using Google Glass worn by a driver. A challenge unique to first-person view from cars is that the interior of the car can take up a large part of the image. The proposed system automatically filters out the dashboard of the car, along with other parts of the instrumentation. The remaining area is used as a region of interest for a pedestrian detector. Two cameras looking at the driver are used to determine the direction of the driver's gaze by examining the eye corners and the center of the iris. This coarse gaze estimate is then linked to the detected pedestrians to determine which pedestrian the driver is focused on at any given time.
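The coarse gaze estimation from eye corners and iris center described above can be sketched as a simple geometric projection: the iris center is projected onto the axis between the two eye corners, and its normalized position along that axis indicates the horizontal gaze direction. The function name and thresholds below are illustrative assumptions, not taken from the paper.

```python
def coarse_gaze_direction(inner_corner, outer_corner, iris_center):
    """Classify horizontal gaze from 2D eye landmarks in a driver-facing image.

    All arguments are (x, y) pixel coordinates. The 0.35/0.65 thresholds
    are illustrative; a real system would calibrate them per driver.
    """
    # Vector along the eye axis, from inner to outer corner.
    ax = outer_corner[0] - inner_corner[0]
    ay = outer_corner[1] - inner_corner[1]
    # Vector from the inner corner to the iris center.
    ix = iris_center[0] - inner_corner[0]
    iy = iris_center[1] - inner_corner[1]
    # Normalized projection: t = 0 at the inner corner, t = 1 at the outer.
    t = (ix * ax + iy * ay) / (ax * ax + ay * ay)
    if t < 0.35:
        return "toward inner corner"
    if t > 0.65:
        return "toward outer corner"
    return "center"
```

Combining the symmetric estimate from both eyes, as the two driver-facing cameras in the paper allow, would make this projection more robust to head rotation.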
Estimation of Driver's Gaze Region from Head Position and Orientation using Probabilistic Confidence Regions
A smart vehicle should be able to understand human behavior and predict their
actions to avoid hazardous situations. Specific traits in human behavior can be
automatically predicted, which can help the vehicle make decisions, increasing
safety. One of the most important aspects pertaining to the driving task is the
driver's visual attention. Predicting the driver's visual attention can help a
vehicle understand the awareness state of the driver, providing important
contextual information. While estimating the exact gaze direction is difficult
in the car environment, a coarse estimation of the visual attention can be
obtained by tracking the position and orientation of the head. Since the
relation between head pose and gaze direction is not one-to-one, this paper
proposes a formulation based on probabilistic models to create salient regions
describing the visual attention of the driver. The area of the predicted region
is small when the model has high confidence on the prediction, which is
directly learned from the data. We use Gaussian process regression (GPR) to
implement the framework, comparing the performance with different regression
formulations such as linear regression and neural network based methods. We
evaluate these frameworks by studying the tradeoff between spatial resolution
and accuracy of the probability map using naturalistic recordings collected
with the UTDrive platform. We observe that the GPR method produces the best
result creating accurate predictions with localized salient regions. For
example, the 95% confidence region covers only 3.77% of the area of a sphere
surrounding the driver. (13 pages, 12 figures, 2 tables)