
    Investigating Attention Modeling Differences between Older and Younger Drivers

    As in-vehicle technologies (IVTs) grow in both popularity and complexity, the question of whether these IVTs improve or hinder driver performance has gained more attention. The ability to predict when a driver will be looking at the road or at a display on the car’s dashboard or center console is crucial to understanding the safety impact of the recent tech-heavy trend in car design and the extent to which IVTs compete with the primary driving task for visual resources. The SEEV model of visual attention has been shown to predict the probability of attending to an area of interest (AOI) while driving based on the salience (SEEV-S) of visual stimuli, the effort (SEEV-Ef) required to shift attention between locations, the expectancy (SEEV-Ex) that information will be found at a specific location within the visual field, and the value (SEEV-V) of the information found at that location relative to the task(s) being performed. This study compared SEEV models for older and younger adults, fit using eye-tracking data collected during a series of simulated driving scenarios with differing levels of effort, expectancy, and value placed on the primary driving task and a secondary task performed on the center console while maintaining lane position and speed. No significant effect of the effort variable was found, likely because the cues used in our experiment did not require head or torso rotation to access. Good model fits were found for both older and younger adults, with younger adults placing greater weight on the dashboard AOI than older adults when the driving task was prioritized.
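    As a rough illustration of how such a model generates predictions, the Python sketch below computes per-AOI attention probabilities using the standard additive SEEV form (salience, expectancy, and value add to attractiveness; effort subtracts) and normalizes across AOIs. All weights and component scores here are hypothetical placeholders, not values fit in this study.

```python
# Minimal sketch of SEEV-style attention prediction, assuming the common
# additive form P(AOI) ~ s*S - ef*EF + ex*EX + v*V from Wickens' SEEV model.
# Coefficients and per-AOI component scores are hypothetical examples.

AOI_COMPONENTS = {
    #                 salience  effort  expectancy  value   (0-1 scales)
    "road":           {"S": 0.6, "EF": 0.1, "EX": 0.9, "V": 1.0},
    "dashboard":      {"S": 0.4, "EF": 0.3, "EX": 0.5, "V": 0.6},
    "center_console": {"S": 0.7, "EF": 0.5, "EX": 0.3, "V": 0.4},
}

# Hypothetical fitted weights; effort enters negatively because shifting
# attention toward a harder-to-reach AOI costs visual/attentional effort.
WEIGHTS = {"s": 1.0, "ef": 1.0, "ex": 2.0, "v": 2.0}

def seev_score(c):
    """Raw attractiveness of one AOI under the additive SEEV form."""
    return (WEIGHTS["s"] * c["S"]
            - WEIGHTS["ef"] * c["EF"]
            + WEIGHTS["ex"] * c["EX"]
            + WEIGHTS["v"] * c["V"])

# Clamp negative scores to zero, then normalize into probabilities of
# attending each AOI.
raw = {aoi: max(seev_score(c), 0.0) for aoi, c in AOI_COMPONENTS.items()}
total = sum(raw.values())
probs = {aoi: r / total for aoi, r in raw.items()}

for aoi, p in probs.items():
    print(f"{aoi}: P(attend) = {p:.2f}")
```

    With these example numbers the road AOI dominates, which mirrors the intuition that a prioritized driving task pulls expectancy and value, and therefore predicted gaze, toward the road.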

    Robust Validation of Visual Focus of Attention using Adaptive Fusion of Head and Eye Gaze patterns

    We propose a framework for inferring a person’s focus of attention using information from both head rotation and eye gaze estimation. To this end, we use fuzzy logic to estimate confidence that a person’s gaze is directed towards a specific point, and the results are compared to human annotation. For head pose we propose Bayesian modality fusion of local and holistic information, while for eye gaze we propose a methodology that calculates eye gaze directionality from a simple camera, removing the influence of head rotation. Local information uses feature positions, while holistic information uses the face region, processed with Convolutional Neural Networks, which have been shown to be robust to small translations and distortions of test data. This is vital for an application in an uncontrolled environment, where background noise should be expected. The ability of the system to estimate focus of attention towards specific areas, for unknown users, is demonstrated at the end of the paper.
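    As a toy illustration of confidence-weighted fusion of head and eye cues, the Python sketch below combines two gaze-direction estimates, weighting each by a confidence score from a simple triangular fuzzy membership function. The membership shape, tolerance angle, and example vectors are hypothetical stand-ins, not the paper’s actual fuzzy or Bayesian formulation.

```python
import numpy as np

def confidence(angle_error_deg, tolerance_deg=15.0):
    """Triangular fuzzy membership (a hypothetical stand-in): 1 when the
    estimated direction points straight at the target, falling linearly
    to 0 at the tolerance angle."""
    return max(0.0, 1.0 - abs(angle_error_deg) / tolerance_deg)

def fuse_gaze(head_dir, eye_dir, head_conf, eye_conf):
    """Weight each unit direction vector by its confidence, then
    renormalize to get a fused gaze direction."""
    fused = head_conf * head_dir + eye_conf * eye_dir
    norm = np.linalg.norm(fused)
    return fused / norm if norm > 0 else head_dir

# Example: a noisy head-pose estimate and a more precise eye-gaze estimate
# (both roughly unit vectors toward the scene; values are illustrative).
head_dir = np.array([0.20, 0.00, 0.98])
eye_dir = np.array([0.05, 0.00, 0.999])
head_conf = confidence(12.0)  # larger angular error -> lower confidence
eye_conf = confidence(3.0)

direction = fuse_gaze(head_dir, eye_dir, head_conf, eye_conf)
print("fused gaze direction:", np.round(direction, 3))
```

    The point of the adaptive weighting is that when one modality degrades (e.g., eye gaze becomes unreliable at extreme head rotations), its confidence drops and the fused estimate leans on the other cue.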
