    VRpursuits: Interaction in Virtual Reality Using Smooth Pursuit Eye Movements

    Gaze-based interaction using smooth pursuit eye movements (Pursuits) is attractive because it is intuitive and overcomes the Midas touch problem. At the same time, eye tracking is becoming increasingly popular for VR applications. While Pursuits has been shown to be effective in several interaction contexts, it had not previously been explored in depth for VR. In a user study (N=26), we investigated how parameters specific to VR settings influence the performance of Pursuits. For example, we found that Pursuits is robust against different sizes of virtual 3D targets; however, performance improves when the trajectory size (e.g., radius) is larger, particularly if the user is walking while interacting. While walking, selecting moving targets via Pursuits is generally feasible, albeit less accurate than when stationary. Finally, we discuss the implications of these findings and the potential of smooth pursuits for interaction in VR by demonstrating two sample use cases: 1) gaze-based authentication in VR, and 2) a space meteor-shooting game.
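
    The selection mechanism behind Pursuits is trajectory correlation: recent gaze samples are compared against each moving target's trajectory, and the target whose motion best matches the gaze is selected. Below is a minimal sketch of that idea; the window layout, correlation rule, and threshold are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two 1-D sample arrays."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def select_target(gaze_xy, targets_xy, threshold=0.8):
    """gaze_xy: (N, 2) gaze samples over a sliding window.
    targets_xy: mapping from target name to (N, 2) positions over the same window.
    Returns the best-correlated target above the threshold, or None."""
    best_name, best_score = None, threshold
    for name, traj in targets_xy.items():
        # Both coordinates must follow the target, so take the weaker correlation.
        score = min(pearson(gaze_xy[:, 0], traj[:, 0]),
                    pearson(gaze_xy[:, 1], traj[:, 1]))
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```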

    Developing Predictive Models of Driver Behaviour for the Design of Advanced Driving Assistance Systems

    Worldwide, injuries from vehicle accidents have been on the rise in recent years, mainly due to driver error. The main objective of this research is to develop a predictive system for driving maneuvers by analyzing the driver's cognitive (cephalo-ocular) behavior and driving behavior (how the vehicle is being driven). Advanced Driving Assistance Systems (ADAS) include different driving functions, such as vehicle parking, lane departure warning, blind spot detection, and so on. While much research has been performed on developing automated co-driver systems, little attention has been paid to the fact that the driver plays an important role in driving events. Therefore, it is crucial to monitor events and factors that directly concern the driver. To this end, we perform a quantitative and qualitative analysis of driver behavior to find its relationship with driver intentionality and driving-related actions. We designed and developed an instrumented vehicle (RoadLAB) that records several synchronized streams of data, including the driver's surrounding environment, vehicle functions, and the driver's cephalo-ocular behavior, such as gaze/head information. We subsequently analyze the behavior of several drivers to determine whether there is a meaningful relation between driver behavior and the next driving maneuver.
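
    A practical prerequisite for such analysis is fusing the asynchronous recording streams (gaze/head samples, vehicle state) onto a common timeline. A minimal sketch of timestamp alignment, assuming hypothetical column names and sample values:

```python
import pandas as pd

# Hypothetical samples: gaze at ~30 Hz, vehicle state at ~20 Hz.
gaze = pd.DataFrame({"t": [0.00, 0.03, 0.07], "head_yaw_deg": [1.2, 1.5, 2.1]})
vehicle = pd.DataFrame({"t": [0.00, 0.05, 0.10], "speed_kmh": [42.0, 42.3, 42.8]})

# For each gaze sample, attach the most recent vehicle-state sample.
synced = pd.merge_asof(gaze.sort_values("t"), vehicle.sort_values("t"),
                       on="t", direction="backward")
print(synced)
```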

    Vehicular Instrumentation and Data Processing for the Study of Driver Intent

    The primary goal of this thesis is to provide the processed experimental data needed to determine whether driver intentionality and driving-related actions can be predicted from quantitative and qualitative analysis of driver behaviour. Towards this end, an instrumented experimental vehicle was designed and developed, capable of recording several synchronized streams of data in a naturalistic driving environment: the surroundings of the vehicle, the driver's gaze with head pose, and the vehicle state. Several driving data sequences were recorded with the instrumented vehicle in both urban and rural environments. These sequences were automatically annotated for relevant artifacts such as lanes, vehicles, and safely driveable areas within road lanes. A framework and associated algorithms for cross-calibrating the gaze tracking system with the world coordinate system of the outdoor stereo system were also designed and implemented, allowing the driver's gaze to be mapped onto the surrounding environment. This instrumentation is currently being used for the study of driver intent, geared towards the development of driver maneuver prediction models.
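
    Once cross-calibration has produced a rigid transform between the gaze tracker's frame and the stereo system's world frame, mapping a gaze ray into the environment reduces to a rotation and a translation. A minimal sketch, with placeholder calibration values:

```python
import numpy as np

# Placeholder calibration output: rotation R and translation t (metres)
# from the gaze tracker's frame to the stereo rig's world frame.
R = np.eye(3)
t = np.array([0.0, 0.2, 1.1])

def gaze_to_world(origin_g, direction_g):
    """Map a gaze ray (origin point, unit direction) into world coordinates."""
    origin_w = R @ origin_g + t    # points transform with rotation + translation
    direction_w = R @ direction_g  # directions transform with rotation only
    return origin_w, direction_w
```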

    Face pose estimation with automatic 3D model creation for a driver inattention monitoring application

    Recent studies have identified inattention (including distraction and drowsiness) as the main cause of accidents, being responsible for at least 25% of them. Driving distraction has been studied less, since it is more diverse and exhibits a higher risk factor than fatigue. In addition, it is present in over half of inattention-involved crashes. The increasing presence of In-Vehicle Information Systems (IVIS) adds to the potential distraction risk and modifies driving behaviour, so research on this issue is of vital importance. Many researchers have been working on different approaches to deal with distraction during driving. Among them, Computer Vision is one of the most common, because it allows for cost-effective and non-invasive driver monitoring and sensing. Using Computer Vision techniques it is possible to evaluate facial movements that characterise a driver's state of attention. This thesis presents methods to estimate the face pose and gaze direction of a person in real time, using a stereo camera, as a basis for assessing driver distraction. The methods are completely automatic and user-independent. A set of features on the face is identified at initialisation and used to create a sparse 3D model of the face. These features are tracked from frame to frame, and the model is augmented to cover parts of the face that may have been occluded before. The algorithm is designed to work in a naturalistic driving simulator, which presents challenging low-light conditions. We evaluate several techniques to detect features on the face that can be matched between cameras and tracked successfully. Well-known methods such as SURF do not return good results, due to the lack of salient points on the face as well as the low illumination of the images. We introduce a novel multisize technique, based on the Harris corner detector and patch correlation. This technique benefits from the better performance of small patches under rotations and illumination changes, and from the more robust correlation of bigger patches under motion blur. The head rotates in a range of ±90° in yaw, and the appearance of the features changes noticeably. To deal with these changes, we implement a new re-registering technique that captures new textures of the features as the face rotates. These new textures are incorporated into the model, which mixes the views of both cameras. The captures are taken at regular angle intervals for rotations in yaw, so that each texture is only used in a range of ±7.5° around the capture angle. Rotations in pitch and roll are handled using affine patch warping. The 3D model created at initialisation can only include features on the frontal part of the face, and some of these may become occluded during rotations. The accuracy and robustness of the face tracking depend on the number of visible points, so new points are added to the 3D model when new parts of the face become visible from both cameras. Bundle adjustment is used to reduce the accumulated drift of the 3D reconstruction. We estimate the pose from the positions of the features in the images and the 3D model using POSIT or Levenberg-Marquardt (LM). A RANSAC process detects incorrectly tracked points, which are not considered for pose estimation. POSIT is faster, while LM obtains more accurate results. Using the model extension and the re-registering technique, we can accurately estimate the pose in the full head rotation range, with error levels that improve on the state of the art.
A coarse eye direction is combined with the face pose estimate to obtain the gaze and the driver's fixation area, a parameter which gives much information about the driver's distraction pattern. The resulting gaze estimation algorithm has been tested on a set of driving experiments directed by a team of psychologists in a naturalistic driving simulator. This simulator mimics conditions present in real driving, including weather changes, manoeuvring, and distractions due to IVIS. Professional drivers participated in the tests. The fixation statistics obtained with the proposed system show how the use of IVIS influences the drivers' distraction pattern, increasing reaction times and affecting the fixation of attention on the road and the surroundings.
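
    The pose-estimation step described above (2D feature tracks plus a sparse 3D model, with RANSAC outlier rejection and iterative refinement) corresponds closely to a PnP problem. A sketch using OpenCV's solvePnPRansac, with placeholder data standing in for the tracked features and face model; SOLVEPNP_ITERATIVE is OpenCV's Levenberg-Marquardt-based solver, not the thesis's exact implementation:

```python
import numpy as np
import cv2

# Placeholders standing in for the sparse 3-D face model and tracked 2-D features.
model_3d = np.random.rand(30, 3).astype(np.float32)
points_2d = np.random.rand(30, 2).astype(np.float32) * 640
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

# RANSAC rejects mistracked points; the iterative flag refines the pose
# with Levenberg-Marquardt minimisation of reprojection error.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    model_3d, points_2d, K, None, flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R_head, _ = cv2.Rodrigues(rvec)  # rotation matrix = head orientation
```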

    Computational driver behavior models for vehicle safety applications

    The aim of this thesis is to investigate how human driving behaviors can be formally described in mathematical models intended for online personalization of advanced driver assistance systems (ADAS) or offline virtual safety evaluations. Both longitudinal (braking) and lateral (steering) behaviors in routine driving and emergencies are addressed. Special attention is paid to driver glance behavior in critical situations and the role of peripheral vision. First, a hybrid framework based on autoregressive models with exogenous input (ARX models) is employed to predict and classify driver control in real time. Two models are suggested, one targeting steering behavior and the other longitudinal control behavior. Although the predictive performance is unsatisfactory, both models can distinguish between different driving styles. Moreover, a basic model for drivers' brake initiation and modulation in critical longitudinal situations (specifically rear-end conflicts) is constructed. The model is based on a conceptual framework of noisy evidence accumulation and predictive processing. Several model extensions related to gaze behavior are also proposed and successfully fitted to real-world crashes and near-crashes. The influence of gaze direction is further explored in a driving simulator study, showing glance response times to be independent of the glance's visual eccentricity, while brake response times increase for larger gaze angles, as does the rate of missed target detections. Finally, the potential of a set of metrics quantifying subjectively perceived risk in lane departure situations to explain drivers' recovery steering maneuvers was investigated. The most influential factors were the relative yaw angle and the splay angle error at steering initiation. Surprisingly, it was observed that drivers often initiated the recovery steering maneuver while looking off-road. To sum up, the proposed models facilitate the development of personalized ADASs and contribute to trustworthy virtual evaluations of current, future, and conceptual safety systems. The insights and ideas contribute to an enhanced, human-centric system development, verification, and validation process. In the long term, this will likely lead to improved vehicle safety and a reduced number of severe injuries and fatalities in traffic.
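
    For readers unfamiliar with ARX models: the driver's control signal is regressed on its own past values and past exogenous inputs, and the fitted coefficients then give one-step-ahead predictions. A minimal least-squares sketch with illustrative orders and signals, not the thesis's models:

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Fit y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j] by least squares.
    y: output signal (e.g., steering angle); u: exogenous input signal."""
    k0 = max(na, nb)
    rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
            for k in range(k0, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[k0:], rcond=None)
    return theta  # first na entries are the a_i, last nb the b_j

# One-step-ahead prediction from the latest samples:
# y_hat = theta @ np.concatenate([y[-na:][::-1], u[-nb:][::-1]])
```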

    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from the retina through V1, MT+, and MSTd. The model produces outputs which are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random-dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1 degree per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments.
    Funding: National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016).
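
    The model above is neural, but a useful classical point of reference is the least-squares focus-of-expansion (FoE) estimate: under pure translation every optic-flow vector points away from the FoE, so each vector yields one linear constraint. A sketch of that baseline (an assumption for illustration, not the paper's model):

```python
import numpy as np

def estimate_foe(points, flows):
    """points, flows: (N, 2) image positions and flow vectors.
    Each vector gives (p - foe) x v = 0, one linear equation in the FoE."""
    px, py = points[:, 0], points[:, 1]
    vx, vy = flows[:, 0], flows[:, 1]
    A = np.stack([vy, -vx], axis=1)
    b = px * vy - py * vx
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe  # heading follows from the FoE and the camera intrinsics
```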

    Human-Centric Detection and Mitigation Approach for Various Levels of Cell Phone-Based Driver Distractions

    Driving a vehicle is a complex task that typically requires several physical interactions and mental tasks. Inattentive driving takes a driver's attention away from the primary task of driving, which can endanger the safety of the driver, passengers, and pedestrians. According to several traffic safety administration organizations, distracted and inattentive driving are the primary causes of vehicle crashes or near-crashes. In this research, a novel approach to detect and mitigate various levels of driving distraction is proposed. This approach consists of two main phases: (i) a system to detect various levels of driver distraction (low, medium, and high) using machine learning techniques; (ii) mitigation of the effects of driver distraction through the integration of the distraction detection algorithm with existing vehicle safety systems. In phase 1, vehicle data were collected from an advanced driving simulator and a vision-based sensor (webcam) for face monitoring. The data were processed using a machine learning algorithm and a head-pose analysis package in MATLAB, and the model was trained and validated to detect different operator distraction levels. In phase 2, the detected level of distraction, time to collision (TTC), lane position (LP), and steering entropy (SE) were used as inputs to a vehicle safety controller that provides an appropriate action to maintain and/or mitigate vehicle safety status. The integrated detection algorithm and vehicle safety controller were then prototyped in MATLAB/Simulink for validation. A complete vehicle powertrain model including the driver's interaction was replicated, and the outcome of the detection algorithm was fed into the vehicle safety controller. The results show that the vehicle safety controller reacted and mitigated the vehicle safety status in a closed-loop, real-time fashion. The simulation results show that the proposed approach is efficient, accurate, and adaptable to dynamic changes resulting from the driver as well as the vehicle system. This approach was applied to mitigate the impact of visual and cognitive distractions on driver performance.
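
    Phase 1's detection step amounts to supervised classification from simulator features to a distraction level. The work uses MATLAB; the scikit-learn sketch below, with hypothetical feature values, only illustrates the shape of that step:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-window features: TTC (s), lane position (m),
# steering entropy, head-pose yaw (deg).
X = np.array([[4.2, 0.1, 0.30, 2.0],
              [1.8, 0.6, 0.55, 25.0],
              [3.0, 0.3, 0.42, 12.0]])
y = np.array([0, 2, 1])  # 0 = low, 1 = medium, 2 = high distraction

clf = RandomForestClassifier(random_state=0).fit(X, y)
level = clf.predict([[2.5, 0.4, 0.48, 15.0]])[0]  # input to the safety controller
```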

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) further weight is given to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
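
    The radial-shift manipulation is simple to state geometrically: each rectangle moves along the imaginary spoke from central fixation by ±1 degree of eccentricity. A small sketch, assuming coordinates in degrees of visual angle relative to fixation:

```python
import numpy as np

def shift_along_spoke(pos, delta_deg):
    """pos: (x, y) in degrees from fixation (non-zero); delta_deg: +1 or -1.
    Moves the point outward (+) or inward (-) along its spoke."""
    r = np.hypot(pos[0], pos[1])
    scale = (r + delta_deg) / r
    return pos[0] * scale, pos[1] * scale
```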