134,935 research outputs found

    Eye gaze position before, during and after percept switching of bistable visual stimuli

    A bistable visual stimulus, such as the Necker Cube or Rubin’s Vase, can be perceived in two different ways that compete against each other and alternate spontaneously. Percept switch rates have been recorded in past psychophysical experiments, but few experiments have measured percept switches while tracking eye movements in human participants. In our study, we use the Eyelink II system to track eye gaze position during spontaneous percept switches of a bistable, structure-from-motion (SFM) cylinder that can be perceived as rotating clockwise (CW) or counterclockwise (CCW). Participants reported the perceived direction of rotation of the SFM cylinder using key presses. Unambiguous rotations, generated by assigning depth through binocular disparity, were included to verify the reliability of participants’ reports. Gaze positions were measured 50–2000 ms before and after key presses. Our pilot data show that during ambiguous cylinder presentation, gaze positions for CW reports clustered in the left half of the cylinder and gaze positions for CCW reports clustered in the right half of the cylinder between 1000 ms before and 1500 ms after key presses, but no such correlation was found beyond that timeframe. These results suggest that percept switches can be correlated with prior gaze positions for ambiguous stimuli. Our results further suggest that the mechanism underlying percept initiation may be influenced by the visual hemifield in which the ambiguous stimulus is located.
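    The windowed gaze analysis described in this abstract can be pictured with a short Python sketch. This is not the authors' analysis code; it assumes hypothetical NumPy arrays of gaze timestamps and horizontal gaze positions (gaze_t, gaze_x) and simply labels each sample in a 1000 ms-before to 1500 ms-after window around a key press as falling on the left or right half of the cylinder.

    import numpy as np

    def gaze_around_switch(gaze_t, gaze_x, press_t, cylinder_center_x,
                           pre_ms=1000, post_ms=1500):
        # Keep only gaze samples inside the analysis window around the key press.
        mask = (gaze_t >= press_t - pre_ms) & (gaze_t <= press_t + post_ms)
        window_x = gaze_x[mask]
        # Label each sample by the half of the cylinder it falls on.
        side = np.where(window_x < cylinder_center_x, "left", "right")
        return window_x, side

    # Hypothetical usage: timestamps in ms, horizontal gaze position in pixels.
    gaze_t = np.arange(0, 5000, 2)                      # 500 Hz samples
    gaze_x = 512 + 30 * np.random.randn(gaze_t.size)    # jitter around screen centre
    window_x, side = gaze_around_switch(gaze_t, gaze_x, press_t=2500,
                                        cylinder_center_x=512)
    print((side == "left").mean())                      # fraction of samples on the left half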

    Gaze-contingent manipulation of color perception

    Using real-time eye tracking, gaze-contingent displays can modify their content to represent depth (e.g., through additional depth cues) or to increase rendering performance (e.g., by omitting peripheral detail). However, there has been no research to date exploring how gaze-contingent displays can be leveraged for manipulating perceived color. To address this, we conducted two experiments (color matching and sorting) that manipulated peripheral background and object colors to influence the user's color perception. Findings from our color matching experiment suggest that we can use gaze-contingent simultaneous contrast to affect color appearance and that existing color appearance models might not fully predict perceived colors with gaze-contingent presentation. Through our color sorting experiment we demonstrate how gaze-contingent adjustments can be used to enhance color discrimination. Gaze-contingent color holds the promise of expanding the perceived color gamut of existing display technology and enabling people to discriminate color with greater precision.
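    As a rough illustration of gaze-contingent simultaneous contrast, the sketch below blends a background toward an induction color only in the periphery relative to the current gaze position. It is a minimal per-frame computation, not the study's display code; the gaze sample, patch location, colors, and foveal radius are all placeholders.

    import math

    def peripheral_background_color(gaze_xy, patch_xy, base_rgb, surround_rgb,
                                    fovea_radius_px=60):
        # Distance of the color patch from the current gaze position.
        dist = math.dist(gaze_xy, patch_xy)
        # Blend weight: 0 near fixation, ramping to 1 in the periphery.
        w = min(1.0, max(0.0, (dist - fovea_radius_px) / fovea_radius_px))
        # Keep the base background near fixation; shift it toward the induction
        # color in the periphery to drive simultaneous contrast.
        return tuple(round((1 - w) * b + w * s) for b, s in zip(base_rgb, surround_rgb))

    # Placeholder frame update: gaze sample from an eye tracker, fixed patch location.
    gaze = (300, 240)                                    # hypothetical gaze position (px)
    bg = peripheral_background_color(gaze, patch_xy=(900, 240),
                                     base_rgb=(128, 128, 128),
                                     surround_rgb=(40, 200, 40))
    print(bg)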

    Gaze Estimation Based on Multi-view Geometric Neural Networks

    Gaze and head pose estimation can play essential roles in various applications, such as human attention recognition and behavior analysis. Most deep neural network-based gaze estimation techniques use supervised regression, in which features are extracted from eye images by neural networks and regressed to 3D gaze vectors. We instead apply the geometric features of the eyes to determine observers' gaze vectors, relying on the concepts of 3D multiple-view geometry. We develop an end-to-end CNN framework for gaze estimation using 3D geometric constraints under semi-supervised and unsupervised settings and compare the results. We explore the mathematics behind homography and structure-from-motion and extend these concepts to the gaze estimation problem using eye region landmarks. We demonstrate the necessity of 3D eye region landmarks for implementing the 3D geometry-based algorithms and address the lack of depth parameters in existing gaze estimation datasets. We further explore the use of Convolutional Neural Networks (CNNs) to develop an end-to-end learning-based framework that takes in sequential eye images to estimate the relative gaze changes of observers. We use a depth network to perform monocular depth estimation of the eye region landmarks, which are then used by a pose network to estimate the relative gaze change using view synthesis constraints on the iris regions. We further explore CNN frameworks that estimate the relative changes in homography matrices between sequential eye images from the eye region landmarks, in order to estimate the pose of the iris and hence the relative change in the observer's gaze. We compare and analyze the results obtained from mathematical calculations and deep neural network-based methods, and we compare the performance of the proposed CNN scheme with state-of-the-art regression-based methods for gaze estimation. Future work involves extending the end-to-end pipeline into an unsupervised framework for gaze estimation in the wild.
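    A small sketch of the geometric building block described here: estimating a homography between eye-region landmarks detected in two consecutive frames. This is only an illustration under assumed landmark coordinates, not the thesis framework itself; it uses OpenCV's findHomography.

    import numpy as np
    import cv2

    # Hypothetical 2D eye-region landmarks (e.g., points on the iris contour)
    # detected in two consecutive frames; at least four point pairs are needed.
    landmarks_prev = np.array([[120, 80], [140, 78], [158, 84],
                               [150, 100], [128, 98]], dtype=np.float32)
    landmarks_next = landmarks_prev + np.float32([3.0, -1.5])  # simulated small eye movement

    # Homography relating the iris region across frames; the relative gaze change
    # would be derived from this transform (or refined by a pose network).
    H, inlier_mask = cv2.findHomography(landmarks_prev, landmarks_next,
                                        cv2.RANSAC, 3.0)
    print(H)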

    Customer Gaze Estimation in Retail Using Deep Learning

    At present, intelligent computing applications are widely used in different domains, including retail stores. The analysis of customer behaviour has become crucial for the benefit of both customers and retailers. In this regard, remote gaze estimation using deep learning has shown promising results for analyzing customer behaviour in retail due to its scalability, robustness, low cost, and uninterrupted nature. This study presents a three-stage, three-attention-based deep convolutional neural network for remote gaze estimation in retail using image data. In the first stage, we design a mechanism to estimate the 3D gaze of the subject using image data and monocular depth estimation. The second stage presents a novel three-attention mechanism to estimate the gaze target in the wild from field-of-view, depth-range, and object-channel attention. The third stage generates the gaze saliency heatmap from the output attention map of the second stage. We train and evaluate the proposed model on the benchmark GOO-Real dataset and compare results with baseline models. Further, we adapt our model to real retail environments by introducing a novel Retail Gaze dataset. Extensive experiments demonstrate that our approach significantly improves remote gaze target estimation performance on the GOO-Real and Retail Gaze datasets.
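    The idea of fusing the three attention channels into a saliency heatmap can be pictured with the toy sketch below. It is not the paper's network; the attention maps are random placeholders, and the fusion rule (elementwise product plus renormalization) is only an assumed illustration of combining field-of-view, depth-range, and object-channel evidence.

    import numpy as np

    def gaze_saliency(fov_att, depth_att, obj_att):
        # Combine the three attention maps elementwise and renormalize so the
        # result sums to 1, giving a heatmap over candidate gaze targets.
        sal = fov_att * depth_att * obj_att
        total = sal.sum()
        return sal / total if total > 0 else sal

    # Placeholder 2D attention maps over a retail-shelf image grid.
    h, w = 60, 80
    fov_att = np.random.rand(h, w)     # where the estimated 3D gaze cone points
    depth_att = np.random.rand(h, w)   # pixels within a plausible depth range of the target
    obj_att = np.random.rand(h, w)     # object-likelihood channel
    heatmap = gaze_saliency(fov_att, depth_att, obj_att)
    print(heatmap.shape, heatmap.sum())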

    Probing the time course of facilitation and inhibition in gaze cueing of attention in an upper-limb reaching task

    Previous work has revealed that social cues, such as gaze and pointed fingers, can lead to a shift in the focus of another person’s attention. Research investigating the mechanisms of these shifts of attention has typically employed detection or localization button-pressing tasks. Because in-depth analyses of the spatiotemporal characteristics of aiming movements can provide additional insights into the dynamics of stimulus processing, in the present study we used a reaching paradigm to further explore the processing of social cues. In Experiments 1 and 2, participants aimed to a left or right location after a nonpredictive eye gaze cue toward one of these target locations. Seven stimulus onset asynchronies (SOAs), from 100 to 2,400 ms, were used. Both the temporal (reaction time, RT) and spatial (initial movement angle, IMA) characteristics of the movements were analyzed. RTs were shorter for cued (gazed-at) than for uncued targets across most SOAs. There were, however, no statistical differences in IMAs between movements to cued and uncued targets, suggesting that action planning was not affected by the gaze cue. In Experiment 3, the social cue was a finger pointing to one of the two target locations. Finger-pointing cues generated significant cueing effects in both RTs and IMAs. Overall, these results indicate that eye gaze and finger-pointing social cues are processed differently. Perception–action coupling (i.e., a tight link between the response and the social cue that is presented) might play a role in both the generation of action and the deviation of trajectories toward cued and uncued targets.
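    As a rough illustration of the two movement measures named in this abstract, the sketch below computes a reaction time and an initial movement angle from one hypothetical reach trajectory. The thresholds, sampling rate, and trajectory are assumptions, not the study's scoring procedure.

    import numpy as np

    def rt_and_ima(t_ms, x_mm, y_mm, onset_ms, move_threshold_mm=2.0, ima_at_mm=50.0):
        # RT: first sample after cue/target onset at which the hand has moved
        # beyond a small distance threshold from its start position.
        start_x, start_y = x_mm[0], y_mm[0]
        dist = np.hypot(x_mm - start_x, y_mm - start_y)
        moved = (t_ms >= onset_ms) & (dist > move_threshold_mm)
        rt = t_ms[np.argmax(moved)] - onset_ms
        # IMA: direction of the trajectory (relative to straight ahead) once the
        # hand has travelled a fixed distance, before online corrections dominate.
        i = np.argmax(dist > ima_at_mm)
        ima_deg = np.degrees(np.arctan2(x_mm[i] - start_x, y_mm[i] - start_y))
        return rt, ima_deg

    # Hypothetical trajectory sampled at 200 Hz: movement begins ~300 ms in,
    # heading mostly forward with a slight rightward drift.
    t = np.arange(0, 1000, 5.0)
    y = np.clip(np.linspace(-150, 350, t.size), 0, None)
    x = 0.1 * y
    print(rt_and_ima(t, x, y, onset_ms=100))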

    MLGaze: Machine Learning-Based Analysis of Gaze Error Patterns in Consumer Eye Tracking Systems

    Analyzing the gaze accuracy characteristics of an eye tracker is a critical task, as its gaze data are frequently affected by non-ideal operating conditions in various consumer eye tracking applications. In this study, gaze error patterns produced by a commercial eye tracking device were studied with the help of machine learning algorithms, such as classifiers and regression models. Gaze data were collected from a group of participants under multiple conditions that commonly affect eye trackers operating on desktop and handheld platforms. These conditions (referred to here as error sources) include user distance, head pose, and eye-tracker pose variations, and the collected gaze data were used to train the classifier and regression models. While the impact of the different error sources on gaze data characteristics was nearly impossible to distinguish by visual inspection or from data statistics, the machine learning models were successful in identifying the impact of the different error sources and predicting the variability in gaze error levels due to these conditions. The objective of this study was to investigate the efficacy of machine learning methods for the detection and prediction of gaze error patterns, which would enable an in-depth understanding of the data quality and reliability of eye trackers under unconstrained operating conditions. Coding resources for all the machine learning methods adopted in this study are included in an open repository named MLGaze, allowing researchers to replicate the principles presented here using data from their own eye trackers (https://github.com/anuradhakar49/MLGaze).
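    A minimal sketch of the classification idea described here: training a classifier to identify which error source produced a set of gaze error features. The feature table and labels below are synthetic placeholders, and the model choice (a scikit-learn random forest) is an assumption, not necessarily what the MLGaze repository uses.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Synthetic per-trial gaze error features (e.g., mean error, error spread,
    # yaw/pitch statistics), labelled by the condition that produced them:
    # 0 = user distance, 1 = head pose, 2 = eye-tracker pose.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 4)) + np.repeat(np.arange(3), 200)[:, None] * 0.5
    y = np.repeat(np.arange(3), 200)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print("error-source classification accuracy:",
          accuracy_score(y_test, clf.predict(X_test)))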