
    Using natural versus artificial stimuli to perform calibration for 3D gaze tracking

    No full text
    The presented study tests which type of stereoscopic image, natural or artificial, is better suited to perform efficient and reliable calibration for tracking the gaze of observers in 3D space with a classical 2D eye tracker. We measured horizontal disparities, i.e. the difference between the x coordinates of the two eyes obtained with a 2D eye tracker. This disparity was recorded for each observer and for several target positions they had to fixate. Target positions were equally distributed in 3D space: some on the screen (null disparity), some behind the screen (uncrossed disparity) and others in front of the screen (crossed disparity). We tested different regression models (linear and non-linear) to explain either the true disparity or the depth from the measured disparity. Models were tested and compared on their prediction error for new targets at new positions. First, we found that we obtained more reliable disparity measures when using natural stereoscopic images rather than artificial ones. Second, we found that overall a non-linear model was more efficient. Finally, we discuss the fact that our results were observer-dependent, with variability between observers' behavior when looking at 3D stimuli. Because of this variability, we propose to compute observer-specific models to accurately predict each observer's gaze position when exploring 3D stimuli.
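
A minimal sketch of the comparison the abstract describes, assuming synthetic data and numpy's polynomial fitting in place of whatever regression toolchain the authors used: a linear and a non-linear model are fit to predict depth from measured disparity, then scored on held-out targets at new positions.

```python
# Minimal sketch, not the authors' code: compare a linear and a
# non-linear regression that predict target depth from the horizontal
# disparity measured with a 2D eye tracker. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration targets: depth relative to the screen (cm;
# negative = crossed/in front, positive = uncrossed/behind) and a noisy,
# mildly non-linear disparity reading for one observer.
true_depth = np.linspace(-20.0, 20.0, 15)
disparity = 0.8 * true_depth + 0.02 * true_depth**2 \
            + rng.normal(0.0, 0.5, true_depth.size)

# Hold out every third target to score prediction at new positions.
train = np.arange(true_depth.size) % 3 != 0

for degree, label in [(1, "linear"), (2, "non-linear (quadratic)")]:
    coeffs = np.polyfit(disparity[train], true_depth[train], degree)
    pred = np.polyval(coeffs, disparity[~train])
    rmse = np.sqrt(np.mean((pred - true_depth[~train]) ** 2))
    print(f"{label}: held-out RMSE = {rmse:.2f} cm")
```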

    Continuous Gaze Tracking With Implicit Saliency-Aware Calibration on Mobile Devices

    Full text link
    Gaze tracking is a useful human-to-computer interface that plays an increasingly important role in a range of mobile applications. Gaze calibration is an indispensable component of gaze tracking: it transforms eye coordinates into screen coordinates. Existing approaches to gaze tracking either have limited accuracy or require the user's cooperation during calibration, which in turn hurts the quality of experience. In this paper we propose vGaze, continuous gaze tracking with implicit saliency-aware calibration on mobile devices. The design of vGaze stems from our insight into the temporally and spatially dependent relation between visual saliency and the user's gaze. vGaze is implemented as lightweight software that identifies video frames with "useful" saliency information, senses the user's head movement, performs opportunistic calibration using only those "useful" frames, and leverages historical information to accelerate saliency detection. We implement vGaze on a commercial mobile device and evaluate its performance in various scenarios. The results show that vGaze can work in real time with video playback applications. The average gaze tracking error is 1.51 cm (2.884 degrees), which decreases to 0.99 cm (1.891 degrees) with historical information and 0.57 cm (1.089 degrees) with an indicator.
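
vGaze's internals are not given in the abstract, so the following is only a conceptual sketch of implicit saliency-aware calibration: frames whose saliency map has a single dominant peak are treated as "useful", and an affine eye-to-screen transform is fit by least squares over the resulting (eye, salient-point) pairs. The names and threshold (`is_useful`, `peak_ratio`) are hypothetical.

```python
# Conceptual sketch only, not vGaze's actual implementation.
import numpy as np

def is_useful(saliency_map, peak_ratio=3.0):
    # Heuristic: a frame is "useful" if its saliency has one clearly
    # dominant peak, i.e. the maximum stands well above the mean.
    return saliency_map.max() > peak_ratio * saliency_map.mean()

def fit_affine(eye_xy, screen_xy):
    # Least-squares affine map: screen ~ [eye, 1] @ P, with P a 3x2 matrix.
    X = np.hstack([eye_xy, np.ones((len(eye_xy), 1))])
    P, *_ = np.linalg.lstsq(X, screen_xy, rcond=None)
    return P

# Synthetic demo: recover a made-up eye-to-screen mapping from 50 pairs.
rng = np.random.default_rng(1)
eye = rng.uniform(0.0, 1.0, (50, 2))              # normalized eye coords
screen = eye @ np.array([[1800.0, 0.0], [0.0, 1000.0]]) \
         + np.array([60.0, 40.0]) + rng.normal(0, 5, (50, 2))
P = fit_affine(eye, screen)
print("center of eye range maps to:", np.array([0.5, 0.5, 1.0]) @ P)
```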

    Change blindness: eradication of gestalt strategies

    Get PDF
    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
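
The radial displacement used in the Gestalt-control condition is simple to reproduce; the sketch below (assumed geometry, not the authors' code) moves a rectangle's centre by ±1 degree of visual angle along the spoke joining it to fixation.

```python
# Assumed geometry, not the authors' code: shift a point radially along
# the imaginary spoke that connects it to the central fixation point.
import numpy as np

def shift_along_spoke(xy_deg, shift_deg, fixation=(0.0, 0.0)):
    # Positive shift moves the point away from fixation, negative towards it.
    f = np.asarray(fixation, dtype=float)
    v = np.asarray(xy_deg, dtype=float) - f
    r = np.linalg.norm(v)
    return f + v * (r + shift_deg) / r

# A rectangle centred 4 deg right of fixation, shifted 1 deg outwards:
print(shift_along_spoke((4.0, 0.0), +1.0))   # -> [5. 0.]
```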

    Measuring gaze and pupil in the real world: object-based attention, 3D eye tracking and applications

    Get PDF
    This dissertation contains studies on visual attention, as measured by gaze orientation, and on the use of mobile eye-tracking and pupillometry in applications. It combines the development of methods for mobile eye-tracking (studies II and III) with experimental studies on gaze guidance and pupillary responses in patients (studies IV and VI) and healthy observers (studies I and V).

    Object-based attention / Study I: What is the main factor of fixation guidance in natural scenes, low-level features or objects? We developed a fixation-predicting model that estimates preferred viewing locations (PVL) per object and combines these distributions over all objects in the scene. Object-based fixation predictions for natural scene viewing perform on par with the best early saliency models, which are based on low-level features. However, when stimuli are manipulated so that low-level features and objects are dissociated, the prediction power of the saliency models diminishes. Thus we dare to claim that highly developed saliency models implicitly capture object-hood and that fixation selection is mainly influenced by objects and much less by low-level features. Consequently, attention guidance in natural scenes is object-based.

    3D tracking / Study II: The second study focussed on improving calibration procedures for eye-in-head positions with a mobile eye-tracker. We used a mobile eye-tracker prototype, the EyeSeeCam, with a high video-oculography (VOG) sampling rate and the technical gadget to follow the user's gaze direction instantaneously with a rotatable camera. For better accuracy in eye positioning, we explored a refinement of the eye-in-head calibration that yields a measure of fixation distance, resulting in a 3D calibration for the mobile eye-tracker. Additionally, by developing the analytical mechanics for parametrically reorienting the gaze-centred camera, the 3D calibration could be applied to reliably record gaze-centred videos. Such videos are suitable as stimuli for investigating gaze behaviour during object manipulation or object recognition from a real-world point-of-view (PoV) perspective. In fact, the 3D calibration produces higher accuracy in positioning the gaze-centred camera over the whole 3D visual range.

    Study III, eye-tracking methods: With a further development of the EyeSeeCam we were able to record gaze-in-world data by superposing eye-in-head and head-in-world coordinates. This novel approach combines a few absolute head positions, extracted manually from the PoV video, with relative head shifts integrated over angular velocities and translational accelerations, both given by an inertial measurement unit (IMU) synchronized to the VOG data. Gaze-in-world data consist of room-referenced gaze directions and their origins within the environment. They make it easy to assign fixation targets by using a 3D model of the measuring environment, a considerable simplification of fixation analysis.

    Applications / Study III: Daylight is an important perceptual factor for visual comfort, but it can also create discomfort glare during office work, so we developed a method to measure its behavioural influence. We compare luminance distributions and fixations in a real-world setting, also recording indoor luminance variations time-resolved, using luminance maps of a scenery spanning 3π sr. Luminance evaluations in the workplace environment yield a well-controlled categorisation of different lighting conditions, as well as a localisation and a brightness measure of glare sources. We used common tasks like reading, typing on a computer, making a phone call, and thinking about a subject. The 3D model makes it possible to test for shifts in gaze distribution in the presence of glare patches and for variations between lighting conditions. Here, a low-contrast lighting condition with no direct sun inside and a high-contrast lighting condition with direct sunlight inside were compared. When participants are not engaged in any visually focused task and the presence of task support is minimal, the dominant view directions incline towards the view outside the window under the low-contrast lighting condition, but this tendency is less apparent and sways more towards the inside of the room under the high-contrast lighting condition. This result indicates an avoidance of glare sources in gaze behaviour. In a second, more extensive series of experiments, the participants' subjective assessments of the lighting conditions will be included. Thus, the influence of glare can be analysed in more detail, testing whether visual discomfort judgements correlate with differences in gaze behaviour.

    Study IV: The advanced eye-tracker calibration found application in several subsequent projects; included in this dissertation is an investigation of patients suffering either from idiopathic Parkinson's disease or from progressive supranuclear palsy (PSP) syndrome. PSP's key symptom is a decreased ability to carry out vertical saccades, which is thus the main diagnostic feature for differentiating between the two forms of Parkinson's syndrome. By measuring ocular movements during a rapid (< 20 s) procedure with a standardized fixation protocol, we could successfully differentiate pre-diagnosed patients with idiopathic Parkinson's disease from those with PSP, and thus also PSP patients from healthy controls (HCs). In PSP patients, the EyeSeeCam detected prominent impairment of both saccade velocity and amplitude. Furthermore, we show the benefits of a mobile eye-tracking device for application in clinical practice.

    Study V: Decision-making is one of the basic cognitive processes of human behaviour and thus also evokes a pupil dilation. Since this dilation marks the temporal occurrence of the decision, we wondered whether individuals can read decisions from another person's pupil and thus become a mentalist. For this purpose, a modified version of the rock-paper-scissors childhood game was played with three prototypical opponents while their eyes were videotaped. These videos served as stimuli for further participants, who competed in rock-paper-scissors against them. Our results show that reading decisions from a competitor's pupil can be achieved and that players can raise their winning probability significantly above chance. This ability does not require training, only the instruction that the time of maximum pupil dilation was indicative of the opponent's choice. We therefore conclude that people could use the pupil to detect cognitive decisions in another individual, given explicit knowledge of the pupil's utility.

    Study VI: For patients with severe motor disabilities, a robust means of communication is a crucial factor for well-being. Locked-in syndrome (LiS) patients suffer from quadriplegia and lack the ability to articulate their voice, though their consciousness is fully intact. While classic and incomplete LiS allows at least voluntary vertical eye movements or blinks to be used for communication, total LiS patients are not able to perform such movements. What remain are involuntarily evoked muscle reactions, as is the case with the pupillary response. The pupil dilation reflects enhanced cognitive or emotional processing, which we successfully observed in LiS patients. Furthermore, we created a communication system based on yes-no questions combined with the task of solving arithmetic problems during the matching answer interval, which evokes a pupil dilation solid enough to be used on a trial-by-trial basis for decoding yes or no answers. Applied to HCs and patients with various severe motor disabilities, we provide proof of principle that pupil responses allow communication for all tested HCs and 4/7 typical LiS patients.

    Summary: Together, the methods established within this thesis are promising advances in measuring visual attention allocation with 3D eye-tracking in the real world and in the use of pupillometry as an on-line measurement of cognitive processes. The two most outstanding findings are the possibility to communicate with complete LiS patients and conclusive evidence that objects are the primary unit of fixation selection in natural scenes.
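
The gaze-in-world composition described in Study III (eye-in-head directions rotated by a head-in-world orientation integrated from IMU angular velocities) can be sketched as follows. This is an illustration under stated assumptions, not the EyeSeeCam pipeline: the first-order integrator and drift handling are simplified.

```python
# Illustration only: compose a gaze-in-world direction from (a) head
# orientation integrated from IMU angular velocities, starting at an
# absolute fix, and (b) the eye-in-head direction from the VOG.
import numpy as np

def skew(w):
    # Cross-product matrix so that skew(w) @ v == np.cross(w, v).
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def integrate_head(R0, omegas, dt):
    # First-order integration of body-frame angular velocity (rad/s);
    # the sparse absolute head positions mentioned in the abstract would
    # periodically reset R0 to bound drift. Simplified here.
    R = R0.copy()
    for w in omegas:
        R = R @ (np.eye(3) + skew(w) * dt)
        U, _, Vt = np.linalg.svd(R)       # re-orthonormalise
        R = U @ Vt
    return R

# Head yawing at 0.1 rad/s for 0.1 s; eye looking straight ahead.
R_head = integrate_head(np.eye(3), [np.array([0.0, 0.1, 0.0])] * 10, 0.01)
eye_in_head = np.array([0.0, 0.0, 1.0])
gaze_in_world = R_head @ eye_in_head      # room-referenced gaze direction
print(gaze_in_world)
```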

    The mean point of vergence is biased under projection

    Get PDF
    The point of interest in three-dimensional space is, in eye tracking, often computed by intersecting the lines of sight with scene geometry, or by finding the point closest to the two lines of sight. We start with a theoretical analysis based on synthetic simulations. We show that the mean point of vergence is generally biased for centrally symmetric errors and that the bias depends on the horizontal vs. vertical error distribution of the tracked eye positions. Our analysis continues with an evaluation on real experimental data. The error distributions differ among individuals, but they generally lead to the same bias towards the observer, which tends to grow with viewing distance. We also provide a recipe to minimize the bias, which applies to general computations of eye-ray intersection. These findings not only have implications for choosing the calibration method in eye tracking experiments and for interpreting observed eye movement data; they also suggest that the mathematical models of calibration should be considered part of the experiment.
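
The "point closest to the two lines of sight" that the abstract refers to is standard geometry: the midpoint of the shortest segment between the two gaze rays. A minimal sketch, with eye positions and viewing distance chosen only for illustration:

```python
# Minimal sketch of standard geometry, not the paper's code: take the
# vergence point as the midpoint of the closest approach between the two
# lines of sight, each given by an eye position and a gaze direction.
import numpy as np

def vergence_point(p0, d0, p1, d1):
    # Closest approach of lines p0 + t*d0 and p1 + s*d1 (unit directions).
    d0 = d0 / np.linalg.norm(d0)
    d1 = d1 / np.linalg.norm(d1)
    b = d0 @ d1
    w = p0 - p1
    denom = 1.0 - b * b                   # zero only for parallel lines
    t = (b * (d1 @ w) - (d0 @ w)) / denom
    s = ((d1 @ w) - b * (d0 @ w)) / denom
    return 0.5 * ((p0 + t * d0) + (p1 + s * d1))

# Illustrative numbers: eyes 6.5 cm apart, target 60 cm ahead; a small
# vertical error on the left eye's direction skews the estimate.
left, right = np.array([-3.25, 0.0, 0.0]), np.array([3.25, 0.0, 0.0])
target = np.array([0.0, 0.0, 60.0])
noisy_left_dir = (target - left) + np.array([0.0, 1.0, 0.0])
print(vergence_point(left, noisy_left_dir, right, target - right))
```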

    Attention and information acquisition: Comparison of mouse-click with eye-movement attention tracking

    Get PDF
    Attention is a fundamental prerequisite for perception. The measurement of attention in viewing and recognizing the images that surround us constitutes an important part of eye movement research, particularly in advertising-effectiveness research. Recording eye and gaze (i.e. eye and head) movements is considered the standard procedure for measuring attention. However, alternative measurement methods have been developed in recent years, one of which is mouse-click attention tracking (mcAT): an online procedure that measures gaze motion via mouse clicks (i.e. a hand and finger positioning manoeuvre) on a computer screen. Here we compared the validity of mcAT with eye movement attention tracking (emAT). We recorded data in a between-subjects design via emAT and mcAT, and analyzed 20 subjects for correlations. The test stimuli consisted of 64 images assigned to eight categories. Our main results demonstrated a highly significant correlation (p < 0.001) between mcAT and emAT data. We also found significant differences in correlations between image categories. For simply structured pictures of humans or animals in particular, mcAT provided highly valid and more consistent results than emAT. We conclude that mcAT is a suitable method for measuring the attention we give to the images that surround us, such as photographs, graphics, art, or digital and print advertisements.
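
One plausible way to correlate mcAT with emAT data (an assumption about the analysis, not the authors' pipeline) is to smooth both point sets into attention maps of the same size and correlate the maps pixel-wise:

```python
# Assumed analysis, not the authors' pipeline: smooth click points and
# fixation points into attention maps, then correlate them pixel-wise.
# All data below are synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import pearsonr

def attention_map(points_xy, shape=(48, 64), sigma=2.0):
    # Accumulate points into a grid, then Gaussian-smooth into a density.
    m = np.zeros(shape)
    for x, y in points_xy:
        yi = int(np.clip(y, 0, shape[0] - 1))
        xi = int(np.clip(x, 0, shape[1] - 1))
        m[yi, xi] += 1.0
    return gaussian_filter(m, sigma)

rng = np.random.default_rng(2)
hotspots = rng.uniform((0, 0), (64, 48), size=(40, 2))  # shared interest points
em_map = attention_map(hotspots + rng.normal(0, 1.0, hotspots.shape))
mc_map = attention_map(hotspots + rng.normal(0, 2.0, hotspots.shape))
r, p = pearsonr(em_map.ravel(), mc_map.ravel())
print(f"emAT vs mcAT map correlation: r = {r:.2f}, p = {p:.3g}")
```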

    Directing Attention in an Augmented Reality Environment: An Attentional Tunneling Evaluation

    Get PDF
    Augmented reality applications use explicit cueing to support visual search. Explicit cues can improve visual search performance, but they can also cause perceptual issues such as attentional tunneling. An experiment was conducted to evaluate the relationship between directing attention and attentional tunneling in a dual-task structure: one task was tracking a moving target and the other was detecting non-target elements. Three conditions were tested: a baseline without cueing the target, cueing the target with the average scene color, and cueing with red. Different cue colors were used to vary the attentional tunneling level. The results show that directing attention induced attentional tunneling only in the red condition, and that this effect is attributable to the color used for the cue.