    Relating Eye-Tracking Measures With Changes In Knowledge on Search Tasks

    We conducted an eye-tracking study in which 30 participants performed searches on the web. We measured their topical knowledge before and after each task. Their eye fixations were labelled as "reading" or "scanning". Series of reading fixations in a line, called "reading sequences", were characterized by their length in pixels, fixation duration, and the number of fixations making up the sequence. We hypothesize that differences in participants' knowledge change are reflected in their reading-related eye-tracking measures. Our results show that participants with a higher change in knowledge differ significantly from participants with a lower knowledge change in terms of their total reading-sequence length, reading-sequence duration, and number of reading fixations.
    Comment: ACM Symposium on Eye Tracking Research and Applications (ETRA), June 14-17, 2018, Warsaw, Poland
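    To make the measures concrete, here is a minimal Python sketch that groups labelled fixations into reading sequences and derives the three measures named above; the field names and the line-grouping rule are assumptions for illustration, not taken from the paper.

        from dataclasses import dataclass

        @dataclass
        class Fixation:
            x: float          # horizontal position in pixels
            y: float          # vertical position in pixels
            duration: float   # fixation duration in milliseconds
            label: str        # "reading" or "scanning"

        def reading_sequences(fixations, max_line_gap=20.0):
            """Group consecutive "reading" fixations on the same line into sequences."""
            sequences, current = [], []
            for f in fixations:
                same_line = bool(current) and abs(f.y - current[-1].y) <= max_line_gap
                if f.label == "reading" and (not current or same_line):
                    current.append(f)
                else:
                    if len(current) > 1:
                        sequences.append(current)
                    current = [f] if f.label == "reading" else []
            if len(current) > 1:
                sequences.append(current)
            return sequences

        def sequence_measures(seq):
            """The three per-sequence measures: pixel length, duration, fixation count."""
            return {
                "length_px": abs(seq[-1].x - seq[0].x),
                "duration_ms": sum(f.duration for f in seq),
                "n_fixations": len(seq),
            }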

    Pervasive and standalone computing: The perceptual effects of variable multimedia quality.

    The introduction of multimedia on pervasive and mobile communication devices raises a number of perceptual quality issues; however, limited work has been done examining the three-way interaction between use of equipment, quality of perception, and quality of service. Our work measures levels of informational transfer (objective) and user satisfaction (subjective) when users are presented with multimedia video clips at three different frame rates, using four different display devices, simulating variation in participant mobility. Our results show that variation in frame rate does not impact a user's level of information assimilation, but does impact a user's perception of multimedia video 'quality'. Additionally, increased visual immersion can be used to increase transfer of video information, but can negatively affect the user's perception of 'quality'. Finally, we illustrate the significant effect of clip content on the transfer of video, audio, and textual information, placing into doubt the use of purely objective quality definitions when considering multimedia presentations.
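    A toy sketch of how the objective and subjective scores might be aggregated over such a 3 (frame rate) x 4 (device) design; the trial records below are invented placeholders, not the study's data.

        from collections import defaultdict
        from statistics import mean

        # Hypothetical records: (frame rate in fps, device, information-assimilation
        # score, satisfaction rating). Values are invented for illustration only.
        trials = [
            (5, "desktop", 0.71, 3.2),
            (15, "desktop", 0.73, 4.1),
            (25, "head-mounted", 0.74, 3.0),
        ]

        by_cell = defaultdict(list)
        for fps, device, info, sat in trials:
            by_cell[(fps, device)].append((info, sat))

        for (fps, device), scores in sorted(by_cell.items()):
            print(f"{fps:>2} fps / {device}: "
                  f"assimilation={mean(s[0] for s in scores):.2f}, "
                  f"satisfaction={mean(s[1] for s in scores):.2f}")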

    Eye-movements in implicit artificial grammar learning

    Artificial grammar learning (AGL) has been probed with forced-choice behavioral tests (active tests). Recent attempts to probe the outcomes of learning (implicitly acquired knowledge) with eye-movement responses (passive tests) have shown null results. However, these latter studies have not tested for sensitivity effects, for example, increased eye movements on a printed violation. In this study, we tested for sensitivity effects in AGL tests with (Experiment 1) and without (Experiment 2) concurrent active tests (preference and grammaticality classification) in an eye-tracking experiment. Eye movements discriminated between sequence types in passive tests, and more so in active tests. The eye-movement profile did not differ between preference and grammaticality classification, and it resembled sensitivity effects commonly observed in natural syntax processing. Our findings show that the outcomes of implicit structured sequence learning can be characterized in eye tracking. More specifically, whole-trial measures (dwell time, number of fixations) showed robust AGL effects, whereas first-pass measures (first-fixation duration) did not. Furthermore, our findings strengthen the link between artificial and natural syntax processing, and they shed light on the factors that determine performance differences in preference and grammaticality classification tests.
    Funding: Max Planck Institute for Psycholinguistics; Donders Institute for Brain, Cognition and Behaviour; Vetenskapsrådet; Swedish Dyslexia Foundation.
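    A minimal sketch of the two families of measures contrasted above, computed for a single rectangular interest area; the fixation fields and the interest-area representation are assumptions for illustration.

        def in_ia(fix, ia):
            """True if a fixation falls inside a rectangular interest area (x0, y0, x1, y1)."""
            x0, y0, x1, y1 = ia
            return x0 <= fix["x"] <= x1 and y0 <= fix["y"] <= y1

        def whole_trial_measures(fixations, ia):
            """Whole-trial measures: dwell time and number of fixations on the IA."""
            hits = [f for f in fixations if in_ia(f, ia)]
            return {"dwell_time_ms": sum(f["duration"] for f in hits),
                    "n_fixations": len(hits)}

        def first_pass_measure(fixations, ia):
            """First-pass measure: duration of the first fixation to land on the IA."""
            for f in fixations:  # fixations assumed to be in chronological order
                if in_ia(f, ia):
                    return f["duration"]
            return None  # the IA was never fixated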

    Investigating eye movement acquisition and analysis technologies as a causal factor in differential prevalence of crossed and uncrossed fixation disparity during reading and dot scanning

    Previous studies examining binocular coordination during reading have reported conflicting results regarding the nature of disparity (e.g., Kliegl, Nuthmann, & Engbert, Journal of Experimental Psychology: General 135:12-35, 2006; Liversedge, White, Findlay, & Rayner, Vision Research 46:2363-2374, 2006). One potential cause of this inconsistency is differences in acquisition devices and the associated analysis technologies. We tested this by directly comparing binocular eye movement recordings made using the SR Research EyeLink 1000 and the Fourward Technologies Inc. DPI binocular eye-tracking systems. Participants read sentences or scanned horizontal rows of dot strings; for each participant, half the data were recorded with the EyeLink and the other half with the DPIs. The viewing conditions in both testing laboratories were set to be very similar, and monocular calibrations were used. The majority of fixations recorded using either system were aligned, although data from the EyeLink system showed greater disparity magnitudes. Critically, for unaligned fixations, the data from both systems showed a majority of uncrossed fixations. These results suggest that the variability in previous reports of binocular fixation alignment is attributable to the specific viewing conditions associated with a particular experiment (variables such as luminance and viewing distance), rather than to the acquisition and analysis software and hardware.
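    For concreteness, a small sketch of one common way to classify a binocular fixation pair as aligned, crossed, or uncrossed from the two eyes' horizontal positions; the sign convention and the alignment threshold are assumptions, not those of the paper.

        def classify_disparity(left_x, right_x, aligned_threshold_px=8.0):
            """Classify a binocular fixation pair by horizontal disparity.

            Convention assumed here: "crossed" when the left eye's fixation lies
            to the right of the right eye's (lines of sight cross in front of the
            text), "uncrossed" for the reverse, "aligned" below the threshold.
            """
            disparity = left_x - right_x
            if abs(disparity) < aligned_threshold_px:
                return "aligned"
            return "crossed" if disparity > 0 else "uncrossed"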

    Gaze Behaviour during Space Perception and Spatial Decision Making

    A series of four experiments investigating gaze behavior and decision making in the context of wayfinding is reported. Participants were presented with screenshots of choice points taken in large virtual environments. Each screenshot depicted alternative path options. In Experiment 1, participants had to decide between them in order to find an object hidden in the environment. In Experiment 2, participants were first informed about which path option to take, as if following a guided route. Subsequently, they were presented with the same images in random order and had to indicate which path option they had chosen during initial exposure. In Experiment 1, we demonstrate (1) that participants have a tendency to choose the path option that featured the longer line of sight, and (2) a robust gaze bias towards the eventually chosen path option. In Experiment 2, systematic differences in gaze behavior towards the alternative path options between encoding and decoding were observed. Based on data from Experiments 1 and 2 and two control experiments ensuring that fixation patterns were specific to the spatial tasks, we develop a tentative model of gaze behavior during wayfinding decision making, suggesting that particular attention is paid to image areas depicting changes in the local geometry of the environments, such as corners, openings, and occlusions. Together, the results suggest that gaze during wayfinding tasks is directed toward, and can be predicted by, a subset of environmental features, and that gaze bias effects are a general phenomenon of visual decision making.
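    A minimal sketch of how the gaze bias reported in Experiment 1 might be quantified, as the proportion of dwell time on the eventually chosen path option; the rectangular regions and fixation fields are illustrative assumptions.

        def gaze_bias(fixations, chosen_region, other_region):
            """Dwell-time proportion on the chosen option; > 0.5 indicates a
            gaze bias toward the eventually chosen path option."""
            def dwell(region):
                x0, y0, x1, y1 = region
                return sum(f["duration"] for f in fixations
                           if x0 <= f["x"] <= x1 and y0 <= f["y"] <= y1)
            chosen, other = dwell(chosen_region), dwell(other_region)
            total = chosen + other
            return chosen / total if total else None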

    Unobtrusive and pervasive video-based eye-gaze tracking

    Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in recent literature, with the aim of identifying the different research avenues that are being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.

    Pilots’ visual scan pattern and attention distribution during the pursuit of a dynamic target

    Introduction: This research investigated pilots' visual scan patterns in order to assess attention distribution during air-to-air manoeuvres. Method: A total of thirty qualified mission-ready fighter pilots participated in this research. Eye movement data were collected by a portable head-mounted eye-tracking device combined with a jet fighter simulator. To complete the task, pilots had to search for, pursue, and lock onto a moving target whilst performing air-to-air tasks. Results: There were significant differences in pilots' saccade durations (ms) across the three operating phases: searching (M=241, SD=332), pursuing (M=311, SD=392), and lock-on (M=191, SD=226). There were also significant differences in pilots' pupil sizes (pixels²), of which the lock-on phase was the largest (M=27,237, SD=6,457), followed by pursuing (M=26,232, SD=6,070), then searching (M=25,858, SD=6,137). Furthermore, there were significant differences between expert and novice pilots in the percentage of fixations on the HUD, the time spent looking outside the cockpit, and situational awareness (SA) performance. Discussion: Experienced pilots had better SA performance and paid more attention to the HUD, but focused less outside the cockpit, when compared with novice pilots. Furthermore, pilots with better SA performance exhibited a smaller pupil size during the lock-on phase whilst pursuing a dynamic target. Understanding pilots' visual scan patterns and attention distribution is beneficial to the design of interface displays in the cockpit and to developing human factors training syllabi to improve the safety of flight operations.
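    One plausible way to test for such phase differences is a one-way ANOVA over per-trial measures; the abstract does not specify the test used, and the values below are invented placeholders, not the study's data.

        from scipy import stats

        # Hypothetical per-trial saccade durations (ms) for the three phases;
        # placeholder values for illustration only.
        searching = [241, 198, 305, 220, 260]
        pursuing = [311, 340, 280, 295, 330]
        lock_on = [191, 170, 210, 185, 200]

        # One-way ANOVA across the three operating phases
        f_stat, p_value = stats.f_oneway(searching, pursuing, lock_on)
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")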

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, which contains a description of the program and instructions for its use.

    Objects predict fixations better than early saliency

    Humans move their eyes while looking at scenes and pictures. Eye movements correlate with shifts in attention and are thought to be a consequence of optimal resource allocation for high-level tasks such as visual recognition. Models of attention, such as "saliency maps," are often built on the assumption that "early" features (color, contrast, orientation, motion, and so forth) drive attention directly. We explore an alternative hypothesis: observers attend to "interesting" objects. To test this hypothesis, we measure the eye position of human observers while they inspect photographs of common natural scenes. Our observers perform different tasks: artistic evaluation, analysis of content, and search. Immediately after each presentation, our observers are asked to name the objects they saw. Weighted with recall frequency, these objects predict fixations in individual images better than early saliency, irrespective of task. Also, saliency combined with object positions predicts which objects are frequently named. This suggests that early saliency has only an indirect effect on attention, acting through recognized objects. Consequently, rather than treating attention as a mere preprocessing step for object recognition, models of both need to be integrated.
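    A toy sketch of the recall-frequency weighting described above: each named object's region contributes to a prediction map in proportion to how often it was recalled, and the map is scored at fixated pixels. The rectangular object masks and the scoring rule are simplifying assumptions, not the paper's method.

        import numpy as np

        def object_prediction_map(shape, objects):
            """Build a fixation-prediction map from (region, recall_frequency)
            pairs, where each region is a rectangle (x0, y0, x1, y1)."""
            pred = np.zeros(shape)
            for (x0, y0, x1, y1), recall_freq in objects:
                pred[y0:y1, x0:x1] += recall_freq
            peak = pred.max()
            return pred / peak if peak > 0 else pred

        def map_score(pred_map, fixations):
            """Mean map value at fixated (x, y) pixels; higher values mean the
            map predicts the observed fixations better."""
            return float(np.mean([pred_map[y, x] for x, y in fixations]))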