
    Semantic Override of Low-level Features in Image Viewing – Both Initially and Overall

    Guidance of eye movements in image viewing is believed to be controlled both by stimulus-driven factors and by viewer-dependent, higher-level factors such as task and memory. How much each type of factor contributes to gaze guidance, and how these contributions vary over time after image onset, is currently debated. Consensus on these issues is surprisingly low, and there are results supporting either type of factor as dominant in eye-movement control under certain conditions. We investigate how low- and high-level factors influence eye guidance by manipulating contrast statistics in images from three different semantic categories and measuring how this affects fixation selection. Our results show that the degree to which contrast manipulations affect fixation selection depends heavily on an image’s semantic content and on how this content is distributed over the image. Across the three image categories, we found no systematic differences in contrast or edge density between fixated locations and control locations, neither during the initial fixation nor over the whole time course of viewing. These results suggest that cognitive factors can easily override low-level factors in fixation selection, even when the viewing task is neutral.
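
    The fixated-versus-control comparison described above can be sketched in a few lines of Python. The patch radius, the RMS-contrast definition, and the Sobel-based edge-density measure below are illustrative assumptions, not necessarily the choices made in the study.

        import numpy as np
        from scipy import ndimage

        def local_features(image, x, y, radius=32):
            """RMS contrast and edge density in a square patch around (x, y).

            `image` is a 2-D grayscale array scaled to [0, 1]; the patch size
            and the Sobel-based edge measure are illustrative choices.
            """
            x, y = int(round(x)), int(round(y))
            h, w = image.shape
            patch = image[max(0, y - radius):min(h, y + radius),
                          max(0, x - radius):min(w, x + radius)]
            contrast = patch.std()                        # RMS contrast
            gx = ndimage.sobel(patch, axis=1)
            gy = ndimage.sobel(patch, axis=0)
            edge_density = np.hypot(gx, gy).mean()        # mean gradient magnitude
            return contrast, edge_density

        def fixated_vs_control(image, fixations, n_control=1000, seed=None):
            """Mean features at fixated locations vs. random control locations."""
            rng = np.random.default_rng(seed)
            h, w = image.shape
            fix = np.array([local_features(image, x, y) for x, y in fixations])
            ctrl = np.array([local_features(image, x, y)
                             for x, y in zip(rng.integers(0, w, n_control),
                                             rng.integers(0, h, n_control))])
            return fix.mean(axis=0), ctrl.mean(axis=0)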

    Keeping an eye on gestures: Visual perception of gestures in face-to-face communication

    Since listeners usually look at the speaker's face, gestural information has to be absorbed through peripheral visual perception. In the literature, it has been suggested that listeners look at gestures under certain circumstances: 1) when the articulation of the gesture is peripheral; 2) when the speech channel is insufficient for comprehension; and 3) when the speaker him- or herself indicates that the gesture is worthy of attention. The research reported here employs eye-tracking techniques to study the perception of gestures in face-to-face interaction. The improved control over the listener's visual channel allows us to test the validity of the above claims. We present preliminary findings substantiating claims 1 and 3, and relate them to theoretical proposals in the literature and to the issue of how visual and cognitive attention are related.

    What speakers do and what addressees look at: Visual attention to gestures in human interaction live and on video

    This study investigates whether addressees visually attend to speakers’ gestures in interaction and whether attention is modulated by changes in social setting and display size. We compare a live face-to-face setting to two video conditions. In all conditions, the face dominates as a fixation target and only a minority of gestures draw fixations. The social and size parameters affect gaze mainly when combined, and in the opposite direction from that predicted, with fewer gestures fixated on video than live. Gestural holds and speakers’ gaze at their own gestures reliably attract addressees’ fixations in all conditions. The attraction of holds is unaffected by changes in social and size parameters, suggesting a bottom-up response, whereas speaker-fixated gestures draw significantly less attention in both video conditions, suggesting a social effect on overt gaze-following and visual joint attention. The study provides and validates a video-based paradigm enabling further experimental but ecologically valid explorations of cross-modal information processing.

    Sampling frequency and eye-tracking measures: how speed affects durations, latencies, and more

    We use simulations to investigate the effect of sampling frequency on common dependent variables in eye tracking. We identify two large groups of measures that behave differently but consistently. The effect of sampling frequency on these two groups of measures is explored, and simulations are performed to estimate how much data are required to overcome the uncertainty introduced by a limited sampling frequency. Both simulated and real data are used to estimate the temporal uncertainty of data produced at low sampling frequencies. The aim is to provide easy-to-use heuristics for researchers using eye tracking. For example, we show how to compensate for the uncertainty of a low sampling frequency with more data and post-experiment adjustments of measures. These findings have implications primarily for researchers using naturalistic setups, where sampling frequencies are typically low.
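
    The temporal uncertainty introduced by a low sampling frequency can be illustrated with a small Monte Carlo simulation. The model below (a fixed true fixation duration and a random onset phase relative to the sampling clock) is a simplified assumption, not the paper's exact simulation.

        import numpy as np

        def duration_error(true_duration, fs, n_trials=100_000, seed=None):
            """Signed error of a measured fixation duration at sampling rate fs (Hz).

            The fixation onset falls at a random phase relative to the sampling
            clock; the measured duration is the span of the samples that land
            inside the fixation.
            """
            rng = np.random.default_rng(seed)
            dt = 1.0 / fs
            onset = rng.uniform(0.0, dt, n_trials)                # onset phase
            first = np.ceil(onset / dt) * dt                      # first sample inside
            last = np.floor((onset + true_duration) / dt) * dt    # last sample inside
            measured = np.clip(last - first, 0.0, None)
            return measured - true_duration

        # A 303 ms fixation measured at a low vs. a high sampling frequency
        for fs in (50, 500):
            err = duration_error(0.303, fs)
            print(f"{fs:>3d} Hz: mean error {1e3 * err.mean():+.1f} ms, "
                  f"SD {1e3 * err.std():.1f} ms")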

    Eye tracking in Educational Science: Theoretical frameworks and research agendas

    Eye tracking is increasingly being used in Educational Science, and the eye-tracking community's interest in this topic has grown accordingly. In this paper we briefly introduce the discipline of Educational Science and why it might be interesting to couple it with eye-tracking research. We then introduce three major research areas in Educational Science that have already used eye tracking successfully: First, eye tracking has been used to improve the instructional design of computer-based learning and testing environments, often using hyper- or multimedia. Second, eye tracking has shed light on expertise and its development in visual domains, such as chess or medicine. Third, eye tracking has recently also been used to promote visual expertise by means of eye-movement modelling examples. We outline the main educational theories for these research areas and indicate where further eye-tracking research is needed to expand them.

    A vector-based, multidimensional scanpath similarity measure

    Jarodzka, H., Holmqvist, K., & Nyström, M. (2010). A vector-based, multidimensional scanpath similarity measure. In C. Morimoto & H. Istance (Eds.), Proceedings of the 2010 Symposium on Eye Tracking Research & Applications, ETRA ’10 (pp. 211-218). New York, NY: ACM.

    A great need exists in many fields of eye-tracking research for a robust and general method for scanpath comparison. Current measures either quantize scanpaths in space (string-editing measures like the Levenshtein distance) or in time (measures based on attention maps). This paper proposes a new pairwise scanpath similarity measure. Unlike previous measures, which either use AOI sequences or forgo temporal order, the new measure defines scanpaths as a series of geometric vectors and compares temporally aligned scanpaths across several dimensions: shape, fixation position, length, direction, and fixation duration. This approach offers more multifaceted insights into how similar two scanpaths are. Eight fictitious scanpath pairs are tested to elucidate the strengths of the new measure, both in itself and compared to two of the currently most popular measures: the Levenshtein distance and attention-map correlation.
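
    A much-simplified sketch of a vector-based comparison along these dimensions is given below. It assumes the two scanpaths have already been simplified and temporally aligned to equal length, which the full measure handles itself, and the distance definitions are illustrative rather than the published ones.

        import numpy as np

        def scanpath_similarity(fix_a, fix_b):
            """Compare two aligned scanpaths along several dimensions.

            Each scanpath is an (n, 3) array of fixations (x, y, duration);
            both must have the same length. Lower values mean more similar.
            """
            a, b = np.asarray(fix_a, float), np.asarray(fix_b, float)
            sac_a = np.diff(a[:, :2], axis=0)          # saccades as geometric vectors
            sac_b = np.diff(b[:, :2], axis=0)
            ang_a = np.arctan2(sac_a[:, 1], sac_a[:, 0])
            ang_b = np.arctan2(sac_b[:, 1], sac_b[:, 0])
            return {
                "shape":     np.linalg.norm(sac_a - sac_b, axis=1).mean(),
                "position":  np.linalg.norm(a[:, :2] - b[:, :2], axis=1).mean(),
                "length":    np.abs(np.linalg.norm(sac_a, axis=1)
                                    - np.linalg.norm(sac_b, axis=1)).mean(),
                "direction": np.abs(np.angle(np.exp(1j * (ang_a - ang_b)))).mean(),
                "duration":  np.abs(a[:, 2] - b[:, 2]).mean(),
            }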

    Children’s attention to online adverts is related to low-level saliency factors and individual level of gaze control

    Twenty-six children in 3rd grade were observed while surfing freely on their favourite websites. Eye-movement data were recorded, as well as synchronized screen recordings. Each online advert was analyzed in order to quantify low-level saliency features such as motion, luminance and edge density. The eye-movement data were used to register whether the children had attended to the online adverts. A mixed-effects multiple regression analysis was performed to test the relationship between visual attention to adverts and advert saliency features. The regression model also included individual level of gaze control and level of internet use as predictors. The results show that all measures of visual saliency had effects on children’s visual attention, but these effects were modulated by children’s individual level of gaze control.
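
    A mixed-effects regression of this kind can be set up with statsmodels as sketched below. The synthetic data frame, column names and model formula are hypothetical stand-ins, not the specification used in the study.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic stand-in data: one row per child x advert (columns are illustrative).
        rng = np.random.default_rng(0)
        n_children, n_adverts = 26, 40
        n = n_children * n_adverts
        df = pd.DataFrame({
            "child_id":     np.repeat(np.arange(n_children), n_adverts),
            "motion":       rng.uniform(0, 1, n),
            "luminance":    rng.uniform(0, 1, n),
            "edge_density": rng.uniform(0, 1, n),
            "gaze_control": np.repeat(rng.normal(0, 1, n_children), n_adverts),
        })
        # Binary outcome: did the child attend to the advert?
        df["attended"] = (rng.uniform(size=n) < 0.2 + 0.4 * df["motion"]).astype(float)

        # Random intercept per child; the interaction terms let the saliency
        # effects be modulated by the child's individual level of gaze control.
        model = smf.mixedlm(
            "attended ~ (motion + luminance + edge_density) * gaze_control",
            df, groups=df["child_id"],
        )
        print(model.fit().summary())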

    The mean point of vergence is biased under projection

    The point of interest in three-dimensional space in eye tracking is often computed by intersecting the lines of sight with scene geometry, or by finding the point closest to the two lines of sight. We first present a theoretical analysis with synthetic simulations. We show that the mean point of vergence is generally biased for centrally symmetric errors and that the bias depends on the horizontal versus vertical error distribution of the tracked eye positions. Our analysis continues with an evaluation on real experimental data. The error distributions seem to differ among individuals, but they generally lead to the same bias towards the observer, and this bias tends to be larger at greater viewing distances. We also provide a recipe to minimize the bias, which applies to general computations of eye-ray intersection. These findings not only have implications for choosing the calibration method in eye-tracking experiments and for interpreting the observed eye-movement data; they also suggest that the mathematical models of calibration should be considered part of the experiment.
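
    For reference, the closest-point ("point of vergence") construction the abstract refers to can be computed as below. This is the standard geometric formulation; it does not include the paper's bias-minimizing recipe.

        import numpy as np

        def vergence_point(p_left, d_left, p_right, d_right):
            """Midpoint of the shortest segment between the two lines of sight.

            p_* are 3-D eye positions, d_* gaze direction vectors.
            """
            p1, p2 = np.asarray(p_left, float), np.asarray(p_right, float)
            d1 = np.asarray(d_left, float) / np.linalg.norm(d_left)
            d2 = np.asarray(d_right, float) / np.linalg.norm(d_right)
            w0 = p1 - p2
            a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
            d, e = d1 @ w0, d2 @ w0
            denom = a * c - b * b
            if np.isclose(denom, 0.0):          # (near-)parallel gaze rays
                t1, t2 = 0.0, e / c
            else:
                t1 = (b * e - c * d) / denom
                t2 = (a * e - b * d) / denom
            return (p1 + t1 * d1 + p2 + t2 * d2) / 2.0

        # Example: eyes 6.4 cm apart, both fixating a point 60 cm straight ahead
        target = np.array([0.0, 0.0, 60.0])
        left, right = np.array([-3.2, 0.0, 0.0]), np.array([3.2, 0.0, 0.0])
        print(vergence_point(left, target - left, right, target - right))  # ~[0, 0, 60]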

    Using Eye Tracking to Trace a Cognitive Process: Gaze Behaviour During Decision Making in a Natural Environment

    The visual behaviour of consumers buying (or searching for) products in a supermarket was measured and used to analyse the stages of their decision process. The metrics traditionally used to trace decision-making processes are difficult to apply in natural environments, which often contain many options and unstructured information. Unlike previous attempts in this direction (i.e. Russo & Leclerc, 1994), our methodology reveals differences between a decision-making task and a search task. In particular, the second (evaluation) stage of a decision task contains more re-dwells than the second stage of a comparable search task. This study addresses the growing concern with taking eye-movement research from the laboratory into the ‘real world’, so that findings can be better generalised to natural situations.
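
    Re-dwells (returns to a previously inspected product) can be counted from a sequence of fixated areas of interest as sketched below; the definition used here is an illustrative simplification of the dwell-based analysis described above.

        from itertools import groupby

        def count_redwells(fixated_aois):
            """Count re-dwells in a sequence of fixated AOIs (e.g. product labels).

            Consecutive fixations on the same AOI form one dwell; a re-dwell is
            a dwell on an AOI that has already been dwelt on earlier.
            """
            dwells = [aoi for aoi, _ in groupby(fixated_aois)]   # collapse runs into dwells
            seen, redwells = set(), 0
            for aoi in dwells:
                if aoi in seen:
                    redwells += 1
                else:
                    seen.add(aoi)
            return redwells

        # Example: three products, with two returns to product "A"
        print(count_redwells(["A", "A", "B", "A", "C", "C", "A"]))   # -> 2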