86 research outputs found

    Navigating the narrative : An eye-tracking study of readers' strategies when Reading comic page layouts

    ACKNOWLEDGMENTS: The authors wish to acknowledge the work of Elliot Balson, Yiannis Giagis, Rossi Gifford, Damon Herd, Cletus Jacobs, Norrie Millar, Gary Walsh and Letty Wilson as the artists and writers of the comics used in this study. Research Funding: Economic and Social Research Council, Grant Number ES/M007081/1.

    Influence of Low-Level Stimulus Features, Task Dependent Factors, and Spatial Biases on Overt Visual Attention

    Visual attention is thought to be driven by the interplay between low-level visual features and the task-dependent information content of local image regions, as well as by spatial viewing biases. Though dependent on experimental paradigms and model assumptions, this idea has given rise to varying claims that either bottom-up or top-down mechanisms dominate visual attention. To contribute toward a resolution of this discussion, here we quantify the influence of these factors and their relative importance in a set of classification tasks. Our stimuli consist of individual image patches (bubbles). For each bubble we derive three measures: a measure of salience based on low-level stimulus features, a measure of salience based on the task-dependent information content derived from our subjects' classification responses, and a measure of salience based on spatial viewing biases. Furthermore, we measure the empirical salience of each bubble based on our subjects' measured eye gazes, thus characterizing the overt visual attention each bubble receives. A multivariate linear model relates the three salience measures to overt visual attention. It reveals that all three salience measures contribute significantly. The effect of spatial viewing biases is highest and rather constant across tasks. The contribution of task-dependent information is a close runner-up; specifically, it scores highly in a standardized task of judging facial expressions. The contribution of low-level features is, on average, somewhat lower. However, in a prototypical search task without an available template, it makes a strong contribution on par with the two other measures. Finally, the contributions of the three factors are only slightly redundant, and the semi-partial correlation coefficients are only slightly lower than the coefficients for full correlations. These data provide evidence that all three measures make significant and independent contributions and that none can be neglected in a model of human overt visual attention.
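    The multivariate linear analysis described above can be sketched as follows; the variable names and the synthetic data are illustrative assumptions, not the paper's materials:

```python
import numpy as np

# Hedged sketch: relate three per-bubble salience measures (low-level,
# task-dependent, spatial bias) to empirical salience via a multivariate
# linear model, as the abstract describes. Data are synthetic.
rng = np.random.default_rng(0)
n_bubbles = 500
low_level = rng.normal(size=n_bubbles)
task_info = rng.normal(size=n_bubbles)
spatial_bias = rng.normal(size=n_bubbles)
# Synthetic empirical salience: all three factors contribute, plus noise.
empirical = (0.3 * low_level + 0.4 * task_info + 0.5 * spatial_bias
             + 0.2 * rng.normal(size=n_bubbles))

X = np.column_stack([low_level, task_info, spatial_bias,
                     np.ones(n_bubbles)])  # intercept column
coef, *_ = np.linalg.lstsq(X, empirical, rcond=None)

def semipartial_r(y, X, j):
    """Correlation of y with the part of predictor j that is
    orthogonal to the remaining predictors (semi-partial r)."""
    others = np.delete(X, j, axis=1)
    resid = X[:, j] - others @ np.linalg.lstsq(others, X[:, j], rcond=None)[0]
    return np.corrcoef(y, resid)[0, 1]

sp = [semipartial_r(empirical, X, j) for j in range(3)]
```

With nearly independent predictors, as here, the semi-partial correlations stay close to the full correlations, which is the "only slightly redundant" pattern the abstract reports.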

    Measures and Limits of Models of Fixation Selection

    Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research: First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure between probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound on these measures, based on image-independent properties of fixation data and on between-subject consistency, respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source Python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this, we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the information required for a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
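    The area-under-the-ROC-curve measure discussed above scores a saliency map as a classifier separating saliency values at fixated locations from values at control locations. A minimal sketch on synthetic data, assuming the common Mann-Whitney formulation rather than the paper's exact implementation:

```python
import numpy as np

# Synthetic saliency values: fixated locations tend to have higher
# saliency than control locations (illustrative data only).
rng = np.random.default_rng(1)
fixated = rng.normal(loc=1.0, size=200)    # saliency at fixations
control = rng.normal(loc=0.0, size=1000)   # saliency at control points

def auc(pos, neg):
    """AUC via the Mann-Whitney statistic: P(random pos > random neg).
    Ordinal ranks are fine here because the values are continuous."""
    scores = np.concatenate([pos, neg])
    ranks = scores.argsort().argsort() + 1  # 1-based ranks
    r_pos = ranks[: len(pos)].sum()
    return (r_pos - len(pos) * (len(pos) + 1) / 2) / (len(pos) * len(neg))

a = auc(fixated, control)
```

An AUC of 0.5 means chance-level prediction and 1.0 perfect separation, which is what makes the measure comparable across models and, per the abstract, linear under averaging over subjects.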

    Markov models for ocular fixation locations in the presence and absence of colour

    In response to the 2015 Royal Statistical Society's statistical analytics challenge, we propose to model the fixation locations of the human eye when observing a still image by a Markov point process in ℝ². Our approach is data driven, using k-means clustering of the fixation locations to identify distinct salient regions of the image, which in turn correspond to the states of our Markov chain. Bayes factors are computed as the model selection criterion to determine the number of clusters. Furthermore, we demonstrate that the behaviour of the human eye differs from this model when colour information is removed from the given image. This work was supported by UK Engineering and Physical Sciences Research Council grant EP/H023348/1 for the University of Cambridge Centre for Doctoral Training, Cambridge Centre for Analysis.
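    The clustering-then-Markov-chain pipeline can be sketched as follows; the synthetic fixations, the fixed k-means seed points, and the omission of the Bayes-factor model selection are all simplifying assumptions:

```python
import numpy as np

# Synthetic 2-D fixations around three salient regions (illustrative).
rng = np.random.default_rng(2)
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
fix = np.vstack([c + 0.3 * rng.normal(size=(50, 2)) for c in centers])

def kmeans(x, init_idx, iters=20):
    """Plain k-means; seeded deterministically for a stable demo."""
    cent = x[init_idx].copy()
    for _ in range(iters):
        lab = np.argmin(((x[:, None] - cent[None]) ** 2).sum(-1), axis=1)
        cent = np.array([x[lab == j].mean(axis=0) for j in range(len(cent))])
    return lab, cent

labels, cent = kmeans(fix, [0, 50, 100])  # one seed per region

# Estimate the Markov chain over cluster states from the fixation
# sequence: count transitions, then row-normalize.
k = 3
T = np.zeros((k, k))
for a, b in zip(labels[:-1], labels[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)
```

Each row of `T` is the distribution over the next salient region given the current one; the paper additionally selects k itself via Bayes factors, which this sketch leaves out.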

    Object Detection Through Exploration With A Foveated Visual Field

    We present a foveated object detector (FOD) as a biologically-inspired alternative to the sliding window (SW) approach which is the dominant method of search in computer vision object detection. Similar to the human visual system, the FOD has higher resolution at the fovea and lower resolution at the visual periphery. Consequently, more computational resources are allocated at the fovea and relatively fewer at the periphery. The FOD processes the entire scene, uses retino-specific object detection classifiers to guide eye movements, aligns its fovea with regions of interest in the input image and integrates observations across multiple fixations. Our approach combines modern object detectors from computer vision with a recent model of peripheral pooling regions found at the V1 layer of the human visual system. We assessed various eye movement strategies on the PASCAL VOC 2007 dataset and show that the FOD performs on par with the SW detector while bringing significant computational cost savings. Comment: An extended version of this manuscript was published in PLOS Computational Biology (October 2017) at https://doi.org/10.1371/journal.pcbi.100574
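    The idea of pooling regions that grow with eccentricity can be illustrated in one dimension; the function name and parameters below are illustrative assumptions, not the FOD's:

```python
import numpy as np

def foveated_pool(signal, fovea, base=1, slope=0.25):
    """Average-pool `signal` with a window whose half-width grows
    linearly with distance from index `fovea` (a toy 1-D analogue of
    eccentricity-dependent pooling)."""
    out = np.empty_like(signal, dtype=float)
    n = len(signal)
    for i in range(n):
        w = base + int(slope * abs(i - fovea))  # half-width of window
        lo, hi = max(0, i - w), min(n, i + w + 1)
        out[i] = signal[lo:hi].mean()
    return out

x = np.zeros(101)
x[10] = x[50] = 1.0  # two impulses: one peripheral, one at the fovea
pooled = foveated_pool(x, fovea=50)
```

The foveal impulse survives pooling much better than the peripheral one, which is the motivation for letting detections in the periphery trigger fixations that bring the fovea onto them.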

    Predicting Eye Fixations on Complex Visual Stimuli Using Local Symmetry

    Most bottom-up models that predict human eye fixations are based on contrast features. The saliency model of Itti, Koch and Niebur is an example of such contrast-saliency models. Although the model has been successfully compared to human eye fixations, we show that it lacks preciseness in the prediction of fixations on mirror-symmetrical forms. The contrast model gives a high response at the borders, whereas human observers consistently look at the symmetrical center of these forms. We propose a saliency model that predicts eye fixations using local mirror symmetry. To test the model, we performed an eye-tracking experiment with participants viewing complex photographic images and compared the data with our symmetry model and the contrast model. The results show that our symmetry model predicts human eye fixations significantly better on a wide variety of images, including many that are not selected for their symmetrical content. Moreover, our results show that especially early fixations are on highly symmetrical areas of the images. We conclude that symmetry is a strong predictor of human eye fixations and that it can be used as a predictor of the order of fixations.
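    A toy version of a local mirror-symmetry score, assuming the correlation between a patch and its mirror image as the measure (a simple stand-in, not the paper's model):

```python
import numpy as np

def symmetry_score(patch):
    """Pearson correlation between a 2-D patch and its horizontal
    mirror: 1.0 for perfectly mirror-symmetric patches."""
    mirrored = patch[:, ::-1]
    a, b = patch.ravel(), mirrored.ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

symmetric = np.outer(np.hanning(9), np.hanning(9))  # mirror-symmetric blob
rng = np.random.default_rng(3)
noisy = rng.normal(size=(9, 9))  # unstructured patch
```

Scoring every local patch this way and taking the maxima peaks at symmetry centers rather than at contrast borders, which matches the behavioral contrast the abstract draws.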

    Oculomotor Evidence for Top-Down Control following the Initial Saccade

    The goal of the current study was to investigate how salience-driven and goal-driven processes unfold during visual search over multiple eye movements. Eye movements were recorded while observers searched for a target, which was located on (Experiment 1) or defined as (Experiment 2) a specific orientation singleton. This singleton could either be the most, medium, or least salient element in the display. Results were analyzed as a function of response time, separately for initial and second eye movements. Irrespective of the search task, initial saccades elicited shortly after the onset of the search display were primarily salience-driven, whereas initial saccades elicited after approximately 250 ms were completely unaffected by salience. Initial saccades were increasingly guided in line with task requirements with increasing response times. Second saccades were completely unaffected by salience and were consistently goal-driven, irrespective of response time. These results suggest that stimulus salience affects the visual system only briefly after a visual image enters the brain and has no effect thereafter.

    Coding Efficiency of Fly Motion Processing Is Set by Firing Rate, Not Firing Precision

    To comprehend the principles underlying sensory information processing, it is important to understand how the nervous system deals with various sources of perturbation. Here, we analyze how the representation of motion information in the fly's nervous system changes with temperature and luminance. Although these two environmental variables have a considerable impact on the fly's nervous system, they do not prevent the fly from behaving appropriately over a wide range of conditions. We recorded responses from a motion-sensitive neuron, the H1-cell, to a time-varying stimulus at many different combinations of temperature and luminance. We found that the mean firing rate, but not firing precision, changes with temperature, while both were affected by mean luminance. Because we also found that information rate and coding efficiency are mainly set by the mean firing rate, our results suggest that, in the face of environmental perturbations, the coding efficiency is improved by an increase in the mean firing rate, rather than by an increased firing precision.

    Shorter spontaneous fixation durations in infants with later emerging autism

    Little is known about how spontaneous attentional deployment differs on a millisecond-level scale in the early development of autism spectrum disorders (ASD). We measured fine-grained eye movement patterns in 6- to 9-month-old infants at high or low familial risk (HR/LR) of ASD while they viewed static images. We observed shorter fixation durations (i.e., the time interval between saccades) in HR than LR infants. Preliminary analyses indicate that these results were replicated in a second cohort of infants. Fixation durations were shortest in those infants who went on to receive an ASD diagnosis at 36 months. While these findings demonstrate early-developing atypicality in fine-grained measures of attentional deployment early in the etiology of ASD, the specificity of these effects to ASD remains to be determined.
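    The fixation-duration measure defined above (the interval between successive saccades) reduces to a difference of saccade onset times; a minimal sketch with made-up timestamps:

```python
import numpy as np

# Illustrative saccade onset times in milliseconds (synthetic data).
saccade_onsets_ms = np.array([0, 180, 420, 650, 1020])

# Each fixation duration is the gap between consecutive saccade onsets.
fixation_durations = np.diff(saccade_onsets_ms)
```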