
    Human Visual Search Does Not Maximize the Post-Saccadic Probability of Identifying Targets

    Researchers have conjectured that eye movements during visual search are selected to minimize the number of saccades. The optimal Bayesian eye movement strategy that minimizes saccades does not simply direct the eye to whichever location is judged most likely to contain the target but makes use of the entire retina as an information-gathering device during each fixation. Here we show that human observers do not minimize the expected number of saccades when planning saccades in a simple visual search task composed of three tokens. In this task, the optimal eye movement strategy varied depending on the spacing between tokens (in the first experiment) or the size of tokens (in the second experiment), and changed abruptly once the separation or size surpassed a critical value. None of our observers changed strategy as a function of separation or size. Human performance fell far short of ideal, both qualitatively and quantitatively
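    The contrast between "fixate the most likely location" and "use the entire retina" can be illustrated with a toy ideal-observer sketch. This is a made-up example, not the authors' model: the token positions, the prior, and the eccentricity-dependent noise model are all invented here. Each fixation yields a noisy signal at every token, with reliability falling off with eccentricity, and the value of a fixation can be scored as the expected drop in posterior entropy:

```python
import numpy as np

rng = np.random.default_rng(0)
locs = np.array([0.0, 1.0, 2.0])      # three token positions (hypothetical)
prior = np.array([0.5, 0.25, 0.25])   # prior probability of target at each token

def expected_posterior_entropy(fix, n_samples=20000):
    """Monte Carlo estimate of the posterior entropy (bits) after one
    fixation at `fix`. Every location returns a noisy signal; the noise
    grows with distance from the fovea."""
    sigma = 0.5 + np.abs(locs - fix)
    target = rng.choice(3, size=n_samples, p=prior)
    means = (np.arange(3)[None, :] == target[:, None]).astype(float)
    obs = rng.normal(means, sigma[None, :])
    # Log-likelihood of the full observation vector under each hypothesis.
    loglik = np.zeros((n_samples, 3))
    for h in range(3):
        mu = (np.arange(3)[None, :] == h).astype(float)
        loglik[:, h] = -0.5 * np.sum(((obs - mu) / sigma[None, :]) ** 2, axis=1)
    loglik -= loglik.max(axis=1, keepdims=True)   # numerical safety
    post = np.exp(loglik) * prior[None, :]
    post /= post.sum(axis=1, keepdims=True)
    ent = -np.sum(np.where(post > 0, post * np.log2(post), 0.0), axis=1)
    return ent.mean()

prior_entropy = -np.sum(prior * np.log2(prior))   # 1.5 bits for this prior
gains = {f: prior_entropy - expected_posterior_entropy(f) for f in [0.0, 1.0, 2.0]}
```

In such a setup, the most informative fixation point need not coincide with the location of highest prior probability, which is the intuition behind the whole-retina strategy described above.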

    Influence of Low-Level Stimulus Features, Task Dependent Factors, and Spatial Biases on Overt Visual Attention

    Visual attention is thought to be driven by the interplay between low-level visual features and the task-dependent information content of local image regions, as well as by spatial viewing biases. Though dependent on experimental paradigms and model assumptions, this idea has given rise to varying claims that either bottom-up or top-down mechanisms dominate visual attention. To contribute toward a resolution of this discussion, here we quantify the influence of these factors and their relative importance in a set of classification tasks. Our stimuli consist of individual image patches (bubbles). For each bubble we derive three measures: a measure of salience based on low-level stimulus features, a measure of salience based on the task-dependent information content derived from our subjects' classification responses, and a measure of salience based on spatial viewing biases. Furthermore, we measure the empirical salience of each bubble based on our subjects' measured eye gazes, thus characterizing the overt visual attention each bubble receives. A multivariate linear model relates the three salience measures to overt visual attention. It reveals that all three salience measures contribute significantly. The effect of spatial viewing biases is highest and rather constant in different tasks. The contribution of task-dependent information is a close runner-up. Specifically, in a standardized task of judging facial expressions it scores highly. The contribution of low-level features is, on average, somewhat lower. However, in a prototypical search task, without an available template, it makes a strong contribution on par with the two other measures. Finally, the contributions of the three factors are only slightly redundant, and the semi-partial correlation coefficients are only slightly lower than the coefficients for full correlations.
These data provide evidence that all three measures make significant and independent contributions and that none can be neglected in a model of human overt visual attention
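    The analysis described above can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' pipeline: the variable names and effect sizes are invented. A linear model relates the three salience measures to empirical salience, and a semi-partial correlation isolates one measure's unique contribution by correlating the outcome with the residual of that predictor after regressing it on the other two:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical bubble-level salience measures (illustrative, not the data).
low_level = rng.normal(size=n)
task_info = 0.3 * low_level + rng.normal(size=n)   # mildly correlated predictors
spatial_bias = rng.normal(size=n)

# Empirical salience (overt attention) as a noisy linear combination.
empirical = (0.4 * low_level + 0.5 * task_info + 0.6 * spatial_bias
             + rng.normal(size=n))

# Multivariate linear model: intercept plus the three salience measures.
X = np.column_stack([np.ones(n), low_level, task_info, spatial_bias])
beta, *_ = np.linalg.lstsq(X, empirical, rcond=None)

def semipartial_r(y, x, others):
    """Correlate y with the part of x not explained by the other predictors."""
    Z = np.column_stack([np.ones(len(x))] + others)
    resid = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    return np.corrcoef(y, resid)[0, 1]

sr_low = semipartial_r(empirical, low_level, [task_info, spatial_bias])
```

When the predictors overlap only slightly, as reported above, the semi-partial coefficients stay close to the full correlations.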

    A theoretical model of inflammation- and mechanotransduction-driven asthmatic airway remodelling

    Inflammation, airway hyper-responsiveness and airway remodelling are well-established hallmarks of asthma, but their inter-relationships remain elusive. In order to obtain a better understanding of their inter-dependence, we develop a mechanochemical morphoelastic model of the airway wall accounting for local volume changes in airway smooth muscle (ASM) and extracellular matrix in response to transient inflammatory or contractile agonist challenges. We use constrained mixture theory, together with a multiplicative decomposition of growth from the elastic deformation, to model the airway wall as a nonlinear fibre-reinforced elastic cylinder. Local contractile agonist drives ASM cell contraction, generating mechanical stresses in the tissue that drive further release of mitogenic mediators and contractile agonists via underlying mechanotransductive signalling pathways. Our model predictions are consistent with previously described inflammation-induced remodelling within an axisymmetric airway geometry. Additionally, our simulations reveal novel mechanotransductive feedback by which hyper-responsive airways exhibit increased remodelling, for example, via stress-induced release of pro-mitogenic and procontractile cytokines. Simulation results also reveal emergence of a persistent contractile tone observed in asthmatics, via either a pathological mechanotransductive feedback loop, a failure to clear agonists from the tissue, or a combination of both. Furthermore, we identify various parameter combinations that may contribute to the existence of different asthma phenotypes, and we illustrate a combination of factors which may predispose severe asthmatics to fatal bronchospasms
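    The multiplicative decomposition mentioned above is standard in morphoelasticity; in the conventional notation (which may differ from the authors' symbols), the deformation gradient splits into an elastic part and a growth part:

```latex
% Total deformation gradient: elastic deformation F_e applied after growth F_g
F = F_e \, F_g , \qquad J_g = \det F_g
% Only F_e stores elastic energy; J_g gives the local volume change due to
% growth (e.g. ASM and ECM accumulation during remodelling).
```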

    Does oculomotor inhibition of return influence fixation probability during scene search?

    Oculomotor inhibition of return (IOR) is believed to facilitate scene scanning by decreasing the probability that gaze will return to a previously fixated location. This “foraging” hypothesis was tested during scene search and in response to sudden-onset probes at the immediately previous (one-back) fixation location. The latencies of saccades landing within 1° of the previous fixation location were elevated, consistent with oculomotor IOR. However, there was no decrease in the likelihood that the previous location would be fixated relative to distance-matched controls or an a priori baseline. Saccades exhibit an overall forward bias, but this is due to a general bias to move in the same direction and for the same distance as the last saccade (saccadic momentum) rather than to a spatially specific tendency to avoid previously fixated locations. We find no evidence that oculomotor IOR has a significant impact on return probability during scene search

    Loss of ELK1 has differential effects on age-dependent organ fibrosis

    ETS domain-containing protein-1 (ELK1) is a transcription factor important in regulating αvβ6 integrin expression. αvβ6 integrins activate the profibrotic cytokine Transforming Growth Factor β1 (TGFβ1) and are increased in the alveolar epithelium in idiopathic pulmonary fibrosis (IPF). IPF is a disease associated with aging, and therefore we hypothesised that aged animals lacking Elk1 globally would develop spontaneous fibrosis in organs where αvβ6-mediated TGFβ activation has been implicated. Here we identify that Elk1-knockout (Elk1−/0) mice aged to one year developed spontaneous fibrosis in the absence of injury in both the lung and the liver but not in the heart or kidneys. The lungs of Elk1−/0 aged mice demonstrated increased collagen deposition, in particular collagen 3α1, located in small fibrotic foci and thickened alveolar walls. Despite the liver having relatively low global levels of ELK1 expression, Elk1−/0 animals developed hepatosteatosis and fibrosis. The loss of Elk1 also had differential effects on Itgb1, Itgb5 and Itgb6 expression in the four organs, potentially explaining the phenotypic differences in these organs. To understand the potential causes of reduced ELK1 in human disease we exposed human lung epithelial cells and murine lung slices to cigarette smoke extract, which led to reduced ELK1 expression and may explain the loss of ELK1 in human disease. These data support a fundamental role for ELK1 in protecting against the development of progressive fibrosis via transcriptional regulation of beta integrin subunit genes, and demonstrate that loss of ELK1 can be caused by cigarette smoke

    When Art Moves the Eyes: A Behavioral and Eye-Tracking Study

    The aim of this study was to investigate, using an eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black-and-white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level visually driven bottom-up processes when a human subject was represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results, proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception

    Irregularity-based image regions saliency identification and evaluation

    Saliency, or salient-region extraction from images, remains a challenging problem, since it requires some understanding of the image and its nature. A technique that is suitable in one application is not necessarily useful in another; thus, saliency enhancement is application oriented. In this paper, a new technique for extracting the salient regions from an image is proposed which utilizes the local features of the region surrounding each pixel. The level of saliency is then decided based on a global comparison of the saliency-enhanced image. To make the process fully automatic, a new fuzzy-based thresholding technique is also proposed. The paper contains a survey of the state-of-the-art methods of saliency evaluation, and a new saliency evaluation technique is proposed

    Representing 3D Space in Working Memory: Spatial Images from Vision, Hearing, Touch, and Language

    The chapter deals with a form of transient spatial representation referred to as a spatial image. Like a percept, it is externalized, scaled to the environment, and can appear in any direction about the observer. It transcends the concept of modality, as it can be based on inputs from the three spatial senses, from language, and from long-term memory. Evidence is presented that supports each of the claimed properties of the spatial image, showing that it is quite different from a visual image. Much of the evidence presented is based on spatial updating. A major concern is whether spatial images from different input modalities are functionally equivalent: whether, once instantiated in working memory, the spatial images from different modalities have the same functional characteristics with respect to subsequent processing, such as that involved in spatial updating. Going further, the research provides some evidence that spatial images are amodal (i.e., do not retain modality-specific features)

    Scenes, saliency maps and scanpaths

    The aim of this chapter is to review some of the key research investigating how people look at pictures. In particular, my goal is to provide theoretical background for those that are new to the field, while also explaining some of the relevant methods and analyses. I begin by introducing eye movements in the context of natural scene perception. As in other complex tasks, eye movements provide a measure of attention and information processing over time, and they tell us about how the foveated visual system determines what to prioritise. I then describe some of the many measures which have been derived to summarize where people look in complex images. These include global measures, analyses based on regions of interest and comparisons based on heat maps. A particularly popular approach for trying to explain fixation locations is the saliency map approach, and the first half of the chapter is mostly devoted to this topic. A large number of papers and models are built on this approach, but it is also worth spending time on this topic because the methods involved have been used across a wide range of applications. The saliency map approach is based on the fact that the visual system has topographic maps of visual features, that contrast within these features seems to be represented and prioritized, and that a central representation can be used to control attention and eye movements. This approach, and the underlying principles, has led to an increase in the number of researchers using complex natural scenes as stimuli. It is therefore important that those new to the field are familiar with saliency maps, their usage, and their pitfalls. I describe the original implementation of this approach (Itti & Koch, 2000), which uses spatial filtering at different levels of coarseness and combines them in an attempt to identify the regions which stand out from their background. Evaluating this model requires comparing fixation locations to model predictions. 
Several different experimental and comparison methods have been used, but most recent research shows that bottom-up guidance is rather limited in terms of predicting real eye movements. The second part of the chapter is largely concerned with measuring eye movement scanpaths. Scanpaths are the sequential patterns of fixations and saccades made when looking at something for a period of time. They show regularities which may reflect top-down attention, and some have attempted to link these to memory and an individual’s mental model of what they are looking at. While not all researchers will be testing hypotheses about scanpaths, an understanding of the underlying methods and theory will be of benefit to all. I describe the theories behind analyzing eye movements in this way, and various methods which have been used to represent and compare them. These methods allow one to quantify the similarity between two viewing patterns, and this similarity is linked to both the image and the observer. The last part of the chapter describes some applications of eye movements in image viewing. The methods discussed can be applied to complex images, and therefore these experiments can tell us about perception in art and marketing, as well as about machine vision
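    A minimal single-feature version of the center-surround computation described above can be sketched as follows. This is an intensity channel only, with invented scale parameters; the full Itti & Koch model adds colour and orientation channels, many scale pairs, and a normalization operator. The idea is simply to blur at a fine (center) and a coarse (surround) scale and keep the absolute difference, so that regions which stand out from their background score highly:

```python
import numpy as np

def blur(img, sigma):
    """Separable Gaussian blur using only numpy (rows, then columns)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)

def center_surround_saliency(image, center_sigmas=(1, 2), surround_scale=4):
    """Toy intensity saliency map in the spirit of Itti & Koch (2000):
    center-surround differences summed over a few scale pairs."""
    image = image.astype(float)
    saliency = np.zeros_like(image)
    for sigma in center_sigmas:
        center = blur(image, sigma)
        surround = blur(image, sigma * surround_scale)
        saliency += np.abs(center - surround)
    # Normalize to [0, 1] for comparison with, e.g., fixation heat maps.
    span = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / span if span > 0 else saliency

# A bright patch on a dark background should stand out from its surround.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
smap = center_surround_saliency(img)
```

Evaluating such a map against data then reduces to comparing its values at fixated versus non-fixated locations, which is where the comparison methods discussed in the chapter come in.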

    Fixation durations in scene viewing: Modeling the effects of local image features, oculomotor parameters, and task

    Scene perception requires the orchestration of image- and task-related processes with oculomotor constraints. The present study was designed to investigate how these factors influence how long the eyes remain fixated on a given location. Linear mixed models (LMMs) were used to test whether local image statistics (including luminance, luminance contrast, edge density, visual clutter, and the number of homogeneous segments), calculated for 1° circular regions around fixation locations, modulate fixation durations, and how these effects depend on task-related control. Fixation durations and locations were recorded from 72 participants, each viewing 135 scenes under three different viewing instructions (memorization, preference judgment, and search). Along with the image-related predictors, the LMMs simultaneously considered a number of oculomotor and spatiotemporal covariates, including the amplitudes of the previous and next saccades, and viewing time. As a key finding, the local image features around the current fixation predicted this fixation’s duration. For instance, greater luminance was associated with shorter fixation durations. Such immediacy effects were found for all three viewing tasks. Moreover, in the memorization and preference tasks, some evidence for successor effects emerged, such that some image characteristics of the upcoming location influenced how long the eyes stayed at the current location. In contrast, in the search task, scene processing was not distributed across fixation durations within the visual span. The LMM-based framework of analysis, applied to the control of fixation durations in scenes, suggests important constraints for models of scene perception and search, and for visual attention in general
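    The fixed-effect part of such an analysis can be sketched with synthetic data. All numbers here are invented, and this is a numpy-only fixed-effects approximation with per-participant dummy intercepts, whereas the study fitted true linear mixed models with random effects; it only illustrates how a local image feature can be tested as a predictor of fixation duration across participants:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_fix = 20, 100

subj = np.repeat(np.arange(n_subj), n_fix)
subj_intercept = rng.normal(0, 0.2, n_subj)[subj]   # between-subject variability
luminance = rng.uniform(0, 1, n_subj * n_fix)       # hypothetical local feature

# Simulate the reported immediacy effect: higher local luminance leads to
# shorter (log) fixation durations.
log_dur = 5.6 - 0.3 * luminance + subj_intercept + rng.normal(0, 0.3, len(subj))

# One dummy intercept per participant plus a shared luminance slope
# (an LMM would instead treat the intercepts as a random effect).
dummies = (subj[:, None] == np.arange(n_subj)[None, :]).astype(float)
X = np.column_stack([dummies, luminance])
beta, *_ = np.linalg.lstsq(X, log_dur, rcond=None)
luminance_slope = beta[-1]   # recovers a negative slope near -0.3
```

A real analysis would add the other image statistics and the oculomotor covariates as further columns, exactly as the LMMs above consider them simultaneously.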