
    Focusing and orienting spatial attention differently modulate crowding in central and peripheral vision

    The allocation of attentional resources to a particular location or object in space involves two distinct processes: an orienting process and a focusing process. Indeed, it has been demonstrated that performance on different visual tasks can improve when a cue, such as a dot, anticipates the position of the target (orienting), or when its dimensions (as in the case of a small square) inform about the size of the attentional window (focusing). Here, we examine the role of these two components of visuo-spatial attention (orienting and focusing) in modulating crowding in peripheral (Experiments 1 and 3a) and foveal (Experiments 2 and 3b) vision. The task required participants to discriminate the orientation of a target letter "T," close to acuity threshold, presented with left and right "H" flankers, as a function of target-flanker distance. Three cue types were used: a red dot, a small square, and a big square. In peripheral vision (Experiments 1 and 3a), we found a significant improvement with the red dot and no advantage when a small square was used as a cue. In central vision (Experiments 2 and 3b), only the small square significantly improved participants' performance, reducing the critical distance needed to recover target identification. Taken together, the results indicate a behavioral dissociation between orienting and focusing attention in their capacity to modulate crowding. In particular, we confirmed that orienting attention can modulate crowding in the visual periphery, while focal attention can modulate foveal crowding.

    Predicting complexity perception of real world images

    The aim of this work is to predict the complexity perception of real world images. We propose a new complexity measure in which different image features, based on spatial, frequency, and color properties, are linearly combined. In order to find the optimal set of weighting coefficients we applied Particle Swarm Optimization. The optimal linear combination is the one that best fits the subjective data obtained in an experiment where observers evaluated the complexity of real world scenes on a web-based interface. To test the proposed complexity measure we performed a second experiment on a different database of real world scenes, where the linear combination previously obtained was correlated with the new subjective data. Our complexity measure outperforms not only each single visual feature but also two visual clutter measures frequently used in the literature to predict image complexity. To analyze the usefulness of our proposal, we also considered two different sets of stimuli composed of real texture images. Tuning the parameters of our measure for this kind of stimuli, we obtained a linear combination that still outperforms the single measures. In conclusion, our measure, properly tuned, can predict complexity perception of different kinds of images.
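    The feature-weighting scheme described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature values and subjective ratings below are synthetic stand-ins, and the swarm parameters (inertia 0.7, acceleration 1.5) are conventional PSO defaults rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: per-image feature values (e.g. spatial, frequency,
# and colour statistics) and subjective complexity ratings. The real
# features and ratings come from the paper's experiments.
n_images, n_features = 40, 3
features = rng.random((n_images, n_features))
true_w = np.array([0.6, 0.3, 0.1])
ratings = features @ true_w + rng.normal(0, 0.02, n_images)

def fitness(w):
    """Negative Pearson correlation between the combined measure and ratings."""
    pred = features @ w
    return -np.corrcoef(pred, ratings)[0, 1]

# Minimal particle swarm: each particle is a candidate weight vector.
n_particles, n_iter = 20, 200
pos = rng.random((n_particles, n_features))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, n_features))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best correlation:", -fitness(gbest))
```

    With a near-linear synthetic relationship, the swarm converges to weights whose combined measure correlates almost perfectly with the ratings; in the paper, the same idea is applied to real features against subjective scores.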

    On the representation of novel objects : human psychophysics, monkey physiology and computational models

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 1996, by Emanuela Bricolo. Includes bibliographical references (p. 139-150).

    Object Representation In Visual Systems. A Multidisciplinary Approach

    Previous Work: The first question in the search for a representation is the characterization of the reference frame in which the object is coded. Is it object-centered (as in most artificial systems today), or is it a collection of object views? Psychophysical experiments that used a variety of novel objects as stimuli in a recognition task have shown that in this case the reference frame used by the human visual system is view-centered [1, 4]. If subjects learn an object from only a single viewpoint, their ability to recognize different views of the same object varies with the distance from the learned view. This psychophysical result has been reinforced by neurophysiological findings: recordings in IT cortex of monkeys performing the same recognition task as humans show cells tuned to specific views of the learned object. Decay rates for the cells are similar to the decay of the performance rates [2]. An object could therefore be thought of as represented by …

    Object Representation In Inferior Temporal Cortex

    …different regions of IT while monkeys were performing a face discrimination task. The underlying assumption is that the ensemble of cells codes the set of stimuli, where each stimulus is represented as a high-dimensional vector with each cell coding one dimension. MDS produces a lower-dimensional representation maintaining as much as possible the distances between the representations of the different stimuli. If this reduced representation still retains most of the variance of the original one, the population code is considered redundant. Young and Yamane found that the two-dimensional MDS solution could explain up to 75% of the variance, thus implying that IT cortex uses a sparse population code to represent faces [9]. Approach: We hypothesize that object recognition and object classification, the putative functions of inferotemporal cortex, depend on the similarity of the neural representation of a presented stimulus to the neural representations of known objects or obje…
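    The MDS analysis described above can be illustrated with classical (Torgerson) scaling on a population response matrix. The data below are simulated stand-ins for the IT recordings, with a planted two-dimensional structure; only the method is shown, not the original analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population responses: rows are stimuli (faces), columns
# are cells. A low-dimensional latent structure plus noise stands in
# for the recorded firing rates.
n_stimuli, n_cells = 12, 30
latent = rng.normal(size=(n_stimuli, 2))
mixing = rng.normal(size=(2, n_cells))
responses = latent @ mixing + rng.normal(0, 0.05, (n_stimuli, n_cells))

# Classical MDS: double-centre the squared-distance matrix and take
# the top eigenvectors of the resulting Gram matrix.
d2 = ((responses[:, None, :] - responses[None, :, :]) ** 2).sum(-1)
J = np.eye(n_stimuli) - np.ones((n_stimuli, n_stimuli)) / n_stimuli
B = -0.5 * J @ d2 @ J
evals, evecs = np.linalg.eigh(B)            # eigh returns ascending order
evals, evecs = evals[::-1], evecs[:, ::-1]  # sort descending

embedding = evecs[:, :2] * np.sqrt(evals[:2])          # 2-D MDS solution
var_explained = evals[:2].sum() / evals[evals > 0].sum()
print(f"variance explained by 2 dims: {var_explained:.2f}")
```

    When the first two dimensions capture most of the variance, as in the 75% figure reported above, the distances between stimuli are largely preserved in the plane, which is what licenses the redundancy argument.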

    3D Object Recognition: A Model of View-Tuned Neurons

    In 1990 Poggio and Edelman proposed a view-based model of object recognition that accounts for several psychophysical properties of certain recognition tasks. The model predicted the existence of view-tuned and view-invariant units, which were later found by Logothetis et al. (Logothetis et al., 1995) in IT cortex of monkeys trained with views of specific paperclip objects. The model, however, does not specify the inputs to the view-tuned units and their internal organization. In this paper we propose a model of these view-tuned units that is consistent with physiological data from single cell responses. 1 INTRODUCTION Recognition of specific objects, such as recognition of a particular face, can be based on representations that are object centered, such as 3D structural models. Alternatively, a 3D object may be represented for the purpose of recognition in terms of a set of views. This latter class of models is biologically attractive because model acquisition -- the learning phase -- …
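    A common reading of such view-tuned units is Gaussian tuning around a stored view, with view invariance obtained by pooling over several stored views. The sketch below follows that reading; the tuning width and stored views are hypothetical values chosen for illustration, not parameters from the model in the paper.

```python
import numpy as np

def view_tuned_response(view, stored_view, sigma=15.0):
    """Gaussian tuning: the response falls off with angular distance
    between the presented view and the unit's preferred (stored) view."""
    d = abs((view - stored_view + 180) % 360 - 180)  # wrap to [0, 180] deg
    return np.exp(-d ** 2 / (2 * sigma ** 2))

# A view-invariant unit can be built by pooling (here, max) over a
# small set of view-tuned units covering the learned views.
stored = [0.0, 60.0, 120.0]

def invariant_response(view):
    return max(view_tuned_response(view, s) for s in stored)

print(view_tuned_response(0, 0))   # 1.0 at the preferred view
print(view_tuned_response(30, 0))  # weaker response 30 degrees away
```

    The decay of each unit's response with distance from its stored view mirrors the psychophysical decay of recognition performance away from the learned viewpoint noted in the entries above.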

    Inference-driven attention in symbolic and perceptual tasks: Biases toward expected and unexpected inputs

    Cherubini P, Burigo M, Bricolo E. Inference-driven attention in symbolic and perceptual tasks: Biases toward expected and unexpected inputs. The Quarterly Journal of Experimental Psychology. 2006;59(3):597-624.

    Hemispheric metacontrol and cerebral dominance in healthy individuals investigated by means of chimeric faces

    Cerebral dominance and hemispheric metacontrol were investigated by testing the ability of healthy participants to match chimeric, entire, or half faces presented tachistoscopically. The two hemi-faces compounding chimeric or entire stimuli were presented simultaneously or asynchronously at different exposure times. Participants did not consciously detect chimeric faces for simultaneous presentations lasting up to 40 ms. Interestingly, a 20 ms separation between each half-chimera was sufficient to induce detection of conflicts at a conscious level. Although the presence of chimeric faces was not consciously perceived, performance on chimeric faces was poorer than on entire- and half-face stimuli, thus indicating an implicit processing of perceptual conflicts. Moreover, the precedence of hemispheric stimulation overruled the right hemisphere dominance for face processing, insofar as the hemisphere stimulated last appeared to influence the response. This dynamic reversal of cerebral dominance, however, was not caused by a shift in hemispheric specialization, since the level of performance always reflected the right hemisphere specialization for face recognition. Thus, the dissociation between hemispheric dominance and specialization found in the present study hints at the existence of hemispheric metacontrol in healthy individuals. (c) 2005 Elsevier B.V. All rights reserved.