4,527 research outputs found

    Focusing and orienting spatial attention differently modulate crowding in central and peripheral vision

    Get PDF
    The allocation of attentional resources to a particular location or object in space involves two distinct processes: an orienting process and a focusing process. Indeed, it has been demonstrated that performance on different visual tasks can improve when a cue, such as a dot, anticipates the position of the target (orienting), or when its dimensions (as in the case of a small square) signal the size of the attentional window (focusing). Here, we examine the role of these two components of visuo-spatial attention (orienting and focusing) in modulating crowding in peripheral (Experiments 1 and 3a) and foveal (Experiments 2 and 3b) vision. The task required participants to discriminate the orientation of a target letter "T", close to the acuity threshold, presented with left and right "H" flankers, as a function of target-flanker distance. Three cue types were used: a red dot, a small square, and a big square. In peripheral vision (Experiments 1 and 3a), we found a significant improvement with the red dot and no advantage when the small square was used as a cue. In central vision (Experiments 2 and 3b), only the small square significantly improved participants' performance, reducing the critical distance needed to recover target identification. Taken together, the results indicate a behavioral dissociation between orienting and focusing attention in their capacity to modulate crowding. In particular, we confirmed that orienting attention can modulate crowding in the visual periphery, while we found that focal attention can modulate foveal crowding.

    Top-down effects on early visual processing in humans: a predictive coding framework

    Get PDF
    An increasing number of human electroencephalography (EEG) studies examining the earliest component of the visual evoked potential, the so-called C1, have cast doubt on the previously prevalent notion that this component is impermeable to top-down effects. This article reviews the original studies that (i) described the C1, (ii) linked it to primary visual cortex (V1) activity, and (iii) suggested that its electrophysiological characteristics are exclusively determined by low-level stimulus attributes, particularly the spatial position of the stimulus within the visual field. We then describe conflicting evidence from animal studies and human neuroimaging experiments and provide an overview of recent EEG and magnetoencephalography (MEG) work showing that initial V1 activity in humans may be strongly modulated by higher-level cognitive factors. Finally, we formulate a theoretical framework for understanding top-down effects on early visual processing in terms of predictive coding.

    Experimental investigation of automatic processes in visual perception (NĂ€gemistaju automaatsete protsesside eksperimentaalne uurimine)

    Get PDF
    The electronic version of the thesis does not include the publications. The research presented and discussed in the thesis is an experimental exploration of processes in visual perception, all of which display a considerable amount of automaticity. These processes are targeted from different angles using different experimental paradigms and stimuli, and by measuring both behavioural and brain responses. In the first three empirical studies, the focus is on motion detection, regarded as one of the most basic visual processes shaped by evolution. Study I investigated how the motion information of an object is processed in the presence of background motion. Although it is widely believed that no motion can be perceived without establishing a frame of reference with other objects or with motion on the background, our results found no support for this relative-motion principle. This finding speaks in favour of a simple and automatic motion-detection process, one that registers displacements on the retina and is largely insensitive to the surrounding context. Study II shows that the visual system automatically processes motion information that falls outside our attentional focus: even while we concentrate on some task, the brain constantly monitors the surrounding environment. Study III addressed what happens when multiple stimulus qualities (motion and colour) are present and varied, which is the everyday reality of our visual input. We showed that motion facilitated the detection of colour changes on the same object, suggesting that the processing of motion and colour is not entirely separate. These results also indicate that motion information is hard to ignore, and that its processing is initiated rather automatically. The fourth empirical study focuses on another example of visual input that is processed in a largely automatic way and carries high survival value: emotional facial expressions. In Study IV, participants detected emotional facial expressions faster and more easily than neutral ones, with a tendency towards more automatic attention to angry faces. In addition, we investigated the emergence of visual mismatch negativity (vMMN), which reflects the brain's ability to automatically detect deviations from its internal model of the surrounding environment. Study II and Study IV both provide evidence on how vMMN emerges under different conditions and paradigms, and propose several methodological gains for registering this automatic change-detection mechanism. Study V is an important contribution to the vMMN research field, as it is the first comprehensive review and meta-analysis of vMMN studies in psychiatric and neurological disorders.

    Texture Segregation By Visual Cortex: Perceptual Grouping, Attention, and Learning

    Get PDF
    A neural model is proposed of how laminar interactions in the visual cortex may learn and recognize object texture and form boundaries. The model brings together five interacting processes: region-based texture classification, contour-based boundary grouping, surface filling-in, spatial attention, and object attention. The model shows how form boundaries can determine regions in which surface filling-in occurs; how surface filling-in interacts with spatial attention to generate a form-fitting distribution of spatial attention, or attentional shroud; how the strongest shroud can inhibit weaker shrouds; and how the winning shroud regulates learning of texture categories, and thus the allocation of object attention. The model can discriminate abutted textures with blurred boundaries and is sensitive to texture boundary attributes such as discontinuities in orientation and texture flow curvature, as well as to relative orientations of texture elements. The model quantitatively fits a large set of human psychophysical data on orientation-based textures. Object boundary output of the model is compared to computer vision algorithms using a set of human-segmented photographic images. The model classifies textures and suppresses noise using a multiple-scale oriented filterbank and a distributed Adaptive Resonance Theory (dART) classifier. The matched signal between the bottom-up texture inputs and top-down learned texture categories is utilized by oriented competitive and cooperative grouping processes to generate texture boundaries that control surface filling-in and spatial attention. Top-down modulatory attentional feedback from boundary and surface representations to early filtering stages results in enhanced texture boundaries and more efficient learning of texture within attended surface regions. Surface-based attention also provides a self-supervising training signal for learning new textures.
    The importance of surface-based attentional feedback in texture learning and classification is tested using a set of textured images from the Brodatz micro-texture album. Benchmark classification accuracies range from 95.1% to 98.6% with attention, and from 90.6% to 93.2% without attention. Air Force Office of Scientific Research (F49620-01-1-0397, F49620-01-1-0423); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
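    The front end described above, a multiple-scale oriented filterbank feeding a texture classifier, can be sketched in a few lines. The code below is a generic stand-in under assumed parameters (Gabor filters, arbitrary sizes and frequencies), not the model's actual dART classifier or laminar circuitry; `gabor_kernel` and `texture_features` are illustrative names.

    ```python
    import numpy as np

    def gabor_kernel(freq, theta, sigma=3.0, size=15):
        """Real-valued oriented Gabor filter: a sinusoidal carrier at
        orientation `theta` under an isotropic Gaussian envelope."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return env * np.cos(2 * np.pi * freq * xr)

    def fft_convolve(img, kern):
        """Circular 2-D convolution via the FFT (adequate for a demo)."""
        return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                    np.fft.fft2(kern, s=img.shape)))

    def texture_features(img, freqs=(0.1, 0.2), n_orient=4):
        """Mean rectified response per (scale, orientation) channel."""
        feats = []
        for f in freqs:
            for k in range(n_orient):
                resp = fft_convolve(img, gabor_kernel(f, k * np.pi / n_orient))
                feats.append(np.abs(resp).mean())
        return np.array(feats)

    # Two orthogonal gratings yield clearly different feature vectors,
    # separable by any simple classifier downstream.
    yy, xx = np.mgrid[0:64, 0:64]
    vert = np.sin(2 * np.pi * 0.2 * xx)   # vertical stripes
    horiz = np.sin(2 * np.pi * 0.2 * yy)  # horizontal stripes
    fv, fh = texture_features(vert), texture_features(horiz)
    ```

    Within the matched-frequency channels, the strongest response moves to the orientation aligned with each grating, which is the property a texture boundary detector exploits at abutting regions.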

    A computer vision model for visual-object-based attention and eye movements

    Get PDF
    This is the post-print version of the final paper published in Computer Vision and Image Understanding. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright @ 2008 Elsevier B.V. This paper presents a new computational framework for modelling visual-object-based attention and attention-driven eye movements within an integrated system, in a biologically inspired approach. Attention operates at multiple levels of visual selection (space, feature, object and group) depending on the nature of targets and visual tasks. Attentional shifts and gaze shifts are built on common processing circuits and control mechanisms but are also separated by their different functional roles, working together to fulfil flexible visual selection tasks in complicated visual environments. The framework integrates the important aspects of human visual attention and eye movements, resulting in sophisticated performance in complicated natural scenes. The proposed approach aims at exploring a useful visual selection system for computer vision, especially for use in cluttered natural visual environments. National Natural Science Foundation of China
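    To make the selection-and-shift idea concrete, here is a deliberately minimal winner-take-all scan over a saliency map with inhibition of return. It is a generic sketch of attention-driven gaze sequencing under assumed parameters, not the paper's integrated multi-level (space/feature/object/group) framework; `attention_scanpath` and its arguments are illustrative.

    ```python
    import numpy as np

    def attention_scanpath(saliency, n_fixations=3, inhibit_radius=2):
        """Sequentially pick the most salient location, then suppress
        its neighbourhood (inhibition of return) so attention shifts
        to the next-strongest candidate."""
        s = saliency.astype(float)
        h, w = s.shape
        yy, xx = np.mgrid[0:h, 0:w]
        fixations = []
        for _ in range(n_fixations):
            y, x = np.unravel_index(np.argmax(s), s.shape)  # winner-take-all
            fixations.append((int(y), int(x)))
            # Inhibition of return: rule out the attended neighbourhood.
            s[(yy - y)**2 + (xx - x)**2 <= inhibit_radius**2] = -np.inf
        return fixations

    # Three peaks of decreasing strength are visited in salience order.
    s = np.zeros((10, 10))
    s[2, 2], s[7, 7], s[4, 8] = 3.0, 2.0, 1.0
    scanpath = attention_scanpath(s)
    ```

    Object- and group-based selection in the paper's full model would replace the circular inhibition region with a shape fitted to the attended object.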

    Change blindness: eradication of gestalt strategies

    Get PDF
    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Integrated 2-D Optical Flow Sensor

    Get PDF
    I present a new focal-plane analog VLSI sensor that estimates optical flow in two visual dimensions. The chip significantly improves on previous approaches, both with respect to the applied model of optical flow estimation and to the actual hardware implementation. Its distributed computational architecture consists of an array of locally connected motion units that collectively solve for the unique optimal optical flow estimate. The novel gradient-based motion model assumes visual motion to be translational, smooth and biased. The model guarantees that the estimation problem is computationally well-posed regardless of the visual input. Model parameters can be globally adjusted, leading to a rich output behavior. Varying the smoothness strength, for example, can provide a continuous spectrum of motion estimates, ranging from normal to global optical flow. Unlike approaches that rely on the explicit matching of brightness edges in space or time, the applied gradient-based model assures spatiotemporal continuity of visual information. The non-linear coupling of the individual motion units improves the resulting optical flow estimate because it reduces spatial smoothing across large velocity differences. Extended measurements of a 30x30 array prototype sensor under real-world conditions demonstrate the validity of the model and the robustness and functionality of the implementation.
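    The smoothness trade-off described above, ranging from normal (local) flow to a single global estimate, also appears in classical gradient-based flow estimation. A minimal software analogue in the Horn-Schunck style can illustrate it; this is an assumed stand-in for exposition, not the chip's circuit equations, and the grating stimulus and parameter values are arbitrary choices.

    ```python
    import numpy as np

    def neighbor_mean(f):
        """4-neighbour average, with periodic boundaries for simplicity."""
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

    def gradient_flow(I1, I2, alpha=0.5, n_iters=400):
        """Gradient-based optical flow with a smoothness prior.
        Larger `alpha` biases the estimate toward a global (uniform)
        flow; smaller `alpha` yields more local estimates."""
        I1, I2 = I1.astype(float), I2.astype(float)
        Ix = np.gradient(I1, axis=1)   # spatial brightness gradients
        Iy = np.gradient(I1, axis=0)
        It = I2 - I1                   # temporal brightness gradient
        u = np.zeros_like(I1)
        v = np.zeros_like(I1)
        for _ in range(n_iters):       # Jacobi-style relaxation
            u_avg, v_avg = neighbor_mean(u), neighbor_mean(v)
            t = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
            u = u_avg - Ix * t         # enforce brightness constancy
            v = v_avg - Iy * t         # under the smoothness prior
        return u, v

    # A sinusoidal grating shifted right by one pixel should recover a
    # roughly uniform rightward flow of about 1 px/frame.
    x = np.arange(64)
    I1 = np.tile(np.sin(2 * np.pi * x / 16), (64, 1))
    I2 = np.roll(I1, 1, axis=1)
    u, v = gradient_flow(I1, I2)
    ```

    The hardware's advantage is that the analogous relaxation is solved continuously in parallel by the resistive coupling of the motion units, rather than by iterating.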

    Eye fixation during multiple object attention is based on a representation of discrete spatial foci

    Get PDF
    We often look at and attend to several objects at once. How the brain determines where to point our eyes when we do this is poorly understood. Here we devised a novel paradigm to discriminate between different models of spatial selection guiding fixation. In contrast to standard static attentional tasks where the eye remains fixed at a predefined location, observers selected their own preferred fixation position while they tracked static targets that were arranged in specific geometric configurations and which changed identity over time. Fixations were best predicted by a representation of discrete spatial foci, not a polygonal grouping, a simple 2-foci division of attention, or a circular spotlight. Moreover, attentional performance was incompatible with serial selection, suggesting that attentional selection and fixation share the same spatial representation. Together with previous findings on fixational microsaccades during covert attention, our results suggest a more nuanced definition of overt vs. covert attention.