
    The reentry hypothesis: The putative interaction of the frontal eye field, ventrolateral prefrontal cortex, and areas V4, IT for attention and eye movement

    Attention is known to play a key role in perception, including action selection, object recognition and memory. Despite findings revealing competitive interactions among cell populations, attention remains difficult to explain. The central purpose of this paper is to link a large number of findings in a single computational approach. Our simulation results suggest that attention can be well explained on a network level involving many areas of the brain. We argue that attention is an emergent phenomenon that arises from reentry and competitive interactions. We hypothesize that guided visual search requires the use of an object-specific template in prefrontal cortex to sensitize V4 and IT cells whose preferred stimuli match the target template. This induces a feature-specific bias and provides guidance for eye movements. Prior to an eye movement, a spatially organized reentry from oculomotor centers, specifically the movement cells of the frontal eye field, occurs and modulates the gain of V4 and IT cells. The processes involved are elucidated by quantitatively comparing the time course of simulated neural activity with experimental data. Using visual search tasks as an example, we provide clear and empirically testable predictions for the participation of IT, V4 and the frontal eye field in attention. Finally, we explain a possible physiological mechanism that can lead to non-flat search slopes as the result of a slow, parallel discrimination process.
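    As a rough illustration of the mechanism described above, the sketch below implements the two gain signals as simple multiplicative factors: a feature-specific bias derived from a prefrontal target template and a spatially organized gain field centered on the planned saccade target, standing in for reentry from FEF movement cells. This is a minimal sketch, not the authors' simulation; the grid size, number of feature channels, gain constants, and Gaussian reentry profile are all illustrative assumptions.

```python
# Minimal sketch (not the authors' simulation) of feature-based and spatial
# gain modulation on V4/IT-like feature maps. Shapes and constants are assumed.
import numpy as np

H, W, F = 32, 32, 8                  # retinotopic grid and number of feature channels
rng = np.random.default_rng(0)

v4 = rng.random((F, H, W))           # stand-in for bottom-up V4/IT feature responses
template = np.zeros(F)               # prefrontal target template: one preferred feature
template[3] = 1.0

feature_gain = 1.0 + 0.5 * template                 # sensitize cells matching the template
biased = v4 * feature_gain[:, None, None]

# Spatial reentry: Gaussian gain field centered on the planned saccade target,
# standing in for input from FEF movement cells.
saccade_target = (20, 12)
yy, xx = np.mgrid[0:H, 0:W]
spatial_gain = 1.0 + 0.8 * np.exp(
    -((yy - saccade_target[0]) ** 2 + (xx - saccade_target[1]) ** 2) / (2 * 4.0 ** 2)
)
modulated = biased * spatial_gain                   # broadcast over all feature channels

# The most active location after both gains would guide the next eye movement.
winner = np.unravel_index(modulated.sum(axis=0).argmax(), (H, W))
print("next fixation candidate:", winner)
```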

    Change blindness: eradication of gestalt strategies

    Arrays of eight, texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks; and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Nägemistaju automaatsete protsesside eksperimentaalne uurimine (Experimental investigation of automatic processes in visual perception)

    The electronic version of the dissertation does not include the publications. The research presented and discussed in the thesis is an experimental exploration of processes in visual perception, which all display a considerable amount of automaticity. These processes are targeted from different angles using different experimental paradigms and stimuli, and by measuring both behavioural and brain responses.
In the first three empirical studies, the focus is on motion detection, which is regarded as one of the most basic processes shaped by evolution. Study I investigated how motion information of an object is processed in the presence of background motion. Although it is widely believed that no motion can be perceived without establishing a frame of reference with other objects or with motion in the background, our results found no support for the relative-motion principle. This finding speaks in favour of a simple and automatic process of detecting motion that is largely insensitive to the surrounding context. Study II shows that the visual system is built to automatically process motion information that is outside of our attentional focus. This means that even if we are concentrating on some task, our brain constantly monitors the surrounding environment. Study III addressed the question of what happens when multiple stimulus qualities (motion and colour) are present and varied, which is the everyday reality of our visual input. We showed that velocity facilitated the detection of colour changes, which suggests that the processing of motion and colour is not entirely isolated. These results also indicate that it is hard to ignore motion information and that its processing is rather automatically initiated. The fourth empirical study focusses on another example of visual input that is processed in a rather automatic way and carries high survival value – emotional expressions. In Study IV, participants detected emotional facial expressions faster and more easily than neutral facial expressions, with a tendency towards more automatic attention to angry faces. In addition, we investigated the emergence of visual mismatch negativity (vMMN), one of the most objective and efficient measures of the brain's automatic detection of deviations from its model of the surrounding environment. Study II and Study IV both provide evidence on the emergence of vMMN under different conditions and paradigms and propose several methodological improvements for registering this automatic change-detection mechanism. Study V is an important contribution to the vMMN research field, as it is the first comprehensive review and meta-analysis of vMMN studies in psychiatric and neurological disorders.

    Weakly Supervised Action Localization by Sparse Temporal Pooling Network

    We propose a weakly supervised temporal action localization algorithm on untrimmed videos using convolutional neural networks. Our algorithm learns from video-level class labels and predicts temporal intervals of human actions with no requirement for temporal localization annotations. We design our network to identify a sparse subset of key segments associated with target actions in a video using an attention module, and to fuse the key segments through adaptive temporal pooling. Our loss function comprises two terms that minimize the video-level action classification error and enforce sparsity in the segment selection. At inference time, we extract and score temporal proposals using temporal class activations and class-agnostic attentions to estimate the time intervals that correspond to target actions. The proposed algorithm attains state-of-the-art results on the THUMOS14 dataset and outstanding performance on ActivityNet1.3 even with its weak supervision. Comment: Accepted to CVPR 2018.
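    The architecture and loss described above can be sketched compactly. The code below is a hedged approximation, not the authors' implementation: a per-segment attention module produces weights used for attention-weighted temporal pooling, and the loss combines video-level classification with an L1 sparsity term on the attention. The feature dimension, layer sizes, and sparsity weight are assumptions.

```python
# Sketch of a sparse temporal pooling network for weakly supervised action
# localization. Hyperparameters and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SparseTemporalPooling(nn.Module):
    def __init__(self, feat_dim=1024, num_classes=20):
        super().__init__()
        self.attention = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                       nn.Linear(256, 1), nn.Sigmoid())
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, segments):                     # segments: (T, feat_dim)
        att = self.attention(segments)               # (T, 1), one weight per segment
        pooled = (att * segments).sum(0) / (att.sum() + 1e-8)   # weighted temporal pooling
        logits = self.classifier(pooled)             # video-level class scores
        return logits, att

def loss_fn(logits, att, video_labels, sparsity_weight=1e-4):
    cls_loss = nn.functional.binary_cross_entropy_with_logits(logits, video_labels)
    sparsity = att.abs().mean()                      # encourages few key segments
    return cls_loss + sparsity_weight * sparsity

# Usage with random stand-in features for one untrimmed video (T = 400 segments):
model = SparseTemporalPooling()
feats = torch.randn(400, 1024)
labels = torch.zeros(20); labels[5] = 1.0            # video-level label only
logits, att = model(feats)
loss = loss_fn(logits, att, labels)
loss.backward()
```

    At inference, the segment-wise attentions and class activations would be thresholded to propose time intervals; that step is omitted from this sketch.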

    Salience-based selection: attentional capture by distractors less salient than the target

    Current accounts of attentional capture predict that the most salient stimulus is invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (instead of the most salient) stimulus in the field. Yet capture by less salient distractors has not been reported, and salience-based selection accounts claim that the distractor has to be more salient than the target in order to capture attention. We tested this prediction with an empirical and a modeling approach based on the visual search distractor paradigm. In the empirical part, we manipulated the salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared with absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated the first selection in the distractor paradigm using behavioral measures of salience and taking into account the time course of selection, including noise. We were able to replicate the result pattern obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and that attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature, and attentional capture occurs with a certain probability that depends on relative salience.
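    The stochastic account in the final sentences can be made concrete with a small Monte Carlo simulation. This is an illustrative sketch under assumed parameters, not the authors' model: each item's selection time is drawn from a noisy distribution whose mean decreases with salience, and capture is counted on trials where the distractor's selection time is shorter than the target's.

```python
# Toy simulation: capture probability as overlap of noisy selection-time
# distributions. The mapping from salience to selection time is an assumption.
import numpy as np

rng = np.random.default_rng(1)

def selection_time(salience, n_trials, base=400.0, gain=60.0, noise_sd=40.0):
    """Hypothetical mapping: higher salience -> earlier mean selection, plus Gaussian noise."""
    return base - gain * salience + rng.normal(0.0, noise_sd, n_trials)

n = 100_000
target_salience = 3.0
for distractor_salience in (1.0, 2.0, 3.0, 4.0):
    t_target = selection_time(target_salience, n)
    t_distractor = selection_time(distractor_salience, n)
    capture_prob = np.mean(t_distractor < t_target)
    print(f"distractor salience {distractor_salience}: capture probability {capture_prob:.2f}")

# Even a distractor less salient than the target captures attention on some
# trials, because the two selection-time distributions overlap.
```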

    The iso-response method

    Throughout the nervous system, neurons integrate high-dimensional input streams and transform them into an output of their own. This integration of incoming signals involves filtering processes and complex non-linear operations. The shapes of these filters and non-linearities determine the computational features of single neurons and their functional roles within larger networks. A detailed characterization of signal integration is thus a central ingredient to understanding information processing in neural circuits. Conventional methods for measuring single-neuron response properties, such as reverse correlation, however, are often limited by the implicit assumption that stimulus integration occurs in a linear fashion. Here, we review a conceptual and experimental alternative that is based on exploring the space of those sensory stimuli that result in the same neural output. As demonstrated by recent results in the auditory and visual system, such iso-response stimuli can be used to identify the non-linearities relevant for stimulus integration, disentangle consecutive neural processing steps, and determine their characteristics with unprecedented precision. Automated closed-loop experiments are crucial for this advance, allowing rapid search strategies for identifying iso-response stimuli during experiments. Prime targets for the method are feed-forward neural signaling chains in sensory systems, but the method has also been successfully applied to feedback systems. Depending on the specific question, “iso-response” may refer to a predefined firing rate, single-spike probability, first-spike latency, or other output measures. Examples from different studies show that substantial progress in understanding neural dynamics and coding can be achieved once rapid online data analysis and stimulus generation, adaptive sampling, and computational modeling are tightly integrated into experiments.
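    The closed-loop search idea can be illustrated with a toy model neuron standing in for the recorded cell. The sketch below rests entirely on assumed model details: we bisect along rays in a two-dimensional stimulus space until the model's firing rate matches a predefined target, and the resulting points trace an iso-response curve whose shape reflects how the two inputs are combined.

```python
# Toy iso-response search: bisection along stimulus directions until a model
# neuron produces a predefined firing rate. The model and constants are assumed.
import numpy as np

def model_neuron(s1, s2):
    """Toy cell: squares each input before summing (an assumed non-linearity)."""
    drive = s1 ** 2 + 0.5 * s2 ** 2
    return 50.0 * drive / (1.0 + drive)        # saturating output in spikes/s

def find_iso_amplitude(direction, target_rate=25.0, lo=0.0, hi=10.0, tol=1e-3):
    """Bisection along one ray; works because the rate grows monotonically with amplitude."""
    dx, dy = np.cos(direction), np.sin(direction)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if model_neuron(mid * dx, mid * dy) < target_rate:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

angles = np.linspace(0.0, np.pi / 2, 9)        # sample directions in the first quadrant
for angle in angles:
    amp = find_iso_amplitude(angle)
    print(f"direction {np.degrees(angle):5.1f} deg -> iso-response amplitude {amp:.3f}")
```

    In a real closed-loop experiment, the model neuron would be replaced by online spike counts from the recorded cell, and the sampled directions would be chosen adaptively.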

    Top-down effects on early visual processing in humans: a predictive coding framework

    An increasing number of human electroencephalography (EEG) studies examining the earliest component of the visual evoked potential, the so-called C1, have cast doubt on the previously prevalent notion that this component is impermeable to top-down effects. This article reviews the original studies that (i) described the C1, (ii) linked it to primary visual cortex (V1) activity, and (iii) suggested that its electrophysiological characteristics are determined exclusively by low-level stimulus attributes, particularly the spatial position of the stimulus within the visual field. We then describe conflicting evidence from animal studies and human neuroimaging experiments and provide an overview of recent EEG and magnetoencephalography (MEG) work showing that initial V1 activity in humans may be strongly modulated by higher-level cognitive factors. Finally, we formulate a theoretical framework for understanding top-down effects on early visual processing in terms of predictive coding.
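    To make the framework concrete, the toy example below follows the standard textbook form of predictive coding rather than any specific model from the reviewed studies: a higher-level estimate predicts the early visual input, only the prediction error is propagated, and the estimate is updated to reduce that error. The stimulus dimensionality and learning rate are arbitrary assumptions.

```python
# Minimal predictive coding loop: top-down predictions are subtracted from the
# input, and the residual error updates the higher-level estimate.
import numpy as np

rng = np.random.default_rng(2)
true_stimulus = rng.normal(0.0, 1.0, 16)       # stand-in for early visual input
estimate = np.zeros(16)                        # higher-level representation
learning_rate = 0.2

for step in range(20):
    prediction = estimate                      # top-down prediction of the input
    error = true_stimulus - prediction         # prediction error in the early area
    estimate = estimate + learning_rate * error
    if step in (0, 4, 19):
        print(f"step {step:2d}: mean squared prediction error {np.mean(error ** 2):.4f}")

# As the prediction improves, the residual error in the early area shrinks,
# which is how prior expectations can modulate the earliest visual responses.
```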
