434 research outputs found

    Salience-based selection: attentional capture by distractors less salient than the target

    Current accounts of attentional capture predict the most salient stimulus to be invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (instead of the most salient) stimulus in the field. Yet, capture by less salient distractors has not been reported, and salience-based selection accounts claim that the distractor has to be more salient in order to capture attention. We tested this prediction using empirical and modeling approaches to the visual search distractor paradigm. For the empirical part, we manipulated the salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared to when it was absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated first selection in the distractor paradigm using behavioral measures of salience and considering the time course of selection, including noise. We were able to replicate the result pattern we obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and that attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature, and attentional capture occurs with a certain probability depending on relative salience.
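
The modeling logic described above — each salience value mapping onto a noisy selection-time distribution, with capture occurring when the two distributions overlap — can be sketched as a Monte Carlo race simulation. The inverse-salience timing rule, the noise level, and the salience values below are illustrative assumptions, not the authors' fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def capture_probability(target_salience, distractor_salience,
                        noise_sd=20.0, n_trials=100_000):
    """Monte Carlo estimate of the probability that the distractor is
    selected before the target. Assumes each item's mean selection time
    is inversely related to its salience, plus Gaussian noise
    (an illustrative model, not the paper's fitted one)."""
    t_target = 1000.0 / target_salience + rng.normal(0.0, noise_sd, n_trials)
    t_distractor = 1000.0 / distractor_salience + rng.normal(0.0, noise_sd, n_trials)
    return float(np.mean(t_distractor < t_target))

# Even a distractor LESS salient than the target captures attention on a
# nonzero fraction of trials, because the selection-time distributions overlap.
p = capture_probability(target_salience=10.0, distractor_salience=8.0)
```

With these placeholder numbers the less salient distractor still wins the race on a minority of trials, reproducing the qualitative result pattern: capture probability rises continuously with relative salience rather than switching on only when the distractor is the most salient item.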

    Visual saliency and semantic incongruency influence eye movements when inspecting pictures

    Models of low-level saliency predict that when we first look at a photograph our first few eye movements should be made towards visually conspicuous objects. Two experiments investigated this prediction by recording eye fixations while viewers inspected pictures of room interiors that contained objects with known saliency characteristics. Highly salient objects did attract fixations earlier than less conspicuous objects, but only in a task requiring general encoding of the whole picture. When participants were required to detect the presence of a small target, the visual saliency of nontarget objects did not influence fixations. These results support modifications of the model that take the cognitive override of saliency into account by allowing task demands to reduce the saliency weights of task-irrelevant objects. The pictures sometimes contained incongruent objects taken from other rooms. These objects were used to test the hypothesis that previous reports of the early fixation of incongruent objects have been inconsistent because the effect depends upon the visual conspicuity of the incongruent object. There was an effect of incongruency in both experiments, with earlier fixation of objects that violated the gist of the scene, but the effect was only apparent for inconspicuous objects, which argues against the hypothesis.
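
The proposed model modification — task demands reducing the saliency weights of task-irrelevant objects — amounts to a multiplicative down-weighting of bottom-up saliency. A minimal sketch; the function name and the weight value are hypothetical, chosen only to illustrate the idea:

```python
def effective_saliency(base_saliency, task_relevant, irrelevant_weight=0.2):
    """Cognitive override of saliency: task demands down-weight the
    bottom-up saliency of task-irrelevant objects. The 0.2 default is an
    arbitrary illustrative value, not a fitted parameter."""
    return base_saliency if task_relevant else base_saliency * irrelevant_weight

# In a target-detection task, a highly conspicuous but irrelevant object
# can end up with lower effective saliency than a dull task-relevant one,
# so it no longer attracts early fixations.
conspicuous_nontarget = effective_saliency(0.9, task_relevant=False)
dull_target = effective_saliency(0.4, task_relevant=True)
```

In a free-encoding task every object would count as relevant, leaving the raw saliency ranking intact — matching the finding that saliency guided fixations only under general encoding instructions.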

    Evidence against global attention filters selective for absolute bar-orientation in human vision

    The finding that an item of type A pops out from an array of distractors of type B is typically taken to support the inference that human vision contains a neural mechanism that is activated by items of type A but not by items of type B. Such a mechanism might be expected to yield a neural image in which items of type A produce high activation and items of type B low (or zero) activation. Access to such a neural image might further be expected to enable accurate estimation of the centroid of an ensemble of items of type A intermixed with to-be-ignored items of type B. Here, it is shown that as the number of items in stimulus displays is increased, performance in estimating the centroids of horizontal (vertical) items amid vertical (horizontal) distractors degrades much more quickly and dramatically than does performance in estimating the centroids of white (black) items among black (white) distractors. Together with previous findings, these results suggest that, although human vision does possess bottom-up neural mechanisms sensitive to abrupt local changes in bar-orientation, and although human vision does possess and utilize top-down global attention filters capable of selecting multiple items of one brightness or of one color from among others, it cannot use a top-down global attention filter capable of selecting multiple bars of a given absolute orientation and filtering out bars of the opposite orientation in a centroid task.
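
The centroid computation itself is trivial once a feature filter exists, which is what makes degraded human performance diagnostic of the missing filter. A sketch of the idealized computation the attention filter would support; the data layout and names are hypothetical:

```python
import numpy as np

def filtered_centroid(items, attended_value):
    """Centroid of the positions of items passing an idealized global
    attention filter that selects on a single feature value.
    items: iterable of (x, y, feature_value) tuples."""
    pts = np.array([(x, y) for x, y, f in items if f == attended_value])
    return pts.mean(axis=0)

# A brightness filter supports this easily; the study shows that observers
# cannot apply the analogous filter for absolute bar-orientation.
display = [(0.0, 0.0, "white"), (4.0, 0.0, "white"), (2.0, 6.0, "black")]
cx, cy = filtered_centroid(display, "white")  # → (2.0, 0.0)
```

If human vision had an orientation-selective global filter of this kind, replacing "white"/"black" with "horizontal"/"vertical" should leave centroid accuracy unchanged as displays grow — the observed sharp degradation is the evidence against it.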

    European Echinococcosis Registry: Human Alveolar Echinococcosis, Europe, 1982–2000

    Surveillance for alveolar echinococcosis in central Europe was initiated in 1998. On a voluntary basis, 559 patients were reported to the registry. Most cases originated from rural communities in regions from eastern France to western Austria; single cases were reported far away from the disease-“endemic” zone throughout central Europe. Of 210 patients, 61.4% were involved in vocational or part-time farming, gardening, forestry, or hunting. Patients were diagnosed at a mean age of 52.5 years; 78% had symptoms. Alveolar echinococcosis primarily manifested as a liver disease. Of the 559 patients, 190 (34%) were already affected by spread of the parasitic larval tissue. Of 408 (73%) patients alive in 2000, 4.9% were cured. The increasing prevalence of Echinococcus multilocularis in foxes in rural and urban areas of central Europe and the occurrence of cases outside the alveolar echinococcosis–endemic regions suggest that this disease deserves increased attention.

    Liposome-Coupled Antigens Are Internalized by Antigen-Presenting Cells via Pinocytosis and Cross-Presented to CD8+ T Cells

    We have previously demonstrated that antigens chemically coupled to the surface of liposomes consisting of unsaturated fatty acids were cross-presented by antigen-presenting cells (APCs) to CD8+ T cells, and that this process resulted in the induction of antigen-specific cytotoxic T lymphocytes. In the present study, the mechanism by which the liposome-coupled antigens were cross-presented to CD8+ T cells by APCs was investigated. Confocal laser scanning microscopic analysis demonstrated that antigens coupled to the surface of unsaturated-fatty-acid-based liposomes received processing at both MHC class I and class II compartments, while most of the antigens coupled to the surface of saturated-fatty-acid-based liposomes received processing at the class II compartment. In addition, flow cytometric analysis demonstrated that antigens coupled to the surface of unsaturated-fatty-acid-liposomes were taken up by APCs even in a 4°C environment; this was not true of saturated-fatty-acid-liposomes. When two kinds of inhibitors, dimethylamiloride (DMA) and cytochalasin B, which inhibit pinocytosis and phagocytosis by APCs, respectively, were added to the culture of APCs prior to the antigen pulse, DMA but not cytochalasin B significantly reduced uptake of liposome-coupled antigens. Further analysis of intracellular trafficking of liposomal antigens using confocal laser scanning microscopy revealed that a portion of liposome-coupled antigens taken up by APCs were delivered to the lysosome compartment. In agreement with the reduction of antigen uptake by APCs, antigen presentation by APCs was significantly inhibited by DMA, and resulted in the reduction of IFN-γ production by antigen-specific CD8+ T cells. These results suggest that antigens coupled to the surface of liposomes consisting of unsaturated fatty acids might be pinocytosed by APCs, loaded onto the class I MHC processing pathway, and presented to CD8+ T cells. 
Thus, these liposome-coupled antigens are expected to be applicable to the development of vaccines that induce cellular immunity.

    The Time Course of Segmentation and Cue-Selectivity in the Human Visual Cortex

    Texture discontinuities are a fundamental cue by which the visual system segments objects from their background. The neural mechanisms supporting texture-based segmentation are therefore critical to visual perception and cognition. In the present experiment we employed an EEG source-imaging approach to study the time course of texture-based segmentation in the human brain. Visual Evoked Potentials were recorded to four types of stimuli in which periodic temporal modulation of a central 3° figure region could either support figure-ground segmentation, or have identical local texture modulations but not produce changes in global image segmentation. The image discontinuities were defined either by orientation or phase differences across image regions. Evoked responses to these four stimuli were analyzed both at the scalp and on the cortical surface in retinotopic and functional regions-of-interest (ROIs) defined separately using fMRI on a subject-by-subject basis. Texture segmentation (tsVEP: segmenting versus non-segmenting) and cue-specific (csVEP: orientation versus phase) responses exhibited distinctive patterns of activity. Alternations between uniform and segmented images produced highly asymmetric responses that were larger after transitions from the uniform to the segmented state. Texture modulations that signaled the appearance of a figure evoked a pattern of increased activity starting at ∼143 ms that was larger in V1 and LOC ROIs, relative to identical modulations that did not signal figure-ground segmentation. This segmentation-related activity occurred after an initial response phase that did not depend on the global segmentation structure of the image. The two cue types evoked similar tsVEPs up to ∼230 ms, after which they differed in the V4 and LOC ROIs. The evolution of the response proceeded largely in the feed-forward direction, with only weak evidence for feedback-related activity.
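
A tsVEP of the kind described — the difference between responses to segmenting and non-segmenting stimuli, with onset latency estimated by thresholding the difference wave — can be illustrated on synthetic waveforms. The Gaussian components, sampling rate, and 0.1 threshold below are placeholder assumptions, not the recorded data:

```python
import numpy as np

fs = 500                        # Hz, assumed sampling rate
t = np.arange(0.0, 0.4, 1 / fs)  # 0-400 ms post-stimulus epoch

# Synthetic evoked responses: both conditions share an early component
# (~100 ms); only the segmenting condition adds a later component (~180 ms),
# standing in for segmentation-related activity.
vep_segmenting = (np.exp(-((t - 0.10) ** 2) / 0.001)
                  + 0.6 * np.exp(-((t - 0.18) ** 2) / 0.002))
vep_nonsegmenting = np.exp(-((t - 0.10) ** 2) / 0.001)

# tsVEP = segmenting minus non-segmenting; the shared early phase cancels,
# mirroring the initial response phase that did not depend on segmentation.
tsvep = vep_segmenting - vep_nonsegmenting

# Onset latency: first sample where the difference wave exceeds a threshold.
onset_idx = int(np.argmax(np.abs(tsvep) > 0.1))
onset_ms = float(t[onset_idx] * 1000)
```

With these synthetic components the difference wave is flat through the early response and departs from baseline only for the segmentation-specific component, which is the logic behind isolating a segmentation onset latency from two otherwise identical stimulation conditions.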