76 research outputs found

    Distractor-interference reduction is dimensionally constrained

    The dimension-weighting account predicts that if observers search for a target standing out from the background in a particular dimension, they cannot readily ignore a distractor standing out in the same dimension. This prediction is tested here by asking two groups of observers to search for an orientation target or a luminance target, respectively, and presenting an additional distractor defined either in the same dimension as the target or in the other dimension. Notably, in this cross-over design, the physically identical distractors served both as same- and different-dimension distractors, depending on the target condition. While same-dimension distractors gave rise to massive interference, different-dimension distractors caused much weaker (though still substantial) interference. This result is most readily explained by the dimension-weighting account: different-dimension distractors are considerably down-weighted but not fully suppressed. Furthermore, same- and different-dimension distractors delayed response times even when considering only the fastest trials (down to the fastest 2.5%), indicating that interference is exerted consistently on each trial, rather than probabilistically on some trials. Our results put strong constraints on models of distractor handling in visual search.
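
    The distributional claim at the end of this abstract (interference even among the fastest trials) lends itself to a short quantile-based sketch. The condition labels, simulated RTs, and quantile levels below are illustrative assumptions, not the authors' analysis code.

```python
# Hypothetical quantile-based RT check: is the distractor cost already present
# in the fastest trials, or only in a slower subset? Simulated data only.
import numpy as np

rng = np.random.default_rng(0)
rt_no_distractor = rng.lognormal(mean=6.2, sigma=0.25, size=2000)  # ms, simulated
rt_distractor = rng.lognormal(mean=6.3, sigma=0.25, size=2000)     # ms, simulated

for q in (0.025, 0.05, 0.10, 0.25, 0.50):
    cost = np.quantile(rt_distractor, q) - np.quantile(rt_no_distractor, q)
    print(f"{q:.1%} quantile: distractor cost of about {cost:.0f} ms")
# A cost that is already present at the lowest quantiles points to interference
# on (nearly) every trial rather than on a probabilistic subset of trials.
```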

    Biasing Allocations of Attention via Selective Weighting of Saliency Signals: Behavioral and Neuroimaging Evidence for the Dimension-Weighting Account

    Objects that stand out from the environment tend to be of behavioral relevance, and the visual system is tuned to preferentially process these salient objects by allocating focused attention to them. However, attention is not just passively (bottom-up) driven by stimulus features; previous experiences and task goals exert strong biases toward attending or actively ignoring salient objects. The core and eponymous assumption of the dimension-weighting account (DWA) is that these top-down biases are not as flexible as one would like them to be; rather, they are subject to dimensional constraints. In particular, the DWA assumes that people often cannot search for objects that have a particular feature, but only for objects that stand out from the environment (i.e., that are salient) in a particular feature dimension. We review behavioral and neuroimaging evidence for such dimensional constraints in three areas: search history, voluntary target enhancement, and distractor handling. The first two have been the focus of research on the DWA since its inception; the latter has been the subject of our more recent research. Additionally, we discuss various challenges to the DWA and its relation to other prominent theories of top-down influences in visual search.

    Massive Effects of Saliency on Information Processing in Visual Working Memory

    Limitations in the ability to temporarily represent information in visual working memory (VWM) are crucial for visual cognition. Whether VWM processing is dependent on an object’s saliency (i.e., how much it stands out) has been neglected in VWM research. Therefore, we developed a novel VWM task that allows direct control over saliency. In three experiments with this task (on 10, 31, and 60 adults, respectively), we consistently found that VWM performance is strongly and parametrically influenced by saliency and that both an object’s relative saliency (compared with concurrently presented objects) and absolute saliency influence VWM processing. We also demonstrated that this effect is indeed due to bottom-up saliency rather than differential fit between each object and the top-down attentional template. A simple computational model assuming that VWM performance is determined by the weighted sum of absolute and relative saliency accounts well for the observed data patterns.
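
    The weighted-sum model mentioned in the last sentence can be sketched as a simple linear combination of absolute and relative saliency. The variable names, the linear link to performance, and the toy numbers below are assumptions made for illustration; this is not the authors' model code.

```python
# Minimal sketch of a weighted-sum model: predicted VWM performance for an
# item is a baseline plus weighted absolute and relative saliency.
# All data and the linear link are illustrative assumptions.
import numpy as np

abs_sal = np.array([0.2, 0.4, 0.6, 0.8, 1.0])   # absolute saliency (toy values)
rel_sal = np.array([0.1, 0.5, 0.3, 0.9, 0.7])   # relative saliency (toy values)
perf = np.array([0.35, 0.55, 0.50, 0.80, 0.75])  # observed performance (toy values)

# design matrix: intercept + absolute saliency + relative saliency
X = np.column_stack([np.ones_like(abs_sal), abs_sal, rel_sal])
baseline, w_abs, w_rel = np.linalg.lstsq(X, perf, rcond=None)[0]

predicted = baseline + w_abs * abs_sal + w_rel * rel_sal
print(f"w_abs={w_abs:.2f}, w_rel={w_rel:.2f}, baseline={baseline:.2f}")
```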

    Two good reasons to say 'change!'--ensemble representations as well as item representations impact standard measures of VWM capacity

    Visual working memory (VWM) is a central bottleneck in human information processing. Its capacity is most often measured in terms of how many individual-item representations VWM can hold (k). In the standard task employed to estimate k, an array of highly discriminable colour patches is maintained and, after a short retention interval, compared to a test display (change detection). Recent research has shown that with more complex, structured displays, change-detection performance is, in addition to individual-item representations, supported by ensemble representations formed as a result of spatial subgroupings. Here, by asking participants to additionally localize the change, we reveal evidence for an influence of ensemble representations even in the very simple, unstructured displays of the colour-patch change-detection task. Critically, pure-item models from which standard formulae of k are derived do not consider ensemble representations and, therefore, potentially overestimate k. To gauge this overestimation, we develop an item-plus-ensemble model of change detection and change localization. Estimates of k from this new model are about 1 item (~30%) lower than the estimates from traditional pure-item models, even if derived from the same data sets.
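
    For reference, the pure-item estimate that the abstract argues is inflated is typically computed with one of the standard formulae, such as Cowan's k = N × (hit rate − false-alarm rate). The item-plus-ensemble model itself is not specified here, so the sketch below only shows this standard baseline, with illustrative numbers rather than data from the study.

```python
# Standard pure-item capacity estimate for whole-display change detection
# (Cowan's k), shown only as the baseline that an item-plus-ensemble model
# would correct. Numbers are illustrative, not data from the study.
def cowan_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
    """k = N * (hits - false alarms): number of items assumed held in VWM."""
    return set_size * (hit_rate - false_alarm_rate)

k_pure = cowan_k(set_size=6, hit_rate=0.80, false_alarm_rate=0.20)
print(f"pure-item k = {k_pure:.1f}")  # per the abstract, roughly 1 item (~30%)
                                      # of this may reflect ensemble support
```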

    Estimating the Timing of Cognitive Operations With MEG/EEG Latency Measures: A Primer, a Brief Tutorial, and an Implementation of Various Methods

    The major advantage of MEG/EEG over other neuroimaging methods is its high temporal resolution. Examining the latency of well-studied components can provide a window into the dynamics of cognitive operations beyond traditional response-time (RT) measurements. While RTs reflect the cumulative duration of all time-consuming cognitive operations involved in a task, component latencies can partition this time into cognitively meaningful sub-steps. Surprisingly, most MEG/EEG studies neglect this advantage and restrict analyses to component amplitudes without considering latencies. The major reasons for this neglect might be that, first, the most easily accessible latency measure (peak latency) is often unreliable and that, second, more complex measures are difficult to conceive, implement, and parametrize. The present article illustrates the key advantages and disadvantages of the three main types of latency measures (peak latency, onset latency, and percent-area latency), introduces a MATLAB function that extracts all these measures and is compatible with common analysis tools, discusses the most important parameter choices for different research questions and components of interest, and demonstrates its use through various group analyses on one planar gradiometer pair of the publicly available Wakeman and Henson (2015) data. The introduced function can extract from group data not only single-subject latencies, but also grand-average and jackknife latencies. Furthermore, it gives the choice between different approaches to automatically set baselines and anchor points for latency estimation, approaches that were partly developed by me and that capitalize on the informational richness of MEG/EEG data. Although the function comes with a wide range of customization parameters, the default parameters are set so that even beginners get reasonable results. Graphical depictions of latency estimates, baselines, and anchor points overlaid on individual averages further support learning, understanding, and troubleshooting. Once extracted, latency estimates can be submitted to any analysis also available for (averaged) RTs, including tests for mean differences, correlational approaches, and cognitive modeling.
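
    Of the three measures discussed, percent-area latency is perhaps the least familiar: it is the time point at which a set fraction of the area under the component within an analysis window has accumulated. The article's actual implementation is a MATLAB function; the Python sketch below merely illustrates the underlying idea with a toy waveform and assumed parameter names.

```python
# Illustrative sketch of percent-area latency (not the article's MATLAB code):
# the time at which the cumulative area of the waveform inside an analysis
# window first reaches a given fraction of the total area.
import numpy as np

def percent_area_latency(times, waveform, t_start, t_end, fraction=0.5, baseline=0.0):
    """Time at which `fraction` of the area between baseline and waveform
    inside [t_start, t_end] has accumulated (trapezoidal integration)."""
    mask = (times >= t_start) & (times <= t_end)
    t, w = times[mask], np.abs(waveform[mask] - baseline)
    cum_area = np.cumsum((w[:-1] + w[1:]) / 2 * np.diff(t))
    target = fraction * cum_area[-1]
    idx = np.searchsorted(cum_area, target)
    return t[idx + 1]

# toy "component": a Gaussian bump peaking around 0.4 s
times = np.linspace(0.0, 0.8, 801)
component = np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
print(percent_area_latency(times, component, 0.2, 0.6, fraction=0.5))  # ~0.4 s
```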

    Attentional capture in visual search: capture and post-capture dynamics revealed by EEG

    Sometimes, salient-but-irrelevant objects (distractors) presented concurrently with a search target cannot be ignored and attention is involuntarily allocated towards the distractor first. Several studies have provided electrophysiological evidence for involuntary misallocations of attention towards a distractor, but much less is known about the mechanisms that are needed to overcome a misallocation and re-allocate attention towards the concurrently presented target. In our study, electrophysiological markers of attentional mechanisms indicate that (i) the distractor captures attention before the target is attended, (ii) a misallocation of attention is terminated actively (instead of attention fading passively), and (iii) the misallocation of attention towards a distractor delays the attention allocation towards the target (rather than just delaying some post-attentive process involved in response selection). This provides the most complete demonstration, to date, of the chain of attentional mechanisms that are evoked when attention is misguided and recovers from capture within a search display.

    The mental representation in mental rotation : its content, timing, and neuronal source

    What is rotated in mental rotation? The most widely accepted assumption, whether implicit or explicit, is that the rotated representation is a visual mental image. Here we provide converging evidence that mental rotation is instead a process specialized for a certain type of spatial information. As a basis, we develop a general theory of how to manipulate and empirically examine representational content. One technique to examine the content of the representation in mental rotation is to measure the influence of stimulus characteristics on rotational speed. Experiments 1a and 1b show that the rotational speed of university students (10 men, 10 women and 10 men, 11 women, respectively) is influenced exclusively by the amount of represented orientation-dependent spatial-relational information, but not by orientation-independent spatial-relational information, visual complexity, or the number of stimulus parts. Evidently, only explicit orientation-dependent spatial-relational information in an abstract, nonvisual form is rotated. As information in mental-rotation tasks is initially presented visually, a nonvisual representation during rotation implies that at some point during processing the information is recoded. Experiment 2 provides more direct evidence for this recoding. While university students (12 men, 12 women) performed our mental-rotation task, we recorded their EEG in order to extract slow potentials, which are sensitive to working-memory load. During initial stimulus processing, slow potentials were sensitive to the amount of orientation-independent information or to the visual complexity of the stimuli. During rotation, in contrast, slow potentials were sensitive to the amount of orientation-dependent information only. This change in slow-potential behavior constitutes evidence for the hypothesized recoding of the content of the mental representation from a visual into a nonvisual form. We further assumed that, in order to be accessible to the process of mental rotation, orientation-dependent information must be represented in those brain areas that are also responsible for mental rotation proper. Indeed, in an fMRI study on university students (12 men, 12 women), the very same set of brain areas was specifically activated by both the amount of mental rotation and the amount of orientation-dependent information. The amount of orientation-independent information/visual complexity, in contrast, influenced activation in a different set of brain areas. Together, all activated areas constitute the so-called mental-rotation network. In sum, the present work provides a general theory and several techniques to examine mental representations and employs these techniques to identify the content, timing, and neuronal source of the mental representation in mental rotation.
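
    The rotational-speed measure used in Experiments 1a and 1b is conventionally obtained from the slope of response time over angular disparity; the inverse of that slope gives degrees rotated per unit time. The sketch below shows this standard derivation with made-up numbers, not the authors' data or analysis code.

```python
# Standard derivation of mental-rotation speed from the RT-by-angle slope.
# The angles and RTs are simulated values for illustration only.
import numpy as np

angles = np.array([0, 45, 90, 135, 180])      # angular disparity (degrees)
rts = np.array([0.9, 1.3, 1.7, 2.1, 2.5])     # mean RT (s), simulated

slope, intercept = np.polyfit(angles, rts, deg=1)  # seconds per degree
rotation_speed = 1.0 / slope                       # degrees per second
print(f"rotational speed of about {rotation_speed:.0f} deg/s")
```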

    An appeal against the item's death sentence: accounting for diagnostic data patterns with an item-based model of visual search

    We show that our item-based model, competitive guided search, accounts for the empirical patterns that Hulleman & Olivers (H&O) invoke against item-based models, and we highlight recently reported diagnostic data that challenge their approach. We advise against “forsaking the item” unless and until a full fixation-based model is shown to be superior to extant item-based models.

    Search efficiency as a function of target saliency: The transition from inefficient to efficient search and beyond.

    Searching for an object among distracting objects is a common daily task. These searches differ in efficiency. Some are so difficult that each object must be inspected in turn, whereas others are so easy that the target object directly catches the observer’s eye. In 4 experiments, the difficulty of searching for an orientation-defined target was parametrically manipulated between blocks of trials via the target–distractor orientation contrast. We observed a smooth transition from inefficient to efficient search with increasing orientation contrast. When contrast was high, search slopes were flat (indicating pop-out); when contrast was low, slopes were steep (indicating serial search). At the transition from inefficient to efficient search, search slopes were flat for target-present trials and steep for target-absent trials within the same orientation-contrast block, suggesting that participants adapted their behavior on target-absent trials to the most difficult, rather than the average, target-present trials of each block. Furthermore, even when search slopes were flat, indicative of pop-out, search continued to become faster with increasing contrast. These observations provide several new constraints for models of visual search and indicate that differences between search tasks that were traditionally considered qualitative in nature might actually be due to purely quantitative differences in target discriminability.
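
    The search slopes referred to above are conventionally obtained by regressing RT on set size within each block; flat slopes indicate efficient (pop-out) search and steep slopes inefficient search. The sketch below illustrates that computation; the simulated RTs and the 10 ms/item cut-off are assumptions for illustration, not values from the study.

```python
# Search slope (ms per item) from a linear fit of RT against set size,
# computed per contrast condition. Simulated data and an assumed cut-off.
import numpy as np

set_sizes = np.array([4, 8, 16, 32])
rt_high_contrast = np.array([450, 452, 455, 458])   # ms, simulated
rt_low_contrast = np.array([520, 660, 940, 1500])   # ms, simulated

for label, rts in [("high contrast", rt_high_contrast), ("low contrast", rt_low_contrast)]:
    slope, _ = np.polyfit(set_sizes, rts, deg=1)
    regime = "efficient (pop-out)" if slope < 10 else "inefficient (serial-like)"
    print(f"{label}: {slope:.1f} ms/item -> {regime}")
```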