91 research outputs found
Multiple high-reward items can be prioritized in working memory but with greater vulnerability to interference
An emerging literature indicates that working memory and attention interact in determining what is retained over time, though the nature of this relationship and its impact on performance across different task contexts remain to be mapped out. In the present study, four experiments examined whether participants can prioritize one or more "high reward" items within a four-item target array for the purposes of an immediate cued recall task, and the extent to which this mediates the disruptive impact of a post-display to-be-ignored suffix. All four experiments indicated that endogenous direction of attention towards high-reward items results in their improved recall. Furthermore, increasing the number of high-reward items from 1 to 3 (Experiments 1–3) produces no decline in recall performance for those items, while associating each item in an array with a different reward value results in correspondingly graded levels of recall performance (Experiment 4). These results suggest the ability to exert precise voluntary control in the prioritization of multiple targets. However, in line with recent outcomes drawn from serial visual memory, this endogenously driven focus on high-reward items results in greater susceptibility to exogenous suffix interference, relative to low-reward items. This contrasts with outcomes from cueing paradigms, indicating that different methods of attentional direction may not always result in equivalent outcomes on working memory performance.
EEG Correlates of Attentional Load during Multiple Object Tracking
While human subjects tracked a subset of ten identical, randomly-moving objects, event-related potentials (ERPs) were evoked at parieto-occipital sites by task-irrelevant flashes that were superimposed on either tracked (Target) or non-tracked (Distractor) objects. With ERPs as markers of attention, we investigated how allocation of attention varied with tracking load, that is, with the number of objects that were tracked. Flashes on Target discs elicited stronger ERPs than did flashes on Distractor discs; ERP amplitude (0–250 ms) decreased monotonically as load increased from two to three to four (of ten) discs. Amplitude decreased more rapidly for Target discs than Distractor discs. As a result, with increasing tracking loads, the difference between ERPs to Targets and Distractors diminished. This change in ERP amplitudes with load accords well with behavioral performance, suggesting that successful tracking depends upon the relationship between the neural signals associated with attended and non-attended objects.
Speed has an effect on multiple-object tracking independently of the number of close encounters between targets and distractors
Multiple-object tracking (MOT) studies have shown that tracking ability declines as object speed increases. However, this might be attributed solely to the increased number of times that target and distractor objects usually pass close to each other ("close encounters") when speed is increased, resulting in more target–distractor confusions. The present study investigates whether speed itself affects MOT ability by using displays in which the number of close encounters is held constant across speeds. Observers viewed several pairs of disks, and each pair rotated about the pair's midpoint and, also, about the center of the display at varying speeds. Results showed that even with the number of close encounters held constant across speeds, increased speed impairs tracking performance, and the effect of speed is greater when the number of targets to be tracked is large. Moreover, neither the effect of number of distractors nor the effect of target–distractor distance was dependent on speed, when speed was isolated from the typical concomitant increase in close encounters. These results imply that increased speed does not impair tracking solely by increasing close encounters. Rather, they support the view that speed affects MOT capacity by requiring more attentional resources to track at higher speeds.
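The paired-rotation display described above can be sketched in code. A key property of the design is that the two disks of a pair stay a fixed distance apart regardless of rotation speed, so speed can be raised without changing within-pair spacing. All numeric parameter values below are illustrative assumptions, not values taken from the study:

```python
import math

def disk_positions(t, R=200.0, r=40.0, omega_global=0.5, omega_local=2.0):
    """Positions of one disk pair at time t (s), display center at (0, 0).

    The pair's midpoint orbits the display center at radius R with angular
    speed omega_global (rad/s); the two disks sit opposite each other at
    radius r from that midpoint, rotating at omega_local (rad/s).
    Parameter values are illustrative only.
    """
    # Midpoint of the pair, orbiting the display center.
    mx = R * math.cos(omega_global * t)
    my = R * math.sin(omega_global * t)
    # The two disks lie diametrically opposite on a circle about the midpoint.
    a = omega_local * t
    d1 = (mx + r * math.cos(a), my + r * math.sin(a))
    d2 = (mx - r * math.cos(a), my - r * math.sin(a))
    return d1, d2
```

Because the disks are always separated by exactly 2*r, increasing either angular speed raises object speed without altering the pair's internal geometry, which is how close encounters can be controlled independently of speed.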
Visual Working Memory Capacity and Proactive Interference
Background: Visual working memory capacity is extremely limited and appears to be relatively immune to practice effects or the use of explicit strategies. The recent discovery that visual working memory tasks, like verbal working memory tasks, are subject to proactive interference, coupled with the fact that typical visual working memory tasks are particularly conducive to proactive interference, suggests that visual working memory capacity may be systematically under-estimated. Methodology/Principal Findings: Working memory capacity was probed behaviorally in adult humans both in laboratory settings and via the Internet. Several experiments show that although the effect of proactive interference on visual working memory is significant and can last over several trials, it only changes the capacity estimate by about 15%. Conclusions/Significance: This study further confirms the sharp limitations on visual working memory capacity, both in absolute terms and relative to verbal working memory. It is suggested that future research take these limitations into account in understanding differences across a variety of tasks between human adults, prelinguistic infants and nonlinguistic animals.
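Capacity in change-detection tasks of this kind is commonly summarized with Cowan's K, computed from hit and false-alarm rates. A minimal sketch of that formula, with illustrative numbers that are not from the study, shows the scale of a ~15% shift in the estimate:

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K capacity estimate for single-probe change detection:
    K = N * (hit rate - false-alarm rate)."""
    return set_size * (hit_rate - false_alarm_rate)

# Illustrative numbers (assumptions, not the study's data): set size 4,
# 80% hits, 20% false alarms gives K = 2.4 items.
k = cowan_k(4, 0.80, 0.20)
# A proactive-interference effect on the order of 15% would move such an
# estimate by roughly 0.36 items (2.4 * 0.15).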
Opportunity for verbalization does not improve visual change detection performance: A state-trace analysis
Evidence suggests that there is a tendency to verbally recode visually-presented information, and that in some cases verbal recoding can boost memory performance. According to multi-component models of working memory, memory performance is increased because task-relevant information is simultaneously maintained in two codes. The possibility of dual encoding is problematic if the goal is to measure capacity for visual information exclusively. To counteract this possibility, articulatory suppression is frequently used with visual change detection tasks specifically to prevent verbalization of visual stimuli. But is this precaution always necessary? There is little reason to believe that concurrent articulation affects performance in typical visual change detection tasks, suggesting that verbal recoding might not be likely to occur in this paradigm, and if not, precautionary articulatory suppression would not always be necessary. We present evidence confirming that articulatory suppression has no discernible effect on performance in a typical visual change-detection task in which abstract patterns are briefly presented. A comprehensive analysis using both descriptive statistics and Bayesian state-trace analysis revealed no evidence for any complex relationship between articulatory suppression and performance that would be consistent with a verbal recoding explanation. Instead, the evidence favors the simpler explanation that verbal strategies were either not deployed in the task or, if they were, were not effective in improving performance, and thus have no influence on visual working memory as measured during visual change detection. We conclude that in visual change detection experiments in which abstract visual stimuli are briefly presented, precautionary articulatory suppression is unnecessary.
Bottlenecks of motion processing during a visual glance: the leaky flask model
Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay, and we used the time-constant of the exponential decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck for the stimulus-encoding stage is dominated by the selection compared to the filtering function of attention. We also found that the filtering function of attention is operating mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing. Supported by R01 EY018165 and P30 EY007551 from the National Institutes of Health (NIH).
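The demarcation of sensory memory from VSTM described above rests on fitting an exponential decay to performance as a function of cue delay and reading off its time constant. A minimal sketch of such a fit, using log-linear least squares on synthetic data with an assumed-known asymptote (the study's actual fitting procedure may differ):

```python
import math

def fit_decay(delays_ms, performance, asymptote):
    """Fit performance(t) = asymptote + A * exp(-t / tau) by log-linear
    least squares, assuming the VSTM asymptote is known.
    Returns (A, tau_ms)."""
    ys = [math.log(p - asymptote) for p in performance]  # linearize the decay
    n = len(delays_ms)
    mx = sum(delays_ms) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(delays_ms, ys))
             / sum((x - mx) ** 2 for x in delays_ms))
    intercept = my - slope * mx
    return math.exp(intercept), -1.0 / slope  # A, tau in ms

# Synthetic, noiseless data with tau = 300 ms, A = 0.5, asymptote = 0.3
# (all illustrative values, not the study's estimates).
delays = [0, 100, 200, 400, 800]
perf = [0.3 + 0.5 * math.exp(-t / 300.0) for t in delays]
A, tau = fit_decay(delays, perf, 0.3)
```

On this noiseless example the fit recovers tau = 300 ms; delays shorter than roughly tau would then be attributed to sensory memory, and performance at long delays to the VSTM asymptote.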
The integration of primary care and public health to improve population health: tackling the complex issue of multimorbidity
- …