
    EEG Correlates of Attentional Load during Multiple Object Tracking

    While human subjects tracked a subset of ten identical, randomly moving objects, event-related potentials (ERPs) were evoked at parieto-occipital sites by task-irrelevant flashes that were superimposed on either tracked (Target) or non-tracked (Distractor) objects. With ERPs as markers of attention, we investigated how the allocation of attention varied with tracking load, that is, with the number of objects that were tracked. Flashes on Target discs elicited stronger ERPs than did flashes on Distractor discs; ERP amplitude (0–250 ms) decreased monotonically as load increased from two to three to four (of ten) discs. Amplitude decreased more rapidly for Target discs than for Distractor discs. As a result, with increasing tracking loads, the difference between ERPs to Targets and Distractors diminished. This change in ERP amplitudes with load accords well with behavioral performance, suggesting that successful tracking depends upon the relationship between the neural signals associated with attended and non-attended objects.

    Speed has an effect on multiple-object tracking independently of the number of close encounters between targets and distractors

    Multiple-object tracking (MOT) studies have shown that tracking ability declines as object speed increases. However, this decline might be attributed solely to the increased number of times that target and distractor objects usually pass close to each other (“close encounters”) when speed is increased, resulting in more target–distractor confusions. The present study investigates whether speed itself affects MOT ability by using displays in which the number of close encounters is held constant across speeds. Observers viewed several pairs of disks; each pair rotated both about its own midpoint and about the center of the display at varying speeds. Results showed that even with the number of close encounters held constant across speeds, increased speed impairs tracking performance, and the effect of speed is greater when the number of targets to be tracked is large. Moreover, neither the effect of the number of distractors nor the effect of target–distractor distance depended on speed when speed was isolated from the typical concomitant increase in close encounters. These results imply that increased speed does not impair tracking solely by increasing close encounters. Rather, they support the view that speed affects MOT capacity by requiring more attentional resources to track at higher speeds.
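
    The rotating-pairs display can be pictured as two nested rotations: each disk circles its pair's midpoint while that midpoint circles the display centre, so within-pair spacing stays fixed while speed varies. The Python sketch below shows one way to generate such trajectories; the function name rotating_pairs and all radii, speeds, and counts are illustrative assumptions, not the study's parameters.

        import numpy as np

        def rotating_pairs(n_pairs=4, orbit_radius=0.35, pair_radius=0.06,
                           orbit_speed=0.5, spin_speed=1.5, duration=6.0, fps=60):
            """Generate disk positions for a rotating-pairs display.

            Each pair spins about its own midpoint (spin_speed, rev/s) while the
            midpoint revolves about the display centre (orbit_speed, rev/s).
            Returns an array of shape (frames, n_pairs * 2, 2) in normalised
            display coordinates centred on (0, 0).
            """
            t = np.arange(0, duration, 1.0 / fps)              # frame times (s)
            disks = []
            for k in range(n_pairs):
                phase = 2 * np.pi * k / n_pairs                # spread pairs evenly
                orbit = 2 * np.pi * orbit_speed * t + phase    # midpoint angle
                spin = 2 * np.pi * spin_speed * t + phase      # within-pair angle
                mid = orbit_radius * np.stack([np.cos(orbit), np.sin(orbit)], axis=1)
                offset = pair_radius * np.stack([np.cos(spin), np.sin(spin)], axis=1)
                disks.append(mid + offset)                     # disk A of the pair
                disks.append(mid - offset)                     # disk B of the pair
            return np.stack(disks, axis=1)

        positions = rotating_pairs()
        print(positions.shape)   # (360, 8, 2)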

    The reference frame for encoding and retention of motion depends on stimulus set size

    The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame made the largest contribution, with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract, nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping to simplify the stimuli or resort to a nonmetric, abstract coding of motion information.
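
    One way to picture the vector-decomposition logic: during pursuit, an item's retinotopic motion is its on-screen (spatiotopic) motion minus the eye's velocity, so a reported direction can be expressed as a weighted combination of the two predicted directions. The sketch below illustrates that idea with a simple least-squares decomposition; it is an illustrative reconstruction, not the authors' exact analysis, and the example vectors are made up.

        import numpy as np

        def unit(v):
            v = np.asarray(v, dtype=float)
            return v / np.linalg.norm(v)

        def decompose_report(screen_motion, eye_velocity, reported_direction):
            """Weight a reported direction by spatiotopic vs retinotopic predictions."""
            # Spatiotopic prediction: the object's motion on the screen.
            s_hat = unit(screen_motion)
            # Retinotopic prediction: screen motion minus the pursuit eye velocity.
            r_hat = unit(np.asarray(screen_motion, float) - np.asarray(eye_velocity, float))
            # Express the reported direction as a weighted sum of the two predictions.
            A = np.column_stack([s_hat, r_hat])
            weights, *_ = np.linalg.lstsq(A, unit(reported_direction), rcond=None)
            return weights  # [w_spatiotopic, w_retinotopic]

        # Made-up example: rightward screen motion during upward pursuit.
        w_spa, w_ret = decompose_report([1.0, 0.0], [0.0, 1.0], [0.8, -0.4])
        print(f"spatiotopic weight = {w_spa:.2f}, retinotopic weight = {w_ret:.2f}")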

    Bottlenecks of motion processing during a visual glance: the leaky flask model

    Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus-encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay, and we used the time constant of the exponential decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck at the stimulus-encoding stage is dominated by the selection function of attention rather than by its filtering function. We also found that the filtering function of attention operates mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information-processing and storage bottlenecks for motion processing. Supported by R01 EY018165 and P30 EY007551 from the National Institutes of Health (NIH).
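
    The demarcation of sensory memory from VSTM rests on fitting an exponential decay to performance as a function of cue delay, P(t) = P_inf + (P_0 - P_inf) * exp(-t / tau), and reading off the time constant tau. The sketch below shows one way to fit such a curve with SciPy; the data points are made-up numbers, and this parameterisation is an assumption rather than the paper's exact model.

        import numpy as np
        from scipy.optimize import curve_fit

        def decay(t, p0, p_inf, tau):
            """Exponential decay from an initial level p0 to an asymptote p_inf."""
            return p_inf + (p0 - p_inf) * np.exp(-t / tau)

        # Made-up cue delays (ms) and proportion-correct values for illustration.
        delays = np.array([0, 50, 100, 200, 400, 800, 1600], dtype=float)
        perf = np.array([0.82, 0.74, 0.68, 0.60, 0.53, 0.49, 0.48])

        params, _ = curve_fit(decay, delays, perf, p0=[0.8, 0.5, 200.0])
        p0_hat, p_inf_hat, tau_hat = params
        print(f"initial={p0_hat:.2f}, asymptote={p_inf_hat:.2f}, tau={tau_hat:.0f} ms")
        # Delays well below tau_hat are dominated by sensory (iconic) memory;
        # delays well beyond it reflect the VSTM asymptote.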

    Natural scenes can be identified as rapidly as individual features

    Can observers determine the gist of a natural scene in a purely feedforward manner, or does this process require deliberation and feedback? Observers can recognise images that are presented for very brief periods of time before being masked. It is unclear whether this recognition process occurs in a purely feedforward manner or whether feedback from higher cortical areas to lower cortical areas is necessary. The current study revealed that the minimum presentation time required to identify or to determine the gist of a natural scene was no different from that required to determine the orientation or colour of an isolated line. Conversely, a visual task that would be expected to necessitate feedback (determining whether an image contained exactly six lines) required a significantly greater minimum presentation time. Assuming that the orientation or colour of an isolated line can be determined in a purely feedforward manner, these results indicate that the identification and the determination of the gist of a natural scene can also be performed in a purely feedforward manner. These results challenge a number of theories of visual recognition that require feedback.

    Using meta-predictions to identify experts in the crowd when past performance is unknown.

    A common approach to improving probabilistic forecasts is to identify and leverage the forecasts from experts in the crowd based on forecasters' performance on prior questions with known outcomes. However, such information is often unavailable to decision-makers on many forecasting problems, making it difficult to identify and leverage expertise. In the current paper, we propose a novel algorithm for aggregating probabilistic forecasts using forecasters' meta-predictions about what other forecasters will predict. We test the performance of an extremised version of our algorithm against current forecasting approaches in the literature and show that our algorithm significantly outperforms all other approaches on a large collection of 500 binary decision problems spanning five levels of difficulty. The success of our algorithm demonstrates the potential of using meta-predictions to leverage latent expertise in environments where forecasters' expertise cannot otherwise be easily identified.
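
    The abstract does not spell out the algorithm, so the sketch below is only an illustration of the general idea of exploiting meta-predictions: forecasters whose own forecasts diverge from the consensus they expect others to report are given more weight, and the aggregate is then extremised. The weighting rule and the extremising exponent alpha are assumptions for illustration, not the authors' method.

        import numpy as np

        def aggregate_with_meta(predictions, meta_predictions, alpha=2.5):
            """Illustrative meta-prediction-weighted aggregation of binary forecasts.

            predictions      : each forecaster's probability that the event occurs
            meta_predictions : each forecaster's estimate of the average prediction
                               that others will give
            alpha            : extremising exponent (> 1 pushes the result towards 0/1)
            """
            p = np.asarray(predictions, float)
            m = np.asarray(meta_predictions, float)

            # Forecasters whose own forecast is "surprising" relative to the
            # consensus they expect from others receive more weight.
            surprise = np.abs(p - m)
            weights = 1.0 + surprise / (surprise.mean() + 1e-9)

            agg = np.average(p, weights=weights)

            # Simple extremisation of the aggregate probability.
            return agg**alpha / (agg**alpha + (1.0 - agg)**alpha)

        # Made-up example: five forecasters.
        print(aggregate_with_meta(predictions=[0.7, 0.65, 0.8, 0.55, 0.6],
                                  meta_predictions=[0.5, 0.5, 0.55, 0.5, 0.45]))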

    Failure to detect meaning in RSVP at 27 ms per picture

    The human visual system has the remarkable ability to rapidly detect meaning from visual stimuli. Potter, Wyble, Hagmann, and McCourt (Attention, Perception, & Psychophysics, 76, 270-279, 2014) tested the minimum viewing time required to obtain meaning from a stream of pictures shown in a rapid serial visual presentation (RSVP) sequence containing either six or 12 pictures. They reported that observers could detect the presence of a target picture specified by name (e.g., smiling couple) even when the pictures in the sequence were presented for just 13 ms each. Potter et al. claimed that this was insufficient time for feedback processing to occur, so feedforward processing alone must be able to generate conscious awareness of the target pictures. A potential confound in their study is that the pictures in the RSVP sequence sometimes contained areas with no high-contrast edges, and so may not have adequately masked each other. Consequently, iconic memories of portions of the target pictures may have persisted in the visual system, thereby increasing the effective presentation time. Our study addressed this issue by repeating the Potter et al. study using four different types of masks. We found that when adequate masking was used, no evidence emerged that observers could detect the presence of a specific target picture, even when each picture in the RSVP sequence was presented for 27 ms. On the basis of these findings, we cannot rule out the possibility that feedback processing is necessary for individual pictures to be recognized.

    Attribute amnesia is greatly reduced with novel stimuli

    Attribute amnesia is the counterintuitive phenomenon in which observers are unable to report a salient aspect of a stimulus (e.g., its colour or its identity) immediately after the stimulus was presented, despite having both attended to and processed the stimulus. Almost all previous attribute amnesia studies used highly familiar stimuli. Our study investigated whether attribute amnesia would also occur for unfamiliar stimuli. We conducted four experiments using stimuli that were highly familiar (colours or repeated animal images) or that were unfamiliar to the observers (unique animal images). Our results revealed that attribute amnesia was present for both sets of familiar stimuli, colour (p < .001) and repeated animals (p = .001), but was greatly attenuated, and possibly eliminated, when the stimuli were unique animals (p = .02). Our data show that attribute amnesia is greatly reduced for novel stimuli.

    Detecting Unidentified Changes

    Does becoming aware of a change to a purely visual stimulus necessarily enable the observer to identify or localise the change, or can change detection occur in the absence of identification or localisation? Several theories of visual awareness stress that we are aware of more than just the few objects to which we attend. In particular, it is clear that to some extent we are also aware of the global properties of the scene, such as the mean luminance or the distribution of spatial frequencies. It follows that we may be able to detect a change to a visual scene by detecting a change to one or more of these global properties. However, detecting a change to a global property may not supply enough information to accurately identify or localise which object in the scene has been changed. Thus, it may be possible to reliably detect the occurrence of changes without being able to identify or localise what has changed. Previous attempts to show that this can occur with natural images have produced mixed results. Here we use a novel analysis technique to provide additional evidence that changes can be detected in natural images without also being identified or localised. It is likely that this occurs because observers monitor the global properties of the scene.
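
    The "global properties" mentioned above can be made concrete: mean luminance is the average pixel intensity, and the distribution of spatial frequencies can be summarised by a radially averaged power spectrum. The sketch below computes both with NumPy as an illustration of the kind of summary statistics an observer could, in principle, monitor; it is not the analysis technique used in the paper.

        import numpy as np

        def global_scene_stats(image, n_bins=30):
            """Compute two global properties of a greyscale image.

            Returns the mean luminance and a radially averaged power spectrum,
            a coarse summary of how energy is distributed over spatial frequency.
            """
            img = np.asarray(image, float)
            mean_luminance = img.mean()

            # 2-D power spectrum, zero frequency shifted to the centre.
            power = np.abs(np.fft.fftshift(np.fft.fft2(img - mean_luminance))) ** 2

            # Distance of every frequency sample from the centre (radial frequency).
            ny, nx = img.shape
            fy, fx = np.indices((ny, nx))
            radius = np.hypot(fy - ny / 2, fx - nx / 2)

            # Average the power within concentric frequency bands.
            bins = np.linspace(0, radius.max(), n_bins + 1)
            which = np.digitize(radius.ravel(), bins) - 1
            spectrum = np.bincount(which, weights=power.ravel(), minlength=n_bins)[:n_bins]
            counts = np.bincount(which, minlength=n_bins)[:n_bins]
            return mean_luminance, spectrum / np.maximum(counts, 1)

        # Made-up example: a random-noise "image".
        lum, spec = global_scene_stats(np.random.rand(128, 128))
        print(f"mean luminance = {lum:.3f}, first spectrum bins = {spec[:3]}")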