    Bottlenecks of motion processing during a visual glance: the leaky flask model

    Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus-encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay, and we used the time constant of the exponential decay to demarcate sensory memory from VSTM. We then decomposed performance into quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than in memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck at the stimulus-encoding stage is dominated by the selection function of attention rather than its filtering function. We also found that the filtering function of attention operates mainly at the sensory-memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of the information-processing and storage bottlenecks for motion processing.

    Supported by R01 EY018165 and P30 EY007551 from the National Institutes of Health (NIH).
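    The abstract describes fitting an exponential decay to performance as a function of cue delay and using the fitted time constant to demarcate sensory memory from VSTM. The sketch below illustrates that kind of fit; the function name, sample data, and starting values are illustrative assumptions, not the authors' actual analysis code or data.

```python
# Hedged sketch: fit an exponential decay to performance vs. cue delay and
# read off the time constant separating sensory memory from VSTM.
# All numbers below are hypothetical, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, amplitude, tau, baseline):
    """Performance = baseline + amplitude * exp(-t / tau)."""
    return baseline + amplitude * np.exp(-t / tau)

# Hypothetical cue delays (ms) and mean performance values
cue_delay = np.array([0.0, 100.0, 300.0, 600.0, 1000.0, 2000.0])
performance = np.array([0.92, 0.84, 0.71, 0.63, 0.60, 0.58])

params, _ = curve_fit(exp_decay, cue_delay, performance,
                      p0=[0.35, 300.0, 0.55])
amp_hat, tau_hat, baseline_hat = params
print(f"Estimated time constant tau = {tau_hat:.0f} ms "
      f"(decaying portion ~ sensory memory, plateau ~ VSTM)")
```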

    Results of linear fits to data from Experiment 1.

    Top: Transformed performance as a function of target set-size. Bottom: Transformed performance as a function of distractor set-size.

    Precision (A) and intake (B) as a function of target and distractor set-sizes.

    Different panels represent different cue delays. Data points correspond to the mean across observers (N = 4) and error bars represent ±1 SEM. Lines represent linear fits.
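    The caption above refers to straight-line fits of precision and intake against set-size. A minimal sketch of such a fit is given below; the data values and variable names are hypothetical and stand in for the study's measurements.

```python
# Hedged sketch: linear fits of precision and intake vs. target set-size,
# in the spirit of the fits shown in the figure. Numbers are illustrative only.
import numpy as np

target_set_size = np.array([1, 2, 4, 8])
precision = np.array([0.85, 0.78, 0.66, 0.52])   # hypothetical quality measure
intake = np.array([0.9, 1.6, 2.4, 3.1])          # hypothetical quantity measure

slope_p, intercept_p = np.polyfit(target_set_size, precision, deg=1)
slope_i, intercept_i = np.polyfit(target_set_size, intake, deg=1)
print(f"precision ~ {intercept_p:.2f} + {slope_p:.3f} * set-size")
print(f"intake    ~ {intercept_i:.2f} + {slope_i:.3f} * set-size")
```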

    The Leaky Flask Model.

    The single leaky hourglass of Fig. 1 is replaced by two leaky flasks, one for precision and one for intake, to highlight the different characteristics of these two aspects of the bottlenecks. The top portions are narrower than in the hourglass model to illustrate the bottlenecks occurring at stages prior to VSTM. Also shown in this figure are the constraints imposed by attentional processes. While the selection function of attention applies to all three stages, the filtering function of attention applies mainly to the intake of the sensory-memory stage.

    Precision (A) and intake (B) as a function of target set-size.

    Also included in the plots are guess rate (1-w) and standard deviation (σ). Note that the left and right y-axes have different offsets and scales. Data points correspond to the mean across observers (N = 4) and error bars represent ±1 SEM.
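    The guess rate (1-w) and standard deviation (σ) reported in this figure are the kind of quantities commonly obtained by decomposing report errors with a mixture model, in which a fraction w of responses is centered on the target and the remaining 1-w are uniform guesses. The sketch below shows one such decomposition under that assumption; the Gaussian-plus-uniform form, the simulated data, and the parameter bounds are illustrative choices, not necessarily the authors' exact procedure.

```python
# Hedged sketch: estimate guess rate (1 - w) and precision (sigma) from report
# errors with a Gaussian-plus-uniform mixture. Data are simulated for illustration.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

RANGE = 360.0  # report errors assumed to lie in [-180, 180) degrees

def neg_log_likelihood(params, errors):
    w, sigma = params
    # With probability w the response is centered on the target (Gaussian);
    # with probability 1 - w it is a uniform guess over the response range.
    density = w * norm.pdf(errors, loc=0.0, scale=sigma) + (1.0 - w) / RANGE
    return -np.sum(np.log(density))

# Hypothetical errors: mostly precise reports plus some uniform guesses
rng = np.random.default_rng(0)
errors = np.concatenate([rng.normal(0, 15, 80), rng.uniform(-180, 180, 20)])

fit = minimize(neg_log_likelihood, x0=[0.7, 20.0], args=(errors,),
               bounds=[(0.01, 0.99), (1.0, 120.0)])
w_hat, sigma_hat = fit.x
print(f"guess rate (1 - w) = {1 - w_hat:.2f}, sigma = {sigma_hat:.1f} deg")
```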