Large crowding zones in peripheral vision for briefly presented stimuli
When a target is flanked by distractors, it becomes more
difficult to identify. In the periphery, this crowding effect
extends over a wide range of target-flanker separations,
called the spatial extent of interaction (EoI). A recent
study showed that the EoI dramatically increases in size
for short presentation durations (Chung & Mansfield,
2009). Here we investigate this duration-EoI relation in
greater detail and show that (a) it holds even when
visibility of the unflanked target is equated for different
durations, (b) the function saturates for durations
shorter than 30 to 80 ms, and (c) the largest EoIs
represent a critical spacing greater than 50% of
eccentricity. We also investigated the effect of same or
different polarity for targets and flankers across different
presentation durations. We found that EoIs for target
and flankers having opposite polarity (one white, the
other black) show the same temporal pattern as for
same polarity stimuli, but are smaller at all durations by
29% to 44%. The observed saturation of the EoI for short-duration
stimuli suggests that crowding follows the locus
of temporal integration. Overall, the results constrain
theories that map crowding zones to fixed spatial
extents or to lateral connections of fixed length in the
cortex. This study was supported by ERC POSITION 324070 (PC) and a visiting professorship to Anglia Ruskin University from the Leverhulme Trust (HEB).
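As a rough numeric illustration of the critical-spacing result above, the crowding zone can be expressed as a fraction of target eccentricity; the 0.5 fraction follows the greater-than-50%-of-eccentricity figure reported for brief stimuli, and the function name is a hypothetical helper, not the authors' code:

```python
def critical_spacing(eccentricity_deg, fraction=0.5):
    """Crowding-zone radius as a fraction of target eccentricity.

    fraction=0.5 reflects the >50%-of-eccentricity extent reported
    for briefly presented stimuli in the abstract above; smaller
    fractions would correspond to longer presentation durations.
    """
    return fraction * eccentricity_deg

# A target at 10 deg eccentricity: flankers within about 5 deg interfere.
print(critical_spacing(10.0))  # 5.0
```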
Stream specificity and asymmetries in feature binding and content-addressable access in visual encoding and memory
Human memory is content addressable, i.e., contents of
the memory can be accessed using partial information
about the bound features of a stored item. In this study,
we used a cross-feature cuing technique to examine how
the human visual system encodes, binds, and retains
information about multiple stimulus features within a
set of moving objects. We sought to characterize the
roles of three different features (position, color, and
direction of motion, the latter two of which are
processed preferentially within the ventral and dorsal
visual streams, respectively) in the construction and
maintenance of object representations. We investigated
the extent to which these features are bound together
across the following processing stages: during stimulus
encoding, sensory (iconic) memory, and visual short-term
memory. Whereas all features examined here can
serve as cues for addressing content, their effectiveness
shows asymmetries and varies according to cue–report
pairings and the stage of information processing and
storage. Position-based indexing theories predict that
position should be more effective as a cue compared to
other features. While we found a privileged role for
position as a cue at the stimulus-encoding stage, position
was not the privileged cue at the sensory and visual
short-term memory stages. Instead, the pattern that
emerged from our findings is one that mirrors the
parallel processing streams in the visual system. This
stream-specific binding and cuing effectiveness
manifests itself in all three stages of information
processing examined here. Finally, we find that the Leaky
Flask model proposed in our previous study is applicable
to all three features.
Misperceptions in the Trajectories of Objects Undergoing Curvilinear Motion
Trajectory perception is crucial in scene understanding and action. A variety of trajectory misperceptions have been reported in the literature. In this study, we quantify earlier observations that reported distortions in the perceived shape of bilinear trajectories and in the perceived positions of their deviation. Our results show that bilinear trajectories with deviation angles smaller than 90 deg are perceived as smoothed, while those with deviation angles larger than 90 deg are perceived as sharpened. The sharpening effect is weaker in magnitude than the smoothing effect. We also found a correlation between the distortion of perceived trajectories and the perceived shift of their deviation point. Finally, using a dual-task paradigm, we found that reducing attentional resources allocated to the moving target increases the perceived shift of the trajectory's deviation point. We interpret these results in the context of interactions between motion and position systems.
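The smoothing/sharpening pattern described above can be summarized in a toy classifier; the function name and the treatment of exactly 90 deg are illustrative assumptions, not part of the study:

```python
def perceived_distortion(deviation_angle_deg):
    # Classify the reported distortion of a bilinear trajectory by its
    # deviation angle, following the pattern in the abstract above:
    # angles below 90 deg look smoothed, angles above 90 deg sharpened.
    if deviation_angle_deg < 90:
        return "smoothed"
    if deviation_angle_deg > 90:
        return "sharpened"
    return "veridical"  # assumption: no distortion at exactly 90 deg

print(perceived_distortion(60))   # smoothed
print(perceived_distortion(120))  # sharpened
```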
The reference frame for encoding and retention of motion depends on stimulus set size
The goal of this study was to investigate the reference
frames used in perceptual encoding and storage of visual
motion information. In our experiments, observers viewed
multiple moving objects and reported the direction of motion
of a randomly selected item. Using a vector-decomposition
technique, we computed performance during smooth pursuit
with respect to a spatiotopic (nonretinotopic) and to a
retinotopic component and compared them with performance
during fixation, which served as the baseline. For the stimulus
encoding stage, which precedes memory, we found that the
reference frame depends on the stimulus set size. For a single
moving target, the spatiotopic reference frame had the most
significant contribution with some additional contribution
from the retinotopic reference frame. When the number of
items increased (Set Sizes 3 to 7), the spatiotopic reference
frame was able to account for the performance. Finally, when
the number of items became larger than 7, the distinction
between reference frames vanished. We interpret this finding
as a switch to a more abstract nonmetric encoding of motion
direction. We found that the retinotopic reference frame was
not used in memory. Taken together with other studies, our
results suggest that, whereas a retinotopic reference frame
may be employed for controlling eye movements, perception
and memory use primarily nonretinotopic reference frames.
Furthermore, the use of nonretinotopic reference frames appears
to be capacity limited. In the case of complex stimuli, the
visual system may use perceptual grouping in order to simplify
the complexity of stimuli or resort to a nonmetric abstract
coding of motion information.
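The vector-decomposition logic described above can be sketched as follows. The subtraction of pursuit velocity to obtain the retinotopic prediction and the two-component least-squares split are illustrative assumptions about the technique, and all names are hypothetical:

```python
def retinotopic_vector(world_vx, world_vy, eye_vx, eye_vy):
    # Motion on the retina = motion in the world minus the pursuit
    # eye movement (illustrative form of the retinotopic prediction).
    return (world_vx - eye_vx, world_vy - eye_vy)

def decompose(report, spatio, retino):
    # Express a reported motion vector as w1*spatiotopic + w2*retinotopic,
    # solved exactly in the 2-D case by Cramer's rule (hypothetical helper).
    a, c = spatio
    b, d = retino
    det = a * d - b * c
    w1 = (report[0] * d - b * report[1]) / det
    w2 = (a * report[1] - report[0] * c) / det
    return w1, w2

# World motion rightward, pursuit upward: the retinotopic prediction
# acquires an opposite vertical component.
retino = retinotopic_vector(1.0, 0.0, 0.0, 1.0)   # (1.0, -1.0)
# A purely spatiotopic report loads entirely on the spatiotopic component.
print(decompose((1.0, 0.0), (1.0, 0.0), retino))  # (1.0, 0.0)
```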
Bottlenecks of motion processing during a visual glance: the leaky flask model
Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay, and we used the time constant of the exponential decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck at the stimulus-encoding stage is dominated by the selection rather than the filtering function of attention. We also found that the filtering function of attention operates mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing. Supported by R01 EY018165 and P30 EY007551 from the National Institutes of Health (NIH).
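The exponential decay used above to demarcate sensory memory from VSTM can be written as P(t) = P_VSTM + (P_0 - P_VSTM) * exp(-t / tau). A minimal sketch of this model follows; the functional form matches the abstract, but the function name and all parameter values are hypothetical illustrations:

```python
import math

def cue_delay_performance(t_ms, p0, p_vstm, tau_ms):
    """Partial-report performance as a function of cue delay.

    Exponential decay from the encoding-stage level p0 toward the
    VSTM asymptote p_vstm with time constant tau_ms. The parameter
    values used below are illustrative, not fitted data.
    """
    return p_vstm + (p0 - p_vstm) * math.exp(-t_ms / tau_ms)

# At zero delay, performance equals the encoding-stage level; at long
# delays it settles at the VSTM asymptote.
print(round(cue_delay_performance(0, 0.9, 0.6, 200.0), 3))     # 0.9
print(round(cue_delay_performance(5000, 0.9, 0.6, 200.0), 3))  # 0.6
```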
Is the ability to identify deviations in multiple trajectories compromised by amblyopia?
Amblyopia results in a severe loss of positional information and of the ability to accurately enumerate objects (V. Sharma, D. M. Levi, & S. A. Klein, 2000). In this study, we asked whether amblyopia also disrupts the ability to track a near-threshold change in the trajectory of a single target amongst multiple similar potential targets. In the first experiment, we examined the precision for detecting a deviation in the linear motion trajectory of a dot by measuring deviation thresholds as a function of the number of moving trajectories (T). As in normal observers, we found that in both eyes of amblyopes, threshold increases steeply as T increases from 1 to 4. Surprisingly, for T = 1-4, thresholds were essentially identical in both eyes of the amblyopes and were similar to those of normal observers. In a second experiment, we measured the precision for detecting a deviation in the orientation of a static, bilinear "trajectory" by again measuring deviation thresholds (i.e., angle discrimination) as a function of the number of oriented line "trajectories" (T). Relative to the nonamblyopic eye, amblyopes show a marked threshold elevation for a static target when T = 1. However, thresholds increased with T with approximately the same slope as in their preferred eye and in the eyes of the normal controls. We conclude that while amblyopia disrupts static angle discrimination, amblyopic dynamic deviation detection thresholds are normal or very nearly so.