The discrimination of abrupt changes in speed and direction of visual motion
A random dot pattern that moved within an invisible aperture was used to present two motions contiguously in time. The motions differed slightly either in speed (Experiments 1 and 3) or in direction (Experiments 2 and 4), and the subject had to discriminate the sign of the change (e.g. increment or decrement). The same discrimination task was performed when the two motions were temporally separated by 1 s. In Experiments 1 and 2, discrimination thresholds were measured with motion durations of 0.125, 0.25, 0.5, and 1.0 s and mean speeds of 2, 4, 8, and 16°/s. In Experiments 3 and 4, thresholds were measured with aperture widths of 5 and 20 cm. The discrimination of contiguous motions progressively deteriorated with decreasing duration and mean speed of motion. At the lowest duration, the Weber fraction for contiguous speeds was more than three times as high as the Weber fraction for separate speeds. At the same low duration, the thresholds for discriminating the direction of contiguous motions were only about 50% higher than the thresholds for separate motions. The Weber fraction for contiguous speeds was about three times higher with the smaller aperture than with the larger one, provided the ratio of aperture width to mean speed (i.e. the lifetime of the moving dots) was less than 0.3 s. Aperture width did not affect the discrimination of direction of contiguous motions. The discrimination of contiguous motions is discussed together with the known data for detection of changes in speed and direction. It is suggested that both detection of changes in speed and discrimination of the sign of speed changes may be performed by a common visual mechanism.
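As a numerical illustration of the Weber-fraction comparison described above, the sketch below computes the ratio of contiguous to separate thresholds. The threshold values are made-up placeholders, not data from the study; only the definition (just-discriminable speed change divided by mean speed) is standard.

```python
# Illustrative sketch of Weber fractions for speed discrimination.
# Threshold numbers below are hypothetical, NOT the study's data.

def weber_fraction(delta_v, mean_v):
    """Weber fraction = just-discriminable speed change / mean speed."""
    return delta_v / mean_v

mean_speed = 8.0              # deg/s, one of the mean speeds tested
separate_threshold = 0.8      # hypothetical threshold, motions 1 s apart
contiguous_threshold = 2.6    # hypothetical threshold, contiguous motions

wf_separate = weber_fraction(separate_threshold, mean_speed)      # ~0.10
wf_contiguous = weber_fraction(contiguous_threshold, mean_speed)  # ~0.33

# The abstract's key comparison: at the shortest duration the contiguous
# Weber fraction exceeded the separate one by more than a factor of three.
ratio = wf_contiguous / wf_separate
print(ratio)
```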
Spatial consequences of bridging the saccadic gap
We report six experiments suggesting that conscious perception is actively redrafted to take account of events both before and after the event that is reported. When observers saccade to a stationary object they overestimate its duration, as if the brain were filling in the saccadic gap with the post-saccadic image. We first demonstrate that this illusion holds for moving objects, implying that the percepts of time, velocity, and distance traveled become discrepant. We then show that this discrepancy is partially resolved up to 500 ms after a saccade: the perceived offset position of a post-saccadic moving stimulus shows a greater forward mislocalization when pursued after a saccade than during pursuit alone. These data are consistent with the idea that the temporal bias is resolved by a subsequent spatial adjustment, providing a percept that is coherent in its gist but inconsistent in its detail.
Animated Edge Textures in Node-Link Diagrams: a Design Space and Initial Evaluation
Network edge data attributes are usually encoded using color, opacity, stroke thickness, and stroke pattern, or some combination thereof. In addition to these static variables, it is also possible to animate dynamic particles flowing along the edges. This opens a larger design space of animated edge textures, featuring additional visual encodings that have potential not only in terms of visual-mapping capacity but also playfulness and aesthetics. Such animated edge textures have been used in several commercial and design-oriented visualizations, but to our knowledge almost always in a relatively ad hoc manner. We introduce a design space and a Web-based framework for generating animated edge textures, and report on an initial evaluation of particle properties (particle speed, pattern, and frequency) in terms of visual perception.
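The core computation behind such animated edge textures can be sketched as placing evenly spaced particles along an edge and phase-shifting them over time. The parameter names (speed, frequency) follow the abstract; the formulas and function below are illustrative assumptions, not the authors' framework.

```python
# Hedged sketch: particle placement on one node-link edge at time t.
# speed and frequency are two of the particle properties the abstract
# evaluates; emission pattern is omitted for brevity.

def particle_positions(t, edge_length, speed, frequency):
    """Positions (0..edge_length) of particles on an edge at time t.

    speed:     particle travel speed along the edge (px/s)
    frequency: particles emitted per second (controls spacing)
    """
    spacing = speed / frequency      # distance between successive particles
    offset = (speed * t) % spacing   # shared phase shift of the whole train
    count = int(edge_length // spacing) + 1
    positions = [offset + k * spacing for k in range(count)]
    return [p for p in positions if p <= edge_length]

# Example: 100 px edge, particles moving at 50 px/s, emitted at 2 Hz.
print(particle_positions(0.0, 100.0, 50.0, 2.0))
# → [0.0, 25.0, 50.0, 75.0, 100.0]
```

A renderer would call this once per animation frame and draw a dot (or a textured dash, for other patterns) at each returned position.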
Dissociation between the Perceptual and Saccadic Localization of Moving Objects
Visual processing in the human brain provides the data both for perception and for guiding motor actions. It seems natural that our actions would be directed toward the perceived locations of their targets, but it has been proposed that action and perception rely on different visual information [1-4], and this provocative claim has triggered a long-lasting debate [5-7]. Here, in support of this claim, we report a large, robust dissociation between perception and action. We take advantage of a perceptual illusion in which visual motion signals presented within the boundaries of a peripheral moving object can make the object's apparent trajectory deviate by 45° or more from its physical trajectory [8-10], a shift several times larger than the typical discrimination threshold for motion direction [11]. Despite the large perceptual distortion, we found that saccadic eye movements directed to these moving objects clearly targeted locations along their physical rather than apparent trajectories. We show that the perceived trajectory is based on the accumulation of position error determined by prior sensory history, an accumulation of error that is not found for the action toward the same target. We suggest that visual processing for perception and action might diverge in how past information is combined with new visual input, with action relying only on immediate information to track a target, whereas perception builds on previous estimates to construct a conscious representation.
Perceptual judgment and saccadic behavior in a spatial distortion with briefly presented stimuli.
When observers are asked to localize the peripheral position of a small probe with respect to the mid-position of a spatially extended comparison stimulus, they tend to judge the probe as being more peripheral than the mid-position of the comparison stimulus. This relative mislocalization seems to emerge from differences in absolute localization; that is, the comparison stimulus is localized more towards the fovea than the probe. The present study compared saccadic behaviour and relative localization judgements in three experiments and determined the quantitative relationship between the two measures. The results showed corresponding effects in localization errors and saccadic behaviour. Moreover, it was possible to estimate the amount of the relative mislocalization by means of the saccadic amplitude.
The reference frame for encoding and retention of motion depends on stimulus set size
The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame made the most significant contribution, with some additional contribution from the retinotopic reference frame. When the number of items increased (set sizes 3 to 7), the spatiotopic reference frame alone was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract, nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping to simplify the stimuli, or resort to a nonmetric abstract coding of motion information.
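The vector-decomposition idea mentioned above can be sketched as expressing a reported motion vector as a weighted sum of the spatiotopic (screen) motion and the retinotopic (screen minus eye) motion during pursuit. The numbers and the exact decomposition below are illustrative assumptions, not the study's analysis.

```python
# Hedged sketch of a spatiotopic/retinotopic vector decomposition.
# During smooth pursuit, retinal motion = screen motion - eye motion;
# a reported direction is decomposed into the two predicted components.

import numpy as np

def decompose(reported, spatiotopic, retinotopic):
    """Solve reported = a*spatiotopic + b*retinotopic for weights (a, b)."""
    basis = np.column_stack([spatiotopic, retinotopic])
    return np.linalg.solve(basis, reported)

eye = np.array([5.0, 0.0])      # hypothetical pursuit eye velocity (deg/s)
screen = np.array([0.0, 5.0])   # hypothetical object motion on the screen
retinal = screen - eye          # resulting motion on the retina

# A report matching the screen motion yields weights (a, b) ≈ (1, 0),
# i.e. a purely spatiotopic contribution.
a, b = decompose(np.array([0.0, 5.0]), screen, retinal)
print(a, b)
```

Averaging such weights across trials would indicate which reference frame dominates, which is the spirit (though not necessarily the letter) of the analysis described in the abstract.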
Misperceptions in the Trajectories of Objects undergoing Curvilinear Motion
Trajectory perception is crucial in scene understanding and action. A variety of trajectory misperceptions have been reported in the literature. In this study, we quantify earlier observations of distortions in the perceived shape of bilinear trajectories and in the perceived positions of their deviation points. Our results show that bilinear trajectories with deviation angles smaller than 90° are perceived as smoothed, while those with deviation angles larger than 90° are perceived as sharpened. The sharpening effect is weaker in magnitude than the smoothing effect. We also found a correlation between the distortion of perceived trajectories and the perceived shift of their deviation points. Finally, using a dual-task paradigm, we found that reducing the attentional resources allocated to the moving target increases the perceived shift of the trajectory's deviation point. We interpret these results in the context of interactions between motion and position systems.
Neural correlates of audiovisual motion capture
Visual motion can affect the perceived direction of auditory motion (i.e., audiovisual motion capture). It is debated, though, whether this effect occurs at perceptual or decisional stages. Here, we examined the neural consequences of audiovisual motion capture using the mismatch negativity (MMN), an event-related brain potential reflecting pre-attentive auditory deviance detection. In an auditory-only condition, occasional changes in the direction of a moving sound (deviant) elicited an MMN starting around 150 ms. In an audiovisual condition, auditory standards and deviants were synchronized with a visual stimulus that moved in the same direction as the auditory standards. These audiovisual deviants did not evoke an MMN, indicating that visual motion reduced the perceptual difference between the sound motion of standards and deviants. The inhibition of the MMN by visual motion provides evidence that auditory and visual motion signals are integrated at early sensory processing stages.