    The discrimination of abrupt changes in speed and direction of visual motion

    A random dot pattern that moved within an invisible aperture was used to present two motions contiguously in time. The motions differed slightly either in speed (Experiments 1 and 3) or in direction (Experiments 2 and 4), and the subject had to discriminate the sign of the change (e.g. increment or decrement). The same discrimination task was performed when the two motions were temporally separated by 1 s. In Experiments 1 and 2, discrimination thresholds were measured with motion durations of 0.125, 0.25, 0.5 and 1.0 s and mean speeds of 2, 4, 8, and 16°/s. In Experiments 3 and 4, thresholds were measured with aperture widths of 5 and 20 cm. The discrimination of contiguous motions progressively deteriorated with decreasing duration and mean speed of motion. At the shortest duration, the Weber fraction for contiguous speeds was more than three times as high as the Weber fraction for separate speeds. At the same short duration, the thresholds for discrimination of direction of contiguous motions were only about 50% higher than the thresholds for separate motions. The Weber fraction for contiguous speeds was about three times higher with the smaller aperture than with the larger one, provided the ratio of aperture width to mean speed (i.e. the lifetime of the moving dots) was less than 0.3 s. Aperture width did not affect the discrimination of direction of contiguous motions. The discrimination of contiguous motions is discussed together with the known data for detection of changes in speed and direction. It is suggested that both detection of speed changes and discrimination of the sign of speed changes may be performed by a common visual mechanism.
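    For reference, the Weber fraction used throughout this abstract is the standard ratio of the just-discriminable change to the baseline value; expressed for speed (the exact threshold criterion is an assumption here, since the abstract does not state it):

        \mathrm{WF} = \frac{\Delta v_{\mathrm{threshold}}}{\bar{v}}

    where \Delta v_{\mathrm{threshold}} is the smallest reliably discriminated speed change and \bar{v} is the mean speed (2–16°/s in Experiments 1 and 3).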

    Animated Edge Textures in Node-Link Diagrams: a Design Space and Initial Evaluation

    Network edge data attributes are usually encoded using color, opacity, stroke thickness and stroke pattern, or some combination thereof. In addition to these static variables, it is also possible to animate dynamic particles flowing along the edges. This opens a larger design space of animated edge textures, featuring additional visual encodings that have potential not only in terms of visual mapping capacity but also in terms of playfulness and aesthetics. Such animated edge textures have been used in several commercial and design-oriented visualizations, but to our knowledge almost always in a relatively ad hoc manner. We introduce a design space and Web-based framework for generating animated edge textures, and report on an initial evaluation of particle properties – particle speed, pattern and frequency – in terms of visual perception.
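    To make the three evaluated particle properties concrete, the sketch below animates particles along a single straight edge. It is an illustrative assumption only; the types and function names are hypothetical and do not reflect the authors' actual Web-based framework or its API.

        // Illustrative sketch (not the paper's framework): particles flowing along
        // a straight node-link edge, parameterized by speed, frequency and pattern.

        interface Point { x: number; y: number; }

        interface EdgeTexture {
          source: Point;      // edge start (node position)
          target: Point;      // edge end (node position)
          speed: number;      // particle speed along the edge, in px/s
          frequency: number;  // average number of particles passing a point per second
          pattern: number[];  // relative gaps between particles, e.g. [1, 1] uniform, [1, 3] grouped
        }

        // Positions of all visible particles on the edge at time t (in seconds).
        function particlePositions(edge: EdgeTexture, t: number): Point[] {
          const dx = edge.target.x - edge.source.x;
          const dy = edge.target.y - edge.source.y;
          const length = Math.hypot(dx, dy);
          if (length === 0) return [];

          // Mean gap between particles follows from speed and frequency.
          const meanGap = edge.speed / edge.frequency;

          // Scale the relative pattern so its average gap equals meanGap.
          const patternMean = edge.pattern.reduce((a, b) => a + b, 0) / edge.pattern.length;
          const gaps = edge.pattern.map(p => (p / patternMean) * meanGap);
          const cycle = gaps.reduce((a, b) => a + b, 0);

          const phase = (edge.speed * t) % cycle;  // how far the texture has advanced at time t
          const positions: Point[] = [];

          // Tile the gap pattern along the edge, shifted by the animation phase.
          for (let start = -cycle; start < length; start += cycle) {
            let offset = start + phase;
            for (const gap of gaps) {
              if (offset >= 0 && offset <= length) {
                const s = offset / length;
                positions.push({ x: edge.source.x + s * dx, y: edge.source.y + s * dy });
              }
              offset += gap;
            }
          }
          return positions;
        }

        // Example: evenly spaced particles on a horizontal edge, redrawn each animation frame.
        const edge: EdgeTexture = {
          source: { x: 0, y: 0 },
          target: { x: 300, y: 0 },
          speed: 60,        // px/s
          frequency: 2,     // particles per second
          pattern: [1, 1],  // uniform spacing
        };
        // particlePositions(edge, elapsedSeconds) -> points to draw as dots on the edge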

    Perceptual judgment and saccadic behavior in a spatial distortion with briefly presented stimuli.

    When observers are asked to localize the peripheral position of a small probe with respect to the mid-position of a spatially extended comparison stimulus, they tend to judge the probe as being more peripheral than the mid-position of the comparison stimulus. This relative mislocalization seems to emerge from differences in absolute localization; that is, the comparison stimulus is localized more towards the fovea than the probe. The present study compared saccadic behaviour and relative localization judgements in three experiments and determined the quantitative relationship between the two measures. The results showed corresponding effects in localization errors and saccadic behaviour. Moreover, it was possible to estimate the magnitude of the relative mislocalization by means of the saccadic amplitude.

    The reference frame for encoding and retention of motion depends on stimulus set size

    The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame made the largest contribution, with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract, nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping to simplify the complexity of the stimuli or resort to a nonmetric, abstract coding of motion information.
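    As background for the vector-decomposition logic (this is the standard kinematic relation between reference frames, stated here as an assumption rather than the authors' exact formulation): during smooth pursuit, the retinotopic motion of an item is its spatiotopic (screen-centered) motion minus the motion of the eye,

        \vec{v}_{\mathrm{retinotopic}} = \vec{v}_{\mathrm{spatiotopic}} - \vec{v}_{\mathrm{eye}},

    so reported directions can be projected onto the two components and compared against fixation, where \vec{v}_{\mathrm{eye}} \approx 0 and the two frames coincide.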

    Misperceptions in the Trajectories of Objects undergoing Curvilinear Motion

    Trajectory perception is crucial in scene understanding and action. A variety of trajectory misperceptions have been reported in the literature. In this study, we quantify earlier observations that reported distortions in the perceived shape of bilinear trajectories and in the perceived positions of their deviation points. Our results show that bilinear trajectories with deviation angles smaller than 90 deg are perceived as smoothed, while those with deviation angles larger than 90 deg are perceived as sharpened. The sharpening effect is weaker in magnitude than the smoothing effect. We also found a correlation between the distortion of the perceived trajectories and the perceived shift of their deviation point. Finally, using a dual-task paradigm, we found that reducing the attentional resources allocated to the moving target increases the perceived shift of the deviation point of the trajectory. We interpret these results in the context of interactions between motion and position systems.

    Neural correlates of audiovisual motion capture

    Visual motion can affect the perceived direction of auditory motion (i.e., audiovisual motion capture). It is debated, though, whether this effect occurs at perceptual or decisional stages. Here, we examined the neural consequences of audiovisual motion capture using the mismatch negativity (MMN), an event-related brain potential reflecting pre-attentive auditory deviance detection. In an auditory-only condition, occasional changes in the direction of a moving sound (deviant) elicited an MMN starting around 150 ms. In an audiovisual condition, auditory standards and deviants were synchronized with a visual stimulus that moved in the same direction as the auditory standards. These audiovisual deviants did not evoke an MMN, indicating that visual motion reduced the perceptual difference between the sound motion of standards and deviants. The inhibition of the MMN by visual motion provides evidence that auditory and visual motion signals are integrated at early sensory processing stages.