
    Spatially Localized Time Shifts of the Perceptual Stream

    Visual events trigger representations at different locations and times in the brain. In experience, however, these various neural responses refer to a single unified cause. To investigate how representations might be brought into temporal alignment, we attempted to locally manipulate neural processing in such a way that identical, simultaneous sequences would appear temporally misaligned. After adaptation to a 20-Hz sequentially expanding and contracting concentric grating, a running clock presented in the adapted region of the visual field appeared advanced relative to an identical clock presented simultaneously in an unadapted region. No such effect was observed following 5-Hz adaptation. Clock time reports following an exogenous cue showed the same effect of adaptation on perceived time, demonstrating that the apparent temporal misalignment was not mediated by differences in target selection or allocation of attention. This effect was not mediated by the apparent speed of the adapted clock: a clock in a 20-Hz-adapted spatial location appeared slower than a clock in a 5-Hz-adapted location, rather than faster. Furthermore, reaction times for a clock-hand orientation discrimination task were the same following 5- and 20-Hz adaptation, indicating that neural processing latencies were not differentially affected. Altogether, these findings suggest that the fragmented perceptual stream might be actively brought into temporal alignment through adaptive local mechanisms operating in spatially segregated regions of the visual field.

    The use of optimal object information in fronto-parallel orientation discrimination

    When determining an object's orientation, an implicit object axis is formed based on local contour information. Due to the oblique effect (i.e., the more precise perception of horizontal and vertical orientations than oblique orientations), an object's orientation is perceived more precisely when the axis is horizontal or vertical than when it is oblique. In this study, we investigated which object axis is used to determine orientation for objects containing multiple axes. We tested human subjects in a series of experiments using the method of adjustment. We found that observers always use the object axes allowing the highest object orientation discrimination, namely the axes lying closest to horizontal/vertical. This implies that the weight the visual system attaches to axial object information is in accordance with the precision with which this information is perceived.
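    The selection rule reported here (rely on the object axis closest to horizontal/vertical, where the oblique effect makes orientation judgements most precise) can be sketched in a few lines. The function names and example axes below are illustrative, not from the paper:

```python
def cardinal_distance(axis_deg):
    """Angular distance from an axis orientation to the nearest cardinal
    (horizontal or vertical, i.e. 0 or 90 deg)."""
    a = axis_deg % 90
    return min(a, 90 - a)

def chosen_axis(axes_deg):
    """Pick the object axis closest to horizontal/vertical: the one the
    oblique effect lets observers judge most precisely."""
    return min(axes_deg, key=cardinal_distance)

# An object with a 30-deg and a 75-deg axis: 75 deg is only 15 deg from
# vertical, so it supports the more precise orientation judgement.
```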

    Center–surround inhibition deepens binocular rivalry suppression

    When dissimilar stimuli are presented to each eye, perception alternates between the two images, a phenomenon known as binocular rivalry. It has been shown that stimuli presented in proximity of rival targets modulate the time each target is perceptually dominant. For example, presenting motion to the region surrounding the rival targets decreases the predominance of the same-direction target. Here, using a stationary concentric grating rivaling with a drifting grating, we show that a drifting surround grating also increases the depth of binocular rivalry suppression, as measured by sensitivity to a speed discrimination probe on the rival grating. This was especially so when the surround moved in the same direction as the grating, and was slightly weaker for opposed directions. Suppression in both cases was deeper than in a no-surround control condition. We hypothesize that surround suppression often observed in area MT (V5), a visual area implicated in visual motion perception, is responsible for this increase in suppression. In support of this hypothesis, monocular and binocular surrounds were both effective in increasing suppression depth, as were surrounds contralateral to the probed eye. Static and orthogonal motion surrounds failed to add to the depth of rivalry suppression. These results implicate a higher-level, fully binocular area whose surround inhibition provides an additional source of suppression which sums with rivalry suppression to effectively deepen suppression of an unseen rival target.

    Decoding the motion aftereffect in human visual cortex

    In the motion aftereffect (MAE), adapting to a moving stimulus causes a subsequently presented stationary stimulus to appear to move in the opposite direction. Recently, the neural basis of the motion aftereffect has received considerable interest, and a number of brain areas have been implicated in the generation of the illusory motion. Here, we use functional magnetic resonance imaging in combination with multivariate pattern classification to directly compare the neural activity evoked during the observation of both real and illusory motion. We show that the perceived illusory motion is not encoded in the same way as real motion in the same direction. Instead, suppression of the adapted direction of motion results in a shift of the population response of motion-sensitive neurons in area MT+, resulting in activation patterns that are in fact more similar to real motion in orthogonal, rather than opposite, directions. Although robust motion selectivity was observed in visual areas V1, V2, V3, and V4, this MAE-specific modulation of the population response was only observed in area MT+. Implications for our understanding of the motion aftereffect, and models of motion perception in general, are discussed.
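    The decoding logic can be illustrated with a toy stand-in for the paper's multivariate pattern classification: learn a mean "template" pattern per real motion direction, then classify new patterns by which template they correlate with best. All voxel values and noise levels below are made up for illustration:

```python
import random

random.seed(1)
N_VOXELS = 100
DIRECTIONS = ["left", "right", "up", "down"]

# Hypothetical voxel templates: the mean activation pattern each real
# motion direction evokes (values are synthetic).
templates = {d: [random.gauss(0, 1) for _ in range(N_VOXELS)] for d in DIRECTIONS}

def simulate_trial(direction, noise=0.5):
    """One noisy activation pattern for real motion in `direction`."""
    return [t + random.gauss(0, noise) for t in templates[direction]]

def correlation(x, y):
    """Pearson correlation between two voxel patterns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def decode(pattern):
    """Nearest-template decoding by pattern correlation: a minimal
    stand-in for the classifiers used in the study."""
    return max(DIRECTIONS, key=lambda d: correlation(pattern, templates[d]))

# The paper's key finding, restated in these terms: an MAE pattern measured
# after (say) rightward adaptation correlated more with the orthogonal
# (up/down) templates than with the leftward (opposite-direction) one, so
# the population response shifts sideways rather than mimicking real
# opposite-direction motion.
```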

    Attentional modulation of perceptual stabilization

    Perceptual priming is generally regarded as a passive and automatic process, as it is obtained even without awareness of the prime. Recent studies have introduced a more active form of perceptual priming in which priming for a subsequent ambiguous stimulus is triggered by the subjective percept, that is, the interpretation of a previous ambiguous stimulus. This phenomenon, known as stabilization, does not require a conscious effort to actively maintain one perceptual interpretation. In this study, we show that distraction of attention, during and even after the prime presentation, interferes with the build-up of perceptual memory for stabilization. This implies that despite its apparent automaticity, stabilization involves an active attentional process for encoding and retention. The disruption during encoding can be attributed to the reduction in sensory signals for the prime. However, the disruption during retention suggests that the implicit memory trace of the prime requires attentional resources to develop fully. The active nature of the build-up of perceptual memory for stabilization is consistent with the idea that perceptual memory increases its strength gradually over a few seconds. These findings suggest that seemingly automatic and effortless cognitive processes can compete with online perceptual processing for common attentional resources.

    Attentional modulation of adaptation to two-component transparent motion

    We have studied the effects of voluntary attention on the induction of motion aftereffects (MAEs). While adapting, observers paid attention to one of two transparently displayed random dot patterns, moving concurrently in opposite directions. Selective attention was found to modulate the susceptibility to motion adaptation very substantially. To measure the strength of the induced MAEs, we modulated the signal-to-noise ratio of a real motion signal in a random dot pattern that was used to balance the aftereffect. Results obtained for adapting to single motion vectors show that the MAE can be represented as a shift of the psychometric function for motion direction discrimination. Selective attention to the different components of transparent motion altered the susceptibility to adaptation. Shifting attention from one component to the other caused a large shift of the psychometric curves, about 70–75% of the shift measured for the separate components of the transparent adapting stimulus. We conclude that attention can differentiate between spatially superimposed motion vectors and that attention modulates the activity of motion mechanisms before or at the level where adaptation gives rise to MAEs. The results are discussed in light of the role of attention in visual perception and the physiological site for attentional modulation of MAEs.
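    The measurement idea, MAE strength as a lateral shift of the psychometric function, can be sketched as follows. The logistic form, slope, signal levels, and the 0.15 shift are all hypothetical numbers chosen for illustration:

```python
import math

def psychometric(s, pse, slope):
    """P('test direction' response) as a logistic function of signal
    strength s, centered on the point of subjective equality (PSE)."""
    return 1.0 / (1.0 + math.exp(-slope * (s - pse)))

# Hypothetical response proportions at several signal-to-noise levels,
# before adaptation and after adapting to an attended motion component.
levels = [-0.4, -0.2, 0.0, 0.2, 0.4]
before = [psychometric(s, pse=0.0, slope=10) for s in levels]
after = [psychometric(s, pse=0.15, slope=10) for s in levels]  # curve shifted by the MAE

def estimate_pse(levels, probs, slope=10):
    """Grid-search the PSE that best fits the observed proportions
    (least squares over a fixed-slope logistic)."""
    best, best_err = None, float("inf")
    for k in range(-100, 101):
        pse = k / 200.0
        err = sum((psychometric(s, pse, slope) - p) ** 2
                  for s, p in zip(levels, probs))
        if err < best_err:
            best, best_err = pse, err
    return best

# MAE strength read off as the horizontal displacement of the curve.
shift = estimate_pse(levels, after) - estimate_pse(levels, before)
```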

    Limits of Attentive Tracking Reveal Temporal Properties of Attention

    The maximum speed for attentive tracking of targets was measured in three types of (radial) motion displays: ambiguous motion where only attentive tracking produced an impression of direction, apparent motion, and continuous motion. The upper limit for tracking (about 50 deg s⁻¹) was an order of magnitude lower than the maximum speed at which motion can be perceived for some of these stimuli. In all cases but one, the ultimate limit appeared to be one of temporal frequency, 4–8 Hz, not retinal speed or rotation rate. It was argued that this rate reflects the temporal resolution of attention, the maximum rate at which events can be individuated from those that precede or follow them. In one condition, evidence was also found for a speed limit to attentive tracking, a maximum rate at which attention could follow a path around the display.
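    The temporal-frequency argument has a simple arithmetic consequence: if tracking is capped by the rate at which elements pass a location (events per second) rather than by rotation rate, then the sustainable rotation rate must fall as the number of elements in the radial display grows. A minimal sketch, assuming an illustrative 7 Hz limit within the paper's 4–8 Hz range:

```python
def max_tracking_rotation_rate(n_elements, f_limit_hz=7.0):
    """Maximum rotation rate (revolutions per second) if attentive tracking
    is limited by temporal frequency: n_elements pass a given location per
    revolution, so event rate = n_elements * rotation rate."""
    return f_limit_hz / n_elements

# Doubling the element count halves the sustainable rotation rate while
# the event rate at the limit stays pinned at f_limit_hz.
```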

    Velocity perception in a moving observer

    Previous research has shown that when a moving stimulus is presented to a moving observer, the perceived speed of the stimulus is affected by vestibular self-motion signals (Hogendoorn, Verstraten, MacDougall, & Alais, 2017, Vision Research, 130, 22–30). This interaction was interpreted as a weighted sum of visual and vestibular motion signals. This interpretation also predicts effects of vestibular self-motion signals on perceived speed. Here, we test this prediction in two experiments. In Experiment 1, moving observers carried out a visual speed discrimination task in order to establish points of subjective equality (PSE) between stimuli presented in the same or opposite direction of self-motion. We observed robust effects of self-motion on perceived speed, with self-motion in the same direction as visual motion resulting in increases in perceived speed and vice versa. These effects were well-described by a limited-width integration window. In Experiment 2, the same observers carried out another speed discrimination task in order to establish discrimination thresholds. According to the Weber-Fechner law, these thresholds are expected to increase or decrease along with perceived speed. However, no effect of self-motion on discrimination thresholds was observed. This pattern of results suggests a limit on speed discrimination performance early in the visual system, with visuo-vestibular integration in later downstream areas. These results are consistent with previous work on heading perception.
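    The weighted-sum interpretation and the Weber-Fechner prediction it entails can be sketched as follows. The vestibular weight and Weber fraction are made-up numbers, not values from the study:

```python
def perceived_speed(v_visual, v_self, w_vestibular=0.3):
    """Perceived stimulus speed as a weighted sum of the visual motion signal
    and the vestibular self-motion signal. Positive v_self means self-motion
    in the same direction as the stimulus, which inflates perceived speed;
    negative v_self deflates it. The weight is a hypothetical value."""
    return v_visual + w_vestibular * v_self

def weber_threshold(speed, weber_fraction=0.1):
    """Weber-Fechner prediction: the discrimination threshold scales with
    perceived speed. Experiment 2 found thresholds did NOT track perceived
    speed, suggesting they are set before visuo-vestibular integration."""
    return weber_fraction * speed
```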

    Vestibular signals of self-motion modulate global motion perception

    Certain visual stimuli can have two possible interpretations. These perceptual interpretations may alternate stochastically, a phenomenon known as bistability. Some classes of bistable stimuli, including binocular rivalry, are sensitive to bias from input through other modalities, such as sound and touch. Here, we address the question of whether bistable visual motion stimuli, known as plaids, are affected by vestibular input caused by self-motion. In Experiment 1, we show that a vestibular self-motion signal biases the interpretation of the bistable plaid, increasing or decreasing the likelihood of the plaid being perceived as globally coherent or transparently sliding depending on the relationship between self-motion and global visual motion directions. In Experiment 2, we find that when the vestibular direction is orthogonal to the visual direction, the vestibular self-motion signal also biases the direction of one-dimensional motion. This interaction suggests that the effect in Experiment 1 is due to the self-motion vector adding to the visual motion vectors. Together, this demonstrates that the perception of visual motion direction can be systematically affected by concurrent but uninformative and task-irrelevant vestibular input caused by self-motion.
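    The vector-addition account suggested by Experiment 2 can be written down directly: the perceived motion vector is the visual vector plus a scaled self-motion vector, so orthogonal self-motion tilts the perceived direction. The gain value below is hypothetical:

```python
import math

def add_self_motion(visual_dx, visual_dy, self_dx, self_dy, gain=0.2):
    """Perceived motion vector as the visual motion vector plus a scaled
    self-motion vector (the gain is a made-up illustrative value)."""
    return visual_dx + gain * self_dx, visual_dy + gain * self_dy

# Experiment 2's orthogonal case in these terms: rightward visual motion
# combined with upward self-motion tilts the perceived direction upward.
dx, dy = add_self_motion(1.0, 0.0, 0.0, 1.0)
angle = math.degrees(math.atan2(dy, dx))  # tilt away from pure rightward
```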

    How longer saccade latencies lead to a competition for salience

    It has been suggested that independent bottom-up and top-down processes govern saccadic selection. However, recent findings are hard to explain in such terms. We hypothesized that differences in visual-processing time can explain these findings, and we tested this using search displays containing two deviating elements, one requiring a short processing time and one requiring a long processing time. Following short saccade latencies, the deviation requiring less processing time was selected most frequently. This bias disappeared following long saccade latencies. Our results suggest that an element that attracts eye movements following short saccade latencies does so because it is the only element processed at that time. The temporal constraints of processing visual information therefore seem to be a determining factor in saccadic selection. Thus, relative saliency is a time-dependent phenomenon. © The Author(s) 2011