
    Dynamic Perceptual Changes in Audiovisual Simultaneity

    Background: The timing at which sensory input reaches the level of conscious perception is an intriguing question still awaiting an answer. It is often assumed that visual and auditory percepts each have a modality-specific processing delay and that their difference determines the perceived temporal offset. Methodology/Principal Findings: Here, we show that the perception of audiovisual simultaneity can change flexibly and fluctuates over a short period of time while subjects observe a constant stimulus. We investigated the mechanisms underlying the spontaneous alternations in this audiovisual illusion and found that attention plays a crucial role. When attention was distracted from the stimulus, the perceptual transitions disappeared. When attention was directed to a visual event, the perceived timing of an auditory event was attracted towards that event. Conclusions/Significance: This multistable display illustrates how flexible perceived timing can be and, at the same time, offers a paradigm to dissociate perceptual from stimulus-driven factors in crossmodal feature binding. Our findings suggest that the perception of crossmodal synchrony depends on the perceptual binding of audiovisual stimuli as a common event.
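
    The additive-latency account described in the Background lends itself to a short worked example. The sketch below is illustrative only: the function name, the latency values, and the attention term (standing in for the reported attraction of perceived auditory timing toward an attended visual event) are assumptions, not quantities from the study.

    ```python
    def perceived_av_offset_ms(visual_latency_ms, auditory_latency_ms, attention_shift_ms=0.0):
        """Perceived audiovisual offset under a simple additive-latency account.

        Positive values mean the sound is perceived as lagging the flash;
        attention_shift_ms stands in for the attraction of perceived auditory
        timing toward an attended visual event (hypothetical, for illustration).
        """
        return (auditory_latency_ms - visual_latency_ms) - attention_shift_ms


    # Equal modality delays -> perceived simultaneity; attending the visual event
    # can cancel a 30 ms physical lag of the sound.
    print(perceived_av_offset_ms(60.0, 60.0))                           # 0.0
    print(perceived_av_offset_ms(60.0, 90.0, attention_shift_ms=30.0))  # 0.0
    ```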

    Spatially Localized Time Shifts of the Perceptual Stream

    Visual events trigger representations in different locations and times in the brain. In experience, however, these various neural responses refer to a single unified cause. To investigate how representations might be brought into temporal alignment, we attempted to locally manipulate neural processing in such a way that identical, simultaneous sequences would appear temporally misaligned. After adaptation to a 20-Hz sequentially expanding and contracting concentric grating, a running clock presented in the adapted region of the visual field appeared advanced relative to an identical clock presented simultaneously in an unadapted region. No such effect was observed following 5-Hz adaptation. Clock-time reports following an exogenous cue showed the same effect of adaptation on perceived time, demonstrating that the apparent temporal misalignment was not mediated by differences in target selection or allocation of attention. This effect was not mediated by the apparent speed of the adapted clock: a clock in a 20-Hz-adapted spatial location appeared slower than a clock in a 5-Hz-adapted location, rather than faster. Furthermore, reaction times for a clock-hand orientation discrimination task were the same following 5- and 20-Hz adaptation, indicating that neural processing latencies were not differentially affected. Altogether, these findings suggest that the fragmented perceptual stream might be actively brought into temporal alignment through adaptive local mechanisms operating in spatially segregated regions of the visual field.

    Time dilation in dynamic visual display

    How does the brain estimate time? This old question has led to many biological and psychological models of time perception (R. A. Block, 1989; P. Fraisse, 1963; J. Gibbon, 1977; D. Zakay, 1989). Because time cannot be directly measured at a given moment, it has been proposed that the brain estimates time based on the number of changes in an event (S. W. Brown, 1995; P. Fraisse, 1963; W. D. Poynter, 1989). Consistent with this idea, dynamic visual stimuli are known to lengthen perceived time (J. F. Brown, 1931; S. Goldstone & W. T. Lhamon, 1974; W. T. Lhamon & S. Goldstone, 1974; C. O. Roelofs & W. P. C. Zeeman, 1951). However, the kind of information that constitutes the basis for time perception remains unresolved. Here, we show that the temporal frequency of a stimulus serves as the “clock” for perceived duration. Other aspects of change, such as speed or coherence, were found to be inconsequential. Time dilation saturated at a temporal frequency of 4–8 Hz. These results suggest that the clock governing perceived time has its basis at early processing stages. The possible links between models of time perception and neurophysiological functions of early visual areas are discussed.
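
    The claim that temporal frequency acts as the “clock” for perceived duration, saturating at 4–8 Hz, can be illustrated with a minimal change-counting sketch. Everything below (function name, baseline, gain, and saturation constants) is an illustrative assumption rather than the authors' model.

    ```python
    def perceived_duration_s(physical_duration_s, temporal_frequency_hz,
                             baseline_hz=1.0, saturation_hz=6.0, gain=0.05):
        """Dilate physical duration in proportion to a saturating change rate."""
        effective_hz = min(temporal_frequency_hz, saturation_hz)
        dilation = 1.0 + gain * max(effective_hz - baseline_hz, 0.0)
        return physical_duration_s * dilation


    for f_hz in (1, 2, 4, 8, 16):
        # Dilation grows with temporal frequency and flattens beyond ~4-8 Hz.
        print(f_hz, round(perceived_duration_s(1.0, f_hz), 3))
    ```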

    Title pages and table of contents

    It has recently been shown that contact between one’s own limbs (self-touch) reduces the perceived intensity of pain, over and above the well-known modulation of pain by simultaneous colocalized tactile input (Kammers et al., Curr Biol 20:1819–1822, 2010). Here, we investigate how self-touch modulates somatosensory evoked potentials (SEPs) evoked by afferent somatosensory input. We show that the P100 SEP component, which has previously been implicated in the conscious perception of a tactile stimulus, is enhanced during self-touch, compared to when one is touching nothing, an inanimate object, or another person. A follow-up experiment showed that there was no effect of self-touch on SEPs when the body parts in contact were not symmetric. Altogether, our findings suggest that the secondary somatosensory cortex might underlie the specific analgesic effect of self-touch.

    The use of optimal object information in fronto-parallel orientation discrimination

    When determining an object’s orientation, an implicit object axis is formed based on local contour information. Due to the oblique effect (i.e., the more precise perception of horizontal/vertical orientations than of oblique orientations), an object’s orientation is perceived more precisely when the axis is horizontal or vertical than when the axis is oblique. In this study we investigated which object axis is used to determine orientation for objects containing multiple axes. We tested human subjects in a series of experiments using the method of adjustment. We found that observers always use the object axes allowing for the highest orientation discrimination, namely the axes lying closest to the horizontal/vertical. This implies that the weight the visual system attaches to axial object information is in accordance with the precision with which this information is perceived.
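
    One simple way to formalize the reported precision-weighted use of object axes is an inverse-variance weighting of the candidate axes, with noise that grows for oblique orientations (the oblique effect). The sketch below is a hedged illustration; the noise parameters and function names are invented, not estimates from the study.

    ```python
    import math

    def orientation_sd_deg(axis_deg, cardinal_sd=1.0, oblique_sd=3.0):
        """Assumed judgment noise: smallest at 0/90 deg, largest at 45 deg."""
        obliqueness = abs(math.sin(math.radians(2.0 * axis_deg)))  # 0 at cardinals, 1 at 45 deg
        return cardinal_sd + (oblique_sd - cardinal_sd) * obliqueness

    def reliability_weights(axes_deg):
        """Weight each candidate axis by its inverse variance (normalized to sum to 1)."""
        inv_var = [1.0 / orientation_sd_deg(a) ** 2 for a in axes_deg]
        total = sum(inv_var)
        return [w / total for w in inv_var]

    # An object with a near-vertical axis (5 deg) and an oblique axis (40 deg):
    print(reliability_weights([5.0, 40.0]))  # the near-cardinal axis gets most of the weight
    ```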

    Center–surround inhibition deepens binocular rivalry suppression

    When dissimilar stimuli are presented to each eye, perception alternates between the two images—a phenomenon known as binocular rivalry. It has been shown that stimuli presented in proximity to rival targets modulate the time each target is perceptually dominant. For example, presenting motion to the region surrounding the rival targets decreases the predominance of the same-direction target. Here, using a stationary concentric grating rivaling with a drifting grating, we show that a drifting surround grating also increases the depth of binocular rivalry suppression, as measured by sensitivity to a speed-discrimination probe on the rival grating. This was especially so when the surround moved in the same direction as the grating, and slightly weaker for opposite directions. Suppression in both cases was deeper than in a no-surround control condition. We hypothesize that the surround suppression often observed in area MT (V5)—a visual area implicated in visual motion perception—is responsible for this increase in suppression. In support of this hypothesis, monocular and binocular surrounds were both effective in increasing suppression depth, as were surrounds contralateral to the probed eye. Static and orthogonal-motion surrounds failed to add to the depth of rivalry suppression. These results implicate a higher-level, fully binocular area whose surround inhibition provides an additional source of suppression that sums with rivalry suppression to effectively deepen suppression of an unseen rival target.
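
    Suppression depth is commonly expressed as the elevation of a probe threshold measured during suppression relative to dominance. The abstract only states that sensitivity to a speed-discrimination probe was measured, so the log-unit formulation and the threshold values in the sketch below are assumptions for illustration.

    ```python
    import math

    def suppression_depth_log(threshold_dominant, threshold_suppressed):
        """Threshold elevation in log units; larger values indicate deeper suppression."""
        return math.log10(threshold_suppressed / threshold_dominant)

    # Hypothetical speed-discrimination thresholds (deg/s):
    print(round(suppression_depth_log(0.50, 1.00), 2))  # no-surround baseline
    print(round(suppression_depth_log(0.50, 1.60), 2))  # same-direction surround: deeper
    ```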

    Exploring the Anatomical Basis of Effective Connectivity Models with DTI-Based Fiber Tractography

    Diffusion tensor imaging (DTI) is considered a promising tool for revealing the anatomical basis of functional networks. In this study, we investigate the potential of DTI to provide the anatomical basis of the paths used in studies of effective connectivity based on structural equation modeling. We took regions of interest from eight previously published studies and examined the connectivity, as defined by DTI-based fiber tractography, between these regions. The resulting fiber tracts were then compared with the paths proposed in the original studies. For a substantial number of connections, we found fiber tracts that corresponded to the proposed paths. More importantly, we also identified a number of cases in which tractography suggested direct connections that were not included in the original analyses. We therefore conclude that DTI-based fiber tractography can be a valuable tool for studying the anatomical basis of functional networks.
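
    The comparison described above amounts to checking which model paths have a corresponding tract and which tract-derived connections the model omits. The sketch below illustrates this with hypothetical region names and paths (not the regions or studies analyzed here), treating tractography connections as undirected pairs.

    ```python
    def as_undirected(pairs):
        """Treat each connection as an unordered pair of region labels."""
        return {frozenset(p) for p in pairs}

    # Hypothetical effective-connectivity paths and tractography results:
    proposed_paths = as_undirected({("V1", "MT"), ("MT", "PPC"), ("PPC", "PFC")})
    tract_connections = as_undirected({("V1", "MT"), ("MT", "PPC"), ("V1", "PPC")})

    supported = proposed_paths & tract_connections            # model paths with a matching tract
    unsupported = proposed_paths - tract_connections          # model paths without anatomical support
    candidate_additions = tract_connections - proposed_paths  # direct connections absent from the model

    for label, conns in (("supported", supported),
                         ("unsupported", unsupported),
                         ("candidate additions", candidate_additions)):
        print(label, [tuple(sorted(c)) for c in conns])
    ```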

    The Scope and Limits of Top-Down Attention in Unconscious Visual Processing

    Attentional selection plays a critical role in conscious perception. When attention is diverted, even salient stimuli fail to reach visual awareness [1,2]. Attention can be voluntarily directed to a spatial location [3,4,5,6,7,8,9] or a visual feature [9,10,11,12,13,14] to facilitate the processing of information relevant to current goals. In everyday situations, attention and awareness are tightly coupled. This has led some to suggest that attention and awareness might be based on a common neural foundation [15,16], whereas others argue that they are mediated by distinct mechanisms [17,18,19]. A body of evidence shows that visual stimuli can be processed at multiple stages of the visual-processing streams without evoking visual awareness [20,21,22]. To illuminate the relationship between visual attention and conscious perception, we investigated whether top-down attention can target and modulate the neural representations of unconsciously processed visual stimuli. Our experiments show that spatial attention can target only consciously perceived stimuli, whereas feature-based attention can modulate the processing of invisible stimuli. The attentional modulation of unconscious signals implies that attention and awareness can be dissociated, challenging a simplistic view of the boundary between conscious and unconscious visual processing.

    Saccadic selection and crowding in visual search: stronger lateral masking leads to shorter search times

    We investigated the role of crowding in saccadic selection during visual search. Information from the visual periphery is often used to guide eye movements, and crowding is known to deteriorate the quality of peripheral information. In four search experiments, we studied the role of crowding by accompanying individual search elements with flankers. Varying the difference between target and flankers allowed us to manipulate crowding strength throughout the stimulus. We found that eye movements are biased toward areas with little crowding in conditions where the target could be discriminated peripherally. Interestingly, in conditions where the target could not be discriminated peripherally, this bias reversed toward areas with strong crowding. This led to shorter search times for a target presented in areas with stronger crowding, compared to a target presented in areas with less crowding. These findings suggest a dual role for crowding in visual search. The presence of flankers similar to the target deteriorates the quality of the peripheral target signal but can also attract eye movements, as more potential targets are present in that area.

    What is Grouping during Binocular Rivalry?

    During binocular rivalry, perception alternates between dissimilar images presented dichoptically. Although perception during rivalry is believed to originate from competition at a local level, different rivalry zones are not independent: rival targets that are spaced apart but share similar features tend to be dominant at the same time. We investigated grouping of spatially separated rival targets presented to the same or to different eyes, and presented in the same or in different hemifields. We found eye-of-origin to be the strongest cue for grouping during binocular rivalry. Grouping was additionally affected by orientation: identical orientations were grouped for longer than dissimilar orientations, even when presented to different eyes. Our results suggest that eye-based and orientation-based grouping are independent and additive in nature. Grouping effects were further modulated by the distribution of the targets across the visual field: grouping within the same hemifield can be stronger or weaker than grouping between hemifields, depending on the eye-of-origin of the grouped targets. We also quantified the contribution of these cues to the grouping of two images during binocular rivalry; these quantifications can be successfully used to predict the dominance durations reported in different studies. Incorporating the relative contributions of different cues to grouping, and their dependency on hemifield, into future models of binocular rivalry will prove useful for understanding the functional and anatomical basis of the phenomenon.
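
    As an illustration of how quantified cue contributions could feed a predictive model, the sketch below assumes a simple additive combination of eye-of-origin and orientation cues; the baseline and gain values are invented placeholders, not the contributions estimated in the study.

    ```python
    def predicted_joint_dominance_s(baseline_s, same_eye, same_orientation,
                                    eye_gain_s=0.8, orientation_gain_s=0.4):
        """same_eye / same_orientation are 1 if the grouped targets share that cue, else 0."""
        return baseline_s + eye_gain_s * same_eye + orientation_gain_s * same_orientation

    print(predicted_joint_dominance_s(1.0, same_eye=1, same_orientation=1))  # both cues shared
    print(predicted_joint_dominance_s(1.0, same_eye=1, same_orientation=0))  # eye-of-origin only
    print(predicted_joint_dominance_s(1.0, same_eye=0, same_orientation=1))  # orientation only
    ```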