
    Cortical processing and perceived timing

    It remains unclear how we determine relative perceived timing. One controversial suggestion is that timing perception is related to when analyses are completed in the cortex of the brain. An alternative proposal holds that perceived timing is instead related to the point in time at which cortical analyses commence. On that view, timing illusions should not arise from cortical analyses themselves, but they could arise from differential delays in signals reaching cortex. Resolving this controversy therefore requires that the contributions of cortical processing be isolated from the influence of subcortical activity. Here, we have done this by using binocular disparity changes, which are known to be detected via analyses that originate in cortex. We find that observers require longer stimulus exposures to detect small, relative to larger, disparity changes; observers are slower to react to smaller disparity changes; and observers misperceive smaller disparity changes as perceptually delayed. Interestingly, disparity magnitude influenced perceived timing more dramatically than it did stimulus change detection. Our data therefore suggest that perceived timing is both influenced by cortical processing and shaped by sensory analyses subsequent to those minimally necessary for stimulus change perception.

    Perceived size and spatial coding

    Images of the same physical dimensions on the retina can appear to represent objects of different sizes. One reason for this is that the human visual system takes viewing distance into account when judging apparent size. Sequentially presented images can also prompt spatial coding interactions. Here we show, using a spatial coding phenomenon (the tilt aftereffect) in tandem with viewing distance cues, that the tuning of such interactions is not simply determined by the physical dimensions of retinal input. Rather, we find that they are contingent on apparent size. Our data therefore reveal that spatial coding interactions in human vision are modulated by processes involved in the determination of apparent size.

    Audio-Visual Speech Cue Combination

    Background: Different sources of sensory information can interact, often shaping what we think we have seen or heard. This can enhance the precision of perceptual decisions relative to those made on the basis of a single source of information. From a computational perspective, there are multiple reasons why this might happen, and each predicts a different degree of enhanced precision. Relatively slight improvements can arise when perceptual decisions are made on the basis of multiple independent sensory estimates, as opposed to just one; such improvements are a consequence of probability summation. Greater improvements can occur if two initially independent estimates are summed to form a single integrated code, especially if the summation is weighted in accordance with the variance associated with each independent estimate. This form of combination is often described as a Bayesian maximum likelihood estimate. Still greater improvements are possible if the two sources of information are encoded via a common physiological process. Principal Findings: Here we show that the provision of simultaneous audio and visual speech cues can result in substantial sensitivity improvements, relative to decisions based on a single sensory modality. The magnitude of the improvements is greater than can be predicted on the basis of either a Bayesian maximum likelihood estimate or probability summation. Conclusion: Our data suggest that primary estimates of speech content are determined by a physiological process that takes input from both visual and auditory processing, resulting in greater sensitivity than would be possible if initially independent audio and visual estimates were formed and then subsequently combined.
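
    The two benchmark predictions named in this abstract can be made concrete. Below is a minimal Python sketch, not the paper's analysis code: the unimodal d' values are hypothetical, probability summation is computed under a common high-threshold approximation (success if either independent detector succeeds), and the conversion from d' to proportion correct assumes an unbiased observer.

```python
# Benchmark predictions for bimodal improvement, given unimodal d' values.
# All numbers are hypothetical; this is an illustrative sketch only.
import math

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

d_audio, d_visual = 1.0, 1.2   # assumed unimodal sensitivities (d')

# 1) Probability summation (high-threshold approximation): two independent
#    detectors; the observer succeeds if at least one of them does.
p_audio = phi(d_audio / 2)     # proportion correct for an unbiased observer
p_visual = phi(d_visual / 2)
p_prob_sum = 1 - (1 - p_audio) * (1 - p_visual)

# 2) Bayesian maximum likelihood estimate: independent Gaussian estimates
#    summed with inverse-variance weights, giving a quadratic-sum d'.
d_mle = math.sqrt(d_audio**2 + d_visual**2)

print(f"probability summation predicts {p_prob_sum:.3f} proportion correct")
print(f"MLE predicts d' = {d_mle:.2f} (best unimodal d' = {max(d_audio, d_visual):.2f})")
```

    Observed bimodal sensitivity exceeding both of these benchmarks is what licenses the abstract's inference of a common encoding process.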

    Pre-Exposure to Moving Form Enhances Static Form Sensitivity

    Background: Motion-defined form can seem to persist briefly after motion ceases, before gradually fading into the background. Here we investigate whether this subjective persistence reflects a signal capable of improving objective measures of sensitivity to static form. Methodology/Principal Findings: We presented a sinusoidal modulation of luminance, masked by a background noise pattern. The sinusoidal luminance modulation was usually subjectively invisible when static, but visible when moving. We found that drifting and then stopping the waveform resulted in a transient subjective persistence of the waveform in the static display. Observers' objective sensitivity to the position of the static waveform was also improved after viewing moving waveforms, compared with viewing static waveforms for a matched duration. This facilitation did not occur simply because movement provided more perspectives on the waveform, since performance following pre-exposure to scrambled animations did not match that following pre-exposure to smooth motion. Nor did observers simply remember waveform positions at motion offset, since removing the waveform before testing reduced performance. Conclusions/Significance: Motion processing therefore interacts with subsequent static visual inputs in a way that can improve performance on objective sensitivity measures. We suggest that the brief subjective persistence of motion-defined forms after motion offsets is a consequence of the decay of a static form signal that has been transiently enhanced by motion processing.

    Suboptimal human multisensory cue combination

    Information from different sensory modalities can interact, shaping what we think we have seen, heard, or otherwise perceived. Such interactions can enhance the precision of perceptual decisions relative to those based on information from a single sensory modality. Several computational processes could account for such improvements. Slight improvements could arise if decisions are based on multiple independent sensory estimates, as opposed to just one. Still greater improvements could arise if initially independent estimates are summed to form a single integrated code. This hypothetical process has often been described as optimal when it results in bimodal performance consistent with a summation of unimodal estimates weighted in proportion to the precision of each initially independent sensory code. Here we examine cross-modal cue combination for audio-visual temporal rate and spatial location cues. While our results are suggestive of a cross-modal encoding advantage, the degree of facilitation falls short of that predicted by a precision-weighted summation process. These data accord with other published observations, and suggest that precision-weighted combination is not a general property of human cross-modal perception.
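
    The optimality benchmark described in this abstract is conventionally computed from unimodal discrimination thresholds, which stand in for the standard deviations of the underlying sensory estimates. A minimal sketch of that calculation follows; the threshold values and units are hypothetical, not taken from the paper.

```python
# Precision-weighted (optimal) cue combination benchmark from unimodal
# thresholds. Values are hypothetical; this is an illustrative sketch only.
import math

sigma_audio = 2.0    # e.g. auditory rate-discrimination threshold (Hz)
sigma_visual = 3.0   # e.g. visual rate-discrimination threshold (Hz)

# Reliability weights: each cue is weighted by its inverse variance.
w_audio = (1 / sigma_audio**2) / (1 / sigma_audio**2 + 1 / sigma_visual**2)
w_visual = 1 - w_audio

# Predicted standard deviation of the integrated estimate; always at or
# below that of the more reliable single cue.
sigma_bimodal = math.sqrt(
    (sigma_audio**2 * sigma_visual**2) / (sigma_audio**2 + sigma_visual**2)
)

observed_bimodal = 1.9   # hypothetical measured bimodal threshold

print(f"weights: audio = {w_audio:.2f}, visual = {w_visual:.2f}")
print(f"optimal prediction: {sigma_bimodal:.2f}; observed: {observed_bimodal:.2f}")
```

    An observed bimodal threshold above the predicted one, as reported here, indicates combination that is less efficient than precision-weighted summation.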

    Fear Conditioning to Subliminal Fear Relevant and Non Fear Relevant Stimuli

    A growing body of evidence suggests that conscious visual awareness is not a prerequisite for human fear learning. For instance, humans can learn to be fearful of subliminal fear-relevant images: images depicting stimuli thought to have been fear relevant in our evolutionary context, such as snakes, spiders, and angry human faces. It has been proposed that such stimuli hold a privileged status under manipulations used to suppress normally salient images from awareness, possibly owing to a dedicated subcortical 'fear module'. Here we assess this proposition, and find it wanting. We use binocular masking to suppress awareness of images of snakes and wallabies (particularly cute, non-threatening marsupials). We find that subliminal presentations of both classes of image can induce differential fear conditioning. These data show that learning, as indexed by fear conditioning, is contingent neither on conscious visual awareness nor on subliminal conditional stimuli being fear relevant.