
    Estimation of cortical magnification from positional error in normally sighted and amblyopic subjects

    We describe a method for deriving the linear cortical magnification factor from positional error across the visual field. We compared magnification obtained from this method between normally sighted individuals and amblyopic individuals, who receive atypical visual input during development. The cortical magnification factor was derived for each subject from positional error at 32 locations in the visual field, using an established model of conformal mapping between retinal and cortical coordinates. Magnification of the normally sighted group matched estimates from previous physiological and neuroimaging studies in humans, confirming the validity of the approach. The estimate of magnification for the amblyopic group was significantly lower than that of the normal group: by 4.4 mm deg⁻¹ at 1° eccentricity, assuming a constant scaling factor for both groups. These estimates, if correct, suggest a role for early visual experience in establishing retinotopic mapping in cortex. We discuss the implications of altered cortical magnification for cortical size, and consider other neural changes that may account for the amblyopic results.
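
    The "established model of conformal mapping" referred to above is not specified in this abstract; a common choice in the literature is a log-polar map with an inverse-linear magnification function. The sketch below illustrates that general form only. The parameter values (K_MM, E2_DEG) and function names are illustrative assumptions, not the study's fitted estimates or code.

```python
# Minimal sketch, assuming an inverse-linear magnification function
# M(E) = k / (E + E2) and the conformal log map w = k * ln(z + E2) that
# relates a retinal position z (degrees, as a complex number) to a cortical
# position w (mm). Parameter values are placeholders for illustration.
import numpy as np

K_MM = 17.3      # scaling constant in mm (assumed)
E2_DEG = 0.75    # eccentricity at which magnification halves (assumed)

def magnification_mm_per_deg(ecc_deg):
    """Linear cortical magnification (mm/deg) at a given eccentricity."""
    return K_MM / (ecc_deg + E2_DEG)

def retina_to_cortex_mm(x_deg, y_deg):
    """Map a visual-field location to cortical coordinates via the log map."""
    z = x_deg + 1j * y_deg
    w = K_MM * np.log(z + E2_DEG)
    return w.real, w.imag

if __name__ == "__main__":
    for ecc in (1.0, 2.0, 5.0, 10.0):
        print(f"M({ecc:>4} deg) ~ {magnification_mm_per_deg(ecc):.2f} mm/deg")
```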

    When a photograph can be heard: Vision activates the auditory cortex within 110 ms

    As the makers of silent movies knew well, it is not necessary to provide an actual auditory stimulus to activate the sensation of sounds typically associated with what we are viewing. Thus, you could almost hear the neigh of Rodolfo Valentino's horse, even though the film was mute. Evidence is provided that the mere sight of a photograph associated with a sound can activate the associative auditory cortex. High-density ERPs were recorded in 15 participants while they viewed hundreds of perceptually matched images that were associated (or not) with a given sound. Sound stimuli were discriminated from non-sound stimuli as early as 110 ms. SwLORETA reconstructions showed common activation of ventral stream areas for both types of stimuli and of the associative temporal cortex, at the earliest stage, only for sound stimuli. The primary auditory cortex (BA41) was also activated by sound images after ∼200 ms.

    The effects of stereo disparity on the behavioural and electrophysiological correlates of audio-visual motion in depth.

    Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135–160 ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140–200 ms, 220–280 ms, and 350–500 ms after stimulus onset.
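
    The congruency effect used above is simply the incongruent-minus-congruent difference, computed separately for the 2D and 3D cue conditions and then compared. A minimal sketch of that arithmetic is given below; the reaction-time values are hypothetical and serve only to illustrate the computation.

```python
# Minimal sketch of the congruency-effect computation described above:
# effect = incongruent minus congruent, per depth-cue condition.
# The mean RTs below are hypothetical, for illustration only.
mean_rt_ms = {
    ("2D", "congruent"): 520.0, ("2D", "incongruent"): 545.0,
    ("3D", "congruent"): 505.0, ("3D", "incongruent"): 560.0,
}

def congruency_effect(depth_cue):
    return mean_rt_ms[(depth_cue, "incongruent")] - mean_rt_ms[(depth_cue, "congruent")]

effect_2d, effect_3d = congruency_effect("2D"), congruency_effect("3D")
print(f"2D congruency effect: {effect_2d:.0f} ms")
print(f"3D congruency effect: {effect_3d:.0f} ms")
print(f"3D minus 2D (interaction): {effect_3d - effect_2d:.0f} ms")
```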

    The effect of long-term unilateral deafness on the activation pattern in the auditory cortices of French-native speakers: influence of deafness side

    Background: In normal-hearing subjects, monaural stimulation produces a normal pattern of asynchrony and asymmetry over the auditory cortices in favour of the contralateral temporal lobe. While late-onset unilateral deafness has been reported to change this pattern, the exact influence of the side of deafness on central auditory plasticity remains unclear. The present study assessed whether left-sided and right-sided deafness had differential effects on the characteristics of neurophysiological responses over auditory areas. Eighteen unilaterally deaf and 16 normal-hearing right-handed subjects participated. All unilaterally deaf subjects had post-lingual deafness. Long-latency auditory evoked potentials (late-AEPs) were elicited by two types of stimuli, non-speech (a 1 kHz tone burst) and speech (the voiceless syllable /pa/), delivered to the intact ear at 50 dB SL. The latencies and amplitudes of the early exogenous components (N100 and P150) were measured using temporal scalp electrodes. Results: Subjects with left-sided deafness showed major neurophysiological changes, in the form of a more symmetrical activation pattern over auditory areas in response to the non-speech sound and even a significant reversal of the activation pattern, in favour of the cortex ipsilateral to the stimulation, in response to the speech sound. This was observed not only for AEP amplitudes but also for the AEP time course. In contrast, no significant changes were observed for late-AEP responses in subjects with right-sided deafness. Conclusion: The results show that cortical reorganization induced by unilateral deafness occurs mainly in subjects with left-sided deafness. This suggests that anatomical and functional plastic changes are more likely to occur in the right than in the left auditory cortex. The possible perceptual correlates of such neurophysiological changes are discussed.

    Neural correlates of audiovisual motion capture

    Visual motion can affect the perceived direction of auditory motion (i.e., audiovisual motion capture). It is debated, though, whether this effect occurs at perceptual or decisional stages. Here, we examined the neural consequences of audiovisual motion capture using the mismatch negativity (MMN), an event-related brain potential reflecting pre-attentive auditory deviance detection. In an auditory-only condition occasional changes in the direction of a moving sound (deviant) elicited an MMN starting around 150 ms. In an audiovisual condition, auditory standards and deviants were synchronized with a visual stimulus that moved in the same direction as the auditory standards. These audiovisual deviants did not evoke an MMN, indicating that visual motion reduced the perceptual difference between sound motion of standards and deviants. The inhibition of the MMN by visual motion provides evidence that auditory and visual motion signals are integrated at early sensory processing stages.

    Audiovisual Non-Verbal Dynamic Faces Elicit Converging fMRI and ERP Responses

    In an everyday social interaction we automatically integrate another’s facial movements and vocalizations, be they linguistic or otherwise. This requires audiovisual integration of a continual barrage of sensory input—a phenomenon previously well-studied with human audiovisual speech, but not with non-verbal vocalizations. Using both fMRI and ERPs, we assessed neural activity to viewing and listening to an animated female face producing non-verbal, human vocalizations (i.e., coughing, sneezing) under audio-only (AUD), visual-only (VIS) and audiovisual (AV) stimulus conditions, alternating with Rest (R). Underadditive effects occurred in regions dominant for sensory processing, which showed AV activation greater than the dominant modality alone. Right posterior temporal and parietal regions showed an AV maximum in which AV activation was greater than either modality alone, but not greater than the sum of the unisensory conditions. Other frontal and parietal regions showed Common-activation in which AV activation was the same as one or both unisensory conditions. ERP data showed an early superadditive effect (AV > AUD + VIS, no rest), mid-range underadditive effects for auditory N140 and face-sensitive N170, and late AV maximum and common-activation effects. Based on convergence between fMRI and ERP data, we propose a mechanism where a multisensory stimulus may be signaled or facilitated as early as 60 ms and facilitated in sensory-specific regions by increasing processing speed (at N170) and efficiency (decreasing amplitude in auditory and face-sensitive cortical activation and ERPs). Finally, higher-order processes are also altered, but in a more complex fashion.
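
    The abstract distinguishes several audiovisual response profiles: superadditive (AV greater than AUD + VIS), AV maximum (AV greater than either modality alone but not greater than their sum), and common activation (AV comparable to a unisensory response). The sketch below applies those comparison rules to hypothetical response amplitudes; the tolerance and the example values are assumptions for illustration, not data from the study.

```python
# Hedged sketch of the response-profile taxonomy named above, applied to
# hypothetical response amplitudes (arbitrary units).
TOL = 0.05  # tolerance for treating two amplitudes as equivalent (assumed)

def classify_av_profile(aud, vis, av):
    """Classify an audiovisual response relative to its unisensory parts."""
    if av > aud + vis + TOL:
        return "superadditive (AV > AUD + VIS)"
    if av > max(aud, vis) + TOL:
        return "AV maximum (AV > max(AUD, VIS), but not > their sum)"
    if abs(av - aud) <= TOL or abs(av - vis) <= TOL:
        return "common activation (AV ~ a unisensory response)"
    return "below the dominant unisensory response (possible suppression)"

print(classify_av_profile(aud=1.0, vis=0.4, av=1.6))  # superadditive
print(classify_av_profile(aud=1.0, vis=0.4, av=1.2))  # AV maximum
print(classify_av_profile(aud=1.0, vis=0.4, av=1.0))  # common activation
```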

    Monkeys and Humans Share a Common Computation for Face/Voice Integration

    Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a “race” model failed to account for their behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.
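
    The race and superposition accounts contrasted above can be sketched with a simple simulation. In the race account the bimodal response is triggered by whichever unisensory detection finishes first; in the superposition account evidence from the two modalities sums linearly and a response occurs when the summed activity reaches a bound. The distributions, rates, and bound below are assumptions chosen purely for illustration, not the authors' model code.

```python
# Illustrative Monte Carlo sketch of the two accounts named above.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical unisensory detection times (ms): shifted exponentials (assumed).
rt_aud = 180 + rng.exponential(120, N)
rt_vis = 200 + rng.exponential(150, N)

# Race model: on each trial, the faster of the two channels wins.
rt_race = np.minimum(rt_aud, rt_vis)

# Superposition model: linear accumulation at rates proportional to 1 / mean RT;
# the summed rate reaches the shared bound sooner than either rate alone.
bound = 1.0
rate_aud, rate_vis = bound / rt_aud.mean(), bound / rt_vis.mean()
rt_super = bound / (rate_aud + rate_vis)

print(f"Mean auditory-only RT:   {rt_aud.mean():6.1f} ms")
print(f"Mean visual-only RT:     {rt_vis.mean():6.1f} ms")
print(f"Race-model AV mean RT:   {rt_race.mean():6.1f} ms")
print(f"Superposition AV RT:     {rt_super:6.1f} ms")
```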

    Top-down and bottom-up modulation in processing bimodal face/voice stimuli

    Background: Processing of multimodal information is a critical capacity of the human brain, and classic studies show that bimodal stimulation can either facilitate or interfere with perceptual processing. Comparing activity to congruent and incongruent bimodal stimuli can reveal sensory dominance in particular cognitive tasks. Results: We investigated audiovisual interactions driven by stimulus properties (bottom-up influences) or by task (top-down influences) on congruent and incongruent simultaneously presented faces and voices while ERPs were recorded. Subjects performed gender categorisation, directing attention either to faces or to voices, and also judged whether the face/voice stimuli were congruent in terms of gender. Behaviourally, the unattended modality affected processing in the attended modality: the disruption was greater for attended voices. ERPs revealed top-down modulations of early brain processing (30–100 ms) over unisensory cortices. No effects were found on the N170 or VPP, but from 180–230 ms larger right frontal activity was seen for incongruent than for congruent stimuli. Conclusions: Our data demonstrate that in a gender categorisation task the processing of faces dominates over the processing of voices. Brain activity showed different modulation by top-down and bottom-up information. Top-down influences modulated early brain activity, whereas bottom-up interactions occurred relatively late.

    The Timing of the Cognitive Cycle

    We propose that human cognition consists of cascading cycles of recurring brain events. Each cognitive cycle senses the current situation, interprets it with reference to ongoing goals, and then selects an internal or external action in response. While most aspects of the cognitive cycle are unconscious, each cycle also yields a momentary “ignition” of conscious broadcasting. Neuroscientists have independently proposed ideas similar to the cognitive cycle, the fundamental hypothesis of the LIDA model of cognition. High-level cognition, such as deliberation, planning, etc., is typically enabled by multiple cognitive cycles. In this paper we describe a timing model of LIDA's cognitive cycle. Based on empirical and simulation data, we propose that an initial phase of perception (stimulus recognition) occurs 80–100 ms from stimulus onset under optimal conditions. It is followed by a conscious episode (broadcast) 200–280 ms after stimulus onset, and an action selection phase 60–110 ms from the start of the conscious phase. One cognitive cycle would therefore take 260–390 ms. The LIDA timing model is consistent with brain evidence indicating a fundamental role for a theta-gamma wave, spreading forward from sensory cortices to rostral corticothalamic regions. This posteriofrontal theta-gamma wave may be experienced as a conscious perceptual event starting at 200–280 ms post stimulus. The action selection component of the cycle is proposed to involve frontal, striatal and cerebellar regions. Thus the cycle is inherently recurrent, as the anatomy of the thalamocortical system suggests. The LIDA model fits a large body of cognitive and neuroscientific evidence. Finally, we describe two LIDA-based software agents: the LIDA Reaction Time agent, which simulates human performance in a simple reaction time task, and the LIDA Allport agent, which models phenomenal simultaneity within timeframes comparable to those of human subjects. While there are many models of reaction time performance, these results fall naturally out of a biologically and computationally plausible cognitive architecture.
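
    The 260–390 ms cycle duration quoted above follows directly from the phase estimates: the conscious broadcast latency is measured from stimulus onset, and the action-selection phase is measured from the start of the conscious phase, so their interval bounds add. A minimal sketch of that arithmetic, using only the figures given in the abstract:

```python
# Timing arithmetic from the abstract: cycle duration = conscious-broadcast
# latency (from stimulus onset) + action-selection duration (from the start
# of the conscious phase).
conscious_broadcast_ms = (200, 280)   # after stimulus onset
action_selection_ms = (60, 110)       # after start of the conscious phase

cycle_min = conscious_broadcast_ms[0] + action_selection_ms[0]
cycle_max = conscious_broadcast_ms[1] + action_selection_ms[1]
print(f"One cognitive cycle: {cycle_min}-{cycle_max} ms")  # 260-390 ms
```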