    Seeing a talking face matters to infants, children and adults: behavioural and neurophysiological studies

    Everyday conversations typically occur face-to-face. Over and above auditory information, visual information from a speaker's face, e.g., the lips and eyebrows, contributes to speech perception and comprehension. The facilitation that visual speech cues bring, termed the visual speech benefit, is experienced by infants, children and adults. Even so, studies on speech perception have largely focused on auditory-only speech, leaving a relative paucity of research on the visual speech benefit. Central to this thesis are the behavioural and neurophysiological manifestations of the visual speech benefit. As the visual speech benefit assumes that a listener is attending to a speaker's talking face, the investigations are conducted in relation to the possible modulating effects of gaze behaviour. Three investigations were conducted. Collectively, these studies demonstrate that visual speech information facilitates speech perception, which has implications for individuals who do not have clear access to the auditory speech signal. The results, for instance the enhancement of 5-month-olds' cortical tracking by visual speech cues and the effect of idiosyncratic differences in gaze behaviour on speech processing, expand knowledge of auditory-visual speech processing and provide firm bases for new directions in this burgeoning and important area of research.

    The Effect of Simultaneous, Irrelevant Auditory and Visual Stimuli on a Forced-Attention Dichotic Listening Test

    Many of the studies examining cognitive control during selective attention across different sensory modalities report conflicting results. This study was designed to examine the effect of an irrelevant visual stimulus and an auditory distraction of backward speech on a forced-attention dichotic listening test. I predicted that the visual stimulus and backward speech would not have a significant effect on the ear advantage. The results showed that all subjects were able to force their attention to the designated ear regardless of the visual or auditory distractors. In addition, I found that an irrelevant visual stimulus affects auditory attention more in the left visual field than in the right visual field. This suggests that top-down processing can override bottom-up processing and that auditory tasks demanding full processing capacity limit the processing of the irrelevant visual stimulus.

    Neural pathways for visual speech perception

    This paper examines two questions: what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

    Auditory perception in the ageing brain: the role of inhibition and facilitation in early processing

    Aging affects the interplay between peripheral and cortical auditory processing. Previous studies have demonstrated that older adults are less able to regulate afferent sensory information and are more sensitive to distracting information. Using auditory event-related potentials, we investigated the role of cortical inhibition in auditory and audiovisual processing in younger and older adults. Across pure-tone, auditory speech, and audiovisual speech paradigms, older adults showed a consistent pattern of inhibitory deficits, manifested as increased P50 and/or N1 amplitudes and an absent or significantly reduced N2. Older adults were still able to use congruent visual articulatory information to aid auditory processing but appeared to require greater neural effort to resolve conflicts generated by incongruent visual information. In combination, the results provide support for the Inhibitory Deficit Hypothesis of aging. They extend previous findings into the audiovisual domain and highlight older adults' ability to benefit from congruent visual information during speech processing.

    Enhanced audiovisual integration with aging in speech perception: a heightened McGurk effect in older adults

    Two experiments compared young and older adults to examine whether aging leads to a larger dependence on visual articulatory movements in auditory-visual speech perception. These experiments examined accuracy and response time in syllable identification for auditory-visual (AV) congruent and incongruent stimuli. There were also auditory-only (AO) and visual-only (VO) presentation modes. Data were analyzed only for participants with normal hearing. It was found that the older adults were more strongly influenced by visual speech than the younger adults at acoustically identical signal-to-noise ratios (SNRs) of auditory speech (Experiment 1). This was also confirmed when the SNRs of auditory speech were calibrated for equivalent AO accuracy between the two age groups (Experiment 2). There were no aging-related differences in VO lipreading accuracy. Combined with the response time data, this enhanced visual influence in the older adults was likely associated with an aging-related delay in auditory processing.

    Do you see what I mean? The role of visual speech information in lexical representations

    Human speech is necessarily multimodal, and audiovisual redundancies in speech may play a vital role in speech perception across the lifespan. The majority of previous studies have focused particularly on how language is learned from auditory input, but the way in which audiovisual speech information is perceived and comprehended remains less well understood. Here, I examine how audiovisual and visual-only speech information is represented for known words, and whether intersensory processing efficiency predicts the strength of the lexical representation. To explore the relationship between intersensory processing ability (indexed by matching temporally synchronous auditory and visual stimulation) and the strength of lexical representations, adult subjects participated in an audiovisual word recognition task and the Intersensory Processing Efficiency Protocol (IPEP). Participants were able to reliably identify the correct referent object across manipulations of modality (audiovisual vs. visual-only) and pronunciation (correctly pronounced vs. mispronounced). Correlational analyses did not reveal any relationship between processing efficiency and visual speech information in lexical representations. However, the results presented here suggest that adults' lexical representations robustly include visual speech information and that visual speech information is processed sublexically during speech perception.

    The Effect of Visual Cues Provided by Computerized Aphasia Treatment

    Individuals with chronic aphasia and apraxia can benefit from computerized treatment. However, given the variability among those individuals, treatment programs need to be customized to address specific deficits and needs. The current study examined whether visual cues provided by a computer program could enhance speech comprehension and verbal expression. Two participants practiced naming functionally relevant items in two conditions: auditory-visual and auditory-only. Both participants made more rapid and consistent improvements in the auditory-visual than in the auditory-only cueing condition. More research is necessary to investigate how visual processing skills affect the ability to utilize visual cues for speech practice.

    The Benefit of Cross-Modal Reorganization on Speech Perception in Pediatric Cochlear Implant Recipients Revealed Using Functional Near-Infrared Spectroscopy

    Cochlear implants (CIs) are the most successful treatment for severe-to-profound deafness in children. However, speech outcomes with a CI often lag behind those of normally hearing children. Some authors have attributed these deficits to the takeover of the auditory temporal cortex by vision following deafness, which has prompted some clinicians to discourage the rehabilitation of pediatric CI recipients using visual speech. We studied this cross-modal activity in the temporal cortex, along with responses to auditory speech and non-speech stimuli, in experienced CI users and normally hearing controls of school age, using functional near-infrared spectroscopy. Strikingly, CI users displayed significantly greater cortical responses to visual speech, compared with controls. Importantly, in the same regions, the processing of auditory speech, compared with non-speech stimuli, did not significantly differ between the groups. This suggests that visual and auditory speech are processed synergistically in the temporal cortex of children with CIs, and that these children should be encouraged, rather than discouraged, to use visual speech.

    Brain activity during shadowing of audiovisual cocktail party speech: contributions of auditory-motor integration and selective attention

    Selective listening to cocktail-party speech involves a network of auditory and inferior frontal cortical regions. However, cognitive and motor cortical regions are differentially activated depending on whether the task emphasizes semantic or phonological aspects of speech. Here we tested whether processing of cocktail-party speech differs when participants perform a shadowing (immediate speech repetition) task compared to an attentive listening task in the presence of irrelevant speech. Participants viewed audiovisual dialogues with concurrent distracting speech during functional imaging. Participants either attentively listened to the dialogue, overtly repeated (i.e., shadowed) the attended speech, or performed visual or speech motor control tasks in which they did not attend to speech and responses were not related to the speech input. Dialogues were presented with good or poor auditory and visual quality. As a novel result, we show that attentive processing of speech activated the same network of sensory and frontal regions during listening and shadowing. However, in the superior temporal gyrus (STG), peak activations during shadowing were posterior to those during listening, suggesting that an anterior-posterior distinction between motor and perceptual processing of speech is present already at the level of the auditory cortex. We also found that activations along the dorsal auditory processing stream were specifically associated with the shadowing task. These activations are likely due to complex interactions between perceptual, attention-dependent speech processing and motor speech generation that matches the heard speech. Our results suggest that interactions between perceptual and motor processing of speech rely on a distributed network of temporal and motor regions rather than any specific anatomical landmark as suggested by some previous studies.

    Examining the McGurk illusion using high-field 7 Tesla functional MRI

    In natural communication, speech perception is profoundly influenced by observable mouth movements. The additional visual information can greatly facilitate intelligibility, but incongruent visual information may also lead to novel percepts that match neither the auditory nor the visual information, as evidenced by the McGurk effect. Recent models of audiovisual (AV) speech perception accentuate the role of speech motor areas and of integrative brain sites in the vicinity of the superior temporal sulcus (STS) for speech perception. In this event-related 7 Tesla fMRI study we used three naturally spoken syllable pairs with matching AV information and one syllable pair designed to elicit the McGurk illusion. The data analysis focused on brain sites involved in the processing and fusing of AV speech and engaged in the analysis of auditory and visual differences within AV-presented speech. Successful fusion of AV speech is related to activity within the STS of both hemispheres. Our data support and extend the audio-visual-motor model of speech perception by dissociating areas involved in perceptual fusion from areas more generally related to the processing of AV incongruence.