    The Effects of Cued Speech on Phonemic Awareness Skills

    Research suggests phonemic awareness is enhanced through multimodality training. Cued Speech is a multimodality system that combines hand signs with mouth movements to represent the phonemes of a spoken language. The system has been used successfully to develop phonological awareness in children with hearing loss, but no research is available on its effectiveness with children who are not deaf or hard of hearing. This study evaluated the efficacy of Cued Speech for enhancing phonological skills in typically developing 1st-grade students. Twenty-six 1st graders identified as low-achieving readers by their classroom teachers were administered the PPVT-4 to match participants across three assigned research groups: no intervention (NI), phonemic awareness training auditory only (AO), or phonemic awareness training with Cued Speech (CS). Pre- and post-test scores were compared on six different skills from the Phonological Awareness Test 2 (PAT-2). Results indicated that the Cued Speech intervention group made the greatest gains on the PAT-2 from pre- to post-intervention. Although no statistically significant difference was found when the three groups' post-intervention scores were compared, the CS group did show significant gains across its participants.
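
    The abstract contrasts a between-group comparison of post-intervention scores (not significant) with within-group pre/post gains in the CS group (significant). A minimal sketch of how those two tests could be run follows; the specific tests (one-way ANOVA, paired t-test) and all score values are illustrative assumptions, not the study's actual analysis or data.

```python
# Hypothetical sketch of the two analyses the abstract contrasts. All scores are
# simulated placeholders; group sizes roughly split 26 participants three ways.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# PAT-2 composite scores for the three groups (NI, AO, CS), pre and post.
groups = {
    "NI": {"pre": rng.normal(40, 5, 9), "post": rng.normal(42, 5, 9)},
    "AO": {"pre": rng.normal(40, 5, 9), "post": rng.normal(45, 5, 9)},
    "CS": {"pre": rng.normal(40, 5, 8), "post": rng.normal(50, 5, 8)},
}

# Between-group test on post-intervention scores (one-way ANOVA).
f_stat, p_between = stats.f_oneway(*(g["post"] for g in groups.values()))
print(f"Between-group ANOVA on post-test scores: F={f_stat:.2f}, p={p_between:.3f}")

# Within-group pre/post comparison for the Cued Speech group (paired t-test).
t_stat, p_within = stats.ttest_rel(groups["CS"]["post"], groups["CS"]["pre"])
print(f"CS group pre/post paired t-test: t={t_stat:.2f}, p={p_within:.3f}")
```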

    Lip movements and lexical features improve speech tracking differently for clear and multi-speaker speech

    Visual speech plays a powerful role in facilitating auditory speech processing and became a publicly noticed topic with the widespread use of face masks during the COVID-19 pandemic. In a previous magnetoencephalography (MEG) study, we showed that occluding the mouth area significantly impairs neural speech tracking. To rule out the possibility that this deterioration is due to degraded sound quality, in the present follow-up study we presented participants with audiovisual (AV) and audio-only (A) speech and further manipulated the trials independently by adding a face mask and a distractor speaker. Our results clearly show that face masks affect speech tracking only in AV conditions, not in A conditions, indicating that face masks primarily impact speech processing by blocking visual speech rather than by acoustic degradation. Furthermore, we observe differences in the speech features used for visual speech processing. On the one hand, processing in clear speech, but not in noisy speech, profits more from lexical unit features (phonemes and word onsets), hinting at improved phoneme discrimination. On the other hand, we observe an improvement in speech tracking driven by modulations of the lip area in clear speech and in conditions with a distractor speaker, which might aid processing by providing temporal cues for subsequent auditory analysis. With this work, we highlight the effects of face masks on AV speech tracking and show two separate ways in which visual speech might support successful speech processing.
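
    The abstract does not spell out how neural speech tracking was estimated, but tracking of stimulus features is commonly quantified with a lagged linear (temporal response function) encoding model. The sketch below illustrates that general approach on simulated data; the feature set, lag range, ridge parameter, and all signals are assumptions, not the study's actual pipeline or recordings.

```python
# Lagged linear encoding model: stimulus features (envelope, lip area, lexical
# onsets) are regressed onto a neural channel, and "tracking" is the correlation
# between predicted and measured activity. All signals here are simulated.
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(1)
n = 6000                          # 60 s of data at an assumed 100 Hz
lags = np.arange(0, 30)           # 0-290 ms of stimulus-to-brain lags (assumed)

# Stand-in stimulus features.
envelope = rng.random(n)
lip_area = rng.random(n)
onsets = (rng.random(n) < 0.02).astype(float)

# Simulated "MEG" channel that lags behind the envelope and lip area.
meg = 0.5 * np.roll(envelope, 10) + 0.3 * np.roll(lip_area, 15) + rng.normal(0, 1, n)

def lagged(feature: np.ndarray, lags: np.ndarray) -> np.ndarray:
    """Design matrix of time-lagged copies of a single stimulus feature."""
    return np.column_stack([np.roll(feature, lag) for lag in lags])

X = np.hstack([lagged(f, lags) for f in (envelope, lip_area, onsets)])

# Ridge-regularised least squares: w = (X'X + lambda*I)^(-1) X'y.
lam = 1e2
w = solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ meg)

# In practice this correlation is estimated with cross-validation, not in-sample.
tracking = np.corrcoef(X @ w, meg)[0, 1]
print(f"speech tracking (prediction correlation): {tracking:.2f}")
```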

    Recognizing Speech in a Novel Accent: The Motor Theory of Speech Perception Reframed

    The motor theory of speech perception holds that we perceive the speech of another in terms of a motor representation of that speech. However, once we have learned to recognize a foreign accent, it seems plausible that recognizing a word rarely involves reconstructing the speech gestures of the speaker rather than those of the listener. To better assess the motor theory in light of this observation, we proceed in three stages. Part 1 places the motor theory of speech perception in a larger framework based on our earlier models of the adaptive formation of mirror neurons for grasping, and on viewing extensions of that mirror system as part of a larger system for neuro-linguistic processing, augmented here by the consideration of recognizing speech in a novel accent. Part 2 then offers a novel computational model of how a listener comes to understand the speech of someone speaking the listener's native language with a foreign accent. The core tenet of the model is that the listener uses hypotheses about the word the speaker is currently uttering to update probabilities linking the sounds produced by the speaker to phonemes in the listener's native repertoire, which on average improves the recognition of later words. This model is neutral regarding the nature of the representations it uses (motor vs. auditory). It serves as a reference point for the discussion in Part 3, which proposes a dual-stream neuro-linguistic architecture, revisits claims for and against the motor theory of speech perception and the relevance of mirror neurons, and extracts some implications for reframing the motor theory.
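
    The model's core tenet, as stated above, can be illustrated with a toy sketch: a listener maintains probabilities linking the speaker's sounds to native phoneme categories and sharpens them whenever a word hypothesis reveals the intended phoneme. The class below is a simple count-based illustration under an assumed phoneme inventory; it is not the paper's actual model, which is neutral about motor vs. auditory representations.

```python
# Toy listener model: a probability table P(phoneme | speaker's sound), updated
# from top-down word hypotheses. Inventories and the update rule are assumptions.
from collections import defaultdict

class AccentAdapter:
    def __init__(self, phoneme_inventory, prior: float = 1.0):
        self.inventory = list(phoneme_inventory)
        # counts[sound][phoneme]: accumulated evidence that this speaker's sound
        # realises that native phoneme; initialised to a flat prior.
        self.counts = defaultdict(lambda: {p: prior for p in self.inventory})

    def p_phoneme_given_sound(self, sound: str, phoneme: str) -> float:
        row = self.counts[sound]
        return row[phoneme] / sum(row.values())

    def update_from_word_hypothesis(self, heard_sounds, hypothesised_phonemes):
        # Align the heard sounds with the phonemes of the hypothesised word and
        # increment the corresponding sound-to-phoneme counts.
        for sound, phoneme in zip(heard_sounds, hypothesised_phonemes):
            self.counts[sound][phoneme] += 1.0

# Example: a speaker systematically produces [z] where the listener expects /s/;
# word hypotheses such as "seat" reveal the intended phoneme and shift the mapping.
adapter = AccentAdapter(phoneme_inventory=["s", "z", "i", "t"])
for _ in range(10):
    adapter.update_from_word_hypothesis(["z", "i", "t"], ["s", "i", "t"])
print(round(adapter.p_phoneme_given_sound("z", "s"), 2))  # 0.79, rising with more evidence
```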

    Neural Speech Tracking Highlights the Importance of Visual Speech in Multi-speaker Situations

    Visual speech plays a powerful role in facilitating auditory speech processing and became a publicly noticed topic with the widespread use of face masks during the COVID-19 pandemic. In a previous magnetoencephalography study, we showed that occluding the mouth area significantly impairs neural speech tracking. To rule out the possibility that this deterioration is due to degraded sound quality, in the present follow-up study we presented participants with audiovisual (AV) and audio-only (A) speech and further manipulated the trials independently by adding a face mask and a distractor speaker. Our results clearly show that face masks affect speech tracking only in AV conditions, not in A conditions, indicating that face masks primarily impact speech processing by blocking visual speech rather than by acoustic degradation. We further characterize how the spectrogram, lip movements, and lexical units are tracked at the sensor level, and we find visual benefits for tracking the spectrogram, especially in the multi-speaker condition. While lip movements show an additional improvement and visual benefit over tracking of the spectrogram only in clear speech conditions, lexical units (phonemes and word onsets) show no visual enhancement at all. We hypothesize that in young, normal-hearing individuals, visual input is used less for specific feature extraction and acts more as a general resource for guiding attention.
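
    One way the per-feature "visual benefit" contrast described above could be quantified is as the within-participant difference in tracking between AV and A trials, tested against zero. The sketch below uses random placeholder values and an assumed paired permutation test; it is not the study's analysis or data.

```python
# Visual benefit = per-participant AV-minus-A difference in tracking correlation.
# All values are random placeholders; the test choice is an assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_participants = 20                                      # placeholder sample size

r_audio_only  = rng.normal(0.08, 0.03, n_participants)   # tracking in A trials
r_audiovisual = rng.normal(0.11, 0.03, n_participants)   # tracking in AV trials

benefit = r_audiovisual - r_audio_only

# One-sample permutation test (sign-flipping the differences under the null).
result = stats.permutation_test((benefit,), np.mean, permutation_type="samples",
                                n_resamples=5000)
print(f"mean visual benefit = {benefit.mean():.3f}, p = {result.pvalue:.3f}")
```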

    Cerebral correlates and statistical criteria of cross-modal face and voice integration

    Perception of faces and voices plays a prominent role in human social interaction, making multisensory integration of cross-modal speech a topic of great interest in cognitive neuroscience. How to define potential sites of multisensory integration using functional magnetic resonance imaging (fMRI) is currently under debate, with three statistical criteria frequently used (the super-additive, max and mean criteria). In the present fMRI study, 20 participants were scanned in a block design under three stimulus conditions: dynamic unimodal face, unimodal voice and bimodal face–voice. Using this single dataset, we examine all these statistical criteria in an attempt to define loci of face–voice integration. While the super-additive and mean criteria essentially revealed regions in which one of the unimodal responses was a deactivation, the max criterion appeared stringent and only highlighted the left hippocampus as a potential site of face–voice integration. Psychophysiological interaction analysis showed that connectivity between occipital and temporal cortices increased during bimodal compared to unimodal conditions. We conclude that, when investigating multisensory integration with fMRI, all these criteria should be used in conjunction with manipulation of stimulus signal-to-noise ratio and/or cross-modal congruency.
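
    The three statistical criteria named above compare the bimodal response with the unimodal responses in different ways: super-additive requires the bimodal response to exceed the sum of the unimodal responses, max requires it to exceed the larger unimodal response, and mean requires it to exceed their average. The sketch below illustrates these comparisons on made-up values; in practice they are evaluated voxel-wise with appropriate statistical tests rather than simple inequalities.

```python
# The three criteria applied to mean responses (e.g., beta estimates) for the
# unimodal face (F), unimodal voice (V), and bimodal face-voice (FV) conditions.
def integration_criteria(face: float, voice: float, bimodal: float) -> dict:
    return {
        # Super-additive: bimodal response exceeds the sum of unimodal responses.
        "super-additive": bimodal > face + voice,
        # Max: bimodal response exceeds the larger of the two unimodal responses.
        "max": bimodal > max(face, voice),
        # Mean: bimodal response exceeds the average of the two unimodal responses.
        "mean": bimodal > (face + voice) / 2.0,
    }

# Example: a region with a deactivated unimodal voice response can pass the
# super-additive and mean criteria without passing the stricter max criterion.
print(integration_criteria(face=1.2, voice=-0.4, bimodal=1.1))
```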