
    Noise, age and gender effects on speech intelligibility and sentence comprehension for 11- to 13-year-old children in real classrooms.

    The present study aimed to investigate the effects of type of noise, age, and gender on children’s speech intelligibility (SI) and sentence comprehension (SC). The experiment was conducted with 171 children between 11 and 13 years old in ecologically-valid conditions (collective presentation in real, reverberating classrooms). Two standardized tests were used to assess SI and SC. The two tasks were presented in three listening conditions: quiet; traffic noise; and classroom noise (non-intelligible noise with the same spectrum and temporal envelope of speech, plus typical classroom sound events). Both task performance accuracy and listening effort were considered in the analyses, the latter tracked by recording the response time (RT) using a single-task paradigm. Classroom noise was found to have the worst effect on both tasks (worsening task performance accuracy and slowing RTs), due to its spectro-temporal characteristics. A developmental effect was seen in the range of ages (11–13 years), which depended on the task and listening condition. Gender effects were also seen in both tasks, girls being more accurate and quicker to respond in most listening conditions. A significant interaction emerged between type of noise, age and task, indicating that classroom noise had a greater impact on RTs for SI than for SC. Overall, these results indicate that, for 11- to 13-year-old children, performance in SI and SC tasks is influenced by aspects relating to both the sound environment and the listener (age, gender). The presence of significant interactions between these factors and the type of task suggests that the acoustic conditions that guarantee optimal SI might not be equally adequate for SC. Our findings have implications for the development of standard requirements for the acoustic design of classrooms.

    Keeping an eye on gestures: Visual perception of gestures in face-to-face communication

    Since listeners usually look at the speaker's face, gestural information has to be absorbed through peripheral visual perception. In the literature, it has been suggested that listeners look at gestures under certain circumstances: 1) when the articulation of the gesture is peripheral; 2) when the speech channel is insufficient for comprehension; and 3) when the speaker him- or herself indicates that the gesture is worthy of attention. The research reported here employs eye-tracking techniques to study the perception of gestures in face-to-face interaction. The improved control over the listener's visual channel allows us to test the validity of the above claims. We present preliminary findings substantiating claims 1 and 3, and relate them to theoretical proposals in the literature and to the issue of how visual and cognitive attention are related.

    The Effect of Age-related Declines in Inhibitory Control on Audiovisual Speech Intelligibility

    Audiovisual (AV) speech perception is perception in which both auditory and visual information are available for understanding a talker during face-to-face communication, as compared to an auditory signal alone. This form of communication yields significantly higher word-recognition performance than either sensory modality alone, constituting a general AV advantage for speech perception. Despite this overall AV advantage, older adults seem to receive less benefit from bimodal presentation than younger adults do. However, there is evidence to suggest that not all age-related deficits in AV speech perception are of a sensory nature; they are also influenced by cognitive factors (e.g., Pichora-Fuller et al., 1995). In the current study, I extend an existing model of spoken-word recognition to the AV domain and refer to the new model as the Auditory-Visual Neighborhood Activation Model (AV-NAM). The primary goal of the current study was to examine the cognitive factors that contribute to age-related and individual differences in AV perception of words varying in lexical density (i.e., easy and hard words). Forty-nine younger and 50 older adults completed a series of cognitive inhibition tasks and several spoken-word identification tasks. The words were presented in auditory-only, visual-only, and AV conditions. Overall, younger adults demonstrated better inhibitory abilities and higher word-identification performance than older adults. However, whereas no relationship was observed between inhibitory measures and word-identification performance in younger adults, there was a significant relationship in older adults between inhibition, as measured by Stroop interference, and the intelligibility of lexically difficult words. These results are interpreted within the framework of the newly adapted AV-NAM, along with the implications of inhibitory deficits for impairments in speech perception among older adults.
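    The abstract does not spell out its exact scoring procedure, but Stroop interference is commonly quantified as the response-time cost of incongruent color-word trials relative to congruent trials; as a minimal sketch (the function name and units are illustrative, not from the study):

```python
def stroop_interference(congruent_rts, incongruent_rts):
    """Stroop interference as the mean response-time (RT) cost, in ms,
    of incongruent trials (e.g., the word RED printed in blue ink)
    relative to congruent trials (RED printed in red ink).
    Larger values indicate greater susceptibility to interference,
    i.e., weaker inhibitory control.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(incongruent_rts) - mean(congruent_rts)

# Hypothetical RTs (ms): naming is slower on incongruent trials
print(stroop_interference([500, 520], [600, 640]))  # -> 110.0
```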

    Oromotor Kinematics of Speech In Children and the Effect of an External Rhythmic Auditory Stimulus

    The purpose of this study was to determine the effect of an external auditory rhythmic stimulus on the kinematics of the oromotor musculature during speech production in children and adults. To this effect, the research questions were: 1) Do children entrain labiomandibular movements to an external auditory stimulus? 2) Does the ability to entrain labiomandibular movements to an external auditory stimulus change with age? 3) Does an external auditory stimulus change the coordination and stability of the upper lip, lower lip, and jaw when producing speech sounds? The oromotor kinematics of two groups of children, aged eight to ten (n = 6) and eleven to fourteen (n = 6), were compared to the oromotor kinematics of adults (n = 12) while producing bilabial syllables with and without an external auditory stimulus. The kinematic correlates of speech production were recorded using video-based 4-dimensional motion capture technology and included measures of upper lip, lower lip and jaw displacement and their respective derivatives. The Spatiotemporal Index (a single-number indication of motor stability and pattern formation) and Synchronization Error (a numerical indication of phase deviations) were calculated for each participant within each condition. There were no statistically significant differences between age groups for the Spatiotemporal Index or for Synchronization Error. Results indicated that there were statistically significant differences in the Spatiotemporal Index across conditions, with post-hoc tests indicating that the difference was between the first condition (no rhythm) and the second condition (self-paced rhythm). Results indicated that both child groups were able to synchronize to an external auditory stimulus. Furthermore, the older child group was able to establish oromotor synchrony with near-adult abilities.
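    The Spatiotemporal Index mentioned above is conventionally computed by time- and amplitude-normalizing each repeated movement trajectory and summing the standard deviations across trials at fixed relative time points; the study's exact pipeline is not given here, so the following is only a sketch under that standard formulation (number of resampling points and normalization details are assumptions):

```python
import numpy as np

def spatiotemporal_index(trials, n_points=50):
    """Spatiotemporal Index (STI): time- and amplitude-normalize each
    movement trajectory, then sum the across-trial standard deviations
    at n_points fixed relative time points. Lower STI indicates a more
    stable, more consistently patterned movement across repetitions.
    """
    normalized = []
    for traj in trials:
        traj = np.asarray(traj, dtype=float)
        # Time-normalize: resample each trial onto a common 0..1 axis
        t_old = np.linspace(0.0, 1.0, len(traj))
        t_new = np.linspace(0.0, 1.0, n_points)
        resampled = np.interp(t_new, t_old, traj)
        # Amplitude-normalize: z-score the trial's displacement record
        resampled = (resampled - resampled.mean()) / resampled.std()
        normalized.append(resampled)
    stacked = np.vstack(normalized)  # shape: (n_trials, n_points)
    # Sum the per-time-point standard deviations computed across trials
    return float(np.std(stacked, axis=0).sum())
```

    Perfectly repeated trajectories yield an STI of zero; phase or shape variability across repetitions raises it.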

    Audio-Visual Speech Enhancement Based on Deep Learning


    The Human Auditory System

    This book presents the latest findings in clinical audiology, with a strong emphasis on new emerging technologies that facilitate and optimize a better assessment of the patient. The book has been edited with a strong educational perspective (all chapters include an introduction to their corresponding topic and a glossary of terms). The book contains material suitable for graduate students in audiology, ENT, hearing science and neuroscience.

    Physiology, Psychoacoustics and Cognition in Normal and Impaired Hearing

