
    Differences in Processing Speech-on-Speech Between Musicians and Non-musicians: The Role of Prosodic Cues.

    In the current study, we investigate the role of prosodic cues in speech-on-speech perception in musicians and non-musicians. Earlier studies have shown that musically experienced listeners may have an advantage in speech-on-speech performance in behavioral tasks (1, 2). Previously, we have also shown in an eye-tracking study that musical experience has an effect on the timing of the resolution of lexical competition when processing quiet vs. masked speech (3). In particular, musicians were faster in lexical decision-making when a two-talker masker was added to the target speech. However, the source of the difference observed between groups remained unclear. In the current study, employing a visual world paradigm, we aim to clarify whether musicians make use of the durational cues that contribute to prosodic boundaries in Dutch when resolving lexical competition while processing quiet vs. two-talker-masked speech. If musical training preserves listeners' sensitivity to the acoustic correlates of prosodic boundaries when processing masked speech, we expect to observe more lexical competition and delayed lexical resolution in musicians. We will compare gaze-tracking and pupil data from both groups across conditions.

    Pupillary correlates of auditory emotion recognition in older hearing-impaired listeners

    Hearing-impaired (HI) individuals are shown to perform worse in auditory emotion recognition tasks compared to normal-hearing individuals. It is still unclear if this is due to processing at low auditory levels or to the categorisation of emotions involved in an experimental task (1). An index of emotion recognition can be observed in pupil dilations, which have recently been shown to be larger for emotionally meaningful speech than for emotionally neutral speech (2). We fitted 8 older HI participants, who had moderate to severe sloping high-frequency hearing loss, with frequency-lowering-enabled hearing aids for an acclimatisation period of 3-6 weeks. We recorded their pupil dilations in response to emotional speech with and without frequency lowering, during a passive-listening condition, both before and after the acclimatisation period. We also recorded their pupil dilations during an active-listening condition, which included a behavioural emotion identification task, after the acclimatisation period. We present here insights into the pupillary correlates of vocal emotion recognition in the HI population, and into the impact of frequency lowering and of the cognitive involvement elicited by the experimental situation on pupil dilation and emotion recognition capabilities in this population. (This project has received funding from the European Union's H2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 675324.)
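    Pupil-dilation measures like those above are conventionally baseline-corrected by subtracting the mean pupil size in a short pre-stimulus window, so that event-related dilation is measured relative to each trial's starting level. The abstract does not describe its analysis pipeline, so the helper below (a hypothetical `baseline_correct` applied to a synthetic trace) is only an illustrative sketch of that common step:

```python
import numpy as np

def baseline_correct(trace, sample_rate, baseline_dur=0.5):
    # Subtract the mean pupil size over the pre-stimulus baseline
    # window (here the first `baseline_dur` seconds) from the trace.
    n_base = int(baseline_dur * sample_rate)
    return trace - trace[:n_base].mean()

# Synthetic 2-second trace sampled at 60 Hz: a constant 4.0 mm
# baseline followed by a simulated 0.3 mm event-related dilation.
rate = 60
trace = np.full(2 * rate, 4.0)
trace[rate:] += 0.3
corrected = baseline_correct(trace, rate)
print(round(corrected[rate:].mean(), 3))  # prints 0.3
```

    Baseline correction of this kind removes between-trial and between-participant differences in absolute pupil size, leaving the task-evoked dilation as the measure of interest.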

    The impact of speech type on listening effort and intelligibility for native and non-native listeners

    Listeners are routinely exposed to many different types of speech, including artificially-enhanced and synthetic speech, styles which deviate to a greater or lesser extent from naturally-spoken exemplars. While the impact of differing speech types on intelligibility is well-studied, it is less clear how such types affect cognitive processing demands, and in particular whether the speech forms with the greatest intelligibility in noise impose a commensurately lower listening effort. The current study measured intelligibility, self-reported listening effort, and a pupillometry-based measure of cognitive load for four distinct types of speech: (i) plain, i.e. natural unmodified, speech; (ii) Lombard speech, a naturally-enhanced form which occurs when speaking in the presence of noise; (iii) artificially-enhanced speech, which involves spectral shaping and dynamic range compression; and (iv) speech synthesized from text. In the first experiment, a cohort of 26 native listeners responded to the four speech types in three levels of speech-shaped noise. In a second experiment, 31 non-native listeners underwent the same procedure at more favorable signal-to-noise ratios, chosen since listening in noise in a second language has a more detrimental effect on intelligibility than listening in a first language. For both native and non-native listeners, artificially-enhanced speech was the most intelligible and led to the lowest subjective effort ratings, while the reverse was true for synthetic speech. However, pupil data suggested that Lombard speech elicited the lowest processing demands overall. These outcomes indicate that the relationship between intelligibility and cognitive processing demands is not a simple inverse, but is mediated by speech type. The findings of the current study motivate the search for speech modification algorithms that are optimized for both intelligibility and listening effort.
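    The noise conditions above depend on presenting speech and masker at controlled signal-to-noise ratios. The paper's actual stimulus pipeline is not given here, but the standard mixing step can be sketched as follows (hypothetical `mix_at_snr` helper, with synthetic white-noise signals standing in for speech and speech-shaped noise):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    # Scale the masker so that the speech-to-noise power ratio of the
    # mixture equals snr_db, then add it to the speech signal.
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
speech = rng.normal(size=16000)  # stand-in for a speech waveform
noise = rng.normal(size=16000)   # stand-in for speech-shaped noise
mixed = mix_at_snr(speech, noise, -3.0)  # a -3 dB SNR condition
```

    In practice, speech-shaped noise is white noise filtered to match the long-term average spectrum of speech, so that it masks all speech frequencies roughly equally.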


    Are musicians at an advantage when processing speech on speech?

    Several studies have shown that musicians may have an advantage in a variety of auditory tasks, including speech-in-noise perception. The current study explores whether musical training enhances the understanding of two-talker-masked speech. By combining an off-line and an on-line measure of speech perception, we investigated how automatic processes can contribute to the potential perceptual advantage of musicians. Understanding the underlying mechanisms by which musical training may lead to a benefit in speech-in-noise perception could help clinicians develop ways to use music as a means to improve the speech processing of hearing-impaired individuals.

    Characterization of the mitochondrial oxygenases AlkBh1 and AlkBh7


    Comparative kinematic analysis of similarities and differences in serve and groundstrokes between wheelchair and standing tennis players

    The purpose of this study was to clarify differences in shoulder range of motion (ROM) between wheelchair users (WU), wheelchair tennis players (WTP), and standing tennis players (TP), and differences in shoulder angles between wheelchair and standing tennis players during tennis groundstrokes and the serve. Four wheelchair users, five wheelchair tennis players, and four able-bodied tennis players served as subjects. For the analysis of shoulder ROM, internal, external, and total ROM data were collected for all subjects. The tennis movements serve, backhand slice, and forehand topspin were recorded for the WTP and TP. Kinematic data were collected by eight cameras and included the shoulder angles of the dominant arm: anteversion, abduction, and internal rotation. Statistical analysis of the kinematic data showed significant differences during almost all movements between WTP and TP. The different shoulder angles and ROM may be important findings for shoulder pathology and wheelchair tennis training, especially flexibility training.