
    A FAUST Tutorial

    This document presents the language FAUST, its syntax and grammar, as well as several commented examples. FAUST (the name stands for Functional Audio Streams) is a programming language specifically designed to develop and implement efficient real-time digital signal processors.

    Measure and model of vocal-tract length discrimination in cochlear implants

    Voice discrimination is crucial to selectively listen to a particular talker in a crowded environment. In normal-hearing listeners, it strongly relies on the perception of two dimensions: the fundamental frequency and the vocal-tract length. Yet, very little is known about the perception of the latter in cochlear implants. The present study reports discrimination thresholds for vocal-tract length in normal-hearing listeners and cochlear-implant users. The behavioral results were then used to determine the effective spectral resolution in a model of electric hearing: effective resolution in the implant was found to be poorer than previously suggested by psychophysical measurements. Such a model could be used for clinical purposes, or to facilitate the development of new strategies.
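
    The "model of electric hearing" is not detailed in the abstract; as a rough, generic illustration of how spectral resolution can be varied in such simulations, the sketch below implements a simple noise vocoder in Python, where the number of analysis channels stands in for the effective resolution. The channel count, band edges, and filter order are illustrative assumptions, not the parameters used in the study.

        # Generic noise-vocoder sketch (illustrative only, not the study's model):
        # the input is split into n_channels bands, each band's temporal envelope
        # modulates band-limited noise, and the channel count sets the simulated
        # spectral resolution.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        def noise_vocoder(x, fs, n_channels=8, f_lo=200.0, f_hi=7000.0):
            edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
            rng = np.random.default_rng(0)
            out = np.zeros_like(x)
            for lo, hi in zip(edges[:-1], edges[1:]):
                sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
                band = sosfiltfilt(sos, x)
                envelope = np.abs(hilbert(band))                      # temporal envelope
                carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
                out += envelope * carrier                             # envelope-modulated noise band
            return out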

    Temporal Regularity Detection and Rate Discrimination in Cochlear-Implant Listeners

    Cochlear implants (CIs) convey fundamental-frequency information using primarily temporal cues. However, temporal pitch perception in CI users is weak and, when measured using rate discrimination tasks, deteriorates markedly as the rate increases beyond 300 pulses per second. Rate pitch may be weak because the electrical stimulation of the surviving neural population of the implant recipient may not allow accurate coding of inter-pulse time intervals. If so, this phenomenon should prevent listeners from detecting when a pulse train is physically temporally jittered. Performance in a jitter detection task was compared to that in a rate-pitch discrimination task. Stimuli were delivered using direct stimulation in cochlear implants, on a mid-array and an apical electrode, and at two different rates (100 and 300 pps). Average performance on both tasks was worse at the higher pulse rate and did not depend on electrode. However, there was large variability across and within listeners that did not correlate between the two tasks, suggesting that rate-pitch judgement and regularity detection are to some extent limited by task-specific processes. Simulations with filtered pulse trains presented to normal-hearing listeners yielded broadly similar results, except that, for the rate discrimination task, the difference between performance with 100- and 300-pps base rates was smaller than observed for CI users.
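
    As a concrete illustration of the stimulus manipulation described above, the following sketch generates an isochronous and a temporally jittered pulse train at the rates named in the abstract. The jitter amount, duration, and sampling rate are illustrative assumptions; the actual stimuli were delivered through direct electrical stimulation of the implant.

        # Sketch (not the authors' code): isochronous vs. temporally jittered pulse trains.
        import numpy as np

        def pulse_train(rate_pps, duration_s=0.4, fs=44100, jitter_frac=0.0, seed=0):
            """Unit pulses at a nominal rate; each inter-pulse interval is perturbed
            by up to +/- jitter_frac of the nominal period."""
            rng = np.random.default_rng(seed)
            period = 1.0 / rate_pps
            times, t = [], 0.0
            while t < duration_s:
                times.append(t)
                t += period * (1.0 + jitter_frac * rng.uniform(-1.0, 1.0))
            signal = np.zeros(int(duration_s * fs))
            idx = (np.array(times) * fs).astype(int)
            signal[idx[idx < len(signal)]] = 1.0
            return signal

        regular_300 = pulse_train(300)                    # isochronous 300-pps train
        jittered_300 = pulse_train(300, jitter_frac=0.3)  # jittered train at the same rate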

    Vocal emotion recognition in school-age children: Normative data for the EmoHI test

    Traditionally, emotion recognition research has primarily used pictures and videos, while audio test materials have received less attention and are not always readily available. Particularly for testing vocal emotion recognition in hearing-impaired listeners, the audio quality of assessment materials may be crucial. Here, we present a vocal emotion recognition test with non-language-specific pseudospeech productions of multiple speakers expressing three core emotions (happy, angry, and sad): the EmoHI test. Recorded with high sound quality, the test is suitable for use with populations of children and adults with normal or impaired hearing, and across different languages. In the present study, we obtained normative data for vocal emotion recognition development in normal-hearing school-age (4-12 years) children using the EmoHI test. In addition, we tested Dutch and English children to investigate cross-language effects. Our results show that children’s emotion recognition accuracy scores improved significantly with age from the youngest group tested onwards (mean accuracy 4-6 years: 48.9%), but children’s performance did not reach adult-like values (mean accuracy adults: 94.1%) even for the oldest age group tested (mean accuracy 10-12 years: 81.1%). Furthermore, the effect of age on children’s development did not differ across languages. The strong but slow development in children’s ability to recognize vocal emotions emphasizes the role of auditory experience in forming robust representations of vocal emotions. The wide range of age-related performances that are captured and the lack of significant differences across the tested languages affirm the usability and versatility of the EmoHI test.

    Musician effect on perception of spectro-temporally degraded speech, vocal emotion, and music in young adolescents.

    In adult normal-hearing musicians, perception of music, vocal emotion, and speech in noise has previously been shown to be better than in non-musicians, sometimes even with spectro-temporally degraded stimuli. In this study, melodic contour identification, vocal emotion identification, and speech understanding in noise were measured in young adolescent normal-hearing musicians and non-musicians listening to unprocessed or degraded signals. In contrast to adults, there was no musician effect for vocal emotion identification or speech in noise. Melodic contour identification with degraded signals was significantly better in musicians, suggesting potential benefits from music training for young cochlear-implant users, who experience similar spectro-temporal signal degradation.

    Development of vocal emotion recognition in school-age children: The EmoHI test for hearing-impaired populations.

    Traditionally, emotion recognition research has primarily used pictures and videos, while audio test materials are not always readily available or of good quality, which may be particularly important for studies with hearing-impaired listeners. Here we present a vocal emotion recognition test with pseudospeech productions from multiple speakers expressing three core emotions (happy, angry, and sad): the EmoHI test. The high-quality sound recordings make the test suitable for use with populations of children and adults with normal or impaired hearing. We present normative data for vocal emotion recognition development in normal-hearing (NH) school-age children using the EmoHI test. Furthermore, we investigated cross-language effects by testing NH Dutch and English children, and the suitability of the EmoHI test for hearing-impaired populations, specifically for prelingually deaf Dutch children with cochlear implants (CIs). Our results show that NH children's performance improved significantly with age from the youngest age group onwards (4-6 years: 48.9%, on average). However, NH children's performance did not reach adult-like values (adults: 94.1%) even for the oldest age group tested (10-12 years: 81.1%). Additionally, the effect of age on NH children's development did not differ across languages. All except one CI child performed at or above chance level, showing the suitability of the EmoHI test. In addition, seven out of 14 CI children performed within the NH age-appropriate range, and nine out of 14 CI children did so when performance was adjusted for hearing age, measured from their age at implantation. However, CI children showed great variability in their performance, ranging from ceiling (97.2%) to below chance-level performance (27.8%), which could not be explained by chronological age alone. The strong and consistent development in performance with age, the lack of significant differences across the tested languages for NH children, and the above-chance performance of most CI children affirm the usability and versatility of the EmoHI test.

    Effect of F0 contours on top-down repair of interrupted speech

    Top-down repair of interrupted speech can be influenced by bottom-up acoustic cues such as voice pitch (F0). This study aims to investigate the role of the dynamic information of pitch, i.e., F0 contours, in top-down repair of speech. Intelligibility of sentences interrupted with silence or noise was measured in five F0-contour conditions (inverted, flat, original, and exaggerated by factors of 1.5 and 1.75). The main hypothesis was that manipulating F0 contours would impair linking successive segments of interrupted speech and thus negatively affect top-down repair. Intelligibility of interrupted speech was impaired only by misleading dynamic information (inverted F0 contours). The top-down repair of interrupted speech was not affected by any of the F0-contour manipulations. © 2017 Acoustical Society of America.
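
    The sketch below illustrates, under assumptions not stated in the abstract, how the named F0-contour conditions (flat, inverted, exaggerated) can be produced from a frame-by-frame F0 track by scaling its deviations around the mean on a log-frequency scale; the resynthesis method actually used in the study is not specified here.

        # Illustrative F0-contour manipulation (assumed log-domain scaling, not the authors' exact method).
        import numpy as np

        def manipulate_f0(f0_hz, factor):
            """factor = 0 flattens the contour, -1 inverts it, 1 leaves it intact,
            and 1.5 or 1.75 exaggerate it around the mean log-F0."""
            log_f0 = np.log2(f0_hz)
            mean = log_f0.mean()
            return 2.0 ** (mean + factor * (log_f0 - mean))

        f0_track = np.array([180.0, 200.0, 230.0, 210.0, 170.0])  # toy F0 track in Hz
        flat = manipulate_f0(f0_track, 0.0)
        inverted = manipulate_f0(f0_track, -1.0)
        exaggerated = manipulate_f0(f0_track, 1.75)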

    Preface

    The International Symposium on Hearing is a prestigious, triennial gathering where world-class scientists present and discuss the most recent advances in the field of human and animal hearing research. The 2015 edition will particularly focus on integrative approaches linking physiological, psychophysical and cognitive aspects of normal and impaired hearing. Like previous editions, the proceedings will contain about 50 chapters ranging from basic to applied research, of interest to neuroscientists, psychologists, audiologists, engineers, otolaryngologists, and artificial intelligence researchers.