    A new speech corpus of super-elderly Japanese for acoustic modeling

    The development of accessible speech recognition technology will allow the elderly to more easily access electronically stored information. However, the necessary level of recognition accuracy for elderly speech has not yet been achieved by conventional speech recognition systems, due to the unique features of elderly speech. To address this problem, we have created a new speech corpus named EARS (Elderly Adults Read Speech), consisting of the recorded read speech of 123 super-elderly Japanese people (average age: 83.1), as a resource for training automatic speech recognition models for the elderly. In this study, we investigated the acoustic features of super-elderly Japanese speech using our new speech corpus. In comparison to the speech of less elderly Japanese speakers, we observed a slower speech rate and extended vowel duration for both genders, a slight increase in fundamental frequency for males, and a slight decrease in fundamental frequency for females. To demonstrate the efficacy of our corpus, we also conducted speech recognition experiments using two different acoustic models (DNN-HMM and transformer-based), trained with a combination of data from our corpus and speech data from three conventional Japanese speech corpora. When using the DNN-HMM trained with EARS and speech data from the existing corpora, the character error rate (CER) was reduced by 7.8 percentage points (to just over 9%), compared to a CER of 16.9% when using only the baseline training corpora. We also investigated the effect of training the models with various amounts of EARS data, using a simple data-expansion method, and of training the acoustic models for various numbers of epochs without any other modifications. When using the transformer-based end-to-end speech recognizer, the CER was reduced by 3.0 percentage points (to 11.4%) by adding a doubled EARS corpus to the baseline training data, compared to a CER of 13.4% when only the baseline training corpora were used.
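The CER figures above are standard edit-distance measures. As a minimal sketch (not the paper's evaluation code), CER is the Levenshtein distance between the hypothesis and reference transcripts, divided by the reference length in characters:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein edit distance between hypothesis
    and reference, divided by the reference length in characters."""
    ref, hyp = list(reference), list(hypothesis)
    # DP table: d[i][j] = edits needed to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted character over an 11-character reference
print(cer("recognition", "recogniton"))  # → 0.0909...
```

Note that the reported reductions (16.9% to just over 9%) are absolute differences in this rate, which is why they are best read as percentage points.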

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA) came into being in 1999 from the keenly felt need to share know-how, objectives, and results among areas that had until then seemed quite distinct, such as bioengineering, medicine, and singing. MAVEBA deals with all aspects of the study of the human voice, with applications ranging from the neonate to the adult and elderly. Over the years, the initial issues have grown and spread into other areas of research, such as occupational voice disorders, neurology, rehabilitation, and image and video analysis. MAVEBA takes place every two years, always in Firenze, Italy. This edition celebrates twenty years of uninterrupted and successful research in the field of voice analysis.

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The MAVEBA Workshop proceedings, published every two years, collect the scientific papers presented as oral and poster contributions during the conference. The main subjects are the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images as a support to clinical diagnosis and the classification of vocal pathologies.

    Measuring voluntary cough and its relationship to the perception of voice

    Cough is a motor act of the laryngeal and respiratory systems. Features of coughing have been considered in the examination of respiratory, swallowing, and voice disorders. Although some voice disorders have been linked to excessive coughing, the precise relationship between cough and voice remains unknown. The present study examined the acoustic features of cough across sex and age, and its relationship to the perception of voice production. A total of 30 cough samples and 30 voice samples were collected from 15 healthy females and 15 healthy males, spanning young (17-25 years), middle-aged (30-45 years), and older (60 years and above) groups. Coughs containing three distinct phases were submitted to an acoustic analysis of the long-term average spectrum (LTAS) and cough duration. Both cough and voice samples were examined perceptually by a group of 20 speech-language pathologists. The results revealed a distinct three-phase pattern of cough that was remarkably stable across sex and age. Significant differences were found in the duration of each phase of cough. Perception of cough was not significantly related to its acoustic features. Perceptual judgment of sex was comparable for both cough and voice samples. However, the accuracy of age recognition was higher for voice samples than for cough samples. In addition, voice was judged to be healthier and stronger than cough. Overall, the results partially support previous acoustic findings on cough. A strong relationship between the acoustics of cough and the perception of cough was not evident. Listeners judged voice differently from cough, except for sex recognition. The clinical implications of these findings are discussed.
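The LTAS used in the acoustic analysis is, in essence, the power spectrum averaged over many short windowed frames of the signal. A minimal NumPy sketch (the frame length, hop size, and synthetic test signal are assumptions for illustration, not the study's actual analysis settings):

```python
import numpy as np

def ltas(signal, sr, frame_len=1024, hop=512):
    """Long-term average spectrum: power spectrum averaged over
    overlapping Hann-windowed frames, returned in dB with its freq axis."""
    window = np.hanning(frame_len)
    spectra = [
        np.abs(np.fft.rfft(signal[start:start + frame_len] * window)) ** 2
        for start in range(0, len(signal) - frame_len + 1, hop)
    ]
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    return freqs, 10 * np.log10(np.mean(spectra, axis=0) + 1e-12)

# Synthetic test signal: a 440 Hz tone buried in mild noise
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.default_rng(0).standard_normal(sr)
freqs, spec_db = ltas(x, sr)
peak_hz = freqs[np.argmax(spec_db)]  # falls near 440 Hz
```

Averaging over many frames is what makes the LTAS stable for irregular events like coughs, where any single frame is dominated by transient noise.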

    The relationships among physiological, acoustical, and perceptual measures of vocal effort

    The purpose of this work was to explore the physiological mechanisms of vocal effort, the acoustical manifestation of vocal effort, and the perceptual interpretation of vocal effort by speakers and listeners. The first study evaluated four proposed mechanisms of vocal effort specific to the larynx: intrinsic laryngeal tension, extrinsic laryngeal tension, supraglottal compression, and subglottal pressure. Twenty-six healthy adults produced modulations of vocal effort (mild, moderate, maximal) and rate (slow, typical, fast), followed by self-ratings of vocal effort on a visual analog scale. Ten physiological measures across the four hypothesized mechanisms were captured via high-speed flexible laryngoscopy, surface electromyography, and neck-surface accelerometry. A mixed-effects backward stepwise regression analysis revealed that estimated subglottal pressure, mediolateral supraglottal compression, and a normalized percent activation of extrinsic suprahyoid muscles significantly increased as ratings of vocal effort increased (R2 = .60). The second study had twenty inexperienced listeners rate vocal effort on the speech recordings from the first study (typical, mild, moderate, and maximal effort) via a visual sort-and-rate method. A set of acoustical measures were calculated, including amplitude-, time-, spectral-, and cepstral-based measures. Two separate mixed-effects regression models determined the relationship between the acoustical predictors and speaker and listener ratings. Results indicated that mean sound pressure level, low-to-high spectral ratio, and harmonic-to-noise ratio significantly predicted speaker and listener ratings. Mean fundamental frequency (measured as change in semitones from typical productions) and relative fundamental frequency offset cycle 10 were also significant predictors of listener ratings. The acoustical predictors accounted for 72% and 82% of the variance in speaker and listener ratings, respectively. 
Speaker and listener ratings were also highly correlated (average r = .86). From these two studies, we determined that vocal effort is a complex physiological process that is mediated by changes in laryngeal configuration and subglottal pressure. The self-perception of vocal effort is related to the acoustical properties underlying these physiological changes. Listeners appear to rely on the same acoustical manifestations as speakers, yet incorporate additional time-based acoustical cues during perceptual judgments. Future work should explore the physiological, acoustical, and perceptual measures identified here in speakers with voice disorders.
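One of the significant acoustic predictors above, the low-to-high spectral ratio, compares spectral energy below and above a cutoff frequency; effortful, pressed voice tends to shift energy upward and lower the ratio. A hedged sketch (the 4 kHz cutoff and the single whole-signal FFT are illustrative assumptions, not necessarily the study's settings):

```python
import numpy as np

def low_high_ratio(signal, sr, cutoff_hz=4000.0):
    """Low-to-high spectral ratio in dB: energy below the cutoff
    relative to energy at or above it (4 kHz split is an assumption)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    low = power[freqs < cutoff_hz].sum()
    high = power[freqs >= cutoff_hz].sum()
    return 10 * np.log10((low + 1e-12) / (high + 1e-12))

# A low-frequency tone concentrates its energy below the cutoff,
# so its ratio is strongly positive (a steep spectral tilt).
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 200 * t)
tilt_db = low_high_ratio(tone, sr)
```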

    Acoustic and videoendoscopic techniques to improve voice assessment via relative fundamental frequency

    Quantitative measures of laryngeal muscle tension are needed to improve assessment and track clinical progress. Although relative fundamental frequency (RFF) shows promise as an acoustic estimate of laryngeal muscle tension, it is not yet transferable to the clinic. The purpose of this work was to refine algorithmic estimation of RFF, as well as to enhance the knowledge surrounding the physiological underpinnings of RFF. The first study used a large database of voice samples collected from 227 speakers with voice disorders and 256 typical speakers to evaluate the effects of fundamental frequency estimation techniques and voice sample characteristics on algorithmic RFF estimation. By refining fundamental frequency estimation using the Auditory Sawtooth Waveform Inspired Pitch Estimator—Prime (Auditory-SWIPE′) algorithm and accounting for sample characteristics via the acoustic measure, pitch strength, algorithmic errors related to the accuracy and precision of RFF were reduced by 88.4% and 17.3%, respectively. The second study sought to characterize the physiological factors influencing acoustic outputs of RFF estimation. A group of 53 speakers with voice disorders and 69 typical speakers each produced the utterance, /ifi/, while simultaneous recordings were collected using a microphone and flexible nasendoscope. Acoustic features calculated via the microphone signal were examined in reference to the physiological initiation and termination of vocal fold vibration. The features that corresponded with these transitions were then implemented into the RFF algorithm, leading to significant improvements in the precision of the RFF algorithm to reflect the underlying physiological mechanisms for voicing offsets (p < .001, V = .60) and onsets (p < .001, V = .54) when compared to manual RFF estimation. 
The third study further elucidated the physiological underpinnings of RFF by examining the contribution of vocal fold abduction to RFF during intervocalic voicing offsets. Vocal fold abductory patterns were compared to RFF values in a subset of speakers from the second study, comprising young adults, older adults, and older adults with Parkinson’s disease. Abductory patterns were not significantly different among the three groups; however, vocal fold abduction was observed to play a significant role in measures of RFF at voicing offset. By improving algorithmic estimation and elucidating aspects of the underlying physiology affecting RFF, this work adds to the utility of RFF for use in conjunction with current clinical techniques to assess laryngeal muscle tension.
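RFF itself is the fundamental frequency of each vocal cycle adjacent to a voicing offset or onset, expressed in semitones relative to a steady-state reference F0. A minimal sketch of that conversion (the cycle periods below are hypothetical numbers for illustration, not data from the study):

```python
import math

def rff_semitones(cycle_periods, steady_f0):
    """Relative fundamental frequency: instantaneous F0 of each vocal
    cycle, in semitones relative to a steady-state reference F0."""
    return [12 * math.log2((1.0 / p) / steady_f0) for p in cycle_periods]

# Hypothetical cycle periods (seconds) for the 10 voicing-offset cycles
# of an /ifi/ production: F0 drifts down from a 100 Hz steady state.
periods = [1.0 / f for f in (100, 100, 99, 99, 98, 97, 96, 95, 93, 90)]
rff = rff_semitones(periods, steady_f0=100.0)
print(round(rff[-1], 2))  # offset cycle 10 → -1.82 semitones
```

Offset cycle 10 is the last cycle before devoicing, which is why it appeared as a significant predictor of listener effort ratings in the work summarized above.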