
    Cepstral analysis of speech signals in the process of automatic pathological voice assessment

    The paper describes the problem of cepstral speech analysis in the process of automated voice disorder probability estimation. The author proposes to derive two of the most diagnostically significant voice features, the quality of the harmonic structure and the degree of subharmonics, from the cepstrum of the speech signal. Traditionally, these attributes are estimated by ear or by inspection of the spectrum (or spectrogram), so such analysis often lacks accuracy and objectivity. The introduced parameters were calculated for recordings from the Disordered Voice Database (Kay, model 4337, version 2.7.0), which consists of 710 voice samples (657 pathological, 53 healthy) recorded in a laboratory environment and annotated with a diagnosis and a number of additional attributes (such as age, sex, and nationality). The proposed cepstral voice features were compared with similar voice parameters derived from the Multidimensional Voice Program (Kay, model 5105, version 2.7.0) with respect to their diagnostic significance, and the comparison was presented graphically. The results show that the cepstral features correlate more strongly with the diagnostic decision and better discriminate the clusters of healthy and disordered voices. Additionally, both parameters are obtained with a single cepstral transform and do not require prior F0 tracking, as F0 is derived simultaneously.
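    As a rough illustration of the cepstral transform these features are derived from, the real cepstrum of a voiced frame can be sketched as follows. This is a minimal numpy sketch, not the paper's implementation; the sampling rate, test signal, and window choice are illustrative assumptions.

```python
import numpy as np

def real_cepstrum(frame):
    """Real cepstrum: inverse FFT of the log-magnitude spectrum.
    Peaks at quefrencies near the pitch period reflect the harmonic
    (rahmonic) structure that cepstral voice features are read from."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    log_mag = np.log(np.abs(spectrum) + 1e-12)  # small floor avoids log(0)
    return np.fft.irfft(log_mag)

# Toy "voiced" frame: a 250 Hz harmonic series sampled at 16 kHz.
fs, f0, n = 16000, 250.0, 1024
t = np.arange(n) / fs
frame = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, 6))

ceps = real_cepstrum(frame)
# Skip the low-quefrency (spectral envelope) region, then locate the
# dominant rahmonic peak; it should sit near the pitch period fs/f0 = 64.
peak_q = 32 + int(np.argmax(ceps[32:n // 2]))
```

    The location of the cepstral peak yields the pitch period, which is why F0 comes "for free" with this analysis, as the abstract notes.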

    Cepstral peak prominence: a comprehensive analysis

    An analytical study of cepstral peak prominence (CPP) is presented, intended to provide insight into its meaning and its relation to voice perturbation parameters. To carry out this analysis, a parametric approach is adopted in which voice production is modelled using the traditional source-filter model and the first cepstral peak is assumed to have a Gaussian shape. It is concluded that the meaning of CPP is very similar to that of the first rahmonic, and some insights are provided on its dependence on fundamental frequency and vocal tract resonances. It is further shown that CPP integrates measures of voice waveform and periodicity perturbations, whether of amplitude, frequency, or noise type.
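    For readers unfamiliar with the measure, the basic CPP recipe (height of the cepstral peak above a regression baseline) can be sketched as follows. This is a minimal numpy illustration, not the paper's analytical model or any clinical software's exact normalization.

```python
import numpy as np

def cepstral_peak_prominence(frame, fs, f0_range=(60.0, 400.0)):
    """CPP sketch: height (in dB) of the dominant cepstral peak above a
    linear regression baseline fitted across the cepstrum."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    ceps = np.fft.irfft(20 * np.log10(spectrum + 1e-12))
    half = len(ceps) // 2
    ceps, quef = ceps[:half], np.arange(half) / fs   # quefrency in seconds
    lo, hi = int(fs / f0_range[1]), int(fs / f0_range[0])
    peak = lo + int(np.argmax(ceps[lo:hi]))          # search plausible pitch periods
    slope, intercept = np.polyfit(quef[1:], ceps[1:], 1)  # baseline (skip c[0])
    return ceps[peak] - (slope * quef[peak] + intercept)

fs = 16000
t = np.arange(2048) / fs
voiced = sum(np.sin(2 * np.pi * k * 200.0 * t) for k in range(1, 8))
noise = np.random.default_rng(0).standard_normal(2048)

cpp_voiced = cepstral_peak_prominence(voiced, fs)
cpp_noise = cepstral_peak_prominence(noise, fs)
# A strongly periodic signal should show a much larger CPP than noise,
# which is what makes CPP useful as a periodicity/perturbation measure.
```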

    Optimizing acoustic and perceptual assessment of voice quality in children with vocal nodules

    Thesis (Ph. D.)--Harvard-MIT Division of Health Sciences and Technology, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 105-109). Few empirically-derived guidelines exist for optimizing the assessment of vocal function in children with voice disorders. The goal of this investigation was to identify a minimal set of speech tasks and associated acoustic analysis methods that are most salient in characterizing the impact of vocal nodules on vocal function in children. Hence, a pediatric assessment protocol was developed based on the standardized Consensus Auditory Perceptual Evaluation of Voice (CAPE-V) used to evaluate adult voices. Adult and pediatric versions of the CAPE-V protocols were used to gather recordings of vowels and sentences from adult females and children (4-6 and 8-10 year olds) with normal voices and vocal nodules, and these recordings were subjected to perceptual and acoustic analyses. Results showed that perceptual ratings for breathiness best characterized the presence of nodules in children's voices, and that ratings for the production of sentences best differentiated normal voices and voices with nodules for both children and adults. Selected voice quality-related acoustic algorithms, designed to quantitatively evaluate acoustic measures of vowels and sentences, were modified to be pitch-independent for use in analyzing children's voices. Synthesized vowels for children and adults were used to validate the modified algorithms by systematically assessing the effects of manipulating the periodicity and spectral characteristics of the synthesizer's voicing source. In applying the validated algorithms to the recordings of subjects with normal voices and vocal nodules, the acoustic measures tended to differentiate normal voices and voices with nodules in children and adults, and some displayed significant correlations with the perceptual attributes of overall severity of dysphonia, roughness, and/or breathiness. None of the acoustic measures correlated significantly with the perceptual attribute of strain. Limitations in the strength of the correlations between acoustic measures and perceptual attributes were attributed to factors that can be addressed in future investigations, which can now utilize the algorithms developed in this investigation for children's voices. Preliminary recommendations are made for the clinical assessment of pediatric voice disorders. By Asako Masaki, Ph.D.

    Acoustic measurement of overall voice quality in sustained vowels and continuous speech

    Measurement of dysphonia severity involves auditory-perceptual evaluations and acoustic analyses of sound waves. A meta-analysis of the proportional associations between these two methods showed that many popular perturbation metrics and noise-to-harmonics ratios do not yield reliable results. However, this meta-analysis demonstrated that the validity of specific autocorrelation- and cepstrum-based measures was much more convincing, and identified ‘smoothed cepstral peak prominence’ as the most promising metric of dysphonia severity. Original research confirmed this inferiority of perturbation measures and superiority of cepstral indices in dysphonia measurement of laryngeal-vocal and tracheoesophageal voice samples. However, to be truly representative of daily voice use patterns, measurement of overall voice quality is ideally founded on the analysis of both sustained vowels and continuous speech. A customized method including both sample types and calculating the multivariate Acoustic Voice Quality Index (AVQI) was constructed for this purpose. The original study of the AVQI revealed acceptable results in terms of initial concurrent validity, diagnostic precision, internal and external cross-validity, and responsiveness to change. It was thus concluded that the AVQI can track changes in dysphonia severity across the voice therapy process. There are many freely and commercially available computer programs and systems for acoustic metrics of dysphonia severity. We investigated agreements and differences between two commonly available programs (Praat and the Multi-Dimensional Voice Program) and systems. The results indicated that clinicians should not compare frequency perturbation data across systems and programs, nor amplitude perturbation data across systems. Finally, acoustic information can also be utilized as a biofeedback modality during voice exercises.
Based on a systematic literature review, it was cautiously concluded that acoustic biofeedback can be a valuable tool in the treatment of phonatory disorders. When applied with caution, acoustic algorithms (particularly cepstrum-based measures and AVQI) have merited a special role in assessment and/or treatment of dysphonia severity
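    The published AVQI is a specific regression-derived combination of smoothed CPP, HNR, shimmer, and spectral-slope measures. The sketch below only mimics that structure; every weight and the bias here is a made-up placeholder for illustration, not the validated AVQI formula.

```python
def severity_index(cpps_db, hnr_db, shimmer_pct,
                   weights=(-1.0, -0.5, 2.0), bias=10.0):
    """Toy AVQI-style index: a weighted linear combination of acoustic
    measures in which higher scores mean more severe dysphonia.
    The weights and bias are hypothetical, not the published AVQI ones."""
    w_cpps, w_hnr, w_shim = weights
    return bias + w_cpps * cpps_db + w_hnr * hnr_db + w_shim * shimmer_pct

# A clearer, more harmonic voice (high CPPS and HNR, low shimmer) should
# score lower than a dysphonic one on any index built this way.
healthy = severity_index(cpps_db=14.0, hnr_db=20.0, shimmer_pct=2.0)    # -10.0
dysphonic = severity_index(cpps_db=6.0, hnr_db=8.0, shimmer_pct=8.0)    # 16.0
```

    The design point is that a multivariate index of this shape can absorb measures from both sustained-vowel and continuous-speech samples into a single severity score.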

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The proceedings of the MAVEBA Workshop, held every two years, collect the scientific papers presented as oral and poster contributions during the conference. The main subjects are the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images in support of the clinical diagnosis and classification of vocal pathologies.

    It Sounds like It Feels: Preliminary Exploration of an Aeroacoustic Diagnostic Protocol for Singers

    To date, no established protocol exists for measuring functional voice changes in singers with subclinical singing-voice complaints. Hence, these may go undiagnosed until they progress into greater severity. This exploratory study sought to (1) determine which scale items in the self-perceptual Evaluation of Ability to Sing Easily (EASE) are associated with instrumental voice measures, and (2) construct, as proof of concept, an instrumental index related to singers’ perceptions of their vocal function and health status. Eighteen classical singers were acoustically recorded in a controlled environment singing an /a/ vowel using soft phonation. Aerodynamic data were collected during a softly sung /papapapapapapa/ task with the KayPENTAX Phonatory Aerodynamic System. Using multivariate and univariate linear regression techniques, CPPS, vibrato jitter, vibrato shimmer, and an efficiency ratio (SPL/PSub) were included in a significant model (p < 0.001) explaining 62.4% of the variance in participants’ composite scores on three scale items related to vocal fatigue. The instrumental index showed a significant association (p = 0.001) with the EASE vocal fatigue subscale overall. Findings illustrate that an aeroacoustic instrumental index may be useful for monitoring functional changes in the singing voice as part of a multidimensional diagnostic approach to preventative and rehabilitative voice healthcare for professional singing-voice users.

    Automatic Classification of Sustained Vowels Based on Signal Regularity Measures

    In 1995, Ingo Titze proposed a system for classifying vowel phonations into three types (Type I, Type II, and Type III) based on the regularity of the corresponding quasi-periodic voice signal. In clinical speech-language practice, this classification is performed by visual inspection of spectrograms, and the criteria that distinguish one voice type from another are not clear, especially between Types I and II. Consequently, there is large inter-professional variation and a strong dependence on the experience of each specialist. In order to achieve an objective classification based on quantitative parameters, we sought to extract features capable of representing the fundamental differences between Type I and Type II voices, and then to classify an annotated database. Classical acoustic parameters were extracted, such as jitter and shimmer measures and the harmonics-to-noise ratio (HNR), computed using PRAAT. We also proposed the use of the amplitude of the first rahmonic (R1) and two features devised by the authors of this work: the normalized variance of the first principal component (VNCP) and spectral peak-to-valley ratios (PV). Classification was performed with linear-kernel support vector machines (SVM), using the features that minimize the classifier error. As a result, a cross-validation error of 11.61% was obtained, with accuracies of 93.24% and 83.95% for Type I and Type II voices, respectively. Fil: Miramont, Juan Manuel. Universidad Nacional de Entre Ríos. Instituto de Investigación y Desarrollo en Bioingeniería y Bioinformática - Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Santa Fe. Instituto de Investigación y Desarrollo en Bioingeniería y Bioinformática; Argentina. Fil: Schlotthauer, Gaston. Universidad Nacional de Entre Ríos. Instituto de Investigación y Desarrollo en Bioingeniería y Bioinformática - Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Santa Fe. Instituto de Investigación y Desarrollo en Bioingeniería y Bioinformática; Argentina.
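    The classification step can be illustrated with a minimal linear SVM trained by Pegasos-style subgradient descent on synthetic regularity features. The feature values, class means, and hyperparameters below are invented for illustration; the study itself used PRAAT-derived features and reports cross-validated, not training, accuracy.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM via Pegasos-style subgradient descent.
    Labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w, b, t = np.zeros(X.shape[1]), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)                  # decaying step size
            if y[i] * (X[i] @ w + b) < 1:          # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w
    return w, b

# Invented stand-ins for regularity features (jitter, shimmer, HNR):
# Type I voices are simulated as more regular than Type II voices.
rng = np.random.default_rng(1)
type1 = rng.normal([0.3, 0.2, 20.0], [0.2, 0.2, 2.0], size=(100, 3))
type2 = rng.normal([1.2, 1.0, 12.0], [0.4, 0.4, 3.0], size=(100, 3))
X = np.vstack([type1, type2])
X = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize features
y = np.array([1] * 100 + [-1] * 100)

w, b = train_linear_svm(X, y)
accuracy = float(np.mean(np.sign(X @ w + b) == y))  # training accuracy
```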

    The relationships among physiological, acoustical, and perceptual measures of vocal effort

    The purpose of this work was to explore the physiological mechanisms of vocal effort, the acoustical manifestation of vocal effort, and the perceptual interpretation of vocal effort by speakers and listeners. The first study evaluated four proposed mechanisms of vocal effort specific to the larynx: intrinsic laryngeal tension, extrinsic laryngeal tension, supraglottal compression, and subglottal pressure. Twenty-six healthy adults produced modulations of vocal effort (mild, moderate, maximal) and rate (slow, typical, fast), followed by self-ratings of vocal effort on a visual analog scale. Ten physiological measures across the four hypothesized mechanisms were captured via high-speed flexible laryngoscopy, surface electromyography, and neck-surface accelerometry. A mixed-effects backward stepwise regression analysis revealed that estimated subglottal pressure, mediolateral supraglottal compression, and a normalized percent activation of extrinsic suprahyoid muscles significantly increased as ratings of vocal effort increased (R2 = .60). In the second study, twenty inexperienced listeners rated vocal effort on the speech recordings from the first study (typical, mild, moderate, and maximal effort) via a visual sort-and-rate method. A set of acoustical measures was calculated, including amplitude-, time-, spectral-, and cepstral-based measures. Two separate mixed-effects regression models determined the relationship between the acoustical predictors and speaker and listener ratings. Results indicated that mean sound pressure level, low-to-high spectral ratio, and harmonics-to-noise ratio significantly predicted speaker and listener ratings. Mean fundamental frequency (measured as change in semitones from typical productions) and relative fundamental frequency offset cycle 10 were also significant predictors of listener ratings. The acoustical predictors accounted for 72% and 82% of the variance in speaker and listener ratings, respectively.
    Speaker and listener ratings were also highly correlated (average r = .86). From these two studies, we determined that vocal effort is a complex physiological process that is mediated by changes in laryngeal configuration and subglottal pressure. The self-perception of vocal effort is related to the acoustical properties underlying these physiological changes. Listeners appear to rely on the same acoustical manifestations as speakers, yet incorporate additional time-based acoustical cues during perceptual judgments. Future work should explore the physiological, acoustical, and perceptual measures identified here in speakers with voice disorders.
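    The predictor-to-rating modelling can be sketched with ordinary least squares standing in for the mixed-effects models used in the study. All predictor distributions and coefficients below are synthetic illustrations, not the study's data.

```python
import numpy as np

# Invented distributions for three of the acoustical predictors named in
# the abstract, plus a synthetic "effort rating" driven by them.
rng = np.random.default_rng(0)
n = 120
spl = rng.normal(75.0, 8.0, n)        # mean sound pressure level, dB
lh_ratio = rng.normal(25.0, 5.0, n)   # low-to-high spectral ratio, dB
hnr = rng.normal(18.0, 4.0, n)        # harmonics-to-noise ratio, dB
ratings = 0.6 * spl - 0.3 * lh_ratio - 0.4 * hnr + rng.normal(0.0, 3.0, n)

# Ordinary least squares with an intercept column; R^2 summarizes how
# much rating variance the acoustical predictors explain.
X = np.column_stack([np.ones(n), spl, lh_ratio, hnr])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((ratings - pred) ** 2) / np.sum((ratings - ratings.mean()) ** 2)
```

    A mixed-effects model additionally estimates per-speaker random effects, which plain OLS omits; this sketch only shows the fixed-effects part of the idea.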

    SPECTRAL/CEPSTRAL ANALYSIS OF VOICE QUALITY IN PATIENTS WITH PARKINSON'S DISEASE

    The purpose of this dissertation was to determine whether Lee Silverman Voice Treatment (LSVT) affects cepstral/spectral measures of voice quality in speakers with idiopathic Parkinson's disease (PD). Few studies have investigated the effects of LSVT on voice quality using acoustic measures, and none have used cepstral measures. The first study investigated the effect of LSVT on cepstral/spectral measures of sustained vowels to determine whether voice quality improves. Sustained vowels were analyzed for cepstral peak prominence (CPP), CPP standard deviation (CPP-SD), low/high spectral ratio (L/H SR), and the Cepstral/Spectral Index of Dysphonia (CSID) using the Analysis of Dysphonia in Speech and Voice (ADSV) program. The study found both improved harmonic structure and improved voice quality, as reflected in the cepstral/spectral measures. Voice quality in connected speech is important because it is representative of how a typical individual communicates. Thus, the second study's goals were: first, to investigate the effect of LSVT on cepstral/spectral analysis of connected speech; and second, to compare cepstral/spectral findings in connected speech with those observed in sustained phonation. Another goal was to examine individual differences in response to treatment and compare them to individual changes observed in sustained phonation. The results demonstrated that CPP increased significantly following LSVT, indicating improved harmonic dominance as a result of treatment, and that CSID decreased following LSVT, indicating a reduction of overall severity in connected speech at the group level. Analysis of individual differences demonstrated that only four participants improved by at least one half standard deviation (SD) following treatment in CPP, CPP-SD, and CSID in both the sustained phonation and connected speech tasks. Three showed a reduction in L/H SR in sustained phonation, and only one showed an increase in L/H SR in connected speech. The other participants' improvement varied, but the majority demonstrated voice quality improvement in sustained phonation. The overall results indicated that CPP and CSID were strong acoustic measures for demonstrating voice quality improvement following treatment in both tasks: connected speech and sustained phonation.

    Early Human Vocalization Development: A Collection of Studies Utilizing Automated Analysis of Naturalistic Recordings and Neural Network Modeling

    Understanding early human vocalization development is a key part of understanding the origins of human communication. What are the characteristics of early human vocalizations and how do they change over time? What mechanisms underlie these changes? This dissertation is a collection of three papers that take a computational approach to addressing these questions, using neural network simulation and automated analysis of naturalistic data. The first paper uses a self-organizing neural network to automatically derive holistic acoustic features characteristic of prelinguistic vocalizations. A supervised neural network is used to classify vocalizations into human-judged categories and to predict the age of the child vocalizing. The study represents a first step toward taking a data-driven approach to describing infant vocalizations. Its performance in classification represents progress toward developing automated analysis tools for coding infant vocalization types. The second paper is a computational model of early vocal motor learning. It adapts a popular type of neural network, the self-organizing map, in order to control a vocal tract simulator and in order to have learning be dependent on whether the model's actions are reinforced. The model learns both to control production of sound at the larynx (phonation), an early-developing skill that is a prerequisite for speech, and to produce vowels that gravitate toward the vowels in a target language (either English or Korean) for which it is reinforced. The model provides a computationally-specified explanation for how neuromotor representations might be acquired in infancy through the combination of exploration, reinforcement, and self-organized learning. The third paper utilizes automated analysis to uncover patterns of vocal interaction between child and caregiver that unfold over the course of day-long, totally naturalistic recordings.
The participants include 16- to 48-month-old children with and without autism. Results are consistent with the idea that there is a social feedback loop wherein children produce speech-related vocalizations, these are preferentially responded to by adults, and this contingency of adult response shapes future child vocalizations. Differences in components of this feedback loop are observed in autism, as well as with different maternal education levels
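    A self-organizing map of the kind adapted in the second paper can be sketched minimally as follows: a 1-D map trained on toy two-dimensional data. The dissertation's model additionally drives a vocal tract simulator and gates learning by reinforcement, which this sketch omits; the grid size, schedules, and data are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=8, epochs=30, seed=0):
    """Minimal 1-D self-organizing map: each unit's weight vector moves
    toward inputs for which it (or a grid neighbor) wins the match."""
    rng = np.random.default_rng(seed)
    weights = rng.standard_normal((grid, data.shape[1]))
    positions = np.arange(grid)
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)                     # decaying learning rate
        sigma = max(grid / 2 * (1 - epoch / epochs), 0.5)   # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            influence = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
            weights += lr * influence[:, None] * (x - weights)
    return weights

# Toy two-cluster "acoustic feature" data; after training, the map units
# should lie close to the data rather than at their random start, which
# the mean quantization error (distance to nearest unit) measures.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(-2.0, 0.3, (100, 2)),
                  rng.normal(2.0, 0.3, (100, 2))])
som = train_som(data)
qe = float(np.mean([np.min(np.linalg.norm(som - x, axis=1)) for x in data]))
```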