95 research outputs found

    Vowel space in hypokinetic dysarthria: Preliminary investigations.

    The paper discusses acoustic and articulatory data on the use of vowel space by speakers affected by Parkinson's Disease who have developed hypokinetic dysarthria. Two experiments involving pathological subjects and matched controls are described, whose general aim is to clarify whether the vowel space in dysarthric subjects with Parkinson's Disease is always and homogeneously reduced. In the first investigation, acoustic and kinematic data are collected and analyzed to test whether pathological speakers always use a reduced vowel space compared to control subjects, and whether they adopt different articulatory strategies depending on the axis of the speech gesture (vertical vs. horizontal). In the second investigation, various articulatory metrics are used to further investigate the dimension and position of the acoustic vowel space, and whether these change in subjects with Parkinson's Disease compared to controls. Results show that reduction takes place, but some subjects appear to compensate by widening their tongue gestures on the horizontal axis, even though the lip gesture is not necessarily undershot. Nevertheless, the metrics used in the second experiment fail to capture a reduction, even though, in line with the results of the first experiment, they point to an asymmetry in the vowel space used depending on the axis considered.
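As an illustration of the kind of acoustic vowel-space measurement discussed above, the sketch below computes a vowel space area from (F1, F2) corner-vowel formants using the shoelace formula. The formant values are invented for demonstration and are not the study's data.

```python
# Hypothetical illustration: acoustic vowel space area (VSA) from the
# F1/F2 formants of the corner vowels, via the shoelace polygon formula.
# All formant values below are invented for demonstration only.

def vowel_space_area(formants):
    """Area (Hz^2) of the polygon whose vertices are (F1, F2) pairs."""
    n = len(formants)
    area = 0.0
    for i in range(n):
        f1a, f2a = formants[i]
        f1b, f2b = formants[(i + 1) % n]
        area += f1a * f2b - f1b * f2a
    return abs(area) / 2.0

# Corner vowels /i/, /a/, /u/ as (F1, F2), listed in traversal order.
control = [(300, 2300), (750, 1200), (350, 800)]
reduced = [(380, 2000), (650, 1300), (420, 950)]  # centralized speaker

print(vowel_space_area(control) > vowel_space_area(reduced))  # True
```

A homogeneously reduced vowel space shrinks this area; the asymmetry reported above would instead show up as the polygon contracting more along one formant axis than the other.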

    Jaw Rotation in Dysarthria Measured With a Single Electromagnetic Articulography Sensor

    Purpose: This study evaluated a novel method for characterizing jaw rotation using orientation data from a single electromagnetic articulography sensor. This method was optimized for clinical application, and a preliminary examination of clinical feasibility and value was undertaken. Method: The computational adequacy of the single-sensor orientation method was evaluated through comparisons with jaw-rotation histories calculated from dual-sensor positional data for 16 typical talkers. The clinical feasibility and potential value of single-sensor jaw rotation were assessed through comparisons of 7 talkers with dysarthria and 19 typical talkers in connected speech. Results: The single-sensor orientation method allowed faster and safer participant preparation, required lower data-acquisition costs, and generated less high-frequency artifact than the dual-sensor positional approach. All talkers with dysarthria, regardless of severity, demonstrated jaw-rotation histories with more numerous changes in movement direction and reduced smoothness compared with typical talkers. Conclusions: Results suggest that the single-sensor orientation method for calculating jaw rotation during speech is clinically feasible. Given the preliminary nature of this study and the small participant pool, the clinical value of such measures remains an open question. Further work must address the potential confound of reduced speaking rate on movement smoothness.
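One measure described above is the number of changes in movement direction in a jaw-rotation history. The sketch below shows one plausible way to count such direction reversals from a sampled angle trace; the function name and the angle data are invented, not the study's implementation.

```python
# Hypothetical sketch: given a jaw-rotation angle history (degrees) sampled
# from a single orientation sensor, count changes in movement direction --
# one proxy for reduced movement smoothness. Data are invented.

def direction_changes(angles):
    """Count sign reversals in the frame-to-frame angular velocity."""
    velocities = [b - a for a, b in zip(angles, angles[1:])]
    changes = 0
    prev_sign = 0
    for v in velocities:
        sign = (v > 0) - (v < 0)
        if sign != 0:
            if prev_sign != 0 and sign != prev_sign:
                changes += 1
            prev_sign = sign
    return changes

smooth_opening = [0.0, 1.0, 2.5, 4.0, 5.0, 5.5]        # one direction
jerky_opening  = [0.0, 1.2, 0.8, 2.0, 1.5, 3.0, 2.7]   # repeated reversals

print(direction_changes(smooth_opening))  # 0
print(direction_changes(jerky_opening))   # 5
```

Under this metric, the dysarthric jaw-rotation histories reported above would score higher (more reversals) than those of typical talkers.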

    Towards Improving The Evaluation Of Speech Production Deficits In Chronic Stroke

    One of the most devastating consequences of stroke is aphasia - a disorder that impairs communication across the domains of expressive and receptive language. In addition to language difficulties, stroke survivors may struggle with disruptions in speech motor planning and/or execution processes (i.e., a motor speech disorder, MSD). The clinical management of MSDs has been challenged by debates regarding their theoretical nature and clinical manifestations. This is especially true for differentiating speech production errors that can be attributed to aphasia (i.e., phonemic paraphasias) from lower-level motor planning/programming impairments (i.e., articulation errors that occur in apraxia of speech; AOS). Therefore, the purposes of this study were 1) to identify objective measures that have the greatest discriminative weight in diagnostic classification of AOS, and 2) using neuroimaging, to localize patterns of brain damage predictive of these behaviors. Method: Stroke survivors (N=58; 21 female; mean age=61.03±10.01; months post-onset=66.07±52.93) were recruited as part of a larger study. Participants completed a thorough battery of speech and language testing and underwent a series of magnetic resonance imaging (MRI) sequences. Objective, acoustic measures were obtained from three connected speech samples. These variables quantified inter-articulatory planning, speech rhythm and prosody, and speech fluency. The number of phonemic and distortion errors per sample was also quantified. All measures were analyzed for group differences, and variables were subjected to a linear discriminant analysis (LDA) to determine which served as the best predictor of AOS. MRI data were analyzed with voxel-based lesion-symptom mapping and connectome-symptom mapping to relate patterns of cortical necrosis and white matter compromise to different aspects of disordered speech.
    Results: Participants with both AOS and aphasia generally demonstrated significantly poorer performance across all production measures when compared to those with aphasia as their only impairment, and compared to those with no detectable speech or language impairment. The LDA model with the greatest classification accuracy correctly predicted 90.7% of cases. Neuroimaging analysis indicated that damage to mostly unique regions of the pre- and post-central gyri, the supramarginal gyrus, and white matter connections between these regions and subcortical structures was related to impaired speech production. Conclusions: Results support and build upon recent studies that have sought to improve the assessment of post-stroke speech production. Findings are discussed with regard to contemporary models of speech production, guided by the overarching goal of refining the clinical evaluation and theoretical explanations of AOS.
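The classification step above relies on linear discriminant analysis. The sketch below shows a minimal two-class Fisher linear discriminant over two features, with the 2x2 scatter-matrix inverse written out by hand; the feature names and all values are invented, not the study's model or data.

```python
# Hypothetical sketch of the discriminant-analysis step: a two-class Fisher
# linear discriminant on two invented acoustic measures (e.g., a rhythm
# metric and distortion errors per sample). Not the study's actual model.

def mean(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fisher_weights(class_a, class_b):
    """w = Sw^-1 (mean_a - mean_b), with the 2x2 inverse done by hand."""
    ma, mb = mean(class_a), mean(class_b)
    # Pooled within-class scatter matrix Sw.
    s = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((class_a, ma), (class_b, mb)):
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            s[0][0] += d[0] * d[0]; s[0][1] += d[0] * d[1]
            s[1][0] += d[1] * d[0]; s[1][1] += d[1] * d[1]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det], [-s[1][0] / det, s[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

# Invented feature vectors: [rhythm metric, distortion errors per sample].
aos     = [[0.62, 9.0], [0.58, 11.0], [0.65, 8.0]]
aphasia = [[0.41, 2.0], [0.38, 3.0], [0.45, 1.0]]

w = fisher_weights(aos, aphasia)
score = lambda x: w[0] * x[0] + w[1] * x[1]
# Cases project onto w; a threshold between the projected class means
# then yields the binary AOS / aphasia-only decision.
```

In practice one would use a library implementation and cross-validate, but the projection-then-threshold structure is the same idea behind the LDA classifier described above.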

    A computational model of the relationship between speech intelligibility and speech acoustics

    Speech intelligibility measures how well a speaker can be understood by a listener. Traditional measures of intelligibility, such as word accuracy, are not sufficient to reveal the reasons for intelligibility degradation. This dissertation investigates the underlying sources of intelligibility degradation from the perspectives of both the speaker and the listener. Segmental phoneme errors and suprasegmental lexical boundary errors are developed to reveal the perceptual strategies of the listener. A comprehensive set of automated acoustic measures is developed to quantify variations in the acoustic signal across three perceptual aspects: articulation, prosody, and vocal quality. The developed measures have been validated on a dysarthric speech dataset spanning a range of severities. Multiple regression analysis is employed to show that the developed measures can predict perceptual ratings reliably. The relationship between the acoustic measures and the listening errors is investigated to show the interaction between speech production and perception. The hypothesis is that the segmental phoneme errors are mainly caused by imprecise articulation, while the suprasegmental lexical boundary errors are due to unreliable phonemic information as well as abnormal rhythm and prosody patterns. To test the hypothesis, within-speaker variations are simulated in different speaking modes. Significant changes have been detected in both the acoustic signals and the listening errors. Results of the regression analysis support the hypothesis by showing that changes in the articulation-related acoustic features are important in predicting changes in listening phoneme errors, while changes in both the articulation- and prosody-related features are important in predicting changes in lexical boundary errors.
    Moreover, significant correlation was achieved in the cross-validation experiment, which indicates that it is possible to predict intelligibility variations from the acoustic signal. (Doctoral Dissertation, Speech and Hearing Science.)
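The regression analysis above relates changes in acoustic features to changes in listening errors. The sketch below shows the simplest version of that step, a closed-form least-squares fit with a single predictor; the variable names and all data points are invented for illustration.

```python
# Hypothetical sketch of the regression step: ordinary least squares with a
# single predictor, relating a change in an articulation-related acoustic
# feature to the change in listener phoneme-error rate. Data are invented.

def ols(xs, ys):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Invented per-speaker deltas between speaking modes:
# x = change in vowel-space measure, y = change in phoneme-error rate.
dx = [-0.30, -0.10, 0.00, 0.15, 0.40]
dy = [0.25, 0.10, 0.02, -0.05, -0.20]

slope, intercept = ols(dx, dy)
predict = lambda x: slope * x + intercept
print(slope < 0)  # True: shrinking vowel space predicts more errors
```

The dissertation's analysis is multiple regression over many features, but each coefficient answers the same question this toy fit does: how much a change in one acoustic measure moves the predicted listening error.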

    Improving the intelligibility of dysarthric speech using a time domain pitch synchronous-based approach

    Dysarthria is a motor speech impairment that reduces the intelligibility of speech. Observations indicate that, across different types of dysarthria, the fundamental frequency, intensity, and rate of speech are distinct from those of unimpaired speakers. Therefore, the proposed enhancement technique modifies these parameters so that they fall within the range of unimpaired speakers. The fundamental frequency and speech rate of dysarthric speech are modified using the time domain pitch synchronous overlap and add (TD-PSOLA) algorithm. Its intensity is then modified using a fast Fourier transform (FFT) and inverse fast Fourier transform (IFFT)-based approach. This technique is applied to impaired speech samples of ten dysarthric speakers. After enhancement, the change in intelligibility between the impaired and enhanced dysarthric speech is evaluated using the rating scale and word count methods. The improvement in intelligibility is significant for speakers whose original intelligibility was poor, whereas it is minimal for speakers whose intelligibility was already high. According to the rating scale method, the change in intelligibility across speakers ranges from 9% to 53%; according to the word count method, it ranges from 0% to 53%.
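The intensity-modification step above transforms the signal to the frequency domain, applies a gain, and inverts the transform. The sketch below illustrates that step with a naive O(N^2) DFT for clarity; the 6 dB gain and the test tone are invented, and a real system would use an optimized FFT.

```python
# Hypothetical sketch of the FFT/IFFT intensity-modification step: scale
# the signal's spectrum by a gain factor and invert the transform. A naive
# O(N^2) DFT is used for clarity; real systems would use an FFT library.
import math, cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

# A quiet test tone; boost its intensity by 6 dB in the spectral domain.
signal = [0.1 * math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
gain = 10 ** (6.0 / 20.0)
boosted = idft([gain * X for X in dft(signal)])

print(round(rms(boosted) / rms(signal), 2))  # 2.0
```

Because the DFT is linear, a uniform spectral gain is equivalent to time-domain amplification; the frequency-domain route becomes useful when the gain is made frequency-dependent, e.g. boosting only the bands where dysarthric speech is weak.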

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The MAVEBA Workshop proceedings, published for each edition of the workshop (held every two years), collect the scientific papers presented as oral and poster contributions during the conference. The main subjects are: the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images as a support to clinical diagnosis and the classification of vocal pathologies.

    Vowel Production in Down Syndrome: An Ultrasound Study

    The present study investigated the articulatory and acoustic characteristics of vowel production in individuals with Down syndrome (DS). Speech production deficits and reduced intelligibility are consistently noted in this population, attributed to any combination of phonological, structural, and/or motor control deficits. Speakers with DS have demonstrated impaired vowel production, as indicated by perceptual, acoustic, and articulatory data, with emerging evidence of vowel centralization. Participants in the study included eight young adults with DS, as well as eight age- and gender-matched typically developing (TD) controls. Ultrasound imaging was utilized to obtain midsagittal tongue contours during single-word productions, specifically targeting the corner vowels /ɑ/, /æ/, /i/, and /u/. Measurements of tongue shape, related to its curvature and vowel differentiation, were calculated and contrasted between the participant groups. Acoustic measures of vowel centralization and variability of production were applied to concurrent vowel data. Single-word intelligibility testing was also conducted for speakers with DS, to obtain intelligibility scores and to analyze error patterns. Results of the analyses demonstrated consistent differentiation of low vowel production between the two speaker groups, across both articulatory and acoustic measures. Speakers with DS exhibited reduced tongue shape curvature and/or complexity for the low vowels /ɑ/ and /æ/ and the high vowel /u/ relative to TD speakers, as well as some evidence of reduced differentiation among the tongue shapes of the four corner vowels. Acoustic analysis revealed a lack of group differentiation on some metrics of vowel centralization, while a reduction in acoustic space dispersion from a centroid was demonstrated for the low vowels in speakers with DS. Increased variability of acoustic data was also noted among speakers in the DS group in comparison to TD controls.
    Single-word intelligibility scores correlated strongly with measures of acoustic variability among speakers with DS, and moderately with measures of articulatory differentiation. Clinical implications, as related to understanding the nature of the impairment in DS and effective treatment planning, are discussed.
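One widely used centralization metric of the kind applied above is the Formant Centralization Ratio (FCR), which rises toward and above 1.0 as corner vowels drift toward the center of the formant space. The sketch below uses a three-vowel version with invented formant values, not the study's data or exact metric set.

```python
# Hypothetical sketch of one vowel-centralization metric: the Formant
# Centralization Ratio (FCR), computed from corner-vowel formants in Hz.
# FCR rises toward (and above) 1.0 as vowels centralize. Values invented.

def fcr(f):
    """f maps vowel -> (F1, F2) for corner vowels 'i', 'a', 'u'."""
    return ((f['u'][1] + f['a'][1] + f['i'][0] + f['u'][0]) /
            (f['i'][1] + f['a'][0]))

peripheral  = {'i': (300, 2300), 'a': (750, 1200), 'u': (350, 800)}
centralized = {'i': (400, 1900), 'a': (650, 1300), 'u': (450, 1000)}

print(fcr(peripheral) < fcr(centralized))  # True
```

The ratio's numerator collects formants that grow under centralization and its denominator those that shrink, which makes it more robust to between-speaker vocal-tract differences than raw vowel space area.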

    Automatic Screening of Childhood Speech Sound Disorders and Detection of Associated Pronunciation Errors

    Speech disorders in children can affect their fluency and intelligibility. Delay in their diagnosis and treatment increases the risk of social impairment and learning disabilities. With the significant shortage of Speech and Language Pathologists (SLPs), there is an increasing interest in Computer-Aided Speech Therapy tools with automatic detection and diagnosis capability. However, the scarcity and unreliable annotation of disordered child speech corpora, along with the high acoustic variability of child speech, have impeded the development of reliable automatic detection and diagnosis of childhood speech sound disorders. Therefore, this thesis investigates two types of detection systems that can be achieved with minimal dependence on annotated mispronounced speech data. First, a novel approach that adopts paralinguistic features, which represent the prosodic, spectral, and voice quality characteristics of speech, was proposed to perform segment- and subject-level classification of Typically Developing (TD) and Speech Sound Disordered (SSD) child speech using a binary Support Vector Machine (SVM) classifier. As paralinguistic features are both language- and content-independent, they can be extracted from an unannotated speech signal. Second, a novel Mispronunciation Detection and Diagnosis (MDD) approach was introduced to detect the pronunciation errors made due to SSDs and provide low-level diagnostic information that can be used in constructing formative feedback and a detailed diagnostic report. Unlike existing MDD methods, where detection and diagnosis are performed at the phoneme level, the proposed method achieves MDD at the speech attribute level, namely the manners and places of articulation. The speech attribute features describe the involved articulators and their interactions when making a speech sound, allowing a low-level description of the pronunciation error to be provided.
    Two novel methods to model speech attributes are further proposed in this thesis: a frame-based (phoneme-alignment) method leveraging the Multi-Task Learning (MTL) criterion and training a separate model for each attribute, and an alignment-free, jointly learnt method based on the Connectionist Temporal Classification (CTC) sequence-to-sequence criterion. The proposed techniques have been evaluated using standard and publicly accessible adult and child speech corpora, while the MDD method has been validated using L2 speech corpora.
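The attribute-level diagnosis described above can be pictured as comparing the manner and place of the target phoneme with those decoded from the child's production. The sketch below uses a tiny, invented attribute table and function name to show how such a comparison yields sub-phonemic feedback; it is not the thesis's actual inventory or system.

```python
# Hypothetical sketch of attribute-level diagnosis: compare the manner and
# place of articulation of the target phoneme against those decoded from
# the production, phrasing the error below the phoneme level. The tiny
# attribute table is illustrative only.

ATTRIBUTES = {
    's':  {'manner': 'fricative', 'place': 'alveolar'},
    'th': {'manner': 'fricative', 'place': 'dental'},
    't':  {'manner': 'stop',      'place': 'alveolar'},
    'k':  {'manner': 'stop',      'place': 'velar'},
}

def diagnose(target, produced):
    """Return the attribute dimensions on which the production deviates."""
    return [dim for dim in ('manner', 'place')
            if ATTRIBUTES[target][dim] != ATTRIBUTES[produced][dim]]

print(diagnose('s', 'th'))  # ['place']   (alveolar -> dental)
print(diagnose('s', 't'))   # ['manner']  (stopping: fricative -> stop)
print(diagnose('t', 'k'))   # ['place']   (backing: alveolar -> velar)
```

Reporting "manner changed from fricative to stop" rather than "/s/ was wrong" is what allows the formative feedback and diagnostic reports mentioned above to target the specific articulatory dimension in error.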

    Robust and language-independent acoustic features in Parkinson's disease

    Introduction: The analysis of vocal samples from patients with Parkinson's disease (PDP) can be relevant in supporting early diagnosis and disease monitoring. Intriguingly, speech analysis involves several complexities influenced by speaker characteristics (e.g., gender and language) and recording conditions (e.g., professional microphones or smartphones, supervised or non-supervised data collection). Moreover, the set of vocal tasks performed, such as sustained phonation, reading text, or monologue, strongly affects the speech dimension investigated, the features extracted, and, as a consequence, the performance of the overall algorithm. Methods: We employed six datasets, including a cohort of 176 Healthy Control (HC) participants and 178 PDP of different nationalities (i.e., Italian, Spanish, Czech), recorded in variable scenarios through various devices (i.e., professional microphones and smartphones), and performing several speech exercises (i.e., vowel phonation, sentence repetition). Aiming to identify the effectiveness of different vocal tasks and the trustworthiness of features independent of external co-factors such as language, gender, and data collection modality, we performed several intra- and inter-corpora statistical analyses. In addition, we compared the performance of different feature selection and classification models to identify the most robust and best-performing pipeline. Results: According to our results, the combined use of sustained phonation and sentence repetition should be preferred over a single exercise. As for the set of features, the Mel Frequency Cepstral Coefficients proved to be among the most effective parameters in discriminating between HC and PDP, even in the presence of heterogeneous languages and acquisition techniques.
    Conclusion: Although preliminary, the results of this work can be exploited to define a speech protocol that effectively captures vocal alterations while minimizing the effort required of the patient. Moreover, the statistical analysis identified a set of features minimally dependent on gender, language, and recording modality. This supports the feasibility of extensive cross-corpora tests to develop robust and reliable tools for disease monitoring, staging, and PDP follow-up.
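The language-robustness of Mel Frequency Cepstral Coefficients noted above rests partly on the mel frequency warping they build on. The sketch below shows the standard mel-scale conversion (the 2595/700 constants are the conventional HTK-style formula) and how it compresses equal linear-frequency steps at high frequencies; it is only the warping stage, not a full MFCC pipeline.

```python
# Hypothetical sketch of the frequency warping behind Mel Frequency
# Cepstral Coefficients: the mel scale compresses high frequencies,
# mimicking the ear's resolution. Standard HTK-style formula.
import math

def hz_to_mel(f_hz):
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Equal 500 Hz steps shrink on the mel axis as frequency grows.
for lo in (0, 2000, 6000):
    print(round(hz_to_mel(lo + 500) - hz_to_mel(lo), 1))
```

A full MFCC front end would follow this warping with a mel-spaced filterbank, log compression, and a discrete cosine transform; the perceptual spacing is one reason the coefficients transfer across languages and recording devices as reported above.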