
    The effects of intensive speech treatment on conversational intelligibility in Spanish speakers with Parkinson’s disease

    Purpose: To examine the effects of intensive speech treatment on the conversational intelligibility of Castilian Spanish speakers with Parkinson's disease (PD), as well as on the speakers' self-perceptions of disability. Method: Fifteen speakers with a medical diagnosis of PD participated in this study. Speech recordings were completed twice before treatment, immediately post-treatment, and at a one-month follow-up session. Conversational intelligibility was assessed in two ways: transcription accuracy scores and intelligibility ratings on a 9-point Likert scale. The Voice Handicap Index (VHI) was administered as a measure of self-perceived disability. Results: Group data revealed that transcription accuracy and median ease-of-understanding ratings increased significantly immediately post-treatment, with gains maintained at the one-month follow-up. The functional subscale of the VHI decreased significantly post-treatment, suggesting a reduction in perceived communication disability after speech treatment. Conclusion: These findings support the use of intensive voice treatment to improve conversational intelligibility in Spanish speakers with PD and dysarthria, as well as the speakers' perception of their daily communicative capabilities. Clinical and theoretical considerations are discussed.
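The abstract does not specify how transcription accuracy was scored. A minimal sketch of one common scheme, percent of target words correctly reproduced in a listener's transcript, is shown below; the function name and the exact matching rules are illustrative assumptions, not the study's protocol:

```python
from collections import Counter

def percent_words_correct(target: str, transcript: str) -> float:
    """Percent of target words reproduced in the transcript.

    Illustrative scoring only: case-insensitive, order-insensitive,
    each target word credited at most as often as it was transcribed.
    """
    target_words = target.lower().split()
    heard_words = transcript.lower().split()
    t, h = Counter(target_words), Counter(heard_words)
    correct = sum(min(t[w], h[w]) for w in t)
    return 100.0 * correct / len(target_words)
```

For example, a four-word sentence with three words transcribed correctly scores 75%. Real intelligibility scoring typically also normalizes spelling variants and homophones, which this sketch omits.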

    Automatic Assessment of Intelligibility in Noise in Parkinson Disease: Validation Study

    Background: Most individuals with Parkinson disease (PD) experience a degradation in their speech intelligibility. Research on the use of automatic speech recognition (ASR) to assess intelligibility is still sparse, especially when trying to replicate communication challenges in real-life conditions (ie, noisy backgrounds). Developing technologies to automatically measure intelligibility in noise can ultimately assist patients in self-managing their voice changes due to the disease. Objective: The goal of this study was to pilot-test and validate the use of a customized web-based app to assess speech intelligibility in noise in individuals with dysarthria associated with PD. Methods: In total, 20 individuals with dysarthria associated with PD and 20 healthy controls (HCs) recorded a set of sentences using their phones. The Google Cloud ASR API was used to automatically transcribe the speakers' sentences. An algorithm was created to embed speakers' sentences in +6-dB signal-to-noise multitalker babble. Results from ASR performance were compared to those from 30 listeners who orthographically transcribed the same set of sentences. Data were reduced into a single event, defined as a success if the artificial intelligence (AI) system transcribed a random speaker or sentence as well as or better than the average of 3 randomly chosen human listeners. These data were further analyzed by logistic regression to assess whether AI success differed by speaker group (HCs or speakers with dysarthria) or was affected by sentence length. A discriminant analysis was conducted on the human listener data and the AI transcriber data independently to compare the ability of each data set to discriminate between HCs and speakers with dysarthria. Results: The data analysis indicated a 0.8 probability (95% CI 0.65-0.91) that AI performance would be as good as or better than the average human listener. AI transcriber success probability was not found to be dependent on speaker group. AI transcriber success was found to decrease with sentence length, with an estimated 0.03 drop in the probability of transcribing as well as the average human listener for each additional word of sentence length. The AI transcriber data were found to offer the same discrimination of speakers into categories (HCs and speakers with dysarthria) as the human listener data. Conclusions: ASR has the potential to assess intelligibility in noise in speakers with dysarthria associated with PD. Our results hold promise for the use of AI with this clinical population, although a full range of speech severity needs to be evaluated in future work, as well as the effect of different speaking tasks on ASR.
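The study's algorithm for embedding sentences in +6-dB signal-to-noise multitalker babble is not detailed in the abstract. A minimal sketch of the standard approach, scaling the babble so that the ratio of signal power to noise power hits the target SNR before summing, is shown here; the function name, NumPy representation, and RMS-power-based gain are assumptions, not the authors' implementation:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, babble: np.ndarray, snr_db: float = 6.0) -> np.ndarray:
    """Add multitalker babble to speech at a target signal-to-noise ratio (dB).

    Sketch only: assumes 1-D float sample arrays at the same sampling rate.
    """
    # Tile the babble if it is shorter than the speech, then trim to length.
    if len(babble) < len(speech):
        babble = np.tile(babble, int(np.ceil(len(speech) / len(babble))))
    babble = babble[: len(speech)]
    # Mean power of each signal.
    p_speech = np.mean(speech.astype(float) ** 2)
    p_babble = np.mean(babble.astype(float) ** 2)
    # Gain chosen so that 10*log10(p_speech / (gain**2 * p_babble)) == snr_db.
    gain = np.sqrt(p_speech / (p_babble * 10 ** (snr_db / 10)))
    return speech.astype(float) + gain * babble.astype(float)
```

With this gain, measuring the power ratio of the clean speech to the scaled babble in the mixture recovers the requested +6 dB.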