
    Influence of TTS systems performance on reaction times in people with aphasia

    Text-to-speech (TTS) systems provide fundamental reading support for people with aphasia and reading difficulties. However, artificial voices are more difficult to process than natural voices. The current study is an extended analysis of the results of a clinical experiment investigating which of three artificial voices and a digitised human voice is most suitable for people with aphasia and reading impairments. These results show that the voice synthesised with Ogmios TTS, a concatenative speech synthesis system, caused significantly slower reaction times than the other three voices used in the experiment. The present study explores whether, and which, voice quality metrics are linked to the delayed reaction times. For this purpose, the voices were analysed using an automatic assessment of intelligibility, naturalness, and the jitter and shimmer voice quality parameters. This analysis revealed that Ogmios TTS generally performed worse than the other voices on all parameters. These observations could explain the significantly delayed reaction times in people with aphasia and reading impairments when listening to Ogmios TTS, and suggest that the choice of TTS for compensatory devices for these patients could be informed by an analysis of these voice parameters.
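    The jitter and shimmer parameters mentioned above have standard "local" definitions: the mean absolute difference between consecutive pitch periods (or cycle peak amplitudes), normalised by the mean. A minimal sketch, assuming the pitch periods and per-cycle peak amplitudes have already been extracted from the signal (the function names are illustrative, not from the study):

    ```python
    def local_jitter(periods):
        """Mean absolute difference of consecutive pitch periods,
        relative to the mean period (often reported as a percentage)."""
        diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
        return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

    def local_shimmer(amplitudes):
        """Mean absolute difference of consecutive cycle peak amplitudes,
        relative to the mean amplitude."""
        diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
        return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))
    ```

    A perfectly periodic voice yields zero jitter and shimmer; concatenative synthesis can raise both at unit boundaries, which is one plausible reason such metrics would track the processing difficulty reported here.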

    Communication profiles in severe aphasia: the roles of supportive strategies and of the communication partner

    In communication, aphasic persons with limited speech rely on supportive strategies and on the help of the communication partner. The RIJST is a new tool assessing both aspects of aphasic communication. In a group of patients with similarly severe verbal deficits, four different communication profiles were observed. These profiles differ both in the use of supportive strategies and in the amount of help needed from the partner. The results are highly relevant for communication therapy and offer insight into the discussion concerning the relation between verbal deficits and communicative abilities in severe aphasia.

    Melodic Intonation Therapy in subacute aphasia

    Melodic Intonation Therapy (MIT) is based on the observation that persons with severe nonfluent aphasia are often able to sing words or even short phrases they cannot produce during speech. MIT uses the melodic elements of speech, such as intonation and rhythm, to facilitate and improve language production. Although clinicians disagree about the usefulness of MIT, it has been translated into several languages and is frequently applied worldwide. Many studies have reported successful application of MIT. However, most are case studies of chronic patients without a control condition. Hence, the level of evidence for MIT is low, and little is known about its effect in earlier phases post stroke, when treatment interacts with processes of spontaneous recovery. We examined MIT in the subacute phase post stroke. The purpose of this multicenter study was threefold. First, we evaluated the efficacy of MIT in the subacute phase. Second, we examined the effect of the timing of MIT in this early phase post stroke. Third, we investigated potential determinants influencing therapy outcome.

    Insight into the neurophysiological processes of melodically intoned language with functional MRI

    Background: Melodic Intonation Therapy (MIT) uses the melodic elements of speech to improve language production in severe nonfluent aphasia. A crucial element of MIT is the melodically intoned auditory input: the patient listens to the therapist singing a target utterance. Such input of melodically intoned language facilitates production, whereas auditory input of spoken language does not. Methods: Using a sparse sampling fMRI sequence, we examined the differential auditory processing of spoken and melodically intoned language. Nineteen right-handed healthy volunteers performed an auditory lexical decision task in an event related design consisting of spoken and melodically intoned meaningful and meaningless items. The control conditions consisted of neutral utterances, either melodically intoned or spoken. Results: Irrespective of whether the items were normally spoken or melodically intoned, meaningful items showed greater activation in the supramarginal gyrus and inferior parietal lobule, predominantly in the left hemisphere. Melodically intoned language activated both temporal lobes rather symmetrically, as well as the right frontal lobe cortices, indicating that these regions are engaged in the acoustic complexity of melodically intoned stimuli. Compared to spoken language, melodically intoned language activated sensory motor regions and articulatory language networks in the left hemisphere, but only when meaningful language was used. Discussion: Our results suggest that the facilitatory effect of MIT may - in part - depend on an auditory input which combines melody and meaning. Conclusion: Combined melody and meaning provide a sound basis for the further investigation of melodic language processing in aphasic patients, and eventually the neurophysiological processes underlying MIT. 

    Communicating simply, but not too simply: Reporting of participants and speech and language interventions for aphasia after stroke

    Purpose: Speech and language pathology (SLP) for aphasia is a complex intervention delivered to a heterogeneous population within diverse settings. Simplistic descriptions of participants and interventions in research hinder replication, interpretation of results, and guideline and research development through secondary data analyses. This study aimed to describe the availability of participant and intervention descriptors in existing aphasia research datasets. Method: We systematically identified aphasia research datasets containing ≥10 participants with information on time since stroke and language ability. We extracted participant and SLP intervention descriptions and assessed the availability of data against historical and current reporting standards. We developed an extension to the Template for Intervention Description and Replication checklist to support meaningful classification and synthesis of the SLP interventions for secondary data analysis. Result: Of 11,314 identified records, we screened 1131 full texts and received 75 dataset contributions. We extracted data from 99 additional public domain datasets. Participant age (97.1%) and sex (90.8%) were commonly available. Prior stroke (25.8%), living context (12.1%) and socio-economic status (2.3%) were rarely available. Therapy impairment target, frequency and duration were most commonly available but predominantly described at group level. Home practice (46.3%) and tailoring (functional relevance 46.3%) were inconsistently available. Conclusion: Gaps in the availability of participant and intervention details were significant, hampering clinical implementation of evidence into practice and the development of our field of research. Improvements in the quality and consistency of participant and intervention data reported in aphasia research are required to maximise clinical implementation, replication in research and the generation of insights from secondary data analysis.
Systematic review registration: PROSPERO CRD4201811094