2 research outputs found

    Support for teaching and research on pathological voice and speech using the VOCALAB and DIADOLAB software

    In this article we describe the pedagogical and research aspects of the VOCALAB and DIADOLAB projects. We recall the motivations for developing objective voice and speech analysis tools, aimed primarily at practicing speech-language pathologists but also at students in initial training. The research approaches and the collective voice and speech database projects are described. Supporting the supervision of speech-language pathology theses required the construction of specific statistical tools that provide reference values from a large number of voice and speech cases. We also argue for the contribution of objective assessment software to the speech-language pathology profession as part of a scientific, evidence-based approach.
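
    As a rough illustration of how reference values can be derived from a large set of cases, here is a minimal Python sketch. It is not the VOCALAB/DIADOLAB code; the column names and figures are invented for the example, and percentile-based ranges are only one plausible choice of reference statistic.

    ```python
    # Hypothetical sketch: percentile-based reference values for an acoustic
    # parameter, computed per group from a table of recorded cases.
    import pandas as pd

    # Illustrative data only; in practice this would be loaded from a database
    # of voice/speech cases (column names here are assumptions).
    cases = pd.DataFrame({
        "sex": ["F", "F", "M", "M", "F", "M"],
        "mean_f0_hz": [210.0, 198.5, 118.2, 124.7, 225.3, 110.9],
    })

    # Reference ranges as 5th-95th percentiles, computed per group.
    ref = cases.groupby("sex")["mean_f0_hz"].quantile([0.05, 0.50, 0.95]).unstack()
    ref.columns = ["p05", "median", "p95"]
    print(ref)
    ```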

    Comparison of In-Person and Online Recordings in the Clinical Teleassessment of Speech Production: A Pilot Study.

    In certain circumstances, speech and language therapy is offered via telepractice as a practical alternative to in-person services. However, little is known about the minimum recording quality required for the teleassessment of motor speech disorders (MSD) with validated tools. The aim here is to examine the comparability of offline analyses based on speech samples acquired from three sources: (1) in-person recordings with high-quality equipment, serving as the baseline/gold standard; (2) in-person recordings with standard equipment; and (3) online recordings from videoconferencing. Speech samples were recorded simultaneously from these three sources in fifteen neurotypical speakers performing an MSD screening battery and were analyzed by three speech and language therapists. Intersource and interrater agreement were estimated with intraclass correlation coefficients on seventeen perceptual and acoustic parameters. While interrater agreement was excellent for most speech parameters, especially on high-quality in-person recordings, it decreased for online recordings. Intersource agreement was excellent for speech rate and mean fundamental frequency when comparing high-quality in-person recordings to the other conditions, but it was poor for voice parameters as well as for perceptual measures of intelligibility and articulation. Clinicians who plan to teleassess MSD should adapt their recording setup to the parameters they want to interpret reliably.
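
    The agreement analysis described above rests on intraclass correlation coefficients. As an illustration only (not the authors' code, with made-up scores and a single recording source), such an estimate can be obtained in Python with the pingouin package:

    ```python
    # Hypothetical sketch: interrater agreement via intraclass correlation,
    # using a long-format table with one row per (speaker, rater) rating.
    import pandas as pd
    import pingouin as pg

    # Invented ratings: 3 raters scoring one acoustic parameter for 5 speakers.
    data = pd.DataFrame({
        "speaker": [s for s in range(1, 6) for _ in range(3)],
        "rater":   ["SLT1", "SLT2", "SLT3"] * 5,
        "score":   [4.1, 4.0, 4.3,  3.2, 3.1, 3.4,  5.0, 4.8, 5.1,
                    2.9, 3.0, 2.8,  4.5, 4.6, 4.4],
    })

    # Two-way ICC models; the ICC2/ICC2k rows give absolute-agreement estimates.
    icc = pg.intraclass_corr(data=data, targets="speaker",
                             raters="rater", ratings="score")
    print(icc[["Type", "ICC", "CI95%"]])
    ```

    The same computation would be repeated per parameter and per recording source to compare intersource agreement, under the assumption that the data are fully crossed (every rater scores every speaker).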