
    Profiling risk factors for chronic uveitis in juvenile idiopathic arthritis: a new model for EHR-based research.

    Background: Juvenile idiopathic arthritis is the most common rheumatic disease in children. Chronic uveitis is a common and serious comorbid condition of juvenile idiopathic arthritis, with an insidious presentation and the potential to cause blindness. Knowledge of clinical associations will improve risk stratification. Based on clinical observation, we hypothesized that allergic conditions are associated with chronic uveitis in juvenile idiopathic arthritis patients.
    Methods: This retrospective cohort study used Stanford's clinical data warehouse, containing data from Lucile Packard Children's Hospital from 2000-2011, to analyze patient characteristics associated with chronic uveitis in a large juvenile idiopathic arthritis cohort. Clinical notes of patients under 16 years of age were processed via a validated text analytics pipeline. Bivariate-associated variables were entered into a multivariate logistic regression adjusted for age, gender, and race. Previously reported associations were evaluated to validate our methods. The main outcome measure was the presence of terms indicating allergy, or allergy-medication use, overrepresented in juvenile idiopathic arthritis patients with chronic uveitis. Residual text features were then used in unsupervised hierarchical clustering to compare clinical-text similarity between patients with and without uveitis.
    Results: Previously reported associations with uveitis in juvenile idiopathic arthritis patients (earlier age at arthritis diagnosis, oligoarticular-onset disease, antinuclear antibody status, history of psoriasis) were reproduced in our study. Use of allergy medications and terms describing allergic conditions were independently associated with chronic uveitis. The association with allergy drugs remained significant when adjusted for known associations (OR 2.54, 95% CI 1.22-5.4).
    Conclusions: This study shows the potential of applying a validated text analytics pipeline to clinical data warehouses to examine practice-based evidence when evaluating hypotheses formed during patient care. Our study reproduces four known associations with uveitis development in juvenile idiopathic arthritis patients, and reports a new association between allergic conditions and chronic uveitis in these patients.
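The odds ratio reported above comes from an adjusted multivariate model, but the crude version of the statistic is easy to reproduce from a 2x2 contingency table using the standard log-odds standard error. A minimal sketch with made-up counts, illustrative only (these are not the study's data, and a crude OR will generally differ from an adjusted one):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only:
or_, lo, hi = odds_ratio_ci(20, 80, 9, 91)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # prints: 2.53 1.09 5.87
```

A wide interval like this one is typical when the exposed-case cell is small; the study's own interval (1.22-5.4) has a similar shape.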

    Challenges of developing a digital scribe to reduce clinical documentation burden.

    Clinicians spend a large amount of time on clinical documentation of patient encounters, which often detracts from quality of care and clinician satisfaction and contributes to physician burnout. Advances in artificial intelligence (AI) and machine learning (ML) open the possibility of automating clinical documentation with digital scribes, which use speech recognition to eliminate manual documentation by clinicians or medical scribes. However, developing a digital scribe is fraught with problems due to the complex nature of clinical environments and clinical conversations. This paper identifies and discusses the major challenges associated with developing automated speech-based documentation in clinical settings: recording high-quality audio, converting audio to transcripts using speech recognition, inducing topic structure from conversation data, extracting medical concepts, generating clinically meaningful summaries of conversations, and obtaining clinical data for AI and ML algorithms.
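The challenges listed above map onto a staged pipeline. A minimal sketch of that shape, where every function body and name is a hypothetical placeholder standing in for a real component (recognizer, topic model, concept extractor, summarizer):

```python
from dataclasses import dataclass, field

@dataclass
class Encounter:
    audio: bytes = b""
    transcript: str = ""
    topics: list = field(default_factory=list)
    concepts: list = field(default_factory=list)
    summary: str = ""

def transcribe(audio: bytes) -> str:
    # Placeholder for a speech recognizer.
    return "patient reports chest pain for two days"

def segment_topics(transcript: str) -> list:
    # Placeholder topic induction: one topic per sentence here.
    return [s.strip() for s in transcript.split(".") if s.strip()]

def extract_concepts(transcript: str) -> list:
    # Placeholder concept extraction against a tiny toy vocabulary.
    vocab = {"chest pain", "fever", "cough"}
    return sorted(term for term in vocab if term in transcript)

def summarize(concepts: list) -> str:
    # Placeholder summary generation from extracted concepts.
    return "Findings: " + ", ".join(concepts)

def run_pipeline(audio: bytes) -> Encounter:
    enc = Encounter(audio=audio)
    enc.transcript = transcribe(enc.audio)
    enc.topics = segment_topics(enc.transcript)
    enc.concepts = extract_concepts(enc.transcript)
    enc.summary = summarize(enc.concepts)
    return enc
```

The point of the sketch is the error-propagation structure the paper describes: each stage consumes the previous stage's output, so recognition errors surface in every downstream artifact.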

    Medical Transcriptionist’s Experience with Speech Recognition Technology

    The medical transcription industry has been rapidly evolving in services and revenues over the last decade. This IT-enabled service (ITES) contributed the largest employment growth rate in the IT-BPO sector in 2013. The success of this industry was assisted by recent technology such as Speech Recognition Technology (SRT). Because such technologies depend on people, there is a need to study the experiences of the people behind those achievements. This paper addresses this gap by exploring Medical Transcriptionists' (MTs) experiences using SRT. Findings revealed at least five themes prevalent in the experiences of MTs: audio file classification, valuable characteristics, negative observations, technostress coping, and highest-quality orientation. This paper suggests that by looking at the experiences of MTs, current and future employers can gain insights into improving and enriching these outsourcing services. Furthermore, the presence of common themes indicates the possibility of developing a grounded theory for the substantive area of medical transcription.

    ASR Error Detection via Audio-Transcript Entailment

    Despite improved performance of the latest Automatic Speech Recognition (ASR) systems, transcription errors are still unavoidable. These errors can have a considerable impact in critical domains such as healthcare, where ASR is used to help with clinical documentation. Therefore, detecting ASR errors is a critical first step in preventing further error propagation to downstream applications. To this end, we propose a novel end-to-end approach for ASR error detection using audio-transcript entailment. To the best of our knowledge, we are the first to frame this problem as an end-to-end entailment task between an audio segment and its corresponding transcript segment. Our intuition is that there should be bidirectional entailment between audio and transcript when there is no recognition error, and vice versa. The proposed model uses an acoustic encoder and a linguistic encoder to model the speech and the transcript, respectively. The encoded representations of both modalities are fused to predict entailment. Since doctor-patient conversations are used in our experiments, particular emphasis is placed on medical terms. Our proposed model achieves classification error rates (CER) of 26.2% on all transcription errors and 23% on medical errors specifically, improving upon a strong baseline by 12% and 15.4%, respectively. Comment: Accepted to Interspeech 202
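The classification error rate (CER) used above is simply the fraction of audio-transcript pairs whose entailment label is predicted incorrectly. A minimal sketch with invented predictions and labels, illustrative only:

```python
def classification_error_rate(predictions, labels):
    """Fraction of segments whose entailment label is predicted wrongly."""
    assert len(predictions) == len(labels) and labels
    wrong = sum(p != y for p, y in zip(predictions, labels))
    return wrong / len(labels)

# 1 = entailed (no recognition error), 0 = not entailed (error present).
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
labels = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(classification_error_rate(preds, labels))  # prints: 0.2
```

On this toy example, two of ten segments are misclassified, giving a CER of 20%; the paper's 26.2% is the same quantity computed over its test set.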

    University music students' use of cognitive strategies in transcribing figured bass dictation and the possible influence of memory span on their performance

    Music aural skills, partly developed during ear training (ET) courses, are fundamental to musicians' training in order to develop inner audition (Rogers, 1984). Authors agree on the importance of good ear development as the basis for all musical progress and activity, such as listening and performing (Elliot, 1993; Hallam & Bautista, 2012; Karpinski, 2000; King & Brook, 2016; Lake, 1993; Langer, 1953; McPherson, Bailey & Sinclair, 1997; Rogers, 1984; Rogers, 2013; Upitis, Abrami & Varela, 2016). Musical dictation transcription, one of the most widely used means of developing inner audition, is a challenge faced by many struggling students (Cruz de Menezes, 2010; Hedges, 1999; Hoppe, 1991). Despite the importance of this task, the underlying processes are not yet fully understood, especially those related to figured bass dictation. This poses an abiding challenge for teachers. A better understanding of the mental processes students engage during dictation tasks, and of how students deploy those processes, could provide teachers with solutions. Results might suggest which pedagogical approaches to privilege, and which strategies might be effective in helping students overcome their difficulties.
    To fill the gap in this field, this research was designed with six main objectives: a) list and count the strategies used by students at the beginning of their university education; b) categorize the strategies; c) identify the most used and most effective strategies; d) analyze other cognitive factors that may influence strategy use, such as musical and non-musical auditory memory span; e) analyze the impact of strategy use and other variables on dictation performance; f) evaluate whether dictation strategies and results change after one session of ET courses. To reach these objectives, 66 students starting first-year university music courses participated in this study. They described the strategies they used during figured bass dictations, took two memory tests (musical and non-musical), and answered a questionnaire indicating their gender, instrument, musical genre, and the duration of their musical studies. First, this research allowed us to thoroughly list and categorize the strategies used to solve figured bass dictations. Second, using correlations, analyses of variance and covariance, regressions, and t-tests, this study enabled us to understand the relationship of strategies to performance in harmonic dictation, and to verify whether strategy use and dictation performance changed over time after taking university ear training courses. Moreover, we verified the relation between dictation performance and auditory memory capacities, as well as other variables such as instrument and age at the start of musical studies. This thesis is organized into four chapters: Chapter 1 presents a literature review; Chapter 2, the methodology; Chapter 3, all qualitative and quantitative analyses done in response to the research questions; and the last chapter, the discussion of results and conclusion.

    Improving speech recognition accuracy for clinical conversations

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 73-74).
    Accurate and comprehensive data form the lifeblood of health care. Unfortunately, there is much evidence that current data collection methods sometimes fail. Our hypothesis is that it should be possible to improve the thoroughness and quality of information gathered through clinical encounters by developing a computer system that (a) listens to a conversation between a patient and a provider, (b) uses automatic speech recognition technology to transcribe that conversation to text, (c) applies natural language processing methods to extract the important clinical facts from the conversation, (d) presents this information in real time to the participants, permitting correction of errors in understanding, and (e) organizes those facts into an encounter note that could serve as a first draft of the note produced by the clinician. In this thesis, we present our attempts to measure the performance of two state-of-the-art automatic speech recognizers (ASRs) on the task of transcribing clinical conversations, and explore potential ways of optimizing these software packages for this specific task.
In the course of this thesis, we have (1) introduced a new method for quantitatively measuring the difference between two language models and showed that conversational and dictated speech have different underlying language models, (2) measured the perplexity of clinical conversations and dictations and shown that spontaneous speech has a higher perplexity than dictated speech, (3) improved speech recognition accuracy through language model adaptation using a conversational corpus, and (4) introduced a fast and simple algorithm for cross-talk elimination in two-speaker settings.
    by Burkay Gür. M.Eng.
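Point (2) rests on the standard definition of perplexity: the exponential of the average negative log-probability a model assigns to held-out text. A minimal sketch using a Laplace-smoothed unigram model and two toy sentences (conversational vs. dictated register, illustrative only):

```python
import math
from collections import Counter

def unigram_perplexity(train_tokens, test_tokens, vocab_size, alpha=1.0):
    """Perplexity of a Laplace-smoothed unigram model on held-out tokens."""
    counts = Counter(train_tokens)
    total = len(train_tokens)
    log_prob = 0.0
    for tok in test_tokens:
        p = (counts[tok] + alpha) / (total + alpha * vocab_size)
        log_prob += math.log(p)
    return math.exp(-log_prob / len(test_tokens))

conv = "uh so the pain uh started yesterday".split()
dict_ = "the patient reports pain onset yesterday".split()
vocab = len(set(conv + dict_))
# A model trained on one register, evaluated on the other:
print(unigram_perplexity(conv, dict_, vocab))
```

A model evaluated on text from its own register scores a lower perplexity than on text from the other register, which is the mismatch the thesis measures between conversational and dictated clinical speech.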