9 research outputs found

    Розподілене комп’ютерне документування голосових мовних фонограм (Distributed computer documentation of voice speech phonograms)

    Get PDF
    An approach to distributed computer documentation of speech phonograms of meetings is proposed. An analysis of the subject domain is carried out and the problem statement is formulated. A logical model of the information system (IS) is constructed. A prototype of the IS is implemented, and the research required to create an industrial version of the IS is outlined.

    Improving speech recognition accuracy for clinical conversations

    Get PDF
    Thesis (M. Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 73-74).

    Accurate and comprehensive data form the lifeblood of health care. Unfortunately, there is much evidence that current data collection methods sometimes fail. Our hypothesis is that it should be possible to improve the thoroughness and quality of information gathered through clinical encounters by developing a computer system that (a) listens to a conversation between a patient and a provider, (b) uses automatic speech recognition technology to transcribe that conversation to text, (c) applies natural language processing methods to extract the important clinical facts from the conversation, (d) presents this information in real time to the participants, permitting correction of errors in understanding, and (e) organizes those facts into an encounter note that could serve as a first draft of the note produced by the clinician. In this thesis, we present our attempts to measure the performance of two state-of-the-art automatic speech recognizers (ASRs) on the task of transcribing clinical conversations, and explore potential ways of optimizing these software packages for this specific task.

    In the course of this thesis, we have (1) introduced a new method for quantitatively measuring the difference between two language models and shown that conversational and dictated speech have different underlying language models, (2) measured the perplexity of clinical conversations and dictations and shown that spontaneous speech has a higher perplexity than dictated speech, (3) improved speech recognition accuracy through language-model adaptation using a conversational corpus, and (4) introduced a fast and simple algorithm for cross-talk elimination in two-speaker settings.

    by Burkay Gür. M.Eng.
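    The perplexity comparison described in this abstract can be illustrated with a minimal sketch. This is not the thesis's actual models or corpora; it assumes a simple add-alpha smoothed unigram model, where perplexity is the exponential of the average negative log-likelihood per token:

    ```python
    import math
    from collections import Counter

    def unigram_perplexity(train_tokens, test_tokens, vocab_size, alpha=1.0):
        # Add-alpha smoothed unigram model: p(w) = (count(w) + alpha) / (N + alpha * V)
        counts = Counter(train_tokens)
        total = len(train_tokens)
        log_prob = 0.0
        for w in test_tokens:
            p = (counts[w] + alpha) / (total + alpha * vocab_size)
            log_prob += math.log(p)
        # Perplexity = exp(average negative log-likelihood per token)
        return math.exp(-log_prob / len(test_tokens))

    # A corpus that reuses few word types (like formulaic dictation) yields
    # lower perplexity than a varied, spontaneous one under the same model.
    train = ["the", "patient", "reports", "pain"] * 25
    print(unigram_perplexity(train, ["the", "patient"], vocab_size=4))
    ```

    Higher perplexity means the model is, on average, less certain about each next token, which is the sense in which the thesis finds spontaneous conversational speech harder than dictation.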

    Veröffentlichungen und Vorträge 2004 der Mitglieder der Fakultät für Informatik

    Get PDF

    Issues in Meeting Transcription -- The ISL Meeting Transcription System

    Get PDF
    This paper describes the Interactive Systems Lab's Meeting transcription system, which performs segmentation, speaker clustering as well as transcription of conversational meeting speech. The system described here was evaluated in NIST's RT04S "Meeting" speech evaluation and reached the lowest word error rates for the distant microphone conditions. Also, w…
