9 research outputs found

    Automatic detection of hyperarticulated speech

    Hyperarticulation is a speech adaptation that consists of adopting a clearer form of speech in an attempt to improve recognition levels. However, it has the opposite effect when talking to ASR systems, as they are not trained on this kind of speech. We present approaches for the automatic detection of hyperarticulation, which can be used to improve the performance of spoken dialog systems. We performed experiments on Let’s Go data, using multiple feature sets and two classification approaches. Many relevant features are speaker-dependent, so we used the first turn in each dialog as the reference for the speaker, since it is typically not hyperarticulated. Our best results were above 80% accuracy, an improvement of at least 11.6 percentage points over previously obtained results on similar data. We also assessed the classifiers’ performance in scenarios where hyperarticulation is rare, achieving around 98% accuracy using different confidence thresholds.
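    The abstract gives no implementation details, but the first-turn normalization idea can be illustrated with a minimal sketch; the feature names, threshold, and values below are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np

def first_turn_relative(features: np.ndarray) -> np.ndarray:
    """Express each turn's features as ratios to the first turn of the dialog,
    which is assumed to be the speaker's non-hyperarticulated baseline."""
    return features / (features[0] + 1e-9)

def is_hyperarticulated(turn_rel: np.ndarray, threshold: float = 1.3) -> bool:
    """Flag a turn whose features all exceed the baseline by a margin.
    The threshold value is an illustrative assumption."""
    return bool(np.all(turn_rel > threshold))

# Toy dialog: 4 turns x 3 prosodic features (duration, pitch range, energy).
dialog = np.array([
    [1.0, 1.0, 1.0],   # first turn: per-speaker baseline
    [1.1, 0.9, 1.0],   # ordinary turn
    [1.6, 1.5, 1.4],   # slower, wider pitch, louder: likely hyperarticulated
    [1.0, 1.1, 0.95],
])
rel = first_turn_relative(dialog)
for i, turn in enumerate(rel[1:], start=1):
    print(f"turn {i}: hyperarticulated={is_hyperarticulated(turn)}")
```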

    Incorporating a User Model to Improve Detection of Unhelpful Robot Answers

    Dialogues with robots frequently exhibit social dialogue acts such as greeting, thanks, and goodbye. This opens the opportunity of using these dialogue acts for dialogue management, in particular for detecting misunderstandings. Our corpus analysis shows that the social dialogue acts have different scopes of association with discourse features within the dialogue: a greeting in the user’s first turn is associated with such distant, or global, features as the likelihood of having questions answered, persistence, and ending with bye. The user’s thanks turn, on the other hand, is strongly associated with the helpfulness of the robot’s preceding answer. We therefore interpret the greeting as a component of a user model that can provide information about the user’s traits and be associated with discourse features at various stages of the dialogue. We conduct a detailed analysis of the user’s thanking behavior and demonstrate that the user’s thanks can be used to detect unhelpful robot answers. Incorporating the greeting information further improves the detection. We discuss possible applications of this work for human-robot dialogue management.
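    A toy sketch of how a greeting-based user-model feature might be combined with the thanks cue to score an answer as unhelpful; the cues and weights are illustrative assumptions, not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class TurnCues:
    user_greeted_first_turn: bool   # global user-model feature
    user_thanked_after_answer: bool # local cue tied to the preceding answer

def answer_unhelpful_score(cues: TurnCues) -> float:
    """Toy linear score: absence of thanks raises suspicion, and a greeting in
    the first turn (a socially cooperative user) strengthens that evidence.
    The weights are illustrative, not taken from the paper."""
    score = 0.0
    if not cues.user_thanked_after_answer:
        score += 0.6
    if cues.user_greeted_first_turn and not cues.user_thanked_after_answer:
        score += 0.2  # cooperative users usually thank helpful answers
    return score

print(answer_unhelpful_score(TurnCues(True, False)))  # 0.8: likely unhelpful
print(answer_unhelpful_score(TurnCues(False, True)))  # 0.0: likely helpful
```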

    USER-AWARENESS AND ADAPTATION IN CONVERSATIONAL AGENTS

    This paper considers the research question of developing user-aware and adaptive conversational agents. A conversational agent is user-aware to the extent that it recognizes the user’s identity and his/her emotional states that are relevant in a given interaction domain. It is user-adaptive to the extent that it dynamically adapts its dialogue behavior to the user and his/her emotional state. The paper summarizes some aspects of our previous work and presents work in progress in the field of speech-based human-machine interaction. It focuses particularly on the development of speech recognition modules in cooperation with modules for emotion recognition and speaker recognition, as well as the dialogue management module. Finally, it proposes an architecture of a conversational agent that integrates those modules and improves each of them by exploiting synergies among them.
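    A minimal sketch of how such module integration might be wired together, with stub components standing in for the speech, speaker, and emotion recognizers; all names and behaviors are assumptions for illustration, not the paper's architecture.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class UserState:
    speaker_id: str
    emotion: str

class Recognizer(Protocol):
    def transcribe(self, audio: bytes) -> str: ...

class StubASR:
    """Stand-in for the speech recognition module."""
    def transcribe(self, audio: bytes) -> str:
        return "book a table for two"

def identify_speaker(audio: bytes) -> str:
    return "user-42"   # stand-in for a speaker-recognition model

def recognize_emotion(audio: bytes) -> str:
    return "angry"     # stand-in for an emotion classifier

def dialogue_policy(text: str, state: UserState) -> str:
    """Adapt the system's reply to the recognized user and emotional state."""
    prefix = "I'm sorry this is frustrating. " if state.emotion == "angry" else ""
    return f"{prefix}[{state.speaker_id}] handling request: '{text}'"

asr: Recognizer = StubASR()
audio = b"<raw audio>"  # placeholder input
state = UserState(identify_speaker(audio), recognize_emotion(audio))
print(dialogue_policy(asr.transcribe(audio), state))
```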

    cROVER: Context-augmented Speech Recognizer based on Multi-Decoders' Output

    The growing need for designing and implementing reliable voice-based human-machine interfaces has inspired intensive research work in the field of voice-enabled systems, and greater robustness and reliability are being sought for those systems. Speech recognition has become ubiquitous: automated call centers, smart phones, and dictation and transcription software are among the many systems currently being designed that involve speech recognition. The need for highly accurate and optimized recognizers has never been more crucial. The research community is actively developing techniques to combine existing feature extraction methods for better and more reliable information capture from the analog signal, and to enhance language and acoustic modeling procedures to better adapt to unseen or distorted speech signal patterns. Most researchers agree that one of the most promising approaches to reducing the Word Error Rate (WER) in large-vocabulary speech transcription is to combine two or more speech recognizers and generate a new output, in the expectation that it provides a lower error rate. The research work proposed here aims at further boosting the performance of the well-known Recognizer Output Voting Error Reduction (ROVER) combination technique by integrating it with an error filtering approach. The proposed system is referred to as cROVER, for context-augmented ROVER. The principal idea is to flag erroneous words, following the combination of the word transition networks, through a scanning process at each slot of the resulting network. This step aims at eliminating some transcription errors and thus facilitating the voting process within ROVER. The error detection technique consists of spotting semantic outliers in a given decoder’s transcription output. Because most error detection techniques suffer from a high false-positive rate, we propose to combine error filtering techniques to compensate for the poor performance of each individual error classifier. Experimental results have shown that the proposed cROVER approach is able to reduce the relative WER by almost 10% through adequate combination of speech decoders. The approaches proposed here are generic enough to be used with any number of speech decoders and with any type of error filtering technique. A novel voting mechanism has also been proposed. The new confidence-based voting scheme was inspired by the cROVER approach: the main idea consists of using the confidence scores collected from the contextual analysis during the scoring of each word in the transition network. The new voting scheme outperformed ROVER’s original voting by up to 16% in terms of relative WER reduction.
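    A minimal sketch of confidence-based voting at one slot of a word transition network, in the spirit of the scheme described above; the mixing weight and scores are illustrative assumptions, not the thesis's actual formula.

```python
from collections import defaultdict

def confidence_vote(slot: list[tuple[str, float]], alpha: float = 0.5) -> str:
    """Pick the word for one transition-network slot by mixing raw vote counts
    with contextual confidence scores, ROVER-style. Each (word, confidence)
    pair comes from one decoder; alpha balances counts against confidences."""
    votes: dict[str, float] = defaultdict(float)
    for word, conf in slot:
        votes[word] += alpha * 1.0 + (1 - alpha) * conf
    return max(votes, key=votes.get)

# Three decoders' aligned outputs for one slot, with contextual confidences.
slot = [("recognize", 0.9), ("wreck", 0.2), ("recognize", 0.7)]
print(confidence_vote(slot))  # -> "recognize"
```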

    Audiovisual prosody in interaction


    Predicting Automatic Speech Recognition Performance Using Prosodic Cues

    In spoken dialogue systems, it is important for a system to know how likely a speech recognition hypothesis is to be correct, so it can reprompt for fresh input or, in cases where many errors have occurred, change its interaction strategy or switch the caller to a human attendant. We have discovered prosodic features which more accurately predict when a recognition hypothesis contains a word error than the acoustic confidence score thresholds traditionally used in automatic speech recognition. We present analytic results indicating that there are significant prosodic differences between correctly and incorrectly recognized turns in the TOOT train information corpus. We then present machine learning results showing how the use of prosodic features to automatically predict correctly versus incorrectly recognized turns improves over the use of acoustic confidence scores alone.
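    A toy sketch contrasting a prosodic-feature classifier with a confidence-threshold baseline; the features, values, and learner are illustrative assumptions, not the TOOT corpus or the paper's models.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy turns: [f0_max, rms_energy, duration_s, asr_confidence]. The feature
# choice mirrors the kind of prosodic cues discussed above, but the data
# and model here are purely illustrative.
X = np.array([
    [220, 0.30, 1.2, 0.90],
    [310, 0.55, 2.8, 0.85],  # loud, long turn misrecognized despite high conf
    [200, 0.25, 1.0, 0.40],
    [330, 0.60, 3.1, 0.75],
])
y = np.array([0, 1, 1, 1])   # 1 = hypothesis contained a word error

# Learn from prosody only (first three columns), vs. thresholding confidence.
prosodic = DecisionTreeClassifier(random_state=0).fit(X[:, :3], y)
baseline = (X[:, 3] < 0.8).astype(int)
print("prosodic:", prosodic.predict(X[:, :3]), "baseline:", baseline)
```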

    Toward Widely-Available and Usable Multimodal Conversational Interfaces

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 159-166). By Alexander Gruenstein.
    Multimodal conversational interfaces, which allow humans to interact with a computer using a combination of spoken natural language and a graphical interface, offer the potential to transform the manner by which humans communicate with computers. While researchers have developed myriad such interfaces, none have made the transition out of the laboratory and into the hands of a significant number of users. This thesis makes progress toward overcoming two intertwined barriers preventing more widespread adoption: availability and usability. Toward addressing the problem of availability, this thesis introduces a new platform for building multimodal interfaces that makes it easy to deploy them to users via the World Wide Web. One consequence of this work is City Browser, the first multimodal conversational interface made publicly available to anyone with a web browser and a microphone. City Browser serves as a proof of concept that significant amounts of usage data can be collected in this way, allowing a glimpse of how users interact with such interfaces outside of a laboratory environment. City Browser, in turn, has served as the primary platform for deploying and evaluating three new strategies aimed at improving usability. The most pressing usability challenge for conversational interfaces is their limited ability to accurately transcribe and understand spoken natural language. The three strategies developed in this thesis - context-sensitive language modeling, response confidence scoring, and user behavior shaping - each attack the problem from a different angle, but they are linked in that each critically integrates information from the conversational context.
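    A toy sketch of what a response confidence score that integrates conversational context might look like; the inputs, weights, and thresholds are illustrative assumptions, not the thesis's actual model.

```python
def response_confidence(asr_conf: float, parsed_fraction: float,
                        context_match: float) -> float:
    """Toy response-confidence score combining recognizer confidence, the
    fraction of the utterance the parser covered, and agreement with the
    dialogue context. All weights are illustrative assumptions."""
    return 0.5 * asr_conf + 0.3 * parsed_fraction + 0.2 * context_match

score = response_confidence(asr_conf=0.72, parsed_fraction=0.9, context_match=1.0)
# A system might answer outright above 0.8, confirm between 0.5 and 0.8,
# and re-prompt below 0.5.
action = "answer" if score > 0.8 else "confirm" if score > 0.5 else "reprompt"
print(f"{score:.2f} -> {action}")
```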

    Applications of Discourse Structure for Spoken Dialogue Systems

    Language exhibits structure beyond the sentence level (e.g. the syntactic structure of a sentence). In particular, dialogues, whether human-human or human-computer, have an inherent structure called the discourse structure. Models of discourse structure attempt to explain why one sequence of utterances forms a coherent dialogue while another forms no dialogue at all. Due to the relatively simple structure of the dialogues that occur in the information-access domains of typical spoken dialogue systems (e.g. travel planning), discourse structure has often seen limited application in such systems. In this research, we investigate the utility of discourse structure for spoken dialogue systems in more complex domains, e.g. tutoring. This work was driven by two intuitions. First, we believed that the "position in the dialogue" is a critical information source for two tasks: performance analysis and characterization of dialogue phenomena. We define this concept using transitions in the discourse structure. For performance analysis, these transitions are used to create a number of novel factors which we show to be predictive of system performance. One of these factors informs a promising modification of our system, which is implemented and compared with the original version of the system through a user study. Results show that the modification leads to objective improvements. For characterization of dialogue phenomena, we find statistical dependencies between discourse structure transitions and two dialogue phenomena, which allow us to speculate where and why these phenomena occur and to better understand system behavior. Second, we believed that users would benefit from direct access to discourse structure information. We enable this through a graphical representation of discourse structure called the Navigation Map. We demonstrate the subjective and objective utility of the Navigation Map through two user studies. Overall, our work demonstrates that discourse structure is an important information source for designers of spoken dialogue systems.
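    A minimal sketch of deriving performance-analysis factors from discourse structure transitions, given each turn's depth in the discourse hierarchy; the transition names and data are illustrative assumptions, not the paper's factor set.

```python
from collections import Counter

def transition_factors(depths: list[int]) -> Counter:
    """Summarize a dialogue's path through the discourse structure by counting
    transition types between consecutive turns, where each turn is annotated
    with its depth in the discourse hierarchy."""
    counts: Counter = Counter()
    for prev, cur in zip(depths, depths[1:]):
        if cur > prev:
            counts["push"] += 1      # descend into a subtopic
        elif cur < prev:
            counts["pop"] += 1       # return from a subtopic
        else:
            counts["advance"] += 1   # stay at the same level
    return counts

# Toy tutoring dialogue: depths of successive turns in the discourse tree.
print(transition_factors([1, 2, 2, 3, 2, 1, 1]))
# Counter({'push': 2, 'pop': 2, 'advance': 2})
```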