
    Identifying problematic dialogs in a human-computer dialog system

    In this thesis, we present the development of an automatic system that identifies problematic dialogues in the context of a Human-Computer Dialog System (HCDS). The system we developed is an application in the pattern classification domain. We propose a probabilistic approach that predicts user satisfaction for each turn of the dialogue; all of the features used by the system are extracted automatically from the utterance. A robust and fast machine learning scheme, the Hidden Markov Model (HMM), is used to build the classifier. To evaluate system performance, we experimented on two publicly distributed corpora, DARPA Communicator 2000 and 2001, using 10-fold stratified cross-validation. Our results show that the system could be used in real-life applications.
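    As a rough illustration of the approach the abstract describes (turn-level feature sequences, an HMM classifier, and 10-fold stratified cross-validation), the sketch below trains one Gaussian HMM per dialogue class and labels a dialogue by which class model gives the higher likelihood. It assumes hmmlearn and scikit-learn; the helper names, feature vectors, and data layout are hypothetical and are not taken from the thesis.

    # Minimal sketch, assuming hmmlearn and scikit-learn; dialogue features and
    # helper names are hypothetical, not the thesis implementation.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM
    from sklearn.model_selection import StratifiedKFold

    def fit_class_hmm(sequences, n_states=3):
        # Train one HMM on the concatenated turn-feature sequences of one class.
        X = np.vstack(sequences)
        lengths = [len(s) for s in sequences]
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        return model

    def classify(dialogue, models):
        # Assign the class whose HMM gives the highest log-likelihood.
        return max(models, key=lambda label: models[label].score(dialogue))

    def cross_validate(dialogues, labels, n_splits=10):
        # dialogues: list of (n_turns, n_features) arrays; labels: 1 = problematic, 0 = ok.
        labels = np.asarray(labels)
        skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
        accuracies = []
        for train_idx, test_idx in skf.split(np.zeros(len(labels)), labels):
            models = {
                c: fit_class_hmm([dialogues[i] for i in train_idx if labels[i] == c])
                for c in np.unique(labels[train_idx])
            }
            preds = np.asarray([classify(dialogues[i], models) for i in test_idx])
            accuracies.append(np.mean(preds == labels[test_idx]))
        return float(np.mean(accuracies))

    Classifying by comparing class-conditional HMM likelihoods is one common way to use HMMs for sequence classification; the thesis may differ in model topology and features.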

    Automatic Detection of Poor Speech Recognition at the Dialogue Level

    The dialogue strategies used by a spoken dialogue system strongly influence performance and user satisfaction. An ideal system would not use a single fixed strategy, but would instead adapt to the circumstances at hand. To do so, a system must be able to identify dialogue properties that suggest adaptation. This paper focuses on identifying situations in which the speech recognizer is performing poorly. We adopt a machine learning approach, learning rules from a dialogue corpus that identify these situations. Our results show a significant improvement over the baseline and illustrate that both lower-level acoustic features and higher-level dialogue features can affect the performance of the learning algorithm.
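
    The sketch below illustrates the general recipe of learning classification rules from a labelled dialogue corpus and comparing them to a majority-class baseline. It uses a scikit-learn decision tree as a stand-in for the paper's rule learner, and the feature names (asr_confidence, n_reprompts, and so on) are hypothetical placeholders for the lower-level acoustic and higher-level dialogue features the abstract mentions.

    # Minimal sketch, assuming scikit-learn; the rule learner and feature names
    # are stand-ins, not the paper's actual setup.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier, export_text

    FEATURES = [
        "asr_confidence",      # lower-level acoustic/ASR feature
        "utterance_duration",  # lower-level feature
        "n_reprompts",         # higher-level dialogue feature
        "n_help_requests",     # higher-level dialogue feature
    ]

    def evaluate(X, y):
        # Compare the learned ruleset against the majority-class baseline via 10-fold CV.
        y = np.asarray(y)
        baseline = max(np.mean(y), 1.0 - np.mean(y))
        clf = DecisionTreeClassifier(max_depth=4, random_state=0)
        accuracy = cross_val_score(clf, X, y, cv=10).mean()
        return baseline, accuracy

    def learn_and_print_rules(X, y):
        # Fit on the full corpus and print the induced if-then rules for inspection.
        clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
        print(export_text(clf, feature_names=FEATURES))
        return clf

    Printing the induced tree as if-then rules mirrors the kind of human-readable output a rule learner produces, which is what makes this family of methods attractive for diagnosing poor recognition.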