
    Detecting Inappropriate Clarification Requests in Spoken Dialogue Systems

    Spoken Dialogue Systems ask for clarification when they believe they have misunderstood the user. Such requests may differ depending on the information the system believes it needs to clarify. However, when the error type or location is misidentified, clarification requests appear confusing or inappropriate. We describe a classifier that identifies inappropriate requests, trained on features extracted from user responses in laboratory studies. This classifier achieves 88.5% accuracy and a .885 F-measure in detecting such requests.
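    As a rough illustration of the reported metrics, the F-measure for such a binary "inappropriate clarification request" classifier is the harmonic mean of precision and recall. The sketch below uses toy labels, not the authors' data or code:

```python
# Minimal sketch: accuracy-style counts and F-measure for a binary
# classifier of inappropriate clarification requests.
# The gold/pred labels are hypothetical, for illustration only.

def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (F1)."""
    return 2 * precision * recall / (precision + recall)

# Toy predictions: 1 = inappropriate request, 0 = appropriate.
gold = [1, 1, 0, 0, 1, 0, 1, 0]
pred = [1, 0, 0, 0, 1, 0, 1, 1]

tp = sum(g == 1 and p == 1 for g, p in zip(gold, pred))
fp = sum(g == 0 and p == 1 for g, p in zip(gold, pred))
fn = sum(g == 1 and p == 0 for g, p in zip(gold, pred))

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(round(f_measure(precision, recall), 3))  # → 0.75
```

    When precision and recall are balanced, as the paper's matching 88.5% accuracy and .885 F-measure suggest, the F-measure equals either value.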

    "Are you telling me to put glasses on the dog?" Content-Grounded Annotation of Instruction Clarification Requests in the CoDraw Dataset

    Instruction Clarification Requests (iCRs) are a mechanism for solving communication problems and are very functional in instruction-following interactions. Recent work has argued that the CoDraw dataset is a valuable source of naturally occurring iCRs. Beyond identifying when iCRs should be made, dialogue models should also be able to generate them with suitable form and content. In this work, we introduce CoDraw-iCR (v2), which extends the existing iCR identifiers with fine-grained information grounded in the underlying dialogue game items and possible actions. Our annotation can serve to model and evaluate the repair capabilities of dialogue agents.
    Comment: Work in progress

    A Flexible Schema-Guided Dialogue Management Framework: From Friendly Peer to Virtual Standardized Cancer Patient

    A schema-guided approach to dialogue management has been shown in recent work to be effective in creating robust, customizable virtual agents capable of acting as friendly peers or task assistants. However, successful applications of these methods in open-ended, mixed-initiative domains remain elusive -- particularly within medical domains such as virtual standardized patients, where such complex interactions are commonplace -- and require more extensive and flexible dialogue management capabilities than previous systems provide. In this paper, we describe a general-purpose schema-guided dialogue management framework used to develop SOPHIE, a virtual standardized cancer patient that allows a doctor to conveniently practice for interactions with patients. We conduct a crowdsourced evaluation of conversations between medical students and SOPHIE. Our agent is judged to produce responses that are natural, emotionally appropriate, and consistent with her role as a cancer patient. Furthermore, it significantly outperforms an end-to-end neural model fine-tuned on a human standardized patient corpus, attesting to the advantages of a schema-guided approach.

    Dysfluencies as intra-utterance dialogue moves

    Ginzburg J, Fernández R, Schlangen D. Dysfluencies as intra-utterance dialogue moves. Semantics and Pragmatics. 2014;7

    Expanding the Set of Pragmatic Considerations in Conversational AI

    Despite considerable performance improvements, current conversational AI systems often fail to meet user expectations. We discuss several pragmatic limitations of current conversational AI systems, illustrating them with examples that are syntactically appropriate but have clear pragmatic deficiencies. We label our complaints "Turing Test Triggers" (TTTs), as they indicate where current conversational AI systems fall short compared to human behavior. We develop a taxonomy of pragmatic considerations intended to identify what pragmatic competencies a conversational AI system requires, and discuss implications for the design and evaluation of conversational AI systems.
    Comment: Pre-print version of paper that appeared at the Multidisciplinary Perspectives on COntext-aware embodied Spoken Interactions (MP-COSIN) workshop at IEEE RO-MAN 202

    A framework for improving error detection and correction in spoken dialog systems

    Despite recent improvements in the performance and reliability of the different components of dialog systems, it is still crucial to devise strategies to avoid error propagation from one component to another. In this paper, we contribute a framework for improved error detection and correction in spoken conversational interfaces. The framework combines user behavior and error modeling to estimate the probability that errors are present in the user utterance. This estimate is forwarded to the dialog manager and used to compute whether it is necessary to correct possible errors. We have designed a strategy that differentiates between the main misunderstanding and non-understanding scenarios, so that the dialog manager can provide an acceptable, tailored response when entering the error correction state. As a proof of concept, we have applied our proposal to a customer support dialog system. Our results show the appropriateness of our technique in correctly detecting and reacting to errors, enhancing system performance and user satisfaction.
    This work was supported in part by Projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485).
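    The decision flow the abstract describes (an error-probability estimate forwarded to the dialog manager, which then chooses between misunderstanding and non-understanding recovery) can be sketched as follows. All function names, strategy labels, and the threshold are illustrative assumptions, not details of the authors' system:

```python
# Hedged sketch of the error-handling decision flow described in the
# abstract. The names and the 0.5 threshold are assumptions for
# illustration; they do not come from the paper.

def dialog_manager_response(p_error: float,
                            understood_something: bool,
                            threshold: float = 0.5) -> str:
    """Pick a response strategy from the estimated error probability.

    p_error: probability that the user utterance was mis-recognized,
             as estimated from user behavior and error models.
    understood_something: True for a misunderstanding (a wrong
             hypothesis exists), False for a non-understanding.
    """
    if p_error < threshold:
        return "proceed"                   # no correction needed
    if understood_something:
        return "confirm_misunderstanding"  # e.g. "Did you mean X?"
    return "recover_non_understanding"     # e.g. "Could you rephrase?"

print(dialog_manager_response(0.2, True))   # proceed
print(dialog_manager_response(0.8, True))   # confirm_misunderstanding
print(dialog_manager_response(0.8, False))  # recover_non_understanding
```

    Separating the two error scenarios lets the manager confirm a concrete hypothesis when one exists and fall back to an open re-prompt when nothing was understood, which is the tailoring the framework aims for.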