    A study of turn-yielding cues in human-computer dialogue

    Previous research has made significant advances in understanding how humans manage to engage in smooth, well-coordinated conversation, and has unveiled the existence of several turn-yielding cues: lexico-syntactic, prosodic and acoustic events that may serve as predictors of conversational turn finality. These results have subsequently aided the refinement of the turn-taking proficiency of spoken dialogue systems. In this study, we find empirical evidence in a corpus of human-computer dialogues that human users produce the same kinds of turn-yielding cues that have been observed in human-human interactions. We also show that a linear relation holds between the number of individual cues conjointly displayed and the likelihood of a turn switch. Sociedad Argentina de Informática e Investigación Operativa (SADIO)
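
    The reported linear relation can be illustrated with a small sketch: count the turn-yielding cues present on each utterance-final unit, estimate the empirical turn-switch rate per cue count, and fit a least-squares line. The cue names and toy records below are assumptions for illustration, not the paper's features or data.

```python
from collections import defaultdict
import numpy as np

# Toy records: (cues observed on an utterance-final unit, whether a turn switch followed).
# Cue labels are hypothetical examples of lexico-syntactic/prosodic/acoustic events.
records = [
    (set(), False),
    ({"falling_pitch"}, False),
    ({"falling_pitch", "syntactic_completion"}, True),
    ({"falling_pitch", "final_lengthening", "syntactic_completion"}, True),
    ({"falling_pitch", "final_lengthening", "syntactic_completion", "low_intensity"}, True),
]

# Group outcomes by the number of cues conjointly displayed.
switched_by_count = defaultdict(list)
for cues, switched in records:
    switched_by_count[len(cues)].append(switched)

counts = np.array(sorted(switched_by_count))
p_switch = np.array([np.mean(switched_by_count[c]) for c in counts])

# Least-squares fit: P(turn switch) ~ a * n_cues + b.
a, b = np.polyfit(counts, p_switch, deg=1)
print(f"P(switch) ~= {a:.2f} * n_cues + {b:.2f}")
```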

    Evorus: A Crowd-powered Conversational Assistant Built to Automate Itself Over Time

    Crowd-powered conversational assistants have been shown to be more robust than automated systems, but at the cost of higher response latency and monetary costs. A promising direction is to combine the two approaches for high-quality, low-latency, and low-cost solutions. In this paper, we introduce Evorus, a crowd-powered conversational assistant built to automate itself over time by (i) allowing new chatbots to be easily integrated to automate more scenarios, (ii) reusing prior crowd answers, and (iii) learning to automatically approve response candidates. Our 5-month-long deployment with 80 participants and 281 conversations shows that Evorus can automate itself without compromising conversation quality. Crowd-AI architectures have long been proposed as a way to reduce cost and latency for crowd-powered systems; Evorus demonstrates how automation can be introduced successfully in a deployed system. Its architecture allows future researchers to innovate further on the underlying automated components in the context of a deployed open-domain dialog system. Comment: 10 pages. To appear in the Proceedings of the Conference on Human Factors in Computing Systems 2018 (CHI'18).
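
    The three automation mechanisms can be pictured with a rough, hedged sketch: pluggable chatbots propose candidates, prior crowd answers are reused from a simple memory, and a learned approval score either auto-approves the best candidate or falls back to the crowd. All names, the memory lookup, and the scoring rule are placeholders assumed for illustration, not Evorus's actual components.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Candidate:
    text: str
    source: str      # e.g. "chatbot:0" or "crowd_memory"
    score: float = 0.0

def reuse_prior_answers(message: str, memory: Dict[str, str]) -> List[Candidate]:
    # Naive reuse: return stored crowd answers whose prompt shares a word with the message.
    words = set(message.lower().split())
    return [Candidate(text=answer, source="crowd_memory")
            for prompt, answer in memory.items()
            if words & set(prompt.lower().split())]

def collect_candidates(message: str, chatbots: List[Callable[[str], str]],
                       memory: Dict[str, str]) -> List[Candidate]:
    # (i) every integrated chatbot proposes an answer; (ii) prior crowd answers are reused.
    cands = [Candidate(text=bot(message), source=f"chatbot:{i}")
             for i, bot in enumerate(chatbots)]
    return cands + reuse_prior_answers(message, memory)

def approval_score(c: Candidate) -> float:
    # (iii) placeholder for the learned approval model; here longer answers score higher,
    # with a small bonus for answers the crowd has already vetted.
    return min(1.0, 0.1 * len(c.text.split())) * (1.2 if c.source == "crowd_memory" else 1.0)

def respond(message: str, chatbots, memory, threshold: float = 0.8) -> str:
    cands = collect_candidates(message, chatbots, memory)
    for c in cands:
        c.score = approval_score(c)
    best = max(cands, key=lambda c: c.score, default=None)
    if best is not None and best.score >= threshold:
        return best.text                       # auto-approved without crowd involvement
    return "[escalated to crowd workers]"      # quality fallback when automation is unsure

if __name__ == "__main__":
    bots = [lambda m: "It should be sunny tomorrow."]
    memory = {"what is the weather tomorrow": "Sunny, around 25 C."}
    print(respond("weather tomorrow", bots, memory))
```

    The point of the fallback branch is the abstract's main claim: automation can be added incrementally while the crowd still guarantees quality whenever the approval score is low.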

    Speech & Multimodal Resources: the Herme Database of Spontaneous Multimodal Human-Robot Dialogues

    This paper presents methodologies and tools for language resource (LR) construction. It describes a database of interactive speech collected over a three-month period at the Science Gallery in Dublin, where visitors could take part in a conversation with a robot. The system collected samples of informal, chatty dialogue – normally difficult to capture under laboratory conditions for human-human dialogue, and particularly so for human-machine interaction. The conversations were based on a script followed by the robot, consisting largely of social chat with some task-based elements. The interactions were audio-visually recorded using several cameras together with microphones. As part of the conversation, the participants were asked to sign a consent form giving permission to use their data for human-machine interaction research. The multimodal corpus will be made available to interested researchers, and the technology developed during the three-month exhibition is being extended for use in education and assisted-living applications.

    User-Adaptive A Posteriori Restoration for Incorrectly Segmented Utterances in Spoken Dialogue Systems

    Ideally, the users of spoken dialogue systems should be able to speak at their own tempo. Thus, the systems need to interpret utterances from various users correctly, even when the utterances contain pauses. In response to this issue, we propose an approach based on a posteriori restoration for incorrectly segmented utterances. A crucial part of this approach is to determine whether restoration is required. We use a classification-based approach, adapted to each user. We focus on each user's dialogue tempo, which can be obtained during the dialogue, and determine the correlation between each user's tempo and the appropriate thresholds for classification. A linear regression function used to convert the tempos into thresholds is also derived. Experimental results show that the proposed user adaptation approach, applied to two restoration classification methods, thresholding and decision trees, improves classification accuracies by 3.0% and 7.4%, respectively, in cross-validation.
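
    The user-adaptation step can be sketched as a linear regression from observed dialogue tempo to a per-user threshold, which then decides whether a pause-split utterance should be restored. The calibration values, the pause-length feature, and the decision rule below are illustrative assumptions, not the authors' data or exact method.

```python
import numpy as np

# Calibration data (toy values): each user's observed tempo and the threshold
# that classified restoration decisions best for that user.
tempos = np.array([0.8, 1.0, 1.2, 1.5, 2.0])                  # e.g. seconds per phrase
best_thresholds = np.array([0.30, 0.35, 0.42, 0.50, 0.65])    # pause-length cut-offs

# Fit the linear regression: threshold = a * tempo + b.
a, b = np.polyfit(tempos, best_thresholds, deg=1)

def adapted_threshold(user_tempo: float) -> float:
    return a * user_tempo + b

def needs_restoration(pause_len: float, user_tempo: float) -> bool:
    # If the pause that triggered segmentation is shorter than the user's adapted
    # threshold, treat the split as spurious and restore (re-join) the utterance.
    return pause_len < adapted_threshold(user_tempo)

print(needs_restoration(pause_len=0.4, user_tempo=1.8))   # slow speaker: restore
print(needs_restoration(pause_len=0.4, user_tempo=0.8))   # fast speaker: keep the split
```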

    Incorporating a User Model to Improve Detection of Unhelpful Robot Answers

    Dialogues with robots frequently exhibit social dialogue acts such as greeting, thanks, and goodbye. This opens the opportunity of using these dialogue acts for dialogue management, in particular for detecting misunderstandings. Our corpus analysis shows that the social dialogue acts have different scopes of association with discourse features within the dialogue: greeting in the user's first turn is associated with such distant, or global, features as the likelihood of having questions answered, persistence, and ending with bye. The user's thanks turn, on the other hand, is strongly associated with the helpfulness of the preceding robot answer. We therefore interpret the greeting as a component of a user model that can provide information about the user's traits and be associated with discourse features at various stages of the dialogue. We conduct a detailed analysis of the user's thanking behavior and demonstrate that the user's thanks can be used in the detection of unhelpful robot answers. Incorporating the greeting information further improves the detection. We discuss possible applications of this work for human-robot dialogue management.
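
    A minimal sketch of the detection idea, assuming a simple feature-based classifier: a local cue (whether the user thanked the robot after its answer) is combined with a global user-model cue (whether the user greeted the robot at the start of the dialogue). The toy labels and the logistic model are illustrative assumptions rather than the paper's actual classifier or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per robot answer: [user_thanked_after_answer, user_greeted_at_start].
X = np.array([
    [1, 1], [1, 0], [1, 1], [0, 1],
    [0, 0], [0, 0], [0, 1], [1, 1],
])
# Toy annotations: 1 = answer judged unhelpful, 0 = helpful.
y = np.array([0, 0, 0, 1, 1, 1, 1, 0])

clf = LogisticRegression().fit(X, y)

# An answer that was not followed by thanks, from a user who greeted at the start,
# gets a high predicted probability of being unhelpful under this toy model.
print(clf.predict_proba([[0, 1]])[0, 1])
```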