
    Evorus: A Crowd-powered Conversational Assistant Built to Automate Itself Over Time

    Crowd-powered conversational assistants have been shown to be more robust than automated systems, but at the cost of higher response latency and monetary costs. A promising direction is to combine the two approaches for high-quality, low-latency, and low-cost solutions. In this paper, we introduce Evorus, a crowd-powered conversational assistant built to automate itself over time by (i) allowing new chatbots to be easily integrated to automate more scenarios, (ii) reusing prior crowd answers, and (iii) learning to automatically approve response candidates. Our 5-month-long deployment with 80 participants and 281 conversations shows that Evorus can automate itself without compromising conversation quality. Crowd-AI architectures have long been proposed as a way to reduce cost and latency for crowd-powered systems; Evorus demonstrates how automation can be introduced successfully in a deployed system. Its architecture allows future researchers to make further innovations on the underlying automated components in the context of a deployed open-domain dialog system. Comment: 10 pages. To appear in the Proceedings of the Conference on Human Factors in Computing Systems 2018 (CHI'18).
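The approval mechanism the abstract describes — a learned component that accepts high-confidence response candidates automatically and falls back to the crowd otherwise — can be sketched as follows. This is an illustrative simplification, not the paper's actual code; the `Candidate` fields, scores, and threshold are assumptions.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    """One proposed reply to the user (names are illustrative)."""
    text: str
    source: str   # e.g. "chatbot", "reused_answer", or "crowd"
    score: float  # learned approval score in [0, 1]


def approve(candidates, auto_threshold=0.8):
    """Return (response, approved_by) for the best-scoring candidate.

    If the learned score clears the threshold, the response is approved
    automatically (no crowd cost); otherwise it is routed to crowd voting.
    """
    best = max(candidates, key=lambda c: c.score)
    if best.score >= auto_threshold:
        return best.text, "auto"
    return best.text, "crowd_vote"


response, approver = approve([
    Candidate("The weather is sunny today.", "chatbot", 0.92),
    Candidate("Let me check that for you.", "crowd", 0.55),
])
```

Over time, raising the share of candidates that clear the automatic threshold is what lets a system of this kind reduce its reliance on crowd workers.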

    Communicating across cultures in cyberspace


    A Personalized System for Conversational Recommendations

    Searching for and making decisions about information is becoming increasingly difficult as the amount of information and the number of choices increase. Recommendation systems help users find items of interest of a particular type, such as movies or restaurants, but are still somewhat awkward to use. Our solution is to take advantage of the complementary strengths of personalized recommendation systems and dialogue systems, creating personalized aides. We present a system -- the Adaptive Place Advisor -- that treats item selection as an interactive, conversational process, with the program inquiring about item attributes and the user responding. Individual, long-term user preferences are unobtrusively obtained in the course of normal recommendation dialogues and used to direct future conversations with the same user. We present a novel user model that influences both item search and the questions asked during a conversation. We demonstrate the effectiveness of our system in significantly reducing the time and number of interactions required to find a satisfactory item, as compared to a control group of users interacting with a non-adaptive version of the system.
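The interaction pattern the abstract describes — the system asks about one item attribute at a time, narrows the candidate set, and reuses stored long-term preferences instead of re-asking — can be sketched like this. The function and data shapes are hypothetical, not the Adaptive Place Advisor's actual implementation.

```python
def converse(items, attributes, preferences, ask):
    """Narrow `items` by questioning the user attribute by attribute.

    items:       list of dicts, e.g. {"name": ..., "cuisine": ..., "price": ...}
    attributes:  attribute names to ask about, in order
    preferences: learned long-term defaults, e.g. {"cuisine": "thai"}
    ask:         callable standing in for the user's answer to a question
    """
    for attr in attributes:
        if len(items) <= 1:
            break  # item found (or none left): stop asking questions
        # Unobtrusive personalization: reuse a stored preference
        # rather than asking the same question in every dialogue.
        value = preferences.get(attr) or ask(attr)
        items = [i for i in items if i.get(attr) == value]
    return items


restaurants = [
    {"name": "Thai Garden", "cuisine": "thai", "price": "cheap"},
    {"name": "Siam Royal", "cuisine": "thai", "price": "pricey"},
    {"name": "Pizza Hub", "cuisine": "pizza", "price": "cheap"},
]
result = converse(restaurants, ["cuisine", "price"],
                  {"cuisine": "thai"}, lambda attr: "cheap")
```

In this sketch the stored cuisine preference removes one question from the dialogue, which is the mechanism behind the reduced number of interactions the abstract reports.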

    Designing Interfaces for Human-Computer Communication: An On-Going Collection of Considerations

    While we do not always use words, communicating what we want to an AI is a conversation -- with ourselves as well as with it, a recurring loop with optional steps depending on the complexity of the situation and our request. Any given conversation of this type may include: (a) the human forming an intent, (b) the human expressing that intent as a command or utterance, (c) the AI performing one or more rounds of inference on that command to resolve ambiguities and/or requesting clarifications from the human, (d) the AI showing the inferred meaning of the command and/or its execution on current and future situations or data, (e) the human hopefully correctly recognizing whether the AI's interpretation actually aligns with their intent. In the process, they may (f) update their model of the AI's capabilities and characteristics, (g) update their model of the situations in which the AI is executing its interpretation of their intent, (h) confirm or refine their intent, and (i) revise their expression of their intent to the AI, where the loop repeats until the human is satisfied. With these critical cognitive and computational steps within this back-and-forth laid out as a framework, it is easier to anticipate where communication can fail, and to design algorithms and interfaces that ameliorate those failure points.
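The steps (a)-(i) above form a loop that can be sketched as a minimal state machine. The step mapping follows the abstract's lettering; the control flow and the callback names are an illustrative simplification, not a specification from the paper.

```python
def communication_loop(form_intent, express, infer, show, aligned,
                       max_rounds=5):
    """Repeat express -> infer -> show until the human judges the AI's
    interpretation aligned with their intent, or the rounds run out."""
    intent = form_intent()                 # (a) human forms an intent
    for _ in range(max_rounds):
        utterance = express(intent)        # (b) express intent as a command
        meaning = infer(utterance)         # (c) AI resolves ambiguities
        preview = show(meaning)            # (d) AI shows inferred meaning
        if aligned(intent, preview):       # (e) human checks alignment
            return meaning
        intent = form_intent()             # (f)-(i) refine intent, retry
    return None                            # human gave up unsatisfied


# Toy run: the "AI" uppercases the command, and the human accepts
# exactly that interpretation on the first round.
result = communication_loop(
    form_intent=lambda: "play jazz",
    express=lambda intent: intent,
    infer=lambda utterance: utterance.upper(),
    show=lambda meaning: meaning,
    aligned=lambda intent, preview: preview == intent.upper(),
)
```

Laying the loop out this way makes the failure points concrete: each callback is a place where inference can go wrong or the human's check in step (e) can miss a misalignment.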

    ECA gesture strategies for robust SLDSs

    This paper explores the use of embodied conversational agents (ECAs) to improve interaction with spoken language dialogue systems (SLDSs). For this purpose we have identified typical interaction problems with SLDSs and associated with each of them a particular ECA gesture or behaviour. User tests were carried out dividing the test users into two groups, each facing a different interaction metaphor (one with an ECA in the interface, and the other implemented only with voice). Our results suggest that user frustration is lower when an ECA is present in the interface, and that the dialogue flows more smoothly, partly because users are better able to tell when they are expected to speak and whether the system has heard and understood. The users’ overall perceptions regarding the system were also affected, and interaction seems to be more enjoyable with an ECA than without it.
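The core design idea — associating each typical SLDS interaction problem with a particular ECA gesture or behaviour — amounts to a lookup from problem type to animation. The specific pairings below are hypothetical examples in the spirit of the abstract, not the paper's actual associations.

```python
# Illustrative problem -> gesture table (pairings are assumed, not
# taken from the paper).
GESTURES = {
    "no_input":        "lean_forward",   # signal the user should speak
    "no_match":        "puzzled_look",   # system did not understand
    "confirmation":    "nod",            # system heard and understood
    "long_processing": "thinking_pose",  # system is busy, please wait
}


def gesture_for(problem):
    """Pick the ECA behaviour for a detected interaction problem,
    falling back to a neutral idle animation."""
    return GESTURES.get(problem, "idle")
```

Gestures like these give the user turn-taking and grounding cues that a voice-only interface cannot, which is consistent with the smoother dialogue flow the study reports.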