    A Personalized System for Conversational Recommendations

    Searching for and making decisions about information is becoming increasingly difficult as the amount of information and the number of choices increase. Recommendation systems help users find items of interest of a particular type, such as movies or restaurants, but are still somewhat awkward to use. Our solution is to take advantage of the complementary strengths of personalized recommendation systems and dialogue systems, creating personalized aides. We present a system -- the Adaptive Place Advisor -- that treats item selection as an interactive, conversational process, with the program inquiring about item attributes and the user responding. Individual, long-term user preferences are unobtrusively obtained in the course of normal recommendation dialogues and used to direct future conversations with the same user. We present a novel user model that influences both item search and the questions asked during a conversation. We demonstrate the effectiveness of our system in significantly reducing the time and number of interactions required to find a satisfactory item, compared to a control group of users interacting with a non-adaptive version of the system.
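
    As a minimal illustration of this kind of attribute-inquiry loop, consider the Python sketch below. The item set, attribute names, and preference-update rule are assumptions made for illustration, not the Adaptive Place Advisor's actual algorithm.

        # Illustrative sketch of a conversational recommender that asks about
        # item attributes and unobtrusively updates long-term preferences.
        # Items, attributes, and the update rule are hypothetical, not the
        # Adaptive Place Advisor's actual model.
        from collections import defaultdict

        ITEMS = [
            {"name": "Thai Garden", "cuisine": "thai", "price": "low"},
            {"name": "Chez Marie", "cuisine": "french", "price": "high"},
            {"name": "Bangkok House", "cuisine": "thai", "price": "high"},
        ]

        class PlaceAdvisor:
            def __init__(self):
                # Long-term counts of values the user has accepted, per attribute.
                self.preferences = defaultdict(lambda: defaultdict(int))

            def ask(self, attribute, answer):
                """Record the user's answer and reinforce the long-term model."""
                self.preferences[attribute][answer] += 1
                return answer

            def recommend(self, constraints):
                """Return items matching every attribute constraint so far."""
                return [item for item in ITEMS
                        if all(item.get(a) == v for a, v in constraints.items())]

        advisor = PlaceAdvisor()
        constraints = {}
        # One turn of the dialogue: the program inquires, the user responds.
        constraints["cuisine"] = advisor.ask("cuisine", "thai")
        constraints["price"] = advisor.ask("price", "low")
        print(advisor.recommend(constraints))  # [{'name': 'Thai Garden', ...}]

    Over repeated dialogues, the accumulated counts could be used to rank which attribute to ask about first, which is the flavor of adaptation the abstract describes.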

    Proceedings of the International Conference on Cooperative Multimodal Communication, CMC/95, Eindhoven, May 24-26, 1995

    Collaborating on Referring Expressions

    This paper presents a computational model of how conversational participants collaborate in order to make a referring action successful. The model is based on the view of language as goal-directed behavior. We propose that the content of a referring expression can be accounted for by the planning paradigm. Not only does this approach allow the processes of building referring expressions and identifying their referents to be captured by plan construction and plan inference, it also allows us to account for how participants clarify a referring expression by using meta-actions that reason about and manipulate the plan derivation that corresponds to the referring expression. To account for how clarification goals arise and how inferred clarification plans affect the agent, we propose that the agents are in a certain state of mind, and that this state includes an intention to achieve the goal of referring and a plan that the agents are currently considering. It is this mental state that sanctions the adoption of goals and the acceptance of inferred plans, and so acts as a link between understanding and generation.
    Comment: 32 pages, 2 figures, to appear in Computational Linguistics 21-
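
    As a loose, toy analogy to the plan-based view (not the paper's formalism), the sketch below treats each descriptive act as a filter on candidate referents and falls back to a clarification step when the result is ambiguous. The candidate set and attribute names are assumptions.

        # Toy illustration of referent identification as plan inference:
        # each described attribute filters the candidate set; an ambiguous
        # result triggers a clarification. This representation is an
        # assumption, not the paper's plan formalism.
        CANDIDATES = [
            {"id": 1, "type": "mug", "color": "red"},
            {"id": 2, "type": "mug", "color": "blue"},
            {"id": 3, "type": "book", "color": "red"},
        ]

        def interpret(description):
            """Infer the referent of a description (a dict of attributes)."""
            matches = [c for c in CANDIDATES
                       if all(c.get(k) == v for k, v in description.items())]
            if len(matches) == 1:
                return matches[0]  # the referring action succeeded
            # Clarification: ask about an attribute that still discriminates.
            open_attrs = {k for c in matches for k in c if k != "id"}
            open_attrs -= set(description)
            return f"Which one? Please specify: {sorted(open_attrs)}"

        print(interpret({"type": "mug"}))                  # ambiguous
        print(interpret({"type": "mug", "color": "red"}))  # unique referent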

    Automated Dialogue Generation for Behavior Intervention on Mobile Devices

    Communication in the form of dialogues between a virtual coach and a human patient (coachee) is one of the pillars of an intervention app for smartphones. The virtual coach is considered a cooperative partner that supports the individual with various exercises for behavior intervention therapy. To perform its supportive behavior, the coach follows a certain interaction model and its requirements, such as alignment, mutual commitment, and adaptation. In this paper, we propose the E-Coach MarkUp Language (ECML), a standard XML specification for scripting discourses that define how the virtual coach maintains a dialogue with a coachee following the interaction model. The format of the language allows messages to be tailored at a fine-grained level. Each sentence is synthesized based on the inferred goals of the coaching process and the current beliefs of the user, incorporating everything that has been said previously in the conversation. The design enables inexpensive implementation on mobile devices for a flexible, seamless coaching dialogue. With expert-based evaluations, we validated the language using scenarios implemented in ECML in the field of insomnia therapy.
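
    The abstract does not reproduce ECML's schema, so the following Python sketch builds a purely hypothetical dialogue script in its spirit; every tag and attribute name below is an invented assumption, not part of the actual specification.

        # Hypothetical sketch of an XML-scripted coaching exchange in the
        # spirit of ECML. All tag and attribute names are invented for
        # illustration; the real ECML schema is defined in the paper.
        import xml.etree.ElementTree as ET

        dialogue = ET.Element("dialogue", topic="insomnia")
        turn = ET.SubElement(dialogue, "turn", speaker="coach")
        # A message tailored by an inferred goal and a current user belief.
        msg = ET.SubElement(turn, "message",
                            goal="improve-sleep-hygiene",
                            belief="user-drinks-coffee-late")
        msg.text = "You mentioned late coffee; shall we try an earlier cut-off?"
        reply = ET.SubElement(dialogue, "turn", speaker="coachee")
        ET.SubElement(reply, "expect", type="yes-no")

        print(ET.tostring(dialogue, encoding="unicode"))

    Keeping goals and beliefs as attributes on each message is one plausible way such a language could support the fine-grained tailoring the abstract mentions.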

    Continuous Interaction with a Virtual Human

    Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, and be able to attend to its interaction partner while it is speaking – and modify its communicative behavior on the fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE’10. The project resulted in progress on several aspects of continuous interaction, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that are released for public access.
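
    A minimal sketch of the on-the-fly adaptation idea follows, assuming an invented chunk-based scheduler and a stubbed listener-response classifier; neither is the project's actual implementation.

        # Minimal sketch of on-the-fly behavior adaptation: a speaking agent
        # checks for classified listener responses between behavior chunks
        # and reschedules accordingly. The classifier and reactions are
        # invented placeholders, not the project's actual models.
        import queue

        listener_responses = queue.Queue()  # filled by a (stubbed) classifier

        def classify(signal):
            """Stub for automatic classification of listener responses."""
            return "backchannel" if signal == "mm-hmm" else "interruption"

        def speak(chunks):
            for chunk in chunks:
                print(f"agent says: {chunk}")
                try:
                    response = listener_responses.get_nowait()
                except queue.Empty:
                    continue
                if response == "interruption":
                    print("agent yields the turn")  # interrupt own behavior
                    return
                print("agent nods and continues")   # react to a backchannel

        listener_responses.put(classify("mm-hmm"))
        speak(["First, look at me.", "Now, raise your arm.", "Well done."])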