
    Improving Search through A3C Reinforcement Learning based Conversational Agent

    We develop a reinforcement learning based search assistant that guides users through a set of actions and a sequence of interactions to help them realize their intent. Our approach caters to subjective search, where the user is seeking digital assets such as images, which is fundamentally different from tasks with objective and limited search modalities. Labeled conversational data is generally not available for such search tasks, and training the agent through human interactions can be time consuming. We propose a stochastic virtual user which impersonates a real user and can be used to sample user behavior efficiently to train the agent, accelerating the agent's bootstrapping. We develop an A3C-based context-preserving architecture that enables the agent to provide contextual assistance to the user. We compare the A3C agent with Q-learning and evaluate its performance on the average rewards and state values it obtains with the virtual user in validation episodes. Our experiments show that the agent learns to achieve higher rewards and reach better states. Comment: 17 pages, 7 figures
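The bootstrapping idea in the abstract, training against a sampled virtual user rather than waiting on slow real interactions, can be sketched in toy form. Everything below is an illustrative assumption: the matching-intent reward, the single-step episodes, and the tabular Q-learning agent stand in for the paper's actual environment and A3C architecture.

```python
import random

class VirtualUser:
    """Toy stochastic virtual user: samples a hidden intent each episode
    and rewards the agent for choosing the matching action."""
    def __init__(self, n_intents=3, seed=0):
        self.rng = random.Random(seed)
        self.n_intents = n_intents

    def start_episode(self):
        # The sampled intent doubles as the observable state in this sketch.
        self.intent = self.rng.randrange(self.n_intents)
        return self.intent

    def respond(self, action):
        # Reward 1 when the agent's action matches the intent, else 0.
        return 1.0 if action == self.intent else 0.0

def train_q_agent(user, episodes=2000, alpha=0.5, epsilon=0.1, seed=1):
    """Tabular, epsilon-greedy Q-learning over single-step episodes."""
    rng = random.Random(seed)
    q = [[0.0] * user.n_intents for _ in range(user.n_intents)]
    for _ in range(episodes):
        state = user.start_episode()
        if rng.random() < epsilon:
            action = rng.randrange(user.n_intents)  # explore
        else:
            action = max(range(user.n_intents), key=lambda a: q[state][a])
        reward = user.respond(action)
        # Bandit-style update; no bootstrapped next-state term because
        # each toy episode is a single interaction.
        q[state][action] += alpha * (reward - q[state][action])
    return q

q = train_q_agent(VirtualUser())
```

Because the virtual user is cheap to sample, the agent sees thousands of episodes without any human in the loop; after training, the greedy action for each intent matches it.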

    Continuous Interaction with a Virtual Human

    Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, attend to its interaction partner while it is speaking, and modify its communicative behavior on the fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE’10. The project resulted in progress on several aspects of continuous interaction, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that are released for public access.

    Conversational strategies : impact on search performance in a goal-oriented task

    Conversational search relies on an interactive, natural language exchange between a user, who has an information need, and a search system, which elicits and reveals information. Prior research posits that, due to the non-persistent nature of speech, conversational agents (CAs) should support users in their search task by: (1) actively suggesting query reformulations, and (2) providing summaries of the available options. Currently, however, the majority of CAs are passive (i.e. they lack interaction initiative) and respond by providing lists of results, which puts more cognitive strain on users. To investigate the potential benefit of active search support and of summarising search results, we performed a lab-based user study in which twenty-four participants undertook four goal-oriented search tasks (booking a flight). A 2x2 within-subjects design was used, where the CA's strategies varied with respect to elicitation (Passive vs. Active) and revealment (Listing vs. Summarising). Results show that when the CA's elicitation was Active, participants' task performance improved significantly, confirming speculation that Active elicitation can lead to improved outcomes for end-users. A similar trend, though to a lesser extent, was observed for revealment, where Summarising results led to better performance than Listing them. These findings are a first step toward, but also highlight the need for, research into the design and evaluation of conversational strategies that active or proactive CAs should employ to support better search performance.
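The 2x2 within-subjects design described above crosses the two strategy factors, so every participant experiences all four CA conditions. A minimal sketch, using only the factor names taken from the abstract:

```python
from itertools import product

# The two experimental factors named in the study.
elicitation = ["Passive", "Active"]
revealment = ["Listing", "Summarising"]

# Crossing the factors yields the four CA strategy conditions that
# each participant undertakes (one goal-oriented task per condition).
conditions = [f"{e} elicitation / {r} revealment"
              for e, r in product(elicitation, revealment)]
```

Four conditions times one flight-booking task each accounts for the four search tasks per participant mentioned in the abstract.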