
    Comparing User Responses to Limited and Flexible Interaction in a Conversational Interface

    The principles governing written communication have been well studied and well incorporated into interactive computer systems. However, the role of spoken language in human-computer interaction, while an increasingly popular modality, still needs to be explored further [3]. Evidence suggests that this technology must evolve further in order to support more "natural" conversations [2], and that the use of speech interfaces is correlated with high cognitive demand and attention [4]. In the context of spoken dialogue systems, a continuum has long been identified between "system-initiative" interactions, where the system is in complete control of the overall interaction and the user answers a series of prescribed questions, and "user-initiative" interactions, where the user is free to say anything and the system must respond [5]. However, much of the work in this area predates the recent explosive growth of conversational interfaces.
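    A minimal sketch of the two ends of that continuum (not code from the cited work; the slot names and helper callables are illustrative assumptions): a system-initiative dialogue asks one prescribed question per slot, while a user-initiative dialogue must parse free-form utterances.

```python
# Sketch of the system-/user-initiative continuum in dialogue systems.
# SLOTS, ask, listen, and extract_slots are all hypothetical placeholders.

SLOTS = ["destination", "date", "time"]

def system_initiative(ask, listen):
    """System controls the flow: one prescribed question per slot."""
    filled = {}
    for slot in SLOTS:
        ask(f"Please state the {slot}.")
        filled[slot] = listen()  # user may only answer the question asked
    return filled

def user_initiative(listen, extract_slots):
    """User controls the flow: free-form input the system must interpret."""
    filled = {}
    while len(filled) < len(SLOTS):
        utterance = listen()                      # user may say anything
        filled.update(extract_slots(utterance))   # e.g. an NLU component
    return filled
```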

    Bayesian Inference of Self-intention Attributed by Observer

    Most agents that learn a policy for a task with reinforcement learning (RL) lack the ability to communicate with people, which makes human-agent collaboration challenging. We believe that, in order for RL agents to comprehend utterances from human colleagues, they must infer the mental states that people attribute to them, because people sometimes infer an interlocutor's mental states and communicate on the basis of this inference. This paper proposes the PublicSelf model, a model of a person who infers how their own behavior appears to their colleagues. We implemented the PublicSelf model for an RL agent in a simulated environment and evaluated its inferences against people's judgments. The results showed that the model correctly inferred the intention that people attributed to the agent's movement in scenes where people could discern a clear intentionality in the agent's behavior.
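    A minimal sketch of the underlying idea (not the authors' PublicSelf implementation): the agent uses Bayes' rule to estimate which intention an observer would attribute to its observed movement. The goal set, the noisy-rationality likelihood, and the trajectory encoding are all illustrative assumptions.

```python
# Bayesian inference of observer-attributed intention (illustrative sketch).
import numpy as np

def attributed_intention_posterior(trajectory, goals, likelihood, prior=None):
    """P(goal | trajectory) ∝ P(trajectory | goal) * P(goal)."""
    if prior is None:
        prior = np.full(len(goals), 1.0 / len(goals))  # uniform prior
    like = np.array([likelihood(trajectory, g) for g in goals])
    unnorm = like * prior
    return unnorm / unnorm.sum()

def progress_likelihood(trajectory, goal, beta=5.0):
    """Hypothetical likelihood: observers read movement that makes progress
    toward a goal as more intentional (softmax over progress)."""
    start, end = np.asarray(trajectory[0]), np.asarray(trajectory[-1])
    goal = np.asarray(goal)
    progress = np.linalg.norm(start - goal) - np.linalg.norm(end - goal)
    return np.exp(beta * progress)
```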

    Shaping Robot Gestures to Shape Users' Perception: the Effect of Amplitude and Speed on Godspeed Ratings

    This work analyses the relationship between the way robots gesture and the way those gestures are perceived by human users. In particular, it shows how modifying the amplitude and speed of a gesture affects the Godspeed scores given to that gesture, by means of an experiment involving 45 stimuli and 30 observers. The results suggest that shaping gestures aimed at manifesting the inner state of the robot (e.g., cheering or showing disappointment) tends to change the perception of Animacy (the dimension that accounts for how driven by endogenous factors the robot is perceived to be), while shaping gestures aimed at achieving an interaction effect (e.g., engaging and disengaging) tends to change the perception of Anthropomorphism, Likeability, and Perceived Safety (the dimensions that account for the social aspects of the perception).
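    A minimal sketch, not the authors' analysis code: Godspeed ratings are 5-point semantic-differential items grouped into five dimensions, so a per-condition score is typically the mean of the items for that dimension. The data layout below is an illustrative assumption.

```python
# Aggregating per-item Godspeed ratings into dimension scores per stimulus.
from collections import defaultdict
from statistics import mean

GODSPEED_DIMENSIONS = [
    "Anthropomorphism", "Animacy", "Likeability",
    "Perceived Intelligence", "Perceived Safety",
]

def dimension_scores(ratings):
    """ratings: iterable of dicts like
    {"stimulus": (amplitude, speed), "dimension": str, "item_score": 1..5}.
    Returns mean score per (stimulus, dimension) pair."""
    buckets = defaultdict(list)
    for r in ratings:
        buckets[(r["stimulus"], r["dimension"])].append(r["item_score"])
    return {key: mean(vals) for key, vals in buckets.items()}
```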

    Getting to know Pepper: Effects of people’s awareness of a robot’s capabilities on their trust in the robot

    © 2018 Association for Computing Machinery. This work investigates how human awareness of a social robot’s capabilities relates to trusting this robot to handle different tasks. We present a user study that relates knowledge at different quality levels to participants’ ratings of trust. Secondary school pupils were asked to rate their trust in the robot after three types of exposure: a video demonstration, a live interaction, and a programming task. The study revealed that the pupils’ trust is positively affected across different domains after each session, indicating that the more aware human users are of a robot’s capabilities, the more they trust it.

    Human-Agent Interaction Model Learning based on Crowdsourcing

    Missions involving humans interacting with automated systems are becoming increasingly common. Given the non-deterministic behavior of the human and the possibly high risk of failure due to human factors, such an integrated system should react smartly by adapting its behavior when necessary. A promising avenue for designing an efficient interaction-driven system is the mixed-initiative paradigm. In this context, this paper proposes a method to learn the model of a mixed-initiative human-robot mission. The first step in setting up a reliable model is to acquire enough data. To this end, a crowdsourcing campaign was conducted, and learning algorithms were trained on the collected data in order to model the human-robot mission and to optimize a supervision policy with a Markov Decision Process (MDP). This model takes into account the actions of the human operator during the interaction as well as the state of the robot and the mission. Once such a model has been learned, the supervision strategy can be optimized according to a criterion representing the goal of the mission. In this paper, the supervision strategy concerns the robot’s operating mode. Simulations based on the MDP model show that planning-under-uncertainty solvers can be used to adapt the robot’s mode according to the state of the human-robot system, and the optimization of the robot’s operating mode appears to improve the team’s performance. The dataset obtained from crowdsourcing is therefore material that can be useful for research in human-machine interaction, which is why it has been made available on our website.
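    A minimal sketch of the kind of optimization described, not the paper's model: value iteration on an MDP whose transition model would be learned from the crowdsourced interaction data, yielding a supervision policy that maps each human-robot system state to a robot operating mode. The state space, mode set, and reward function are illustrative assumptions.

```python
# Value iteration over robot operating modes (illustrative MDP sketch).
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """P[a][s, s'] = transition probability under mode a (learned from data);
    R[s, a]      = expected reward for choosing mode a in state s.
    Returns the optimal mode index for each human-robot system state."""
    n_states, n_modes = R.shape
    V = np.zeros(n_states)
    while True:
        # Bellman backup: value of picking each mode in each state.
        Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(n_modes)],
                     axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=1)  # supervision policy: state -> mode
        V = V_new
```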

    The Effect of Embodied Anthropomorphism of Personal Assistants on User Perceptions
