6 research outputs found

    A field study on Polish customers' attitude towards a service robot in a cafe

    More and more stores in Poland are adopting robots as customer assistants or promotional tools, yet customer attitudes towards this novelty remain largely unexplored. This study focused on the role of social robots in self-service cafes, a domain that has not been investigated in Poland before and has received little attention in other countries. We conducted a field study in two cafes with a teleoperated Nao robot, which sat next to the counter and served as an assistant to a human barista. We observed customer behavior and conducted semi-structured interviews and questionnaires with the customers. The results show that Polish customers are neutral and insecure about robots, but do not exhibit a total dislike of these technologies. We considered three stages of the interaction and identified features of each stage that need to be designed carefully to yield user satisfaction. Comment: 14 pages, 1 figure

    Better be reactive at the beginning. Implications of the first seconds of an encounter for the tutoring style in human-robot-interaction

    The paper investigates the effects of a robot's on-line feedback during a tutoring situation with a human tutor. The analysis is based on a study conducted with an iCub robot that autonomously generates its feedback (gaze, pointing gestures) from the system's perception of the tutor's actions, using the idea of reciprocity of actions. Sequential micro-analysis of two opposite cases reveals how the robot's behavior (responsive vs. non-responsive) pro-actively shapes the tutor's conduct and thus co-produces the way in which it is being tutored. A dialogic and a monologic tutoring style are distinguished. The first 20 seconds of an encounter are found to shape the user's perception and expectations of the system's competences, and lead to a relatively stable tutoring style even if the robot's reactivity and the appropriateness of its feedback change.

    Robot feedback shapes the tutor's presentation. How a robot's online gaze strategies lead to micro-adaptation of the human's conduct

    Pitsch K, Vollmer A-L, Muehlig M. Robot feedback shapes the tutor's presentation. How a robot's online gaze strategies lead to micro-adaptation of the human's conduct. Interaction Studies. 2013;14(2):268-296.
    The paper investigates the effects of a humanoid robot's online feedback during a tutoring situation in which a human demonstrates how to make a frog jump across a table. Motivated by micro-analytic studies of adult-child interaction, we investigated whether tutors react to a robot's gaze strategies while they are presenting an action, and if so, how they adapt to them. The analysis reveals that tutors adjust typical "motionese" parameters (pauses, speed, and height of motion). We argue that a robot, when using adequate online feedback strategies, has at its disposal an important resource with which it can pro-actively shape the tutor's presentation and help generate the input from which it would benefit most. These results advance our understanding of robotic "Social Learning" in that they suggest considering human and robot as one interactional learning system.

    Tutoring in adult-child-interaction: On the loop of the tutor's action modification and the recipient's gaze

    Pitsch K, Vollmer A-L, Rohlfing K, Fritsch J, Wrede B. Tutoring in adult-child-interaction: On the loop of the tutor's action modification and the recipient's gaze. Interaction Studies. 2014;15(1):55-98.
    Research on tutoring in parent-infant interaction has shown that tutors, when presenting an action, modify both their verbal and manual performance for the learner ('motherese', 'motionese'). Investigating the sources and effects of the tutors' action modifications, we suggest an interactional account of 'motionese'. Using video data from a semi-experimental study in which parents taught their 8- to 11-month-old infants how to nest a set of differently sized cups, we found that the tutors' action modifications (in particular, high arches) functioned as an orienting device to guide the infant's visual attention (gaze). Action modification and the recipient's gaze can be seen to have a reciprocal sequential relationship and to constitute a constant loop of mutual adjustments. Implications are discussed for developmental research and for robotic 'Social Learning'. We argue that a robot system could use on-line feedback strategies (e.g. gaze) to pro-actively shape a tutor's action presentation as it emerges.

    "Can you answer questions, Flobi?": Interactionally defining a robot’s competence as a fitness instructor

    Süssenbach L, Pitsch K, Berger I, Riether N, Kummert F. "Can you answer questions, Flobi?": Interactionally defining a robot’s competence as a fitness instructor. In: Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2012). 2012.
    Users draw on four sources to judge a robot’s competence: (1) the robot’s voice, (2) its physical appearance, (3) the interaction experience with the robot, and (4) the relationship between the robot’s physical appearance and its conduct. Most approaches in social robotics have an outcome-oriented focus and thus use questionnaires to measure a global evaluation of the robot after the interaction has taken place. The present research instead takes a process-oriented approach to explore the factors relevant in the formation of users’ attitudes toward the robot. To do so, an ethnographic approach (Conversation Analysis) was employed to analyze the micro-coordination between user and robot. We report initial findings from a study in which a robot took the role of a fitness instructor. Our results emphasize that the participant judges the robot’s capabilities step by step and differentiates its competence on two levels with respect to the robot’s role: as (1) a social/interactional co-participant and as (2) a fitness instructor.

    Exploring Human Teachers' Interpretations of Trainee Robots' Nonverbal Behaviour and Errors

    Get PDF
    In the near future, socially intelligent robots that can learn new tasks from humans may become widely available and increasingly help people. To play this role successfully, intelligent robots should not only interact effectively with humans while being taught, but humans should also be able to trust these robots after teaching them how to perform tasks. When human students learn, they usually provide nonverbal cues to display their understanding of and interest in the material; for example, they nod, make eye contact, or show meaningful facial expressions. Likewise, a humanoid robot's nonverbal social cues may enhance the learning process, provided the cues are legible to human teachers. To inform the design of such nonverbal interaction techniques for intelligent robots, our first work investigates humans' interpretations of nonverbal cues provided by a trainee robot. Through an online experiment (with 167 participants), we examine how different gaze patterns and arm movements with various speeds and different kinds of pauses, displayed by a student robot practising a physical task, impact teachers' understanding of the robot’s attributes. We show that a robot can appear different in terms of its confidence, proficiency, eagerness to learn, etc., through systematic adjustment of those nonverbal factors. Human students sometimes make mistakes while practising a task, but teachers may be forgiving about them. Intelligent robots are machines and may therefore behave erroneously in certain situations. Our second study examines whether human teachers overlook a robot's small mistakes made when practising a recently taught task, when the robot has already shown significant improvement. By means of an online rating experiment (with 173 participants), we first determine how severe a robot’s errors in a household task (i.e., preparing food) are perceived to be. We then use that information to design and conduct another experiment (with 139 participants) in which participants are given the experience of teaching trainee robots. According to our results, teachers' perceptions improve as the robots get better at performing the task. We also show that while bigger errors have a greater negative impact on human teachers' trust than smaller ones, even a small error can significantly destroy trust in a trainee robot. This effect is also correlated with the personality traits of participants. The present work contributes by extending HRI knowledge concerning human teachers’ understanding of robots in a specific teaching scenario, where teachers observe behaviours whose primary goal is accomplishing a physical task.