
    How Expressiveness of a Robotic Tutor is Perceived by Children in a Learning Environment

    We present a study investigating the expressiveness of two different types of robots in a tutoring task. The robots used were i) the EMYS robot, with facial expression capabilities, and ii) the NAO robot, without facial expressions but able to perform expressive gestures. Preliminary results show that the NAO robot was perceived as friendlier, more pleasant, and more empathic than the EMYS robot as a tutor in a learning environment.

    The interaction between voice and appearance in the embodiment of a robot tutor

    Robot embodiment is, by its very nature, holistic, and understanding how various aspects contribute to the user's perception of the robot is non-trivial. A study is presented here that investigates whether there is an interaction effect between voice and other aspects of embodiment, such as movement and appearance, in a pedagogical setting. An on-line study using a modified Godspeed questionnaire was distributed to children aged 11–17. We show an interaction effect between the robot embodiment and voice in terms of the perceived lifelikeness of the robot. Politeness is a key strategy used in learning and teaching, and here an effect is also observed for perceived politeness. Interestingly, participants' overall preference was for embodiment combinations that are deemed polite and more like a teacher, but are not necessarily the most lifelike. From these findings, we are able to inform the design of robotic tutors going forward.

    Groups of humans and robots: Understanding membership preferences and team formation

    Although groups of robots are expected to interact with groups of humans in the near future, research related to teams of humans and robots still appears scarce. This paper contributes to the study of human-robot teams by investigating how humans choose robots to partner with in a multi-party game context. The novelty of our work concerns the successful design and development of two social robots that are able to autonomously interact with a group of two humans in the execution of a social and entertaining task. The development of these two characters was motivated by psychological research on learning goal theory, according to which we interpret and approach a given task differently depending on our learning goal (oriented more towards either relationship building or competition). Thus, we developed two robotic characters implemented in two robots: Emys (competitive robot) and Glin (relationship-driven robot). In our study, a group of four (two humans and two autonomous robots) engaged in a social and entertaining card game. Our study yields several important conclusions regarding groups of humans and robots. (1) When a partner is chosen without previous partnering experience, people tend to prefer robots with relationship-driven characteristics as their partners compared with competitive robots. (2) After some partnering experience has been gained, the choice becomes less clear and additional driving factors emerge: (2a) participants with higher levels of competitiveness (personal characteristics) tend to prefer Emys, whereas those with lower levels prefer Glin, and (2b) the choice of which robot to partner with also depends on team performance, with the winning team being the preferred choice.

    “I Choose... YOU!” Membership preferences in human–robot teams

    Although groups of robots are expected to interact with groups of humans in the near future, research related to teams of humans and robots is still scarce. This paper contributes to the study of human–robot teams by describing the development of two autonomous robotic partners and by investigating how humans choose robots to partner with in a multi-party game context. Our work concerns the successful development of two autonomous robots that are able to interact with a group of two humans in the execution of a task for social and entertainment purposes. The creation of these two characters was motivated by psychological research on learning goal theory, according to which we interpret and approach a given task differently depending on our learning goal. Thus, we developed two robotic characters implemented in two robots: Emys (a competitive robot, based on characteristics related to performance-orientation goals) and Glin (a relationship-driven robot, based on characteristics related to learning-orientation goals). In our study, a group of four (two humans and two autonomous robots) engaged in a card game for social and entertainment purposes. Our study yields several important conclusions regarding groups of humans and robots. (1) When a partner is chosen without previous partnering experience, people tend to prefer robots with relationship-driven characteristics as their partners compared with competitive robots. (2) After some partnering experience has been gained, the choice becomes less clear, and additional driving factors emerge as follows: (2a) participants with higher levels of competitiveness (personal characteristics) tend to prefer Emys, whereas those with lower levels prefer Glin, and (2b) the choice of which robot to partner with also depends on team performance, with the winning team being the preferred choice.

    How Can a Robot Signal Its Incapability to Perform a Certain Task to Humans in an Acceptable Manner?

    In this paper, a robot that uses politeness to overcome its incapability to serve is presented. The mobile robot "Alex" interacts with human office colleagues in their environment and delivers messages, phone calls, and companionship. The robot's battery capacity is not sufficient to last a full working day, so the robot needs to recharge during the day; while doing so, it is unavailable for tasks that involve movement. The study presented in this paper supports the idea that an incapability of fulfilling an appointed task can be overcome by politeness and appropriate behaviour. The results reveal that even a simple adjustment of spoken utterances towards more polite phrasing can change the human's perception of the robot companion. This change in perception can be made visible by analysing the human's behaviour towards the robot.

    Cheating with robots: How at ease do they make us feel?

    People are not perfect, and if given the chance, some will be dishonest with no regrets. Some people will cheat just a little to gain some advantage, and others will not do it at all. With the prospect of more human-robot interactions in the future, it will become very important to understand which kinds of roles a robot can have in the regulation of cheating behavior. We investigated whether people will cheat while in the presence of a robot and to what extent this depends on the role the robot plays. We ran a study to test cheating behavior with a die task and allocated people to one of the following conditions: 1) participants were alone in the room while doing the task; 2) with a robot in a vigilant role; or 3) with a robot that had a supporting role in the task, accompanying participants and giving instructions. Our results showed that participants cheated significantly more than chance when they were alone or with the robot giving instructions. In contrast, we found no evidence of cheating when the robot played a vigilant role. This study has implications for human-robot interaction and for the deployment of autonomous robots in sensitive roles in which people may be prone to dishonest behavior.
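    The "cheated significantly more than chance" comparison in a die task can be sketched with an exact binomial tail test. This is a minimal illustration, not the authors' analysis: the counts, sample size, and the cutoff (a "high" roll of 5 or 6, so a fair die gives p = 2/6) below are hypothetical.

    ```python
    from math import comb

    def binom_tail(k: int, n: int, p: float) -> float:
        """P(X >= k) for X ~ Binomial(n, p): the probability of observing
        at least k 'high' reports out of n if everyone reports honestly."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Hypothetical example: 40 of 60 participants report a high roll (5 or 6).
    # Under honest reporting with a fair die, p = 2/6.
    p_value = binom_tail(40, 60, 2 / 6)
    print(f"P(>=40 high reports out of 60 by chance) = {p_value:.2e}")
    ```

    A small p-value here means the observed rate of favorable reports is very unlikely under honest rolling, which is the standard inference in die-task paradigms where individual rolls are unobserved.
    
    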

    Mimicking a robot: Facial EMG in response to emotional robotic facial expressions

    Humans tend to anthropomorphize, i.e., to attribute human-like characteristics (e.g. motivations, intentions, emotions) to non-humans. This suggests that we can interact with non-humans (televisions, computers, robots) in a way similar to how we interact with humans. Robots, in particular, have physical presence and can be programmed to display social interaction capabilities, i.e. to be social robots, amplifying those similarities. Past studies have shown that social robots in negative situations tend to elicit strong emotional responses and empathy in humans. However, it remains to be tested whether empathy can be felt towards a social robot set in a situation of positive social interaction. We proposed that facial mimicry, one indicator of empathy, may occur towards a robot in a positive social context, i.e. while the robot is playing a board game with human opponents. Fifty-nine participants (46 females), aged 17 to 27 years (M=19.56, SD=2.11), were exposed to videos of a robotic head (EMYS, the EMotive headY System), previously programmed to display six emotional expressions (joy, surprise, anger, disgust, fear, sadness) and a neutral expression, while playing a board game. EMYS's facial expressions were shown in two blocks: in the first, no social context was provided and sound was omitted; in the second, a positive social context was provided, which included the sound of verbal interaction with humans. In each block, 14 videos were randomly presented. Facial electromyography (fEMG) activity in response to EMYS's facial expressions was measured over the corrugator supercilii and zygomaticus major muscles. fEMG responses were calculated as the difference between activity after stimulus presentation and a 1 s pre-stimulus baseline. Changes in fEMG reactivity between conditions were analyzed by comparing fEMG responses to robotic emotional expressions with responses to robotic neutral expressions.
In the positive social context condition, results revealed an overall reduction of corrugator supercilii reactivity for the majority of negative emotional expressions (except anger). There was also a significant reduction of zygomaticus major activity to surprise, compared to neutral, in the positive social context. Overall, our results suggest the important role of the social context in our physiological responses to a robot, and more specifically a reduction of emotional negativity to non-threatening robotic facial expressions displayed in a positive social context.
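    The baseline correction described in the abstract (post-stimulus activity minus a 1 s pre-stimulus baseline) can be sketched as follows. This is a minimal illustration, not the authors' analysis code: the sampling rate, post-stimulus window length, and the example trace are all hypothetical.

    ```python
    from statistics import mean

    def femg_response(signal: list[float], fs: int, stim_idx: int,
                      baseline_s: float = 1.0, window_s: float = 1.0) -> float:
        """Baseline-corrected fEMG response for one trial: mean activity in
        the post-stimulus window minus mean activity in the pre-stimulus
        baseline of length baseline_s seconds."""
        n_base = int(baseline_s * fs)   # samples in the baseline
        n_win = int(window_s * fs)      # samples in the response window
        baseline = mean(signal[stim_idx - n_base:stim_idx])
        response = mean(signal[stim_idx:stim_idx + n_win])
        return response - baseline

    # Hypothetical trace sampled at 10 Hz: a flat 1.0 µV baseline followed by
    # a 1.5 µV response after stimulus onset at sample 10.
    trace = [1.0] * 10 + [1.5] * 10
    print(femg_response(trace, fs=10, stim_idx=10))  # 0.5
    ```

    Per-trial values computed this way for each muscle (corrugator supercilii, zygomaticus major) would then be compared between emotional and neutral expressions, as the abstract describes.
    
    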