
    Understanding Anthropomorphism in Service Provision: A Meta-Analysis of Physical Robots, Chatbots, and other AI

    An increasing number of firms introduce service robots, such as physical robots and virtual chatbots, to provide services to customers. While some firms use robots that resemble human beings in how they look and act, hoping to increase customers’ intention to use this technology, others employ machinelike robots to avoid uncanny valley effects, assuming that very humanlike robots may induce feelings of eeriness. There is no consensus in the service literature on whether customers’ anthropomorphism of robots facilitates or constrains their use intention. The present meta-analysis synthesizes data from 11,053 individuals interacting with service robots, reported in 108 independent samples, to clarify this issue and enhance understanding of the construct. We develop a comprehensive model to investigate relationships between anthropomorphism and its antecedents and consequences. Customer traits and predispositions (e.g., computer anxiety), sociodemographics (e.g., gender), and robot design features (e.g., physical, nonphysical) are identified as triggers of anthropomorphism. Robot characteristics (e.g., intelligence) and functional characteristics (e.g., usefulness) are identified as important mediators, whereas relational characteristics (e.g., rapport) receive less support as mediators. The findings clarify the contextual circumstances in which anthropomorphism affects customer intention to use a robot. The moderator analysis indicates that the impact depends on robot type (i.e., robot gender) and service type (i.e., possession-processing service, mental stimulus-processing service). Based on these findings, we develop a comprehensive agenda for future research on service robots in marketing.
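
    A meta-analysis such as this one typically pools per-sample effect sizes (here, correlations between anthropomorphism and use intention) under a random-effects model. The abstract does not state which estimator was used, so the following is only a minimal illustrative Python sketch of one common approach (DerSimonian-Laird pooling of Fisher-z-transformed correlations); the function name and example values are hypothetical.

        import numpy as np

        def random_effects_meta(r: np.ndarray, n: np.ndarray) -> float:
            """Pool sample correlations with a DerSimonian-Laird random-effects model.

            r : per-sample correlations (e.g., anthropomorphism vs. use intention)
            n : per-sample sizes
            """
            z = np.arctanh(r)                  # Fisher z transform
            v = 1.0 / (n - 3)                  # sampling variance of each z
            w = 1.0 / v
            z_fixed = np.sum(w * z) / np.sum(w)
            q = np.sum(w * (z - z_fixed) ** 2)              # heterogeneity statistic Q
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (q - (len(r) - 1)) / c)          # between-sample variance
            w_star = 1.0 / (v + tau2)                        # random-effects weights
            z_pooled = np.sum(w_star * z) / np.sum(w_star)
            return float(np.tanh(z_pooled))                  # back-transform to r

        # Invented example values, not data from the paper.
        print(random_effects_meta(np.array([0.30, 0.45, 0.10]), np.array([120, 80, 200])))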

    Socially assistive robots: the specific case of the NAO

    Numerous studies have examined the development of robotics, especially socially assistive robots (SAR), including the NAO robot. This small humanoid robot has great potential in social assistance. The NAO robot’s features and capabilities, such as motricity, functionality, and affective capacities, have been studied in various contexts. The principal aim of this study is to gather all research conducted with this robot in order to see how the NAO can be used and what its potential as a SAR might be. Articles using the NAO in any situation were found by searching the PsycINFO, Computer and Applied Sciences Complete, and ACM Digital Library databases. The main inclusion criterion was that studies had to use the NAO robot. Studies comparing it with other robots or intervention programs were also included. Articles about technical improvements were excluded, since they did not involve concrete utilisation of the NAO. Duplicates and articles lacking essential information about their samples were also excluded. A total of 51 publications (1,895 participants) were included in the review. Six categories were defined: social interactions, affectivity, intervention, assisted teaching, mild cognitive impairment/dementia, and autism/intellectual disability. The great majority of the findings concerning the NAO robot are positive. Its multimodality makes it a SAR with potential.

    Addressing joint action challenges in HRI: Insights from psychology and philosophy

    The vast expansion of research on human-robot interaction (HRI) over the last decades has been accompanied by the design of increasingly skilled robots for engaging in joint actions with humans. However, these advances have encountered significant challenges in ensuring fluent interactions and sustaining human motivation through the different steps of joint action. After exploring the current literature on joint action in HRI, leading to a more precise definition of these challenges, the present article proposes some perspectives borrowed from psychology and philosophy showing the key role of communication in human interactions. From mutual recognition between individuals to the expression of commitment and social expectations, we argue that communicative cues can facilitate coordination, prediction, and motivation in the context of joint action. The description of several notions thus suggests that some communicative capacities can be implemented in the context of joint action for HRI, leading to an integrated perspective of robotic communication. Funding: French National Research Agency (ANR) grants ANR-16-CE33-0017, ANR-17-EURE-0017 (FrontCog), and ANR-10-IDEX-0001-02 (PSL); Juan de la Cierva-Incorporación grant IJC2019-040199-I; Spanish Government grants PID2019-108870GB-I00 and PID2019-109764RB-I0.

    Developing an Affect-Aware Rear-Projected Robotic Agent

    Social (or sociable) robots are designed to interact with people in a natural and interpersonal manner. They are becoming an integrated part of our daily lives and have achieved positive outcomes in several applications such as education, health care, quality of life, and entertainment. Despite significant progress towards the development of realistic social robotic agents, a number of problems remain to be solved. First, current social robots either lack the ability to engage in deep social interaction with humans, or they are very expensive to build and maintain. Second, current social robots have yet to reach the full emotional and social capabilities necessary for rich and robust interaction with human beings. To address these problems, this dissertation presents the development of a low-cost, flexible, affect-aware rear-projected robotic agent (called ExpressionBot), designed to support verbal and non-verbal communication between the robot and humans, with the goal of closely modeling the dynamics of natural face-to-face communication. The developed robotic platform uses state-of-the-art character animation technologies to create an animated human face (aka avatar) capable of showing facial expressions, realistic eye movement, and accurate visual speech, and then projects this avatar onto a face-shaped translucent mask. The mask and the projector are then rigged onto a neck mechanism that can move like a human head. Since an animation is projected onto a mask, the robotic face is a highly flexible research tool that is mechanically simple and low-cost to design, build, and maintain compared with mechatronic and android faces. The results of our comprehensive Human-Robot Interaction (HRI) studies illustrate the benefits and value of the proposed rear-projected robotic platform over a virtual agent with the same animation displayed on a 2D computer screen. The results indicate that ExpressionBot is well accepted by users, with some advantages in expressing facial expressions more accurately and perceiving mutual eye-gaze contact. To improve the social capabilities of the robot and create an expressive and empathic (affect-aware) social agent capable of interpreting users' emotional facial expressions, we developed a new Deep Neural Network (DNN) architecture for Facial Expression Recognition (FER). The proposed DNN was initially trained on seven well-known publicly available databases and obtained results significantly better than, or comparable to, traditional convolutional neural networks and other state-of-the-art methods in both accuracy and learning time. Since the performance of an automated FER system depends highly on its training data, and the eventual goal of the proposed robotic platform is to interact with users in an uncontrolled environment, a database of facial expressions in the wild (called AffectNet) was created by querying emotion-related keywords on different search engines. AffectNet contains more than 1M images with faces, of which 440,000 are manually annotated with facial expressions, valence, and arousal. Two DNNs were trained on AffectNet to classify the facial expression images and to predict the values of valence and arousal. Various evaluation metrics show that our deep neural network approaches trained on AffectNet perform better than conventional machine learning methods and available off-the-shelf FER systems.
    We then integrated this automated FER system into the spoken-dialog component of our robotic platform to extend and enrich the capabilities of ExpressionBot beyond spoken dialog and create an affect-aware robotic agent that can measure and infer users' affect and cognition. Three social/interaction aspects (task engagement, being empathic, and likability of the robot) were measured in an experiment with the affect-aware robotic agent. The results indicate that users rated our affect-aware agent as empathic and likable as a robot whose user's affect is recognized by a human operator (Wizard-of-Oz, WoZ). In summary, this dissertation presents the development and HRI studies of a perceptive, expressive, conversational, rear-projected, lifelike robotic agent (aka ExpressionBot or Ryan) that models natural face-to-face communication between a human and an empathic agent. The results of our in-depth human-robot interaction studies show that this robotic agent can serve as a model for creating the next generation of empathic social robots.
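
    As a rough illustration of the kind of model described in the abstract above (a DNN trained on AffectNet that jointly classifies discrete expressions and regresses valence/arousal), here is a minimal, hypothetical PyTorch sketch. The architecture, layer sizes, and 8-category label set are assumptions for illustration, not the dissertation's actual network.

        import torch
        import torch.nn as nn

        class FERNet(nn.Module):
            """Toy CNN with a discrete-expression head and a valence/arousal head."""
            def __init__(self, num_classes: int = 8):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),               # global average pooling
                )
                self.expr_head = nn.Linear(128, num_classes)   # expression logits
                self.va_head = nn.Linear(128, 2)               # valence, arousal

            def forward(self, x):
                h = self.features(x).flatten(1)
                return self.expr_head(h), self.va_head(h)

        # Dummy batch of face crops; in practice these would come from AffectNet.
        model = FERNet()
        images = torch.randn(4, 3, 224, 224)
        expr_logits, va = model(images)
        loss = nn.CrossEntropyLoss()(expr_logits, torch.randint(0, 8, (4,))) \
               + nn.MSELoss()(va, torch.empty(4, 2).uniform_(-1.0, 1.0))
        loss.backward()                         # joint classification + regression objective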

    Learning Data-Driven Models of Non-Verbal Behaviors for Building Rapport Using an Intelligent Virtual Agent

    There is a growing societal need to address the increasing prevalence of behavioral health issues, such as obesity, alcohol or drug use, and a general lack of treatment adherence for a variety of health problems. The statistics, worldwide and in the USA, are daunting. Excessive alcohol use is the third leading preventable cause of death in the United States (with 79,000 deaths annually) and is responsible for a wide range of health and social problems. On the positive side, these behavioral health issues (and the diseases they can lead to) can often be prevented with relatively simple lifestyle changes, such as losing weight through diet and/or physical exercise, or learning how to reduce alcohol consumption. Medicine has therefore started to move toward preventively promoting wellness rather than solely treating already established illness. Evidence-based, patient-centered Brief Motivational Interviewing (BMI) interventions have been found particularly effective in helping people find intrinsic motivation to change problem behaviors after short counseling sessions, and to maintain healthy lifestyles over the long term. A lack of locally available personnel well trained in BMI, however, often limits access to successful interventions for people in need. To fill this accessibility gap, Computer-Based Interventions (CBIs) have started to emerge. The success of CBIs, however, critically relies on ensuring the engagement and retention of CBI users so that they remain motivated to use these systems and return to them over the long term as necessary. Because of their text-only interfaces, current CBIs can express only limited empathy and rapport, which are among the most important factors in health interventions. Fortunately, in the last decade, computer science research has progressed in the design of simulated human characters with anthropomorphic communicative abilities. Virtual characters interact using humans’ innate communication modalities, such as facial expressions, body language, speech, and natural language understanding. By advancing research in Artificial Intelligence (AI), we can improve the ability of artificial agents to help us solve CBI problems. To facilitate successful communication and social interaction between artificial agents and human partners, it is essential that aspects of human social behavior, especially empathy and rapport, be considered when designing human-computer interfaces. Hence, the goal of the present dissertation is to provide a computational model of rapport to enhance an artificial agent’s social behavior, and to provide an experimental tool for the psychological theories shaping the model. Parts of this thesis were already published in [LYL+12, AYL12, AL13, ALYR13, LAYR13, YALR13, ALY14].
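
    The dissertation's data-driven models of non-verbal behavior are not specified in this abstract, so the snippet below is only a toy, hypothetical illustration of the general idea: learning from recorded interactions when a listening agent should produce a rapport-building backchannel (e.g., a head nod) from simple speaker features. The feature names, classifier choice, and simulated data are all assumptions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Simulated stand-in data: per-frame speaker features (pitch slope, energy,
        # pause length) labelled with whether a human listener produced a head nod.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 3))                      # [pitch_slope, energy, pause_len]
        y = (X[:, 2] + 0.5 * X[:, 1] + rng.normal(0, 1, 500) > 1).astype(int)

        clf = LogisticRegression().fit(X, y)               # data-driven backchannel model
        p_nod = clf.predict_proba(rng.normal(size=(1, 3)))[0, 1]
        print(f"probability of producing a listener nod now: {p_nod:.2f}")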

    Artificial Emotional Intelligence in Socially Assistive Robots

    Artificial Emotional Intelligence (AEI) bridges the gap between humans and machines by enabling machines to demonstrate empathy and affection toward the people they interact with. This is achieved by evaluating the emotional state of human users, adapting the machine’s behavior to them, and thus responding appropriately to those emotions. AEI is part of a larger field of study called Affective Computing. Affective computing is the integration of artificial intelligence, psychology, robotics, biometrics, and many other fields of study. The main component in AEI and affective computing is emotion, and how we can utilize emotion to create a more natural and productive relationship between humans and machines. An area in which AEI can be particularly beneficial is in building machines and robots for healthcare applications. Socially Assistive Robotics (SAR) is a subfield of robotics that aims to develop robots providing companionship and assistance with social interaction. For example, residents living in housing designed for older adults often feel lonely, isolated, and depressed; therefore, social interaction and mental stimulation are critical to improving their well-being. Socially assistive robots are designed to address these needs by monitoring and improving the quality of life of patients with depression and dementia. Nevertheless, the development of robots with AEI that understand users’ emotions and can respond to them naturally and effectively is still in its infancy, and much more research needs to be carried out in this field. This dissertation presents the results of my work in developing a social robot, called Ryan, equipped with AEI for effective and engaging dialogue with older adults with depression and dementia. Over the course of this research there have been three versions of Ryan, each created using the lessons learned from the studies presented in this dissertation. First, two human-robot interaction studies were conducted demonstrating the validity of using a rear-projected robot to convey emotion and intent. Then, the feasibility of using Ryan to interact with older adults was studied, investigating possible improvements in their quality of life. Ryan the Companionbot used in this project is a rear-projected, lifelike conversational robot. Ryan is equipped with many features such as games, music, video, reminders, and general conversation, and engages users in cognitive games and reminiscence activities. A pilot study was conducted with six older adults with early-stage dementia and/or depression living in a senior living facility. Each individual had 24/7 access to a Ryan in his/her room for a period of 4-6 weeks. Observations of these individuals, interviews with them and their caregivers, and analysis of their interactions during this period revealed that they established rapport with the robot and greatly valued and enjoyed having a companionbot in their room. A multi-modal emotion recognition algorithm was developed, as well as a multi-modal emotion expression system, and both were integrated into Ryan. To engage the subjects in a more empathic interaction with Ryan, a corpus of dialogues on different topics was created by English-major students. An emotion recognition algorithm was designed, implemented, and then integrated into the dialogue management system so that the robot could empathize with users based on their perceived emotion.
    This study investigates the effects of this emotionally intelligent robot on older adults in the early stages of depression and dementia. The results suggest that Ryan equipped with AEI is more engaging, likable, and attractive to users than Ryan without AEI. The long-term effect of the latest version of Ryan (Ryan V3.0) was examined in a study involving 17 subjects from five senior care facilities. The participants in this study experienced a general improvement in their cognitive and depression scores.
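
    Ryan's actual dialogue manager and dialogue corpus are not described in detail in the abstract above, so the following is only a minimal, hypothetical Python sketch of the basic idea of empathizing based on a perceived emotion label: the recognized emotion selects an empathic opener before the dialogue continues. The emotion labels and response templates are invented for illustration.

        import random

        # Hypothetical empathic openers keyed by the user's perceived emotion.
        EMPATHIC_PROMPTS = {
            "happy":   ["I'm glad to hear that! What made today a good day?"],
            "sad":     ["I'm sorry you're feeling down. Would you like to talk about it?"],
            "angry":   ["That sounds frustrating. What happened?"],
            "neutral": ["How has your day been so far?"],
        }

        def choose_reply(perceived_emotion: str) -> str:
            """Pick an opener that acknowledges the user's perceived emotional state."""
            options = EMPATHIC_PROMPTS.get(perceived_emotion, EMPATHIC_PROMPTS["neutral"])
            return random.choice(options)

        # The label would come from the multi-modal emotion recognizer at runtime.
        print(choose_reply("sad"))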

    Exploring Virtual Reality and Doppelganger Avatars for the Treatment of Chronic Back Pain

    Cognitive-behavioral models of chronic pain assume that fear of pain and subsequent avoidance behavior contribute to pain chronicity and the maintenance of chronic pain. In chronic back pain (CBP), avoidance of movements often plays a major role in pain perseverance and interference with daily life activities. In treatment, avoidance is often addressed by teaching patients to reduce pain behaviors and increase healthy behaviors. The current project explored the use of personalized virtual characters (doppelganger avatars) in virtual reality (VR) to influence motor imitation and avoidance, fear of pain, and experienced pain in CBP. We developed a method to create virtual doppelgangers, to animate them with movements captured from real-world models, and to present them to participants in an immersive cave automatic virtual environment (CAVE) as autonomous movement models for imitation. Study 1 investigated interactions between model and observer characteristics in the imitation behavior of healthy participants. We tested the hypothesis that perceived affiliative characteristics of a virtual model, such as similarity to the observer and likeability, would facilitate observers’ engagement in voluntary motor imitation. In a within-subject design (N=33), participants were exposed to four virtual characters of different degrees of realism and observer similarity, ranging from an abstract stick figure to a personalized doppelganger avatar designed from 3D scans of the observer. The characters performed different trunk movements and participants were asked to imitate them. We defined functional ranges of motion (ROM) for spinal extension (bending backward, BB), lateral flexion (bending sideward, BS), and rotation in the horizontal plane (RH) based on shoulder marker trajectories as behavioral indicators of imitation. Participants’ ratings of perceived avatar appearance were recorded with an Autonomous Avatar Questionnaire (AAQ) based on an exploratory factor analysis. Linear mixed-effects models revealed that, for lateral flexion (BS), a facilitating influence of avatar type on ROM was mediated by perceived identification with the avatar, including avatar likeability, avatar-observer similarity, and other affiliative characteristics. These findings suggest that maximizing model-observer similarity may indeed be useful to stimulate observational modeling. Study 2 employed the techniques developed in Study 1 with participants suffering from CBP and extended the setup with real-world elements, creating an immersive mixed reality. The research question was whether virtual doppelgangers could modify motor behaviors, pain expectancy, and pain. In a randomized controlled between-subject design, participants observed and imitated an avatar (AVA, N=17) or a videotaped model (VID, N=16) over three sessions, during which the movements BS and RH as well as a new movement (moving a beverage crate) were shown. Again, self-reports and ROMs were used as measures. The AVA group reported reduced avoidance, with no significant group differences in ROM. Pain expectancy increased in the AVA group but not the VID group over the sessions. Pain and limitations did not differ significantly between groups. We observed a moderation effect of group, with prior pain expectancy predicting pain and avoidance in the VID group but not in the AVA group. This can be interpreted as an effect of personalized movement models decoupling pain behavior from movement-related fear and pain expectancy by increasing pain tolerance and task persistence.
    Our findings suggest that personalized virtual movement models can stimulate observational modeling in general, and that they can increase pain tolerance and persistence in chronic pain conditions. Thus, they may provide a tool for exposure and exercise treatments in cognitive-behavioral approaches to CBP.
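
    The abstract above reports linear mixed-effects models of ROM by avatar type without giving the exact specification, so here is a minimal, hypothetical Python sketch (statsmodels) of one plausible formulation: ROM as the outcome, avatar type as a fixed effect, and a random intercept per participant. The simulated data and effect sizes are invented purely to make the example runnable.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated stand-in data: 33 participants each imitating 4 avatar types.
        rng = np.random.default_rng(0)
        participant = np.repeat(np.arange(33), 4)
        avatar = np.tile(["stick", "generic", "similar", "doppelganger"], 33)
        rom = (40
               + 2.0 * (avatar == "doppelganger")          # invented facilitation effect
               + rng.normal(0, 3, 33)[participant]         # per-participant random offset
               + rng.normal(0, 5, avatar.size))            # residual noise

        df = pd.DataFrame({"participant": participant, "avatar": avatar, "rom": rom})

        # Fixed effect of avatar type on ROM, random intercept for participant.
        fit = smf.mixedlm("rom ~ C(avatar)", data=df, groups=df["participant"]).fit()
        print(fit.summary())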