1,816 research outputs found
The perception of emotion in artificial agents
Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.
An emotion and memory model for social robots: a long-term interaction
In this thesis, we investigate the role of emotions and memory in social robotic companions. In particular, our aim is to study the effect of an emotion and memory model towards sustaining engagement and promoting learning in a long-term interaction. Our Emotion and Memory model was based on how humans create memory under various emotional events/states. The model enabled the robot to create a memory account of the user's emotional events during a long-term child-robot interaction. The robot later adapted its behaviour by employing the developed memory in subsequent interactions with the users. The model also had an autonomous decision-making mechanism based on reinforcement learning to select behaviour according to user preference, measured through the user's engagement and learning during the task. The model was implemented on the NAO robot in two different educational setups: firstly, to promote the user's vocabulary learning, and secondly, to teach how to calculate the area and perimeter of regular and irregular shapes. We also conducted multiple long-term evaluations of our model with children at primary schools to verify its impact on their social engagement and learning. Our results showed that the behaviour generated based on our model was able to sustain social engagement. Additionally, it also helped children to improve their learning. Overall, the results highlighted the benefits of incorporating memory during child-robot interaction for extended periods of time. It promoted personalisation and contributed to creating a child-robot social relationship in a long-term interaction.
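The reinforcement-learning behaviour selection described in this abstract can be illustrated with a minimal sketch. This is a hypothetical, simplified illustration, not the thesis implementation: the behaviour names, the equal weighting of engagement and learning in the reward, and the epsilon-greedy strategy are all assumptions.

```python
import random

class BehaviourSelector:
    """Tabular epsilon-greedy selector: each behaviour carries a running
    value estimate of the engagement/learning reward it has produced."""

    def __init__(self, behaviours, epsilon=0.1, alpha=0.2):
        self.q = {b: 0.0 for b in behaviours}  # value estimate per behaviour
        self.epsilon = epsilon                 # exploration rate
        self.alpha = alpha                     # learning rate

    def select(self):
        # Explore occasionally; otherwise pick the highest-valued behaviour.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, behaviour, engagement, learning_gain):
        # Reward combines measured engagement and learning (weights assumed).
        reward = 0.5 * engagement + 0.5 * learning_gain
        self.q[behaviour] += self.alpha * (reward - self.q[behaviour])

# Hypothetical tutoring behaviours for a vocabulary task.
selector = BehaviourSelector(["encourage", "quiz", "hint"])
selector.update("quiz", engagement=0.9, learning_gain=0.8)
```

Over repeated sessions, updates of this kind would steer the robot towards the behaviours that a particular child responds to, which is the personalisation effect the abstract describes.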
Views from within a narrative: Evaluating long-term human-robot interaction in a naturalistic environment using open-ended scenarios
Open Access This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. Date of acceptance: 16/06/2014. This article describes the prototyping of human-robot interactions in the University of Hertfordshire (UH) Robot House. Twelve participants took part in a long-term study in which they interacted with robots in the UH Robot House once a week for a period of 10 weeks. A prototyping method using the narrative framing technique allowed participants to engage with the robots in episodic interactions that were framed using narrative to convey the impression of a continuous long-term interaction. The goal was to examine how participants responded to the scenarios and the robots as well as specific robot behaviours, such as agent migration and expressive behaviours. Evaluations of the robots and the scenarios were elicited using several measures, including the standardised System Usability Scale, an ad hoc Scenario Acceptance Scale, as well as single-item Likert scales, open-ended questionnaire items and a debriefing interview. Results suggest that participants felt that the use of this prototyping technique allowed them insight into the use of the robot, and that they accepted the use of the robot within the scenario. Peer reviewed.
Affect Recognition in Autism: a single case study on integrating a humanoid robot in a standard therapy.
Autism Spectrum Disorder (ASD) is a multifaceted developmental disorder that comprises a mixture of social impairments, with deficits in many areas including theory of mind, imitation, and communication. Moreover, people with autism have difficulty in recognising and understanding emotional expressions. We are currently working on integrating a humanoid robot within the standard clinical treatment offered to children with ASD to support the therapists. In this article, using the A-B-A' single case design, we propose a robot-assisted affect recognition training and present the results on the child's progress during the five months of clinical experimentation. In the investigation, we tested the generalisation of learning and the long-term maintenance of new skills via the NEPSY-II affect recognition subtest. The results of this single case study suggest the feasibility and effectiveness of using a humanoid robot to assist with emotion recognition training in children with ASD.
Autonomous Decision-Making based on Biological Adaptive Processes for Intelligent Social Robots
International Mention in the doctoral degree. The unceasing development of autonomous robots in many different scenarios drives a new revolution to improve our quality of life. Recent advances in human-robot interaction and machine learning extend robots to social scenarios, where these systems aim to assist humans in diverse tasks. Thus, social robots are nowadays becoming real in many applications like education, healthcare, entertainment, or assistance. Complex environments demand that social robots present adaptive mechanisms to overcome different situations and successfully execute their tasks. Considering these ideas, making autonomous and appropriate decisions is essential to exhibit reasonable behaviour and operate well in dynamic scenarios.
Decision-making systems provide artificial agents with the capacity to make decisions about how to behave depending on input information from the environment. In recent decades, human decision-making has served researchers as an inspiration to endow robots with similar deliberation. Especially in social robotics, where people expect to interact with machines with human-like capabilities, biologically inspired decision-making systems have demonstrated great potential and attracted considerable interest. It is therefore expected that these systems will continue to provide a solid biological grounding and to improve the naturalness of human-robot interaction, the usability, and the acceptance of social robots in the coming years.
This thesis presents a decision-making system for social robots acting autonomously in healthcare, entertainment, and assistance. The system's goal is to provide robots with natural and fluid human-robot interaction during the realisation of their tasks. The decision-making system integrates into an already existing software architecture with different modules that manage human-robot interaction, perception, or expressiveness. Inside this architecture, the decision-making system decides which behaviour the robot has to execute after evaluating information received from the different modules. These modules provide structured data about planned activities, perceptions, and artificial biological processes that evolve over time and form the basis for natural behaviour. The natural behaviour of the robot comes from the evolution of biological variables that emulate biological processes occurring in humans. We also propose a Motivational model, a module that emulates biological processes in humans to generate an artificial physiological and psychological state that influences the robot's decision-making. These processes emulate the natural biological rhythms of the human organism to produce biologically inspired decisions that improve the naturalness exhibited by the robot during human-robot interactions. The robot's decisions also depend on what the robot perceives from the environment, planned events listed in the robot's agenda, and the unique features of the user interacting with the robot.
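A motivational model of this kind can be sketched in a few lines. The following is a hypothetical illustration under assumed mechanics, not the thesis implementation: the drive names, the sinusoidal rhythm, and the decay constants are all invented for the example.

```python
import math

class Motivation:
    """One homeostatic drive (e.g. 'energy'): it decays over time, is
    modulated by a circadian-style rhythm, and its deficit measures urgency."""

    def __init__(self, name, setpoint=1.0, decay=0.01, period_h=24.0):
        self.name = name
        self.level = setpoint      # current satisfaction of the drive
        self.setpoint = setpoint   # ideal level the drive tends towards
        self.decay = decay         # how fast satisfaction erodes per step
        self.period_h = period_h   # length of the emulated biological rhythm

    def step(self, hours):
        # Rhythmic modulation: decay weakens near the peak of the cycle.
        phase = math.sin(2 * math.pi * hours / self.period_h)
        self.level = max(0.0, self.level - self.decay * (1.0 - 0.5 * phase))

    def urgency(self):
        return self.setpoint - self.level

def choose_behaviour(motivations):
    # Pick the behaviour addressing the most urgent drive.
    return max(motivations, key=lambda m: m.urgency()).name

drives = [Motivation("energy"), Motivation("social", decay=0.03)]
for hour in range(8):
    for d in drives:
        d.step(hour)
print(choose_behaviour(drives))  # the faster-decaying 'social' drive wins
```

The point of the sketch is the coupling the abstract describes: internal variables drift on a rhythm, and the decision layer reads their deficits rather than reacting only to external stimuli.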
The robot's decisions depend on many internal and external factors that influence how the robot behaves. Users are the most critical stimuli the robot perceives, since they are the cornerstone of the interaction. Social robots have to focus on assisting people in their daily tasks, considering that each person has different features and preferences. Thus, a robot devised for social interaction has to adapt its decisions to the people who interact with it. The first step towards adapting to different users is identifying the user the robot is interacting with. Then, it has to gather as much information as possible and personalise the interaction. The information about each user has to be actively updated when necessary, since outdated information may lead the user to reject the robot. Considering these facts, this work tackles user adaptation in three different ways.
• The robot incorporates user profiling methods to continuously gather information from the user using direct and indirect feedback methods.
• The robot has a Preference Learning System that predicts the user's preferences for the robot's activities and adjusts them during the interaction.
• An Action-based Learning System grounded in Reinforcement Learning is introduced as the origin of motivated behaviour.
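The first two points above, profiling from direct and indirect feedback with gradual forgetting of outdated information, can be sketched as follows. This is a minimal, hypothetical illustration; the exponential moving average, the neutral prior, and the activity names are assumptions, not the system described in the thesis.

```python
class UserProfile:
    """Per-user preference store updated from direct feedback (explicit
    ratings) and indirect feedback (observed engagement), with an
    exponential moving average so outdated information fades."""

    def __init__(self, user_id, smoothing=0.3):
        self.user_id = user_id
        self.smoothing = smoothing
        self.preferences = {}  # activity -> estimated preference in [0, 1]

    def _update(self, activity, value):
        old = self.preferences.get(activity, 0.5)  # neutral prior
        self.preferences[activity] = (1 - self.smoothing) * old + self.smoothing * value

    def direct_feedback(self, activity, rating):
        # Explicit 1-5 rating, rescaled to [0, 1].
        self._update(activity, (rating - 1) / 4)

    def indirect_feedback(self, activity, engagement):
        # Observed engagement already in [0, 1].
        self._update(activity, engagement)

    def predict(self, activities):
        # Rank candidate activities by estimated preference.
        return sorted(activities, key=lambda a: -self.preferences.get(a, 0.5))

profile = UserProfile("child_01")
profile.direct_feedback("dance", 5)
profile.indirect_feedback("quiz", 0.2)
print(profile.predict(["quiz", "dance", "story"]))  # ['dance', 'story', 'quiz']
```

The smoothing factor is what operationalises the "actively updated" requirement: recent feedback dominates, so a profile built months earlier gradually loses influence.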
The functionalities mentioned above define the inputs received by the decision-making system for adapting its behaviour. Our decision-making system has been designed to be integrated into different robotic platforms thanks to its flexibility and modularity. Finally, we carried out several experiments to evaluate the architecture's functionalities during real human-robot interaction scenarios. In these experiments, we assessed:
• How to endow social robots with adaptive affective mechanisms to overcome interaction limitations.
• Active user profiling using face recognition and human-robot interaction.
• A Preference Learning System we designed to predict and adapt to the user's preferences towards the robot's entertainment activities.
• A Behaviour-based Reinforcement Learning System that allows the robot to learn the effects of its actions so that it behaves appropriately in each situation.
• The biologically inspired robot behaviour using emulated biological processes, and how the robot creates social bonds with each user.
• The robot's expressiveness in affect (emotion and mood) and autonomic functions such as heart rate or blinking frequency.
Doctoral Programme in Electrical, Electronic and Automation Engineering, Universidad Carlos III de Madrid. President: Richard J. Duro Fernández. Secretary: Concepción Alicia Monje Micharet. Panel member: Silvia Ross
The impact of people's personal dispositions and personalities on their trust of robots in an emergency scenario
Humans should be able to trust that they can safely interact with their home companion robot. However, robots can exhibit occasional mechanical, programming or functional errors. We hypothesise that the severity of the consequences and the timing of a robot's different types of erroneous behaviours during an interaction may have different impacts on users' attitudes towards a domestic robot. First, we investigated human users' perceptions of the severity of various categories of potential errors that are likely to be exhibited by a domestic robot. Second, we used an interactive storyboard to evaluate participants' degree of trust in the robot after it performed tasks either correctly, or with 'small' or 'big' errors. Finally, we analysed the correlation between participants' responses regarding their personality, predisposition to trust other humans, their perceptions of robots, and their interaction with the robot. We conclude that there is a correlation between the magnitude of an error performed by a robot and the corresponding loss of trust by the human towards the robot. Moreover, we observed that some traits of participants' personalities (conscientiousness and agreeableness) and their disposition to trust other humans (benevolence) significantly increased their tendency to trust a robot more during an emergency scenario. Peer reviewed.
Robotic Psychology. What Do We Know about Human-Robot Interaction and What Do We Still Need to Learn?
"Robotization", the integration of robots into human life, will change human life drastically. In many situations, such as in the service sector, robots will become an integral part of our lives. Thus, it is vital to learn from extant research on human-robot interaction (HRI). This article introduces robotic psychology, which aims to bridge the gap between humans and robots by providing insights into the particularities of HRI. It presents a conceptualization of robotic psychology and provides an overview of research on service-focused human-robot interaction. Theoretical concepts relevant to understanding HRI are reviewed. Major achievements, shortcomings, and propositions for future research are discussed.