'Give me a hug': the effects of touch and autonomy on people's responses to embodied social agents
Embodied social agents are programmed to display human-like social behaviour to increase the intuitiveness of interacting with these agents. It is not yet clear to what extent people respond to agents' social behaviours. One example is touch. Despite robots' embodiment and increasing autonomy, the effect of communicative touch has been a mostly overlooked aspect of human-robot interaction. This video-based, 2x2 between-subject survey experiment (N=119) found that the combination of touch and proactivity influenced whether people saw the robot as machine-like and dependable. Participants' attitude towards robots in general also influenced perceived closeness between humans and robots. Results show that communicative touch is considered a more appropriate behaviour for proactive agents than for reactive agents. Also, people who are generally more positive towards robots find robots that interact by touch less machine-like. These effects illustrate that careful consideration is necessary when incorporating social behaviours in agents' physical interaction design.
On the Integration of Adaptive and Interactive Robotic Smart Spaces
© 2015 Mauro Dragone et al. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License (CC BY-NC-ND 3.0). Enabling robots to seamlessly operate as part of smart spaces is an important and extended challenge for robotics R&D and a key enabler for a range of advanced robotic applications, such as Ambient Assisted Living (AAL) and home automation. The integration of these technologies is currently being pursued from two largely distinct viewpoints. On the one hand, people-centred initiatives focus on improving the user's acceptance by tackling human-robot interaction (HRI) issues, often adopting a social robotics approach, and by giving the designer and, to a limited degree, the final user(s) control over personalisation and product customisation features. On the other hand, technologically driven initiatives are building impersonal but intelligent systems that are able to proactively and autonomously adapt their operations to fit changing requirements and evolving users' needs, but which largely ignore and do not leverage human-robot interaction and may thus lead to poor user experience and acceptance. In order to inform the development of a new generation of smart robotic spaces, this paper analyses and compares different research strands with a view to proposing possible integrated solutions with both advanced HRI and online adaptation capabilities. Peer reviewed.
Becoming in touch with industrial robots through ethnography
Touch is central to communication and social interaction. For both humans and robots, touch is a mode through which they sense the world. A second wave of industrial robots is reshaping how touch operates within the labor process. Recent studies have turned their attention to the role of touch in Human-Robot Interaction (HRI). While these studies have produced useful knowledge in relation to the affective capacities of robotic touch, methods remain restrictive. This paper contributes to expanding research methods for the study of robotic touch. It reports on the design of an ongoing ethnography that forms part of the InTouch project. The interdisciplinary project takes forward a socially orientated stance and is concerned with how technologies shape the semiotic and sensory dimensions of touch in the 'real world'. We contend that these dimensions are key factors in shaping how humans and robots interact, yet are currently overlooked in the HRI community. This multi-sited sensory ethnography research has been designed to explore the social implications of robotic touch within industrial settings (e.g. manufacturing and construction).
DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self
This paper introduces a cognitive architecture for a humanoid robot to engage in a proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both the human and the robot. The framework, based on a biologically grounded theory of the brain and mind, integrates a reactive interaction engine, a number of state-of-the-art perceptual and motor learning algorithms, as well as planning abilities and an autobiographical memory. The architecture as a whole drives the robot behavior to solve the symbol grounding problem, acquire language capabilities, execute goal-oriented behavior, and express a verbal narrative of its own experience in the world. We validate our approach in human-robot interaction experiments with the iCub humanoid robot, showing that the proposed cognitive architecture can be applied in real time within a realistic scenario and that it can be used with naive users.
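The mixed-initiative idea the abstract describes, where a goal can come from either the human or the robot itself, can be sketched as a simple arbitration loop. This is a minimal illustrative sketch, not the DAC-h3 implementation; all class and method names are hypothetical.

```python
from collections import deque

class MixedInitiativeController:
    """Toy arbitration sketch: goals can originate from the human
    (reactive interaction) or from the robot's own drive to ground
    unknown symbols (proactive exploration)."""

    def __init__(self):
        self.human_goals = deque()      # requests heard from the human
        self.unknown_objects = deque()  # objects whose labels are not yet grounded

    def next_goal(self):
        # Human requests take priority (reactive interaction engine).
        if self.human_goals:
            return ("execute", self.human_goals.popleft())
        # Otherwise the robot takes the initiative and asks about
        # an ungrounded object (proactive exploration).
        if self.unknown_objects:
            return ("ask_label", self.unknown_objects.popleft())
        return ("idle", None)

ctrl = MixedInitiativeController()
ctrl.unknown_objects.append("red_cube")
ctrl.human_goals.append("point_to_toy")
print(ctrl.next_goal())  # -> ('execute', 'point_to_toy')
print(ctrl.next_goal())  # -> ('ask_label', 'red_cube')
```

The priority ordering here (human first, exploration second) is one possible design choice; the paper's architecture arbitrates through its planning and memory components rather than a fixed queue.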
Teaching robot's proactive behavior using human assistance
The final publication is available at link.springer.com. In recent years, there has been growing interest in enabling autonomous social robots to interact with people. However, many questions remain unresolved regarding the social capabilities robots should have in order to perform this interaction in an ever more natural manner. In this paper, we tackle this problem through a comprehensive study of various topics involved in the interaction between a mobile robot and untrained human volunteers across a variety of tasks. In particular, this work presents a framework that enables the robot to proactively approach people and establish friendly interaction. To this end, we provided the robot with several perception and action skills, such as detecting people, planning an approach, and communicating the intention to initiate a conversation while expressing an emotional status. We also introduce an interactive learning system that uses the person's volunteered assistance to incrementally improve the robot's perception skills. As a proof of concept, we focus on the particular task of online face learning and recognition. We conducted real-life experiments with our Tibi robot to validate the framework during the interaction process. Within this study, several surveys and user studies were carried out to reveal the social acceptability of the robot within the context of different tasks. Peer reviewed. Postprint (author's final draft).
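The incremental face learning the abstract mentions, where the robot improves recognition from labels a person volunteers, can be sketched as an online nearest-centroid classifier over face embeddings. This is a hedged illustrative sketch, not the Tibi pipeline; the class, threshold, and embedding source are assumptions.

```python
import math

class OnlineFaceLearner:
    """Toy incremental identity learner: each volunteered label updates
    a running mean of that person's face embeddings; recognition returns
    the nearest centroid, or None to trigger a request for assistance."""

    def __init__(self, threshold=0.5):
        self.centroids = {}  # name -> (mean embedding, sample count)
        self.threshold = threshold

    def learn(self, name, embedding):
        # Incrementally update the running mean for this person.
        if name not in self.centroids:
            self.centroids[name] = (list(embedding), 1)
            return
        mean, n = self.centroids[name]
        mean = [(m * n + e) / (n + 1) for m, e in zip(mean, embedding)]
        self.centroids[name] = (mean, n + 1)

    def recognise(self, embedding):
        # Closest known identity, or None if nobody is near enough
        # (the robot would then proactively ask for the person's name).
        best, best_d = None, float("inf")
        for name, (mean, _) in self.centroids.items():
            d = math.dist(mean, embedding)
            if d < best_d:
                best, best_d = name, d
        return best if best_d < self.threshold else None

learner = OnlineFaceLearner()
learner.learn("alice", [0.0, 0.0])
print(learner.recognise([0.1, 0.0]))  # -> alice
```

A real system would use embeddings from a face recognition network; the point of the sketch is only the incremental update from volunteered assistance.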
Nonverbal immediacy as a characterisation of social behaviour for human-robot interaction
An increasing amount of research has started to explore the impact of robot social behaviour on the outcome of a goal for a human interaction partner, such as cognitive learning gains. However, it remains unclear from what principles the social behaviour for such robots should be derived. Human models are often used, but in this paper an alternative approach is proposed. First, the concept of nonverbal immediacy from the communication literature is introduced, with a focus on how it can provide a characterisation of social behaviour, and the subsequent outcomes of such behaviour. A literature review is conducted to explore the impact on learning of the social cues which form the nonverbal immediacy measure. This leads to the production of a series of guidelines for social robot behaviour. The resulting behaviour is evaluated in a more general context, where both children and adults judge the immediacy of humans and robots in a similar manner, and their recall of a short story is tested. Children recall more of the story when the robot is more immediate, which demonstrates an effect predicted by the literature. This study provides validation for the application of nonverbal immediacy to child-robot interaction. It is proposed that nonverbal immediacy measures could be used as a means of characterising robot social behaviour for human-robot interaction.
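Nonverbal immediacy questionnaires are typically scored by rating the frequency of individual cues and flipping reverse-coded items before summing. The sketch below illustrates that scoring scheme; the item names and 1-5 scale are illustrative assumptions, not the specific instrument used in the paper.

```python
def immediacy_score(ratings, reverse_items):
    """Toy scoring sketch for a nonverbal-immediacy questionnaire:
    each cue (gestures, gaze, proximity, ...) is rated 1-5 for how
    often it occurs; reverse-coded items (cues that signal distance,
    e.g. looking away) are flipped before summing."""
    total = 0
    for item, rating in ratings.items():
        if not 1 <= rating <= 5:
            raise ValueError(f"rating out of range for {item}")
        total += (6 - rating) if item in reverse_items else rating
    return total

ratings = {"gestures": 4, "eye_contact": 5, "looks_away": 2}
print(immediacy_score(ratings, reverse_items={"looks_away"}))  # 4 + 5 + (6-2) = 13
```

Higher totals indicate a more immediate (warmer, more engaged) interaction partner, which is the quantity the study relates to children's story recall.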
Autonomous Decision-Making based on Biological Adaptive Processes for Intelligent Social Robots
Mención Internacional en el título de doctor

The unceasing development of autonomous robots in many different scenarios drives a new revolution to improve our quality of life. Recent advances in human-robot interaction and machine learning extend robots to social scenarios, where these systems aim to assist humans in diverse tasks. Thus, social robots are nowadays becoming real in many applications like education, healthcare, entertainment, or assistance. Complex environments demand that social robots present adaptive mechanisms to overcome different situations and successfully execute their tasks. Considering these ideas, making autonomous and appropriate decisions is essential to exhibit reasonable behaviour and operate well in dynamic scenarios.

Decision-making systems provide artificial agents with the capacity to make decisions about how to behave depending on input information from the environment. In recent decades, human decision-making has served researchers as an inspiration to endow robots with similar deliberation. Especially in social robotics, where people expect to interact with machines with human-like capabilities, biologically inspired decision-making systems have demonstrated great potential and interest. It is therefore expected that these systems will continue to provide a solid biological background and to improve the naturalness of human-robot interaction, its usability, and the acceptance of social robots in the coming years.
This thesis presents a decision-making system for social robots acting with autonomous behaviour in healthcare, entertainment, and assistance. The system's goal is to provide robots with natural and fluid human-robot interaction during the realisation of their tasks. The decision-making system integrates into an already existing software architecture with different modules that manage human-robot interaction, perception, or expressiveness. Inside this architecture, the decision-making system decides which behaviour the robot has to execute after evaluating information received from the different modules in the architecture. These modules provide structured data about planned activities, perceptions, and artificial biological processes that evolve over time and form the basis for natural behaviour. The natural behaviour of the robot comes from the evolution of biological variables that emulate biological processes occurring in humans. We also propose a Motivational model, a module that emulates biological processes in humans to generate an artificial physiological and psychological state that influences the robot's decision-making. These processes emulate the natural biological rhythms of the human organism to produce biologically inspired decisions that improve the naturalness exhibited by the robot during human-robot interactions. The robot's decisions also depend on what the robot perceives from the environment, planned events listed in the robot's agenda, and the unique features of the user interacting with the robot.
The robot's decisions depend on many internal and external factors that influence how the robot behaves. Users are the most critical stimuli the robot perceives, since they are the cornerstone of interaction. Social robots have to focus on assisting people in their daily tasks, considering that each person has different features and preferences. Thus, a robot devised for social interaction has to adapt its decisions to the people who aim to interact with it. The first step towards adapting to different users is identifying the user it interacts with. Then, it has to gather as much information as possible and personalise the interaction. The information about each user has to be actively updated when necessary, since outdated information may lead the user to reject the robot. Considering these facts, this work tackles user adaptation in three different ways:
• The robot incorporates user profiling methods to continuously gather information from the user using direct and indirect feedback methods.
• The robot has a Preference Learning System that predicts and adjusts the user's preferences to the robot's activities during the interaction.
• An Action-based Learning System grounded on Reinforcement Learning is introduced as the origin of motivated behaviour.
The functionalities mentioned above define the inputs received by the decision-making system for adapting its behaviour. Our decision-making system has been designed to be integrated into different robotic platforms thanks to its flexibility and modularity.
Finally, we carried out several experiments to evaluate the architecture's functionalities during real human-robot interaction scenarios. In these experiments, we assessed:
• How to endow social robots with adaptive affective mechanisms to overcome interaction limitations.
• Active user profiling using face recognition and human-robot interaction.
• A Preference Learning System we designed to predict and adapt the user's preferences towards the robot's entertainment activities for adapting the interaction.
• A Behaviour-based Reinforcement Learning System that allows the robot to learn the effects of its actions in order to behave appropriately in each situation.
• The biologically inspired robot behaviour using emulated biological processes and how the robot creates social bonds with each user.
• The robot's expressiveness in affect (emotion and mood) and autonomic functions such as heart rate or blinking frequency.

Programa de Doctorado en Ingeniería Eléctrica, Electrónica y Automática por la Universidad Carlos III de Madrid. Presidente: Richard J. Duro Fernández. Secretaria: Concepción Alicia Monje Micharet. Vocal: Silvia Ross
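The per-user preference adjustment described in the thesis abstract can be illustrated with the simplest possible update rule: an exponential moving average over interaction feedback. This is a hedged sketch only; the 0-1 preference scale, the learning rate, and the function name are assumptions, not the thesis's Preference Learning System.

```python
def update_preference(current, feedback, alpha=0.5):
    """Toy preference update: blend the stored preference for an
    activity with the latest feedback signal (1.0 = liked, 0.0 =
    disliked). alpha controls how fast old evidence is forgotten."""
    return (1 - alpha) * current + alpha * feedback

pref = 0.5                  # neutral prior for a new user
for fb in (1.0, 1.0, 0.0):  # liked, liked, then disliked
    pref = update_preference(pref, fb)
print(pref)  # -> 0.4375
```

An averaging rule like this keeps preferences actively updated, which matches the abstract's point that outdated user information may lead the user to reject the robot.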
A motivational model based on artificial biological functions for the intelligent decision-making of social robots
Modelling the biology behind animal behaviour has attracted great interest in recent years. Nevertheless, neuroscience and artificial intelligence face the challenge of representing and emulating animal behaviour in robots. Consequently, this paper presents a biologically inspired motivational model to control the biological functions of autonomous robots that interact with and emulate human behaviour. The model is intended to produce fully autonomous, natural behaviour that can adapt to both familiar and unexpected situations in human–robot interaction. The primary contribution of this paper is to present novel methods for modelling the robot's internal state to generate deliberative and reactive behaviour, how it perceives and evaluates the stimuli from the environment, and the role of emotional responses. Our architecture emulates essential animal biological functions, such as neuroendocrine responses, circadian and ultradian rhythms, motivation, and affection, to generate biologically inspired behaviour in social robots. Neuroendocrinal substances control biological functions such as sleep, wakefulness, and emotion. Deficits in these processes regulate the robot's motivational and affective states, significantly influencing the robot's decision-making and, therefore, its behaviour. We evaluated the model by observing the long-term behaviour of the social robot Mini while interacting with people. The experiment assessed how the robot's behaviour varied and evolved depending on its internal variables and external situations, adapting to different conditions.
The outcomes show that an autonomous robot with appropriate decision-making can cope with its internal deficits and unexpected situations, controlling its sleep–wake cycle, social behaviour, affective states, and stress when acting in human–robot interactions. The research leading to these results has received funding from the projects: Robots Sociales para Estimulación Física, Cognitiva y Afectiva de Mayores (ROSES), RTI2018-096338-B-I00, funded by the Ministerio de Ciencia, Innovación y Universidades; Robots sociales para mitigar la soledad y el aislamiento en mayores (SOROLI), PID2021-123941OA-I00, funded by the Agencia Estatal de Investigación (AEI), Spanish Ministerio de Ciencia e Innovación. This publication is part of the R&D&I project PLEC2021-007819, funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR.
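The deficit-driven mechanism the abstract describes, where shortfalls in internal variables regulate motivation and hence behaviour, is commonly sketched as a homeostatic winner-take-all rule. The sketch below is illustrative only; the variable names and thresholds are assumptions, not the parameters of the Mini robot's model.

```python
def dominant_motivation(deficits, thresholds):
    """Toy homeostatic sketch: each internal variable (sleep, social
    contact, ...) accumulates a deficit over time; a motivation becomes
    active when its deficit exceeds its activation threshold, and the
    most-exceeded motivation wins and drives the robot's behaviour."""
    active = {m: d - thresholds[m] for m, d in deficits.items() if d > thresholds[m]}
    if not active:
        return None  # no urgent need: fall back to default/idle behaviour
    return max(active, key=active.get)

deficits = {"sleep": 0.8, "social": 0.4, "entertainment": 0.1}
thresholds = {"sleep": 0.6, "social": 0.3, "entertainment": 0.5}
print(dominant_motivation(deficits, thresholds))  # -> sleep (0.2 over threshold)
```

In the paper's model, the deficits themselves evolve under emulated neuroendocrine and circadian dynamics, which is what gives the behaviour its long-term rhythm; the rule above only illustrates how a deficit can come to dominate decision-making.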
The role of trust in proactive conversational assistants
Humans and machines harmoniously collaborating and benefiting from each other is a long-standing dream for researchers in robotics and artificial intelligence. An important feature of efficient and rewarding cooperation is the ability to anticipate possible problematic situations and act in advance to prevent negative outcomes. This concept of assistance is known under the term proactivity. In this article, we investigate the development and implementation of proactive dialogues for fostering a trustworthy human-computer relationship and providing adequate and timely assistance. Here, we make several contributions. A formalisation of proactive dialogue in conversational assistants is provided, which forms a framework for integrating proactive dialogue into conversational applications. Additionally, we present a study showing the relations between proactive dialogue actions and several aspects of the perceived trustworthiness of a system, as well as effects on the user experience. The results of the experiments provide significant contributions to the line of proactive dialogue research. In particular, we provide insights into the effects of proactive dialogue on the human-computer trust relationship, and into dependencies between proactive dialogue and user-specific and situational characteristics.
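Proactive dialogue work often distinguishes graded levels of initiative, from staying silent through notifying and suggesting up to acting autonomously. The paper's own formalisation is not reproduced here; the following is a toy policy sketch under that assumption, with the thresholds and trust coupling invented for illustration.

```python
# Levels of proactive behaviour, ordered from passive to fully autonomous.
LEVELS = ("none", "notification", "suggestion", "intervention")

def choose_level(problem_probability, user_trust):
    """Toy policy sketch (not the paper's formalisation): escalate the
    assistant's proactivity with the estimated risk of a problem, but
    hold back autonomous intervention while the user's trust is low."""
    if problem_probability < 0.2:
        return "none"
    if problem_probability < 0.5:
        return "notification"
    if problem_probability < 0.8 or user_trust < 0.5:
        return "suggestion"
    return "intervention"

print(choose_level(0.9, 0.9))  # -> intervention (high risk, high trust)
print(choose_level(0.9, 0.3))  # -> suggestion (high risk, but low trust)
```

Coupling the permitted level of initiative to trust reflects the article's central finding that proactive dialogue acts and perceived trustworthiness depend on each other.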