120 research outputs found

    Review of Anthropomorphic Head Stabilisation and Verticality Estimation in Robots

    In many walking, running, flying, and swimming animals, including mammals, reptiles, and birds, the vestibular system plays a central role in verticality estimation and is often associated with a head stabilisation (in rotation) behaviour. Head stabilisation, in turn, subserves gaze stabilisation, postural control, visual-vestibular information fusion, and spatial awareness via the active establishment of a quasi-inertial frame of reference. Head stabilisation helps animals cope with the computational consequences of angular movements, which complicate the reliable estimation of the vertical direction. We suggest that this strategy could also benefit free-moving robotic systems, such as locomoting humanoid robots, which are typically equipped with inertial measurement units. Free-moving robotic systems could gain the full benefits of inertial measurements if the measurement units are placed on independently orientable platforms, such as human-like heads. We illustrate these benefits by analysing recent humanoid robot designs and control approaches.
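    To see why angular movement complicates verticality estimation, consider a standard strap-down specific-force model for an accelerometer mounted at a lever arm from the rotation centre (the notation below is ours, introduced only to illustrate the argument; it is not taken from the review):

\[
\mathbf{a}_m \;=\; R^{\top}\!\left(\ddot{\mathbf{p}} - \mathbf{g}\right)
\;+\; \boldsymbol{\omega} \times (\boldsymbol{\omega} \times \mathbf{r})
\;+\; \dot{\boldsymbol{\omega}} \times \mathbf{r}
\;+\; \mathbf{b} \;+\; \boldsymbol{\eta},
\]

    where R is the orientation of the sensor frame with respect to the world, \ddot{\mathbf{p}} the linear acceleration of the reference point, \mathbf{g} gravity, \boldsymbol{\omega} the angular velocity, \mathbf{r} the lever arm, and \mathbf{b}, \boldsymbol{\eta} bias and noise. If the platform carrying the sensors is angularly stabilised, \boldsymbol{\omega} and \dot{\boldsymbol{\omega}} are regulated towards zero, the centrifugal and Euler terms vanish, and gravity is confounded only with the linear acceleration, which is far easier to average out or estimate.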

    Humanoid-based protocols to study social cognition

    Social cognition is broadly defined as the way humans understand and process their interactions with other humans. In recent years, humans have become more and more used to interacting with non-human agents, such as technological artifacts. Although these interactions have so far been restricted to human-controlled artifacts, they will soon include interactions with embodied and autonomous mechanical agents, i.e., robots. This challenge has motivated an area of research devoted to investigating human reactions towards robots, widely referred to as Human-Robot Interaction (HRI). Classical HRI protocols often rely on explicit measures, e.g., subjective reports. Therefore, they cannot address the quantification of the crucial implicit social cognitive processes that are evoked during an interaction. This thesis aims to develop a link between cognitive neuroscience and human-robot interaction (HRI) to study social cognition. This approach overcomes methodological constraints of both fields, allowing researchers to trigger and capture the mechanisms of real-life social interactions while ensuring high experimental control. The present PhD work demonstrates this through the systematic study of the effect of online eye contact on gaze-mediated orienting of attention. The study presented in Publication I aims to adapt the gaze-cueing paradigm from cognitive science to an objective neuroscientific HRI protocol. Furthermore, it investigates whether the gaze-mediated orienting of attention is sensitive to the establishment of eye contact. The study replicates classic screen-based findings of attentional orienting mediated by gaze at both the behavioral and neural levels, highlighting the feasibility and the scientific value of adding neuroscientific methods to HRI protocols. The aim of the study presented in Publication II is to examine whether and how real-time eye contact affects the dual-component model of joint attention orienting. To this end, cue validity and stimulus onset asynchrony are also manipulated. The results show an interactive effect of strategic (cue validity) and social (eye contact) top-down components on the bottom-up reflexive component of gaze-mediated orienting of attention. The study presented in Publication III examines subjective engagement and the attribution of human likeness to the robot depending on whether eye contact is established during a joint attention task. Subjective reports show that eye contact increases human likeness attribution and feelings of engagement with the robot compared to a no-eye-contact condition. The aim of the study presented in Publication IV is to investigate whether eye contact established by a humanoid robot affects objective measures of engagement (i.e., joint attention and fixation durations) and subjective feelings of engagement with the robot during a joint attention task. Results show that eye contact modulates attentional engagement, with longer fixations at the robot's face and a cueing effect when the robot establishes eye contact. In contrast, subjective reports show that the feeling of being engaged with the robot in an HRI protocol is not modulated by real-time eye contact. This study further supports the necessity of adding objective methods to HRI research. Overall, this PhD work shows that embodied artificial agents can advance the theoretical knowledge of social cognitive mechanisms by serving as sophisticated interactive stimuli of high ecological validity and excellent experimental control. Moreover, humanoid-based protocols grounded in cognitive science can advance the HRI community by revealing the exact cognitive mechanisms that are at play during HRI.
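    To make the experimental logic concrete, the sketch below shows how a single gaze-cueing trial of this kind is typically structured: the robot first establishes or withholds eye contact, produces a directional gaze cue, and after a stimulus onset asynchrony (SOA) a target appears at a validly or invalidly cued location while the participant's reaction time is recorded. All function names, timings, and parameter values are illustrative assumptions, not the implementation actually used in the thesis.

```python
import random
import time
from dataclasses import dataclass

# Illustrative parameters: the studies manipulated eye contact, cue validity,
# and stimulus onset asynchrony (SOA); these particular values are assumptions.
SOAS_S = [0.25, 0.50, 1.00]   # candidate SOAs in seconds
CUE_VALIDITY = 0.5            # proportion of validly cued trials

@dataclass
class TrialResult:
    eye_contact: bool
    valid_cue: bool
    soa: float
    reaction_time: float

def run_trial(robot, present_target_and_wait, eye_contact: bool) -> TrialResult:
    """One gaze-cueing trial, given hypothetical robot and display interfaces."""
    valid = random.random() < CUE_VALIDITY
    soa = random.choice(SOAS_S)
    target_side = random.choice(["left", "right"])
    cue_side = target_side if valid else ("left" if target_side == "right" else "right")

    if eye_contact:
        robot.look_at_participant()   # establish eye contact before cueing
    else:
        robot.look_down()             # no-eye-contact condition
    time.sleep(1.0)

    robot.gaze(cue_side)              # directional gaze cue
    time.sleep(soa)                   # wait out the stimulus onset asynchrony

    t0 = time.monotonic()
    present_target_and_wait(target_side)   # show target, block until response
    rt = time.monotonic() - t0
    return TrialResult(eye_contact, valid, soa, rt)
```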

    Shared Perception in Human-Robot Interaction

    Interaction can be seen as a composition of perspectives: the integration of the perceptions, intentions, and actions on the environment that two or more agents share. For an interaction to be effective, each agent must be prone to “sharedness”: being situated in a common environment, able to read what others express about their perspective, and ready to adjust one’s own perspective accordingly. In this sense, effective interaction is supported by perceiving the environment jointly with others, a capability that in this research is called Shared Perception. Nonetheless, perception is a complex process in which the observer receives sensory inputs from the external world and interprets them based on its own previous experiences, predictions, and intentions. In addition, social interaction itself contributes to shaping what is perceived: others’ attention, perspective, actions, and internal states may also be incorporated into perception. Thus, Shared Perception reflects the observer's ability to integrate these three sources of information: the environment, the self, and other agents. If Shared Perception is essential among humans, it is equally crucial for interaction with robots, which need social and cognitive abilities to interact with humans naturally and successfully. This research deals with Shared Perception within the context of Social Human-Robot Interaction (HRI) and involves an interdisciplinary approach. The two general axes of the thesis are the investigation of human perception while interacting with robots and the modeling of the robot’s perception while interacting with humans. These two directions are outlined through three specific Research Objectives, whose achievements represent the contribution of this work. i) The formulation of a theoretical framework of Shared Perception in HRI valid for interpreting and developing different socio-perceptual mechanisms and abilities. ii) The investigation of Shared Perception in humans, focusing on the perceptual mechanism of Context Dependency and therefore exploring how social interaction affects the use of previous experience in human spatial perception. iii) The implementation of a deep-learning model for Addressee Estimation to foster robots’ socio-perceptual skills through the awareness of others’ behavior, as suggested in the Shared Perception framework. To achieve the first Research Objective, several human socio-perceptual mechanisms are presented and interpreted in a unified account. This exposition draws parallels between mechanisms elicited by interaction with humans and with humanoid robots, and aims to build a framework suitable for investigating human perception in the context of HRI. Based on the thought of D. Davidson and conceived as the integration of information coming from the environment, the self, and other agents, the idea of "triangulation" expresses the critical dynamics of Shared Perception. It is also proposed as the functional structure to support the implementation of socio-perceptual skills in robots. This general framework serves as a reference to fulfill the other two Research Objectives, which explore specific aspects of Shared Perception. Regarding the second Research Objective, the human perceptual mechanism of Context Dependency is investigated, for the first time, within social interaction. Human perception is based on unconscious inference, where sensory inputs are integrated with prior information. This phenomenon helps in facing the uncertainty of the external world with predictions built upon previous experience.
To investigate the effect of social interaction on such a mechanism, the iCub robot was used as an experimental tool to create an interactive scenario with a controlled setting. A user study based on psychophysical methods, Bayesian modeling, and a neural network analysis of the human results demonstrated that social interaction influences Context Dependency: when interacting with a social agent, humans rely less on their internal models and more on external stimuli. These results are framed in Shared Perception and contribute to revealing the integration dynamics of its three sources. The others’ presence and social behavior (other agents) affect the balance between sensory inputs (environment) and personal history (self) in favor of the information shared with others, that is, the environment. The third Research Objective consists of tackling the Addressee Estimation problem, i.e., understanding to whom a speaker is talking, in order to improve the iCub's social behavior in multi-party interactions. Addressee Estimation can be considered a Shared Perception ability because it is achieved by using sensory information from the environment, internal representations of the agents’ positions, and, more importantly, the understanding of others’ behavior. An architecture for Addressee Estimation is thus designed considering the integration process of Shared Perception (environment, self, other agents) and partially implemented with respect to the third element: the awareness of others’ behavior. To achieve this, a hybrid deep-learning (CNN+LSTM) model is developed to estimate the addressee's placement relative to the speaker and the robot from the non-verbal behavior of the speaker. Addressee Estimation abilities based on Shared Perception dynamics are aimed at improving multi-party HRI. Making robots aware of other agents’ behavior towards the environment is the first crucial step for incorporating such information into the robot’s perception and modeling Shared Perception.
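    The hybrid CNN+LSTM architecture for Addressee Estimation could look roughly like the sketch below: a small per-frame convolutional encoder processes crops of the speaker's non-verbal behavior, an LSTM aggregates the resulting sequence, and a linear head classifies the addressee's placement. Input shapes, layer sizes, and class labels are assumptions made for illustration; they are not the exact model developed in the thesis.

```python
import torch
import torch.nn as nn

class AddresseeEstimator(nn.Module):
    """Illustrative CNN+LSTM: per-frame visual encoder + temporal aggregation."""
    def __init__(self, n_classes: int = 3, hidden: int = 128):
        super().__init__()
        # Per-frame CNN encoder (input assumed to be 64x64 RGB crops of the speaker).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),       # -> (batch*time, 64)
        )
        # LSTM over the sequence of frame embeddings.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        # Classifier over assumed placements, e.g. {"robot", "left person", "right person"}.
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, 64, 64)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])                        # logits per addressee placement

# Example: a batch of 2 clips, 10 frames each.
logits = AddresseeEstimator()(torch.randn(2, 10, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 3])
```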

    Socially Believable Robots

    Long-term companionship, emotional attachment, and realistic interaction with robots have always been the ultimate sign of technological advancement projected by sci-fi literature and the entertainment industry. With the advent of artificial intelligence, we have indeed stepped into an era of socially believable robots or humanoids. Affective computing has enabled the deployment of emotional or social robots to a certain level in social settings like informatics, customer services, and health care. Nevertheless, the social believability of a robot is communicated through its physical embodiment and natural expressiveness. With each passing year, innovations in chemical and mechanical engineering have facilitated life-like embodiments of robots; however, much work is still required to develop a “social intelligence” in a robot in order to maintain the illusion of dealing with a real human being. This chapter is a collection of research studies on the modeling of complex autonomous systems. It further sheds light on how different social settings require different levels of social intelligence and on the implications of integrating a socially and emotionally believable machine into a society driven by behaviors and actions.

    SMA-driven soft robotic neck: design, control and validation

    Replicating the behavior and movement of living organisms in order to develop robots that are better adapted to the natural human environment is a major area of interest today. Soft device development is one of the most promising and innovative technological fields for meeting this challenge. However, soft technology lacks suitable actuators, and therefore the development and integration of soft actuators is a priority. This article presents the development and control of a soft robotic neck that is actuated by a flexible Shape Memory Alloy (SMA)-based actuator. The proposed neck has two degrees of freedom that allow movements of inclination and orientation, thus approaching the actual movement of the human neck. The platform we have developed may be considered a true soft robotic device since, due to its flexible SMA-based actuator, it has far fewer rigid parts than similar platforms. Weight and motion noise have also been considerably reduced due to the absence of gearboxes, housings, and bearings, which are commonly used in conventional actuators to reduce velocity and increase torque. This work was supported in part by the Spanish Ministry of Economy and Competitiveness through the Exoesqueleto para Diagnostico y Asistencia en Tareas de Manipulación Spanish Research Project under Grant DPI2016-75346-R and the HUMASOFT Project under Grant DPI2016-75330-P, in part by the Programas de Actividades I+D en la Comunidad de Madrid through the RoboCity2030-DIH-CM Madrid Robotics Digital Innovation Hub (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos, fase IV) under Grant S2018/NMT-4331, and in part by the Structural Funds of the EU.
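    Since the abstract does not detail the control scheme, the sketch below only shows a generic way SMA-driven joints are commonly closed-loop controlled: a PID regulator modulates the PWM heating of the wire based on the measured inclination error. The gains, hardware interfaces, and single-axis simplification are illustrative assumptions, not the controller presented in the article.

```python
import time

class PID:
    """Simple PID controller (gains are illustrative, not from the article)."""
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error: float, dt: float) -> float:
        self.integral += error * dt
        deriv = (error - self.prev_err) / dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def control_loop(read_inclination, set_pwm, target_deg: float, dt: float = 0.01):
    """Drive one SMA wire so the neck reaches a target inclination.

    read_inclination() and set_pwm() are hypothetical hardware interfaces
    (e.g., an IMU on the neck platform and a PWM heating driver).
    """
    pid = PID(kp=2.0, ki=0.5, kd=0.05)
    while True:
        error = target_deg - read_inclination()
        duty = min(max(pid.step(error, dt), 0.0), 1.0)  # SMA can only be heated, not pushed
        set_pwm(duty)
        time.sleep(dt)
```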

    Idiothetic Verticality Estimation Through Head Stabilization Strategy

    The knowledge of the gravitational vertical is fundamental for the autonomous control of humanoids and other free-moving robotic systems such as rovers and drones. This article deals with the hypothesis that the so-called 'head stabilization strategy' observed in humans and animals facilitates the estimation of the true vertical from inertial sensing alone. This problem is difficult because inertial measurements respond to a combination of gravity and fictitious forces that are hard to disentangle. From simulations and experiments, we found that the angular stabilization of a platform bearing inertial sensors enables the application of the separation principle. This principle, which permits one to design estimators and controllers independently of each other, typically applies to linear systems but rarely to nonlinear systems. We found empirically that, given inertial measurements, the angular regulation of a platform results in a system that is stable and robust and which provides true vertical estimates as a byproduct of the feedback. We conclude that angularly stabilized inertial measurement platforms could liberate robots from ground-based measurements for postural control, locomotion, and other functions, leading to a truly idiothetic sensing modality, that is, one not based on any external reference but the gravity field.
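    As a rough illustration of how estimation and stabilization can share the same signal, here is a minimal single-axis sketch: a complementary filter fuses gyroscope and accelerometer data into an estimate of the platform's tilt with respect to the vertical, and that estimate directly drives the angular regulation command. This is a generic reconstruction under strong simplifying assumptions (one axis, fixed gains, no lever-arm effects), not the estimator derived in the article.

```python
import math

class VerticalEstimator:
    """Planar (single-axis) complementary filter for the gravity direction.

    Illustrative only: the article works in 3-D and couples estimation with the
    angular regulation of the head platform; alpha and k_p are assumed gains.
    """
    def __init__(self, alpha: float = 0.98):
        self.alpha = alpha
        self.theta = 0.0          # estimated tilt of the platform w.r.t. the vertical (rad)

    def update(self, gyro_rate: float, acc_x: float, acc_z: float, dt: float) -> float:
        theta_gyro = self.theta + gyro_rate * dt     # propagate with the gyroscope
        theta_acc = math.atan2(acc_x, acc_z)         # gravity direction from the accelerometer
        self.theta = self.alpha * theta_gyro + (1 - self.alpha) * theta_acc
        return self.theta

def stabilization_command(theta_est: float, k_p: float = 5.0) -> float:
    """Angular-rate command that regulates the platform toward the estimated vertical."""
    return -k_p * theta_est
```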

    Developing an Affect-Aware Rear-Projected Robotic Agent

    Social (or sociable) robots are designed to interact with people in a natural and interpersonal manner. They are becoming an integrated part of our daily lives and have achieved positive outcomes in several applications such as education, health care, quality of life, and entertainment. Despite significant progress towards the development of realistic social robotic agents, a number of problems remain to be solved. First, current social robots either lack the ability to have deep social interaction with humans, or they are very expensive to build and maintain. Second, current social robots have yet to reach the full emotional and social capabilities necessary for rich and robust interaction with human beings. To address these problems, this dissertation presents the development of a low-cost, flexible, affect-aware, rear-projected robotic agent (called ExpressionBot) that is designed to support verbal and non-verbal communication between the robot and humans, with the goal of closely modeling the dynamics of natural face-to-face communication. The developed robotic platform uses state-of-the-art character animation technologies to create an animated human face (aka avatar) that is capable of showing facial expressions, realistic eye movement, and accurate visual speech, and then projects this avatar onto a face-shaped translucent mask. The mask and the projector are then rigged onto a neck mechanism that can move like a human head. Since an animation is projected onto a mask, the robotic face is a highly flexible research tool that is mechanically simple and low-cost to design, build, and maintain compared with mechatronic and android faces. The results of our comprehensive Human-Robot Interaction (HRI) studies illustrate the benefits and value of the proposed rear-projected robotic platform over a virtual agent with the same animation displayed on a 2D computer screen. The results indicate that ExpressionBot is well accepted by users, with some advantages in expressing facial expressions more accurately and in perceiving mutual eye gaze contact. To improve the social capabilities of the robot and create an expressive and empathic (affect-aware) social agent capable of interpreting users' emotional facial expressions, we developed a new Deep Neural Network (DNN) architecture for Facial Expression Recognition (FER). The proposed DNN was initially trained on seven well-known publicly available databases and obtained results significantly better than, or comparable to, traditional convolutional neural networks and other state-of-the-art methods in both accuracy and learning time. Since the performance of an automated FER system depends strongly on its training data, and the eventual goal of the proposed robotic platform is to interact with users in an uncontrolled environment, a database of facial expressions in the wild (called AffectNet) was created by querying emotion-related keywords on different search engines. AffectNet contains more than 1M images with faces and 440,000 manually annotated images with facial expressions, valence, and arousal. Two DNNs were trained on AffectNet to classify the facial expression images and to predict the values of valence and arousal. Various evaluation metrics show that our deep neural network approaches trained on AffectNet perform better than conventional machine learning methods and available off-the-shelf FER systems.
    We then integrated this automated FER system into the spoken dialog of our robotic platform to extend and enrich the capabilities of ExpressionBot beyond spoken dialog and create an affect-aware robotic agent that can measure and infer users' affect and cognition. Three social/interaction aspects (task engagement, empathy, and likability of the robot) were measured in an experiment with the affect-aware robotic agent. The results indicate that users rated our affect-aware agent as being as empathic and likable as a robot in which the user's affect is recognized by a human operator (Wizard of Oz, WoZ). In summary, this dissertation presents the development and HRI studies of a perceptive, expressive, conversational, rear-projected, life-like robotic agent (aka ExpressionBot or Ryan) that models natural face-to-face communication between a human and an empathic agent. The results of our in-depth human-robot interaction studies show that this robotic agent can serve as a model for creating the next generation of empathic social robots.
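    The two AffectNet-trained networks (categorical expression classification and valence/arousal regression) could be approximated by a shared convolutional backbone with two heads, as sketched below. The architecture, input size, and the use of a single network with two heads are illustrative assumptions; the dissertation trained two separate DNNs with its own architecture.

```python
import torch
import torch.nn as nn

class FERNet(nn.Module):
    """Illustrative facial-expression network with a categorical head and a
    valence/arousal regression head (the dissertation trained two separate DNNs)."""
    def __init__(self, n_expressions: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(                    # assumed 96x96 grayscale face crops
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.expr_head = nn.Linear(128, n_expressions)    # logits over expression categories
        self.va_head = nn.Sequential(nn.Linear(128, 2), nn.Tanh())  # valence, arousal in [-1, 1]

    def forward(self, faces: torch.Tensor):
        feats = self.backbone(faces)
        return self.expr_head(feats), self.va_head(feats)

# Example forward pass on a dummy batch of 4 face crops.
expr_logits, valence_arousal = FERNet()(torch.randn(4, 1, 96, 96))
print(expr_logits.shape, valence_arousal.shape)  # torch.Size([4, 8]) torch.Size([4, 2])
```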

    Intelligent Management of Hierarchical Behaviors Using a NAO Robot as a Vocational Tutor

    In order to create an intelligent system that can hold an interview using the NAO robot as an interviewer playing the role of a vocational tutor, twenty behaviors were classified and categorized within five personality profiles. Five basic emotions are considered: anger, boredom, interest, surprise, and joy. The selected behaviors are grouped according to these five emotions. The common behaviors (e.g., movements or body postures) used by the robot (which assumes the role of vocational tutor) during vocational guidance sessions are based on a theory of personality traits called the "Five Factor Model". In this context, the robot asks a predefined set of questions about the person's vocational preferences, following a theoretical model called the "Orientation Model". NAO can therefore react as appropriately as possible during the interview, according to the score of the answer the person gives to the question posed and to the person's personality type. Additionally, based on the answers to these questions, a vocational profile is established and the robot can issue a recommendation about the person's vocation. The results obtained show how the intelligent selection of behaviors can be successfully achieved through the proposed approach, making the interaction between a human and a robot friendlier.
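    The behavior-selection logic can be pictured, very roughly, as a mapping from the score of the interviewee's answer to one of the five emotions and then to a behavior within the matching group, optionally weighted by the detected personality profile. All score thresholds, behavior names, and weights below are hypothetical and serve only to illustrate the idea; they are not taken from the article.

```python
import random

# Hypothetical behavior groups for the five emotions considered in the article;
# the behavior names and groupings are illustrative, not the article's catalogue.
BEHAVIORS = {
    "anger":    ["cross_arms", "shake_head"],
    "boredom":  ["slouch", "look_away"],
    "interest": ["lean_forward", "nod_slowly"],
    "surprise": ["raise_arms", "step_back"],
    "joy":      ["open_arms", "nod_enthusiastically"],
}

def emotion_from_score(score: int) -> str:
    """Map the interviewee's answer score (assumed 1-5) to the tutor's emotional reaction."""
    if score <= 1:
        return "anger"
    if score == 2:
        return "boredom"
    if score == 3:
        return "interest"
    if score == 4:
        return "surprise"
    return "joy"

def select_behavior(score: int, profile_bias=None) -> str:
    """Choose a NAO behavior for the detected emotion, optionally weighted by the
    interviewee's Five Factor Model profile (weights are hypothetical)."""
    emotion = emotion_from_score(score)
    options = BEHAVIORS[emotion]
    if profile_bias:
        weights = [profile_bias.get(b, 1.0) for b in options]
        return random.choices(options, weights=weights, k=1)[0]
    return random.choice(options)

print(select_behavior(4))  # e.g. "raise_arms"
```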

    Designing companions, designing tools : social robots, developers, and the elderly in Japan

    This master's thesis traces a genealogy of a social robot, from its conception to its various uses and the ways users interact with it. Drawing on six months of fieldwork in a start-up and two nursing homes in Japan, I first investigate the genesis of Pepper, a social robot created by SoftBank, a Japanese multinational telecommunications company. This social robot was designed to be humanlike but not too humanlike, and to be perceived as cute and charming. While developers constitute one of its user populations, this robot, along with several others, is also used by elderly residents in nursing homes. By analyzing the uses of these populations, I underline the tension between the social robot as a companion and as a tool. Drawing on ontological anthropology and phenomenology, I look at how the robot is constructed as an entity that can be interacted with, through its conception as an ontologically ambiguous social actor that can express affect. Looking at multimodal interaction, and especially touch, I then classify three functions these acts fulfill: discovery, control, and the expression of affect, before questioning whether this acting towards the robot, which does not imply acting from the robot, can be considered a form of interaction. I argue that interaction is the exchange of meaning between embodied, engaged participants. Meaning can indeed sometimes be exchanged between robots and humans, and the robot can be seen as embodied; however, only the appearance of intersubjectivity is required for interaction, rather than its actual presence.
