
    Autonomous Decision-Making based on Biological Adaptive Processes for Intelligent Social Robots

The unceasing development of autonomous robots in many different scenarios is driving a new revolution to improve our quality of life. Recent advances in human-robot interaction and machine learning extend robots to social scenarios, where these systems aim to assist humans in diverse tasks. Social robots are thus becoming real in many applications such as education, healthcare, entertainment, and assistance. Complex environments demand that social robots have adaptive mechanisms to overcome different situations and successfully execute their tasks. Making autonomous and appropriate decisions is therefore essential to exhibiting reasonable behaviour and operating well in dynamic scenarios.

    Decision-making systems provide artificial agents with the capacity to decide how to behave depending on input information from the environment. In recent decades, human decision-making has served researchers as an inspiration for endowing robots with similar deliberation. Especially in social robotics, where people expect to interact with machines with human-like capabilities, biologically inspired decision-making systems have demonstrated great potential and interest. These systems are expected to continue providing a solid biological grounding and to improve the naturalness of human-robot interaction, usability, and the acceptance of social robots in the coming years.

    This thesis presents a decision-making system for social robots acting autonomously in healthcare, entertainment, and assistance. The system's goal is to provide robots with natural and fluid human-robot interaction while they carry out their tasks. The decision-making system integrates into an already existing software architecture whose modules manage human-robot interaction, perception, and expressiveness. Within this architecture, the decision-making system decides which behaviour the robot has to execute after evaluating information received from the other modules. These modules provide structured data about planned activities, perceptions, and artificial biological processes that evolve over time and form the basis for natural behaviour.

    The robot's natural behaviour comes from the evolution of biological variables that emulate biological processes occurring in humans. We also propose a Motivational model, a module that emulates these processes to generate an artificial physiological and psychological state that influences the robot's decision-making. The emulated natural biological rhythms of the human organism produce biologically inspired decisions that improve the naturalness the robot exhibits during human-robot interactions. The robot's decisions also depend on what the robot perceives from the environment, planned events listed in the robot's agenda, and the unique features of the user interacting with the robot; many internal and external factors influence how the robot behaves. Users are the most critical stimuli the robot perceives, since they are the cornerstone of interaction. Social robots have to focus on assisting people in their daily tasks, considering that each person has different features and preferences. A robot devised for social interaction therefore has to adapt its decisions to the people who interact with it.
The first step towards adapting to different users is identifying the user the robot interacts with. Then, the robot has to gather as much information as possible and personalise the interaction. The information about each user has to be actively updated when necessary, since outdated information may lead the user to reject the robot. Considering these facts, this work tackles user adaptation in three ways:

    • The robot incorporates user profiling methods to continuously gather information from the user using direct and indirect feedback methods.
    • The robot has a Preference Learning System that predicts and adjusts the user's preferences towards the robot's activities during the interaction.
    • An Action-based Learning System grounded on Reinforcement Learning is introduced as the origin of motivated behaviour.

    The functionalities mentioned above define the inputs the decision-making system receives for adapting its behaviour. Our decision-making system has been designed to be integrated into different robotic platforms thanks to its flexibility and modularity. Finally, we carried out several experiments to evaluate the architecture's functionalities during real human-robot interaction scenarios. In these experiments, we assessed:

    • How to endow social robots with adaptive affective mechanisms to overcome interaction limitations.
    • Active user profiling using face recognition and human-robot interaction.
    • A Preference Learning System we designed to predict and adapt the user's preferences towards the robot's entertainment activities in order to adapt the interaction.
    • A Behaviour-based Reinforcement Learning System that allows the robot to learn the effects of its actions in order to behave appropriately in each situation.
    • The biologically inspired robot behaviour using emulated biological processes, and how the robot creates social bonds with each user.
    • The robot's expressiveness of affect (emotion and mood) and autonomic functions such as heart rate or blinking frequency.

    Mención Internacional en el título de doctor. Programa de Doctorado en Ingeniería Eléctrica, Electrónica y Automática por la Universidad Carlos III de Madrid. Presidente: Richard J. Duro Fernández. Secretaria: Concepción Alicia Monje Micharet. Vocal: Silvia Ross
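    As a reading aid only: the abstract describes biological variables that evolve over time and a Motivational model whose deficits drive behaviour selection. Below is a minimal homeostatic sketch of that idea in Python; the variable names, decay rates, restoration amount, and behaviour mapping are invented for illustration and are not the thesis's actual implementation.

```python
# Illustrative homeostatic sketch of a motivational decision-making loop.
# All names and parameters are assumptions made for this example.
class BiologicalVariable:
    def __init__(self, name, ideal=1.0, decay=0.01):
        self.name, self.ideal, self.decay = name, ideal, decay
        self.value = ideal

    def step(self):
        # Natural drift away from the ideal value (emulated biological rhythm).
        self.value = max(0.0, self.value - self.decay)

    def deficit(self):
        return self.ideal - self.value

class MotivationalModel:
    def __init__(self):
        self.variables = [BiologicalVariable("energy", decay=0.02),
                          BiologicalVariable("social", decay=0.05)]
        # Hypothetical mapping from selectable behaviours to the variable
        # each behaviour restores.
        self.behaviours = {"recharge": "energy", "start_interaction": "social"}

    def decide(self):
        for v in self.variables:
            v.step()
        # The dominant motivation is the variable with the largest deficit.
        dominant = max(self.variables, key=lambda v: v.deficit())
        behaviour = next(b for b, t in self.behaviours.items() if t == dominant.name)
        dominant.value = min(dominant.ideal, dominant.value + 0.2)  # behaviour partly satisfies the need
        return behaviour

model = MotivationalModel()
for t in range(6):
    print(t, model.decide())
```

    The point of the sketch is only the control flow: internal variables drift, the largest deficit becomes the dominant motivation, and the selected behaviour feeds back into the internal state.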

    Integration of Wavelet and Recurrence Quantification Analysis in Emotion Recognition of Bilinguals

Background: This study offers a robust framework for classifying autonomic signals into five affective states during picture viewing. To this end, the following emotion categories were studied: five classes of the arousal-valence plane (5C), three classes of arousal (3A), and three categories of valence (3V). For the first time, linguality information was also incorporated into the recognition procedure. Precisely, the main objective of this paper was to present a fundamental approach for evaluating and classifying the emotions of monolingual and bilingual college students.

    Methods: Utilizing nonlinear dynamics, recurrence quantification measures of the wavelet coefficients were extracted. To optimize the feature space, different feature selection approaches, including generalized discriminant analysis (GDA), principal component analysis (PCA), kernel PCA, and linear discriminant analysis (LDA), were examined. Finally, considering the linguality information, classification was performed using a probabilistic neural network (PNN).

    Results: Using LDA and the PNN, the highest recognition rates of 95.51%, 95.7%, and 95.98% were attained for the 5C, 3A, and 3V, respectively. Considering the linguality information, a further improvement in the classification rates was accomplished.

    Conclusion: The proposed methodology can provide a valuable tool for discriminating affective states in practical applications within the area of human-computer interfaces
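    A minimal sketch of the processing chain the abstract outlines: wavelet decomposition of a signal, a recurrence quantification measure per sub-band, and a Gaussian-kernel probabilistic neural network. The wavelet choice (db4), the simplified non-embedded recurrence rate, and the kernel width are assumptions for illustration, and the data here are synthetic.

```python
import numpy as np
import pywt  # PyWavelets

def recurrence_rate(x, threshold=0.2):
    # Fraction of point pairs closer than a threshold: the simplest RQA
    # measure, computed here without phase-space embedding for brevity.
    d = np.abs(x[:, None] - x[None, :])
    return np.mean(d < threshold * np.std(x))

def features(signal):
    # Recurrence rate of each wavelet sub-band as the feature vector.
    coeffs = pywt.wavedec(signal, "db4", level=3)
    return np.array([recurrence_rate(c) for c in coeffs])

def pnn_predict(x, train_X, train_y, sigma=0.5):
    # Gaussian-kernel PNN: average kernel density per class, pick the max.
    scores = {}
    for label in np.unique(train_y):
        diff = train_X[train_y == label] - x
        scores[label] = np.mean(np.exp(-np.sum(diff**2, axis=1) / (2 * sigma**2)))
    return max(scores, key=scores.get)

# Toy example with synthetic "autonomic" signals for two affective classes.
rng = np.random.default_rng(0)
X = np.array([features(rng.normal(0, 1 + label, 256))
              for label in (0, 1) for _ in range(20)])
y = np.repeat([0, 1], 20)
print(pnn_predict(features(rng.normal(0, 2, 256)), X, y))
```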

    Can a Humanoid Face be Expressive? A Psychophysiological Investigation

Non-verbal signals expressed through body language play a crucial role in multimodal human communication during social relations. Indeed, in all cultures, facial expressions are the most universal and direct signs of innate emotional cues. A human face conveys important information in social interactions and helps us better understand our social partners and establish empathic links. Recent research shows that humanoid and social robots are becoming increasingly similar to humans, both aesthetically and expressively. However, their visual expressiveness is a crucial issue that must be improved to make these robots more realistic and intuitively perceivable by humans as not different from them. This study concerns the capability of a humanoid robot to exhibit emotions through facial expressions. More specifically, emotional signs performed by a humanoid robot were compared with corresponding human facial expressions in terms of recognition rate and response time. The set of stimuli included standardized human expressions taken from an Ekman-based database and the same facial expressions performed by the robot. Furthermore, participants' psychophysiological responses were explored to investigate whether interpreting robot rather than human emotional stimuli induced any differences. Preliminary results show a trend towards better recognition of expressions performed by the robot than of 2D photos or 3D models. Moreover, no significant differences in the subjects' psychophysiological state were found during the discrimination of facial expressions performed by the robot in comparison with the same task performed with 2D photos and 3D models

    Measuring Engagement in Robot-Assisted Autism Therapy: A Cross-Cultural Study

During occupational therapy for children with autism, it is often necessary to elicit and maintain engagement for the children to benefit from the session. Recently, social robots have been used for this; however, existing robots lack the ability to autonomously recognize the children’s level of engagement, which is necessary when choosing an optimal interaction strategy. Progress in automated engagement reading has been impeded in part due to a lack of studies on child-robot engagement in autism therapy. While it is well known that there are large individual differences in autism, little is known about how these vary across cultures. To this end, we analyzed the engagement of children (age 3–13) from two different cultural backgrounds: Asia (Japan, n = 17) and Eastern Europe (Serbia, n = 19). The children participated in a 25 min therapy session during which we studied the relationship between the children’s behavioral engagement (task-driven) and different facets of affective engagement (valence and arousal). Although our results indicate that there are statistically significant differences in engagement displays in the two groups, it is difficult to make any causal claims about these differences due to the large variation in age and behavioral severity of the children in the study. However, our exploratory analysis reveals important associations between target engagement and perceived levels of valence and arousal, indicating that these can be used as a proxy for the children’s engagement during the therapy. We provide suggestions on how this can be leveraged to optimize social robots for autism therapy, while taking into account cultural differences.

    Funding: MEXT Grant-in-Aid for Young Scientists B (grant no. 16763279); Chubu University Grant I (grant no. 27IS04I, Japan); European Union, HORIZON 2020 (grant agreement no. 701236, ENGAGEME); European Commission, Framework Programme for Research and Innovation, Marie Sklodowska-Curie Actions (Individual Fellowship); European Commission, Framework Programme for Research and Innovation, Marie Sklodowska-Curie Actions (grant agreement no. 688835, DE-ENIGMA).
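    For readers who want to reproduce the style of association analysis mentioned above (behavioral engagement versus perceived valence and arousal), here is a small sketch with synthetic placeholder data; rank correlation is one reasonable choice for ordinal coding scales, not necessarily the test the authors used.

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic per-segment codes standing in for annotated therapy sessions.
rng = np.random.default_rng(1)
engagement = rng.uniform(0, 1, 50)                    # behavioral engagement codes
valence = engagement * 0.6 + rng.normal(0, 0.2, 50)   # perceived valence ratings
arousal = engagement * 0.4 + rng.normal(0, 0.2, 50)   # perceived arousal ratings

# Rank correlation of engagement with each affective dimension.
for name, dim in [("valence", valence), ("arousal", arousal)]:
    rho, p = spearmanr(engagement, dim)
    print(f"engagement vs {name}: rho={rho:.2f}, p={p:.3f}")
```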

    Annotated Bibliography: Anticipation


    Continuous Analysis of Affect from Voice and Face

Human affective behavior is multimodal, continuous and complex. Despite major advances within the affective computing research field, modeling, analyzing, interpreting and responding to human affective behavior remains a challenge for automated systems, as affect and emotions are complex constructs with fuzzy boundaries and substantial individual differences in expression and experience [7]. Therefore, affective and behavioral computing researchers have recently invested increased effort in exploring how best to model, analyze and interpret the subtlety, complexity and continuity (represented along a continuum, e.g., from −1 to +1) of affective behavior in terms of latent dimensions (e.g., arousal, power and valence) and appraisals, rather than in terms of a small number of discrete emotion categories (e.g., happiness and sadness).

    This chapter aims to (i) give a brief overview of the existing efforts and the major accomplishments in modeling and analysis of emotional expressions in dimensional and continuous space, focusing on open issues and new challenges in the field, and (ii) introduce a representative approach for multimodal continuous analysis of affect from voice and face, providing experimental results on the audiovisual Sensitive Artificial Listener (SAL) Database of natural interactions. The chapter concludes by posing a number of questions that highlight the significant issues in the field, and by extracting potential answers to these questions from the relevant literature.

    The chapter is organized as follows. Section 10.2 describes theories of emotion. Section 10.3 provides details on the affect dimensions employed in the literature, as well as how emotions are perceived from visual, audio and physiological modalities. Section 10.4 summarizes how current technology has been developed, in terms of data acquisition and annotation, and automatic analysis of affect in continuous space, bringing forth a number of issues that need to be taken into account when applying a dimensional approach to emotion recognition: determining the duration of emotions for automatic analysis, modeling the intensity of emotions, determining the baseline, dealing with high inter-subject expression variation, defining optimal strategies for fusion of multiple cues and modalities, and identifying appropriate machine learning techniques and evaluation measures. Section 10.5 presents our representative system that fuses vocal and facial expression cues for dimensional and continuous prediction of emotions in valence and arousal space, employing bidirectional Long Short-Term Memory neural networks (BLSTM-NN), and introduces an output-associative fusion framework that incorporates correlations between the emotion dimensions to further improve continuous affect prediction. Section 10.6 concludes the chapter
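    As a rough illustration of the two ideas named for Sect. 10.5, the sketch below wires bidirectional LSTM regressors into an output-associative fusion stage in PyTorch: the second stage for each dimension also sees the first-stage predictions of both dimensions, exploiting their correlation. Layer sizes and the feature dimension are placeholders; this is not the chapter's actual model.

```python
import torch
import torch.nn as nn

class BLSTMRegressor(nn.Module):
    # Bidirectional LSTM predicting one continuous affect dimension per frame.
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):            # x: (batch, time, feat_dim)
        out, _ = self.lstm(x)
        return self.head(out)        # (batch, time, 1)

class OutputAssociativeFusion(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        self.valence1 = BLSTMRegressor(feat_dim)
        self.arousal1 = BLSTMRegressor(feat_dim)
        # Second stage sees the input features plus BOTH first-stage outputs.
        self.valence2 = BLSTMRegressor(feat_dim + 2)
        self.arousal2 = BLSTMRegressor(feat_dim + 2)

    def forward(self, x):
        v1, a1 = self.valence1(x), self.arousal1(x)
        z = torch.cat([x, v1, a1], dim=-1)
        return self.valence2(z), self.arousal2(z)

model = OutputAssociativeFusion(feat_dim=40)   # e.g., fused audio-visual features
x = torch.randn(8, 100, 40)                    # a batch of 100-frame sequences
valence, arousal = model(x)
print(valence.shape, arousal.shape)            # torch.Size([8, 100, 1]) twice
```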

    Investigation of Methods to Create Future Multimodal Emotional Data for Robot Interactions in Patients with Schizophrenia : A Case Study

Rapid progress in humanoid robot research offers possibilities for improving the competencies of people with social disorders, although such improvements remain unexplored for people with schizophrenia. Methods for creating future multimodal emotional data for robot interactions were studied in this case study of a 40-year-old male patient with disorganized schizophrenia without comorbidities. The data included heart rate variability (HRV), video-audio recordings, and field notes. HRV, a Haar cascade classifier (HCC), and the Empath API© were evaluated during conversations between the patient and the robot. Two expert nurses and one psychiatrist evaluated facial expressions. The research question was whether HRV, HCC, and the Empath API© are useful for creating future multimodal emotional data about robot–patient interactions. The HRV analysis showed persistent sympathetic dominance, matching the human–robot conversational situation. The HCC results agreed with human observation in cases where the experts reached a rough consensus; where the experts disagreed, the HCC result also differed from theirs. Emotional assessments by experts using the Empath API© were likewise found to be inconsistent. We believe that with further investigation, a clearer identification of methods for multimodal emotional data for robot interactions can be achieved for patients with schizophrenia
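    A brief sketch of two of the measurements the case study combines: face detection with OpenCV's stock Haar cascade (standing in for the study's HCC, whose exact configuration is not given) and an LF/HF ratio from RR intervals, a common index of sympathetic dominance in HRV analysis. Band limits and parameters are standard textbook values, not the study's, and the RR series is synthetic.

```python
import numpy as np
import cv2  # opencv-python
from scipy.signal import welch
from scipy.interpolate import interp1d
from scipy.integrate import trapezoid

def detect_faces(frame_bgr):
    # OpenCV's bundled frontal-face Haar cascade classifier.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

def lf_hf_ratio(rr_ms):
    # Resample the RR-interval series to an even 4 Hz grid, then integrate
    # the Welch PSD over the standard LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz)
    # bands. A high LF/HF ratio is often read as sympathetic dominance.
    t = np.cumsum(rr_ms) / 1000.0
    grid = np.arange(t[0], t[-1], 0.25)
    rr_even = interp1d(t, rr_ms)(grid)
    f, pxx = welch(rr_even - rr_even.mean(), fs=4.0, nperseg=min(256, len(grid)))
    lf_band, hf_band = (f >= 0.04) & (f < 0.15), (f >= 0.15) & (f < 0.40)
    return trapezoid(pxx[lf_band], f[lf_band]) / trapezoid(pxx[hf_band], f[hf_band])

rr = np.random.default_rng(2).normal(800, 50, 300)  # synthetic RR intervals (ms)
print(f"LF/HF = {lf_hf_ratio(rr):.2f}")
print(len(detect_faces(np.zeros((240, 320, 3), np.uint8))), "faces in a blank test frame")
```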

    ENGAGE-DEM: a model of engagement of people with dementia

One of the most effective ways to improve quality of life in dementia is to expose people to meaningful activities. The study of engagement is crucial to identify which activities are significant for persons with dementia and to customize them. Previous work has mainly focused on developing assessment tools, and the only available model of engagement for people with dementia focused on factors influencing engagement or influenced by it. This paper focuses on the internal functioning of engagement and presents the development and testing of a model specifying the components of engagement, their measures, and the relationships among them. We collected behavioral and physiological data while participants with dementia (N=14) were involved in six sessions of play: three of game-based cognitive stimulation and three of robot-based free play. We tested the concurrent validity of the measures employed to gauge engagement and ran factor analysis and Structural Equation Modeling to determine whether the components of engagement and their relationships were those hypothesized. The model we constructed, which we call ENGAGE-DEM, achieved excellent goodness of fit and can be considered a scaffold for the development of affective computing frameworks for measuring engagement online and offline, especially in HCI and HRI.
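    For context on the modeling step, here is a minimal structural-equation sketch in the same spirit, assuming the third-party semopy package; the latent structure, indicator names, and synthetic data are placeholders, not the actual ENGAGE-DEM specification.

```python
import numpy as np
import pandas as pd
import semopy  # third-party SEM package; an assumption, not the paper's tool

# Lavaan-style description: two latent components of engagement, each with
# two observed indicators, plus one structural path. Purely illustrative.
DESC = """
affective =~ valence + arousal
behavioral =~ gaze + participation
behavioral ~ affective
"""

# Synthetic indicator data sharing a common latent signal.
rng = np.random.default_rng(3)
latent = rng.normal(size=200)
data = pd.DataFrame({name: latent + rng.normal(0, 0.5, 200)
                     for name in ["valence", "arousal", "gaze", "participation"]})

model = semopy.Model(DESC)
model.fit(data)
print(model.inspect())           # parameter estimates
print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA
```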