
    Autonomous Decision-Making based on Biological Adaptive Processes for Intelligent Social Robots

    The unceasing development of autonomous robots in many different scenarios is driving a new revolution to improve our quality of life. Recent advances in human-robot interaction and machine learning are extending robots to social scenarios, where these systems are intended to assist humans in diverse tasks. Social robots are therefore becoming a reality in many applications such as education, healthcare, entertainment, and assistance. Complex environments demand that social robots have adaptive mechanisms to overcome different situations and successfully execute their tasks, so making autonomous and appropriate decisions is essential to exhibit reasonable behaviour and operate well in dynamic scenarios. Decision-making systems provide artificial agents with the capacity to decide how to behave depending on input information from the environment. In recent decades, human decision-making has served researchers as an inspiration to endow robots with similar deliberation. Especially in social robotics, where people expect to interact with machines with human-like capabilities, biologically inspired decision-making systems have demonstrated great potential. It is therefore expected that these systems will continue to provide a solid biological grounding and to improve the naturalness of human-robot interaction, the usability, and the acceptance of social robots in the coming years.

This thesis presents a decision-making system that gives social robots autonomous behaviour in healthcare, entertainment, and assistance applications. The system's goal is to provide robots with natural and fluid human-robot interaction while they carry out their tasks. The decision-making system integrates into an existing software architecture with different modules that manage human-robot interaction, perception, and expressiveness. Inside this architecture, the decision-making system decides which behaviour the robot has to execute after evaluating information received from the other modules. These modules provide structured data about planned activities, perceptions, and artificial biological processes that evolve over time and form the basis of natural behaviour. The natural behaviour of the robot comes from the evolution of biological variables that emulate biological processes occurring in humans. We also propose a Motivational model, a module that emulates human biological processes to generate an artificial physiological and psychological state that influences the robot's decision-making. These processes emulate the natural biological rhythms of the human organism to produce biologically inspired decisions that improve the naturalness the robot exhibits during human-robot interactions. The robot's decisions also depend on what it perceives from the environment, planned events listed in its agenda, and the unique features of the user interacting with it; many internal and external factors thus influence how the robot behaves. Users are the most critical stimuli the robot perceives since they are the cornerstone of the interaction. Social robots have to focus on assisting people in their daily tasks, considering that each person has different features and preferences. Thus, a robot devised for social interaction has to adapt its decisions to the people who interact with it.
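The deficit-driven behaviour selection such a Motivational model performs can be pictured with a minimal sketch; every class, variable, and constant below is our own illustrative assumption, not the thesis's implementation:

    class BiologicalVariable:
        """An internal process (e.g. energy) whose value drifts from its ideal."""
        def __init__(self, name, ideal=100.0, decay=0.5):
            self.name, self.ideal, self.decay = name, ideal, decay
            self.value = ideal

        def step(self):
            # Natural decay over time creates a growing deficit.
            self.value = max(0.0, self.value - self.decay)

        @property
        def deficit(self):
            return self.ideal - self.value

    class Motivation:
        """A motivation whose intensity grows with the deficit of its variable."""
        def __init__(self, name, variable, satisfying_behaviour):
            self.name, self.variable = name, variable
            self.behaviour = satisfying_behaviour

        @property
        def intensity(self):
            return self.variable.deficit

    def decide(motivations):
        """Select the behaviour linked to the dominant motivation."""
        return max(motivations, key=lambda m: m.intensity).behaviour

    energy = BiologicalVariable("energy", decay=0.8)
    social = BiologicalVariable("social", decay=0.4)
    motivations = [Motivation("rest", energy, "sleep"),
                   Motivation("affiliation", social, "seek_interaction")]

    for t in range(3):
        energy.step()
        social.step()
        print(t, decide(motivations))

In this toy version the fastest-decaying variable wins; the thesis adds perception, agenda, and user features as further inputs to the same decision.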
The first step towards adapting to different users is identifying the user the robot interacts with. The robot then has to gather as much information as possible and personalise the interaction. The information about each user has to be actively updated when necessary, since outdated information may lead the user to reject the robot. Considering these facts, this work tackles user adaptation in three ways:

• The robot incorporates user-profiling methods to continuously gather information from the user through direct and indirect feedback.
• A Preference Learning System predicts and adjusts the user's preferences for the robot's activities during the interaction.
• An Action-based Learning System grounded in Reinforcement Learning is introduced as the origin of motivated behaviour.

The functionalities mentioned above define the inputs the decision-making system receives for adapting its behaviour. Our decision-making system has been designed to be integrated into different robotic platforms thanks to its flexibility and modularity. Finally, we carried out several experiments to evaluate the architecture's functionalities in real human-robot interaction scenarios. In these experiments, we assessed:

• How to endow social robots with adaptive affective mechanisms to overcome interaction limitations.
• Active user profiling using face recognition and human-robot interaction.
• A Preference Learning System designed to predict and adapt the user's preferences towards the robot's entertainment activities.
• A Behaviour-based Reinforcement Learning System that allows the robot to learn the effects of its actions and behave appropriately in each situation.
• The biologically inspired robot behaviour based on emulated biological processes, and how the robot creates social bonds with each user.
• The robot's expression of affect (emotion and mood) and of autonomic functions such as heart rate or blinking frequency.

International Mention in the doctoral degree. Programa de Doctorado en Ingeniería Eléctrica, Electrónica y Automática por la Universidad Carlos III de Madrid. Committee: President: Richard J. Duro Fernández; Secretary: Concepción Alicia Monje Micharet; Member: Silvia Ross

    Regulation of synaptic neurotransmission by the glycosaminoglycan chondroitin sulfate in hippocampal neuron cultures

    Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Facultad de Medicina, Departamento de Farmacología y Terapéutica. Defence date: 20 June 201

    A motivational model based on artificial biological functions for the intelligent decision-making of social robots

    Modelling the biology behind animal behaviour has attracted great interest in recent years. Nevertheless, neuroscience and artificial intelligence face the challenge of representing and emulating animal behaviour in robots. Consequently, this paper presents a biologically inspired motivational model to control the biological functions of autonomous robots that interact with humans and emulate their behaviour. The model is intended to produce fully autonomous, natural behaviour that can adapt to both familiar and unexpected situations in human–robot interactions. The primary contribution of this paper is to present novel methods for modelling the robot's internal state to generate deliberative and reactive behaviour, how the robot perceives and evaluates stimuli from the environment, and the role of emotional responses. Our architecture emulates essential biological functions such as neuroendocrine responses, circadian and ultradian rhythms, motivation, and affection to generate biologically inspired behaviour in social robots. Neuroendocrine substances control biological functions such as sleep, wakefulness, and emotion. Deficits in these processes regulate the robot's motivational and affective states, significantly influencing its decision-making and, therefore, its behaviour. We evaluated the model by observing the long-term behaviour of the social robot Mini while interacting with people. The experiment assessed how the robot's behaviour varied and evolved depending on its internal variables and external situations, adapting to different conditions. The outcomes show that an autonomous robot with appropriate decision-making can cope with its internal deficits and unexpected situations, controlling its sleep–wake cycle, social behaviour, affective states, and stress during human–robot interactions.

The research leading to these results has received funding from the projects: Robots Sociales para Estimulación Física, Cognitiva y Afectiva de Mayores (ROSES), RTI2018-096338-B-I00, funded by the Ministerio de Ciencia, Innovación y Universidades; Robots sociales para mitigar la soledad y el aislamiento en mayores (SOROLI), PID2021-123941OA-I00, funded by the Agencia Estatal de Investigación (AEI), Spanish Ministerio de Ciencia e Innovación. This publication is part of the R&D&I project PLEC2021-007819 funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR.
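As a rough illustration of how emulated neuroendocrine rhythms can gate a sleep-wake decision, here is a minimal sketch; the hormone curves and constants are our own assumptions, not the paper's model:

    import math

    def melatonin(hour):
        """Simulated melatonin: peaks at night (~03:00), low during the day."""
        return 0.5 * (1 + math.cos(2 * math.pi * (hour - 3) / 24))

    def cortisol(hour):
        """Simulated cortisol: peaks in the morning (~08:00), promoting wakefulness."""
        return 0.5 * (1 + math.cos(2 * math.pi * (hour - 8) / 24))

    def sleep_pressure(hour, stress=0.0):
        # Stress (e.g. a simulated adrenaline surge) suppresses sleep onset.
        return melatonin(hour) - cortisol(hour) - stress

    for hour in (2, 8, 14, 23):
        state = "sleep" if sleep_pressure(hour) > 0 else "wake"
        print(f"{hour:02d}:00 -> {state}")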

    Adaptive Circadian Rhythms for Autonomous and Biologically Inspired Robot Behavior

    Biological rhythms are periodic internal variations of living organisms that act as adaptive responses to environmental changes. The human circadian pacemaker is the suprachiasmatic nucleus, a brain region involved in biological functions like homeostasis and emotion. Depending on their period, biological rhythms are ultradian (<24 h), circadian (~24 h), or infradian (>24 h). Circadian rhythms are the most studied since they regulate daily sleep, emotion, and activity. Ambient and internal stimuli, such as light or activity, influence the timing and the period of biological rhythms, making our bodies adapt to dynamic situations. Robots are under unceasing development and assist us in many tasks. Due to the dynamic conditions of social environments and human-robot interaction, robots exhibiting adaptive behavior are more likely to engage users by emulating human social skills. This paper presents a biologically inspired model based on circadian biorhythms for autonomous and adaptive robot behavior. The model uses the Dynamic Circadian Integrated Response Characteristic method to mimic human biology and control artificial biologically inspired functions that influence the robot's decision-making. The robot's clock adapts to light, ambient noise, and user activity, synchronizing the robot's behavior to the ambient conditions. The results show the adaptive response of the model to time shifts and seasonal changes in different ambient stimuli while regulating simulated hormones that are key in sleep/activity timing, stress, and autonomic basal heartbeat control during the day.
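A generic phase-response clock gives a flavour of how light can entrain a robot's internal rhythm; this sketch is not the paper's Dynamic Circadian Integrated Response Characteristic method, and all constants are illustrative:

    import math

    class CircadianClock:
        """A crude entrainable clock; not the paper's DCIRC method."""
        def __init__(self, period_h=24.2, gain=0.05):
            self.period = period_h   # free-running period, slightly off 24 h
            self.phase = 0.0         # internal clock time in hours (0 = subjective dawn)
            self.gain = gain         # entrainment strength

        def step(self, dt_h, light):
            # Free-running advance of the internal clock.
            self.phase = (self.phase + dt_h * 24.0 / self.period) % 24.0
            # Crude phase-response curve: light in early subjective night delays
            # the clock, light in late subjective night advances it.
            prc = -math.sin(2 * math.pi * self.phase / 24.0)
            self.phase = (self.phase + self.gain * light * prc * dt_h) % 24.0

    clock = CircadianClock()
    for step in range(24 * 7):                         # one simulated week, hourly
        light = 1.0 if 7 <= step % 24 < 19 else 0.0    # 12 h photoperiod
        clock.step(1.0, light)
    print(f"clock phase after one week: {clock.phase:.1f} h")

Because the free-running period is not exactly 24 h, the light signal is what keeps the clock aligned with the day, which is the same mechanism that lets the paper's model absorb time shifts and seasonal changes.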

    Design of the Front End Electronics for the Infrared Camera of JEM-EUSO, and manufacturing and verification of the prototype model

    The Japanese Experiment Module (JEM) Extreme Universe Space Observatory (EUSO) will be launched and attached to the Japanese module of the International Space Station (ISS). Its aim is to observe UV photon tracks produced by ultra-high energy cosmic rays developing in the atmosphere and producing extensive air showers. The key element of the instrument is a very wide-field, very fast, large-lens telescope that can detect extreme energy particles with energy above 10^19 eV. The Atmospheric Monitoring System (AMS), comprising, among others, the Infrared Camera (IRCAM), which is the Spanish contribution, plays a fundamental role in understanding the atmospheric conditions in the Field of View (FoV) of the telescope. It is used to measure the temperature of clouds and to obtain the cloud coverage and cloud-top altitude during the observation period of the JEM-EUSO main instrument. SENER is responsible for the preliminary design of the Front End Electronics (FEE) of the Infrared Camera, based on an uncooled microbolometer, and for the manufacturing and verification of the prototype model. This paper describes the flight design drivers and key factors to achieve the target features, namely: detector biasing with electrical noise better than 100 μV from 1 Hz to 10 MHz; temperature control of the microbolometer, from 10 °C to 40 °C, with stability better than 10 mK over 4.8 hours; low-noise, high-bandwidth amplification adapting the microbolometer output to a differential input before analog-to-digital conversion; housekeeping generation; microbolometer control; and image accumulation for noise reduction.
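Stability targets of this kind are usually met with a closed control loop; purely as an illustration, a discrete PI loop over a toy thermal plant might look as follows (the gains, the plant, and the Python setting are our assumptions, since the real FEE implements its control in dedicated hardware/firmware):

    def pi_controller(setpoint_c, kp=2.0, ki=0.5, dt=0.1):
        """Return a stateful PI update function driving toward setpoint_c."""
        integral = 0.0
        def update(measured_c):
            nonlocal integral
            error = setpoint_c - measured_c
            integral += error * dt
            return kp * error + ki * integral   # drive signal (e.g. TEC current)
        return update

    # Toy first-order thermal plant, for demonstration only.
    temp_c = 25.0
    ctrl = pi_controller(setpoint_c=30.0)
    for _ in range(300):
        drive = ctrl(temp_c)
        temp_c += 0.1 * (drive - 0.2 * (temp_c - 25.0))  # heating minus ambient loss
    print(f"settled temperature: {temp_c:.3f} C")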

    Modelling Multimodal Dialogues for Social Robots Using Communicative Acts

    Social robots need to communicate in a way that feels natural to humans if they are to bond effectively with users and provide an engaging interaction. In line with this natural, effective communication, robots need to perceive and manage multimodal information, both as input and output, and respond accordingly. Consequently, dialogue design is a key factor in creating an engaging multimodal interaction. These dialogues need to be flexible enough to adapt to unforeseen circumstances that arise during the conversation, but they should also be easy to create, so that developing new applications gets simpler. In this work, we present our approach to dialogue modelling based on basic atomic interaction units called Communicative Acts. They manage basic interactions considering who has the initiative (the robot or the user) and what their intention is. The two possible intentions are either to ask for information or to give it; and because we focus on one-to-one interactions, the initiative can only be taken by the robot or the user. Communicative Acts can be parametrised and combined hierarchically to fulfil the needs of the robot's applications, and they have built-in functionalities in charge of low-level communication tasks such as communication error handling, turn-taking, and user disengagement. This system has been integrated into Mini, a social robot created to assist older adults with cognitive impairment. In a use case, we demonstrate the operation of our system as well as its performance in real human–robot interactions.

The research leading to these results has received funding from the projects Development of social robots to help seniors with cognitive impairment (ROBSEN), funded by the Ministerio de Economía y Competitividad; RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation Hub, S2018/NMT-4331, funded by "Programas de Actividades I+D en la Comunidad de Madrid" and cofunded by Structural Funds of the EU; and Robots sociales para estimulación física, cognitiva y afectiva de mayores (ROSES), RTI2018-096338-B-I00, funded by the Agencia Estatal de Investigación (AEI), Ministerio de Ciencia, Innovación y Universidades.
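The Communicative Act idea can be pictured with a minimal sketch; the class and field names below are our own, not the paper's API:

    from dataclasses import dataclass
    from enum import Enum

    class Initiative(Enum):
        ROBOT = "robot"
        USER = "user"

    class Intent(Enum):
        GIVE_INFO = "give"
        ASK_INFO = "ask"

    @dataclass
    class CommunicativeAct:
        initiative: Initiative
        intent: Intent
        content: str

        def run(self):
            # The built-in low-level handling (turn-taking, error recovery,
            # disengagement detection) would live here; we just print the turn.
            print(f"[{self.initiative.value} {self.intent.value}] {self.content}")

    # Acts are parametrised and chained (or nested) to build a dialogue.
    dialogue = [
        CommunicativeAct(Initiative.ROBOT, Intent.ASK_INFO, "What is your name?"),
        CommunicativeAct(Initiative.USER, Intent.GIVE_INFO, "<user answer>"),
        CommunicativeAct(Initiative.ROBOT, Intent.GIVE_INFO, "Nice to meet you!"),
    ]
    for act in dialogue:
        act.run()

Keeping the atomic unit this small is what makes the two-by-two design (initiative times intention) expressive: any dialogue reduces to a composition of four basic act types.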

    Speeding-up Action Learning in a Social Robot with Dyna-Q+: A Bioinspired Probabilistic Model Approach

    Robotic systems developed for social and dynamic environments require adaptive mechanisms to operate successfully. Consequently, learning from rewards has provided meaningful results in applications involving human-robot interaction. When the robot's state space and number of actions are extensive, the dimensionality becomes intractable and drastically slows down the learning process. This effect is especially pronounced in one-step temporal-difference methods, because just one update is performed per robot-environment interaction. In this paper, we show how the action-based learning of a social robot can be improved by combining classical temporal-difference reinforcement learning methods, such as Q-learning or Q(λ), with a probabilistic model of the environment. This architecture, which we have called Dyna, allows the robot to simultaneously act and plan using the experience obtained during real human-robot interactions. Dyna chiefly improves the classical algorithms in terms of convergence speed and stability, which strengthens the learning process. Hence, in this work we have embedded a Dyna architecture in our social robot, Mini, to endow it with the ability to autonomously maintain an optimal internal state while living in a dynamic environment.
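For reference, tabular Dyna-Q extended with the Dyna-Q+ exploration bonus named in the title can be sketched as follows (after Sutton and Barto's formulation; the environment interface and all hyperparameters are illustrative assumptions):

    import math
    import random
    from collections import defaultdict

    def dyna_q_plus(env, episodes=100, alpha=0.1, gamma=0.95,
                    epsilon=0.1, planning_steps=20, kappa=1e-3):
        Q = defaultdict(float)   # Q[(state, action)]
        model = {}               # learned model: (s, a) -> (reward, next state)
        last_tried = {}          # time of last real visit, for the Dyna-Q+ bonus
        t = 0
        for _ in range(episodes):
            s = env.reset()
            done = False
            while not done:
                t += 1
                # Epsilon-greedy action selection in the real environment.
                if random.random() < epsilon:
                    a = random.choice(env.actions)
                else:
                    a = max(env.actions, key=lambda a_: Q[(s, a_)])
                s2, r, done = env.step(a)
                Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, a_)] for a_ in env.actions)
                                      - Q[(s, a)])
                model[(s, a)] = (r, s2)
                last_tried[(s, a)] = t
                # Planning: extra updates from simulated experience, which is
                # what speeds up convergence relative to one-step TD alone.
                for _ in range(planning_steps):
                    (ps, pa), (pr, ps2) = random.choice(list(model.items()))
                    bonus = kappa * math.sqrt(t - last_tried[(ps, pa)])
                    Q[(ps, pa)] += alpha * (pr + bonus
                                            + gamma * max(Q[(ps2, a_)] for a_ in env.actions)
                                            - Q[(ps, pa)])
                s = s2
        return Q

The kappa * sqrt(tau) bonus rewards long-untried actions during planning, which keeps the policy exploring when a dynamic environment drifts, exactly the situation the abstract describes.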

    Active learning based on computer vision and human-robot interaction for the user profiling and behavior personalization of an autonomous social robot

    Social robots coexist with humans in situations where they have to exhibit proper communication skills. Since users may have different features and communicative styles, personalizing human-robot interactions is essential for the success of these interactions. This manuscript presents an Active Learning approach based on computer vision and human-robot interaction for user recognition and profiling, used to personalize the robot's behavior. The system identifies people using the Intel-face-detection-retail-004 model and FaceNet for face recognition, and obtains users' information through interaction. The system aims to improve human-robot interaction by (i) using online learning to allow the robot to identify users and (ii) retrieving users' information to fill out their profiles and adapt the robot's behavior. Since user information is necessary for adapting the robot to each interaction, we hypothesized that users would find creating their profile by interacting with the robot more entertaining and easier than taking a survey. We validated our hypothesis with three scenarios: participants completed their profiles using an online survey, by interacting with a dull robot, or by interacting with a cheerful robot. The results show that participants gave the cheerful robot a higher usability score (82.14/100 points) and were more entertained while creating their profiles with the cheerful robot than in the other scenarios. Statistically significant differences in usability were found between the scenarios using the robot and the scenario involving the online survey. Finally, we show two scenarios in which the robot interacts with a known user and an unknown user to demonstrate how it adapts to each situation.

The research leading to these results has received funding from the projects: Robots Sociales para Estimulación Física, Cognitiva y Afectiva de Mayores (ROSES), RTI2018-096338-B-I00, funded by the Spanish Ministry of Science, Innovation and Universities; Robots sociales para mitigar la soledad y el aislamiento en mayores (SOROLI), PID2021-123941OA-I00, funded by the Agencia Estatal de Investigación (AEI), Spanish Ministry of Science and Innovation. This publication is part of the R&D&I project PLEC2021-007819 funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR.
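The identify-or-enroll loop at the heart of such a system might be sketched as follows, assuming a FaceNet-style embedding function is available; the threshold and helper names are illustrative assumptions, not the paper's implementation:

    import numpy as np

    class UserRecognizer:
        def __init__(self, embed, threshold=0.8):
            self.embed = embed            # face image -> embedding vector
            self.threshold = threshold    # max distance to accept a match
            self.gallery = {}             # user_id -> list of embeddings

        def identify(self, face_image):
            """Return the closest known user, or None if nobody is close enough."""
            v = self.embed(face_image)
            best_id, best_dist = None, float("inf")
            for user_id, vectors in self.gallery.items():
                dist = min(np.linalg.norm(v - g) for g in vectors)
                if dist < best_dist:
                    best_id, best_dist = user_id, dist
            return best_id if best_dist < self.threshold else None

        def enroll(self, user_id, face_image):
            """Online-learning step: add a new embedding for this user."""
            self.gallery.setdefault(user_id, []).append(self.embed(face_image))

When identify returns None, the robot can start the profiling dialogue, enroll the new face, and fill the profile through interaction rather than a survey.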

    Emotion and mood blending in embodied artificial agents: expressing affective states in the Mini social robot

    Robots devised for assisting and interacting with humans are becoming fundamental in many applications, including healthcare, education, and entertainment. For these robots, the capacity to exhibit affective states plays a crucial role in creating emotional bonds with the user. In this work, we present an affective architecture built on biological foundations that shapes the affective state of the Mini social robot in terms of mood and emotion blending. The affective state depends on the perception of stimuli in the environment, which influences how the robot behaves and affectively communicates with other peers. According to research in neuroscience, mood typically rules our affective state in the long run, while emotions do so in the short term, although both processes can overlap. Consequently, the model presented in this manuscript blends emotion and mood to express the robot's internal state to the users. The primary novelty of our affective model is thus the expression of (i) mood, (ii) momentary emotional reactions to stimuli, and (iii) the decay that mood and emotion undergo over time. The system evaluation explored whether users can correctly perceive the mood and emotions that the robot expresses. In an online survey, users evaluated the robot's expressions showing different moods and emotions. The results reveal that users could correctly perceive the robot's mood and emotion, although emotions were more easily recognized, probably because they are more intense affective states that mainly arise as reactions to stimuli. To conclude the manuscript, a case study shows how our model modulates Mini's expressiveness depending on its affective state during a human-robot interaction scenario.

The research leading to these results has received funding from the projects Robots sociales para estimulación física, cognitiva y afectiva de mayores (ROSES), RTI2018-096338-B-I00, funded by the Agencia Estatal de Investigación (AEI), Ministerio de Ciencia, Innovación y Universidades, and RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation Hub, S2018/NMT-4331, funded by "Programas de Actividades I+D en la Comunidad de Madrid" and cofunded by Structural Funds of the EU. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
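The blend-and-decay mechanism can be pictured with a minimal sketch; the half-lives, weighting, and names are illustrative assumptions, not the paper's parameters:

    import math

    class AffectiveState:
        def __init__(self, mood_half_life=600.0, emotion_half_life=10.0):
            self.mood = 0.0        # long-term valence in [-1, 1]
            self.emotion = 0.0     # short-term reaction in [-1, 1]
            self.k_mood = math.log(2) / mood_half_life
            self.k_emotion = math.log(2) / emotion_half_life

        def react(self, stimulus_valence):
            # A stimulus triggers a punctual emotion and slowly nudges mood.
            self.emotion = max(-1.0, min(1.0, self.emotion + stimulus_valence))
            self.mood += 0.1 * stimulus_valence

        def step(self, dt):
            # Both processes decay exponentially, emotion much faster than mood.
            self.emotion *= math.exp(-self.k_emotion * dt)
            self.mood *= math.exp(-self.k_mood * dt)

        @property
        def expressed(self):
            # An intense emotion dominates expression; mood shows through otherwise.
            w = abs(self.emotion)
            return w * self.emotion + (1 - w) * self.mood

    state = AffectiveState()
    state.react(+0.8)      # a pleasant stimulus triggers a positive emotion
    state.step(dt=5.0)     # five seconds later the emotion has partly decayed
    print(f"expressed affect: {state.expressed:+.2f}")

The two very different half-lives reproduce the neuroscience claim in the abstract: emotions rule expression briefly after a stimulus, while mood takes over in the long run.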