
    A Model of Emotion as Patterned Metacontrol

    Adaptive systems use feedback as a key strategy to cope with uncertainty and change in their environments. The information fed back from the sensorimotor loop into the control architecture can be used to change elements of the controller at four different levels: the parameters of the control model, the control model itself, the functional organization of the agent, and the functional components of the agent. The complexity of such a space of potential configurations is daunting. The only viable alternative for the agent, in practical, economical, and evolutionary terms, is to reduce the dimensionality of the configuration space. This reduction is achieved both by functionalisation (or, more precisely, by interface minimization) and by patterning, i.e. selection among a predefined set of organisational configurations. This analysis lets us state, in strictly functional terms, the central problem of how autonomy emerges from the integration of the cognitive, emotional and autonomic systems: autonomy is achieved by the closure of functional dependency. In this paper we present a general model of how biological emotional systems operate according to this theoretical analysis, and show that the model is also applicable to a wide spectrum of artificial systems.

    The morphofunctional approach to emotion modelling in robotics

    In this conceptual paper, we discuss two areas of research in robotics, robotic models of emotion and morphofunctional machines, and explore the scope for cross-fertilization between them. We shift the focus in robot models of emotion from the information-theoretic aspects of appraisal to the interactive significance of bodily dispositions. Typical emotional phenomena such as arousal and action readiness can be interpreted as morphofunctional processes, and their functionality may be replicated in robotic systems with morphologies that can be modulated for real-time adaptation. We investigate the control requirements for such systems and present a possible bio-inspired architecture, based on the division of control between the neural and endocrine systems in humans and animals. We suggest that emotional episodes can be understood as emergent from the coordination of action control and action readiness. This stress on morphology complements existing research on the information-theoretic aspects of emotion.
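    The division of control between a fast neural loop and a slow endocrine channel described in this abstract can be sketched as a two-timescale controller. This is an illustrative sketch only, not the paper's architecture; the class names, the "arousal" hormone, and the stiffness modulation are invented for the example.

```python
# Illustrative two-timescale controller: a fast "neural" loop whose gain is
# modulated by a slow, decaying "endocrine" signal. All names are hypothetical.

class EndocrineModulator:
    """Slow channel: integrates threat into a decaying 'arousal' level."""
    def __init__(self, decay=0.95, gain=0.5):
        self.level = 0.0
        self.decay = decay
        self.gain = gain

    def step(self, threat):
        # Arousal accumulates with threat and decays over time.
        self.level = self.decay * self.level + self.gain * threat
        return self.level

class NeuralController:
    """Fast channel: proportional control whose stiffness is set by arousal."""
    def __init__(self, base_stiffness=1.0):
        self.base_stiffness = base_stiffness

    def step(self, error, arousal):
        # Higher arousal -> stiffer, more action-ready response.
        stiffness = self.base_stiffness * (1.0 + arousal)
        return stiffness * error

endocrine = EndocrineModulator()
neural = NeuralController()

# The same sensory error produces a stronger corrective action once
# repeated threat has raised the arousal level.
calm_action = neural.step(0.2, endocrine.step(threat=0.0))
endocrine.step(threat=1.0)
alarmed_action = neural.step(0.2, endocrine.step(threat=1.0))
```

    The point of the split is that the slow channel reconfigures the body's readiness without the fast loop having to reason about threats on every control cycle.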

    A biologically inspired architecture for an autonomous and social robot

    In recent years, much effort has been devoted to building robots able to live among humans. This has favored the development of personal or social robots, which are expected to behave in a natural way. Such robots must meet certain requirements: for example, they should be able to decide their own actions (autonomy), to make deliberative plans (reasoning), and to exhibit emotional behavior in order to facilitate human-robot interaction. In this paper, the authors present a bio-inspired control architecture for an autonomous and social robot that aims to provide some of these features. The new architecture builds on a prior hybrid control architecture (AD) that is also biologically inspired. In the latter, however, the task to be accomplished at each moment is determined by a fixed sequence processed by the Main Sequencer, which coordinates the previously programmed sequence of skills to be executed. In the new architecture, the Main Sequencer is replaced by a decision-making system based on drives, motivations, emotions, and self-learning, which selects the proper action at every moment according to the robot's state. The robot thus gains autonomy, since the added decision-making system determines the goal and, consequently, the skills to be executed. A basic version of this new architecture has been implemented on a real robotic platform, and some experiments are shown at the end of the paper. This work has been supported by the Spanish Government through the project "Peer to Peer Robot-Human Interaction" (R2H) of MEC (Ministry of Science and Education), the project "A new approach to social robotics" (AROS) of MICINN (Ministry of Science and Innovation), and the CAM Project S2009/DPI-1559/ROBOCITY2030 II, developed by the RoboticsLab research team at the University Carlos III of Madrid
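    The drive/motivation mechanism this abstract describes can be sketched in a few lines. This is a minimal illustration under assumed names (the internal variables, setpoints, and skill names are invented; this is not the AD architecture's actual API): each drive is the deficit of an internal variable, the dominant motivation is the drive with the largest deficit, and it selects the skill to execute.

```python
# Homeostatic drive-based action selection (illustrative names only).
SETPOINTS = {"energy": 1.0, "social": 1.0, "safety": 1.0}
SKILLS = {"energy": "recharge", "social": "interact", "safety": "flee"}

def drives(internal_state):
    # A drive is the deficit of an internal variable w.r.t. its setpoint.
    return {k: max(0.0, SETPOINTS[k] - internal_state[k]) for k in SETPOINTS}

def select_skill(internal_state):
    d = drives(internal_state)
    dominant = max(d, key=d.get)  # motivation with the largest deficit
    return SKILLS[dominant] if d[dominant] > 0 else "idle"

# Low energy dominates the other deficits, so the robot chooses to recharge.
state = {"energy": 0.2, "social": 0.8, "safety": 1.0}
print(select_skill(state))  # -> recharge
```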

    A Model of Emotion as Patterned Metacontrol

    Adaptive agents use feedback as a key strategy to cope with uncertainty and change in their environments. The information fed back from the sensorimotor loop into the control subsystem can be used to change four different elements of the controller: the parameters associated with the control model, the control model itself, the functional organization of the agent, and the functional realization of the agent. There are many change alternatives, and hence the complexity of the agent's space of potential configurations is daunting. The only viable alternative for space- and time-constrained agents, in practical, economical, and evolutionary terms, is to reduce the dimensionality of this configuration space. Emotions play a critical role in this reduction, which is achieved by functionalization, interface minimization and patterning, i.e. by selection among a predefined set of organizational configurations. This analysis lets us state how autonomy emerges from the integration of cognitive, emotional and autonomic systems in strictly functional terms: autonomy is achieved by the closure of functional dependency. Emotion-based morphofunctional systems are able to exhibit complex adaptation patterns at a reduced cognitive cost. In this article we present a general model of how emotion supports functional adaptation and how biological emotional systems operate according to this theoretical model. We also show how the model applies to the construction of a wide spectrum of artificial systems.
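    The "patterning" idea above, selecting among a predefined set of organizational configurations instead of searching the full configuration space, can be sketched as follows. The pattern names, parameters, and trigger thresholds here are invented for illustration and are not taken from the paper.

```python
# Patterned metacontrol sketch: each pattern fixes many controller parameters
# at once, collapsing the configuration space to a handful of choices.
PATTERNS = {
    "explore": {"speed": 0.8, "sensor_rate": 5, "risk_tolerance": 0.9},
    "exploit": {"speed": 0.5, "sensor_rate": 2, "risk_tolerance": 0.5},
    "retreat": {"speed": 1.0, "sensor_rate": 10, "risk_tolerance": 0.1},
}

def metacontrol(feedback):
    """Map appraised feedback onto one of the predefined patterns."""
    if feedback["threat"] > 0.7:
        return "retreat"  # fear-like pattern overrides other concerns
    if feedback["uncertainty"] > 0.5:
        return "explore"
    return "exploit"

# High threat selects the fear-like 'retreat' configuration as a whole.
config = PATTERNS[metacontrol({"threat": 0.9, "uncertainty": 0.2})]
print(config["speed"])  # -> 1.0
```

    The dimensionality reduction is visible in the interface: the metacontroller chooses one of three labels rather than setting each controller parameter independently.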

    From extinction learning to anxiety treatment: mind the gap

    Laboratory models of extinction learning in animals and humans have the potential to illuminate methods for improving the clinical treatment of fear-based disorders. However, such translational research often neglects important differences between threat responses in animals and fear learning in humans, particularly as they relate to the treatment of clinical disorders. Specifically, the conscious experience of fear and anxiety, along with the capacity to deliberately engage top-down cognitive processes to modulate that experience, involves distinct brain circuitry and is measured and manipulated using different methods than those typically used in laboratory research. This paper identifies how translational research that investigates methods of enhancing extinction learning can more effectively model such elements of human fear learning, and how doing so will enhance the relevance of this research to the treatment of fear-based psychological disorders.

    Grounding Emotion Appraisal in Autonomous Humanoids


    Bio-inspired decision making system for an autonomous social robot: the role of fear

    Robotics is an emergent field which is currently in vogue. In the near future, many researchers anticipate the spread of robots coexisting with humans in the real world. This requires a considerable level of autonomy in robots. Moreover, in order to allow proper interaction between robots and humans without technical knowledge, these robots must behave according to social and cultural norms. This results in social robots with cognitive capabilities inspired by biological organisms such as humans or animals. The work presented in this dissertation aims to extend the autonomy of a social robot by implementing a biologically inspired decision-making system which allows the robot to make its own decisions. With this kind of decision-making system, the robot is no longer considered a slave, but a partner. The decision-making system is based on drives, motivations, emotions, and self-learning. According to psychological theories, drives are deficits of internal variables or needs (e.g. energy), and the urges to correct these deficits are the motivations (e.g. survival). Following a homeostatic approach, the goal of the robot is to satisfy its drives by maintaining its needs within an acceptable range, i.e. to keep the robot's wellbeing as high as possible. The learning process provides the robot with the proper behaviors to cope with each motivation in order to achieve this goal. In this dissertation, emotions are treated individually following a functional approach. This means that, considering some of the different functions of emotions in animals and humans, each artificial emotion plays a different role. Happiness and sadness are employed during learning as reward and punishment respectively, so they evaluate the performance of the robot. Fear, on the other hand, plays a motivational role: it is considered a motivation which impels the robot to avoid dangerous situations.
    The benefits of these emotions in a real robot are detailed and empirically tested. The robot decides its future actions based on what it has learned from previous experiences. Although the current context of this robot is limited to a laboratory, the social robot cohabits with humans in a potentially non-deterministic environment. The robot is endowed with a repertoire of actions but, initially, it does not know which action to execute or when to do it. It has to learn its behavior policy, i.e. which action to execute in each world configuration (each state) in order to satisfy the drive related to the highest motivation. Since the robot learns in a real environment while interacting with several objects, the behavior policy should be learned within an acceptable amount of time. The learning process is performed using a variation of the well-known Q-Learning algorithm, the Object Q-Learning. Using this algorithm, the robot learns the value of every state-action pair through its interaction with the environment. That is, it learns the value that every action has in every possible state; the higher the value, the better the action is in that state. At the beginning of the learning process these values, called the Q values, can all be set to the same value, or some of them can be fixed to other values. In the first case, the robot learns from scratch; in the second, the robot has some prior information about action selection. These values are updated during the learning process. The emotion of fear is studied in particular. The generation process of this emotion (the appraisal) and the reactions to fear prove really useful for endowing the robot with an adaptive, reliable mechanism of "survival". This dissertation presents a social robot which benefits from a particular learning process of new releasers of fear, i.e. the capacity to identify new dangerous situations.
    In addition, by means of the decision-making system, the robot learns different reactions to prevent danger according to different unpredictable events. In fact, these reactions to fear are quite similar to the fear reactions observed in nature. Another challenge is to design the decision-making system in such a way that it is flexible enough to easily change its configuration or even apply it to different robots. Considering the bio-inspiration of this work, this research (and other related works) was born as an attempt to better understand brain processes. It is the author's hope that it sheds some light on the study of mental processes, in particular those which may lead to mental or cognitive disorders.
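    The learning mechanism described in the dissertation above can be sketched with a plain tabular Q-Learning update (the Object Q-Learning variant's object-wise decomposition is not fully specified here, so this sketch uses standard Q-Learning; the states, actions, and rewards are invented). Happiness and sadness act as positive and negative reward, as the abstract describes.

```python
# Tabular Q-Learning sketch (illustrative states/actions, not the thesis code).
class QLearner:
    def __init__(self, states, actions, alpha=0.3, gamma=0.9, q0=0.0):
        # Initializing all Q values to the same value means learning from scratch.
        self.q = {(s, a): q0 for s in states for a in actions}
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma

    def best_action(self, state):
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, reward, s_next):
        # Standard Q-Learning rule: move Q(s, a) toward the observed reward
        # plus the discounted value of the best action in the next state.
        target = reward + self.gamma * max(self.q[(s_next, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])

learner = QLearner(states=["hungry", "sated"], actions=["eat", "wander"])
# "Happiness" (reward +1) when eating while hungry; "sadness" (-1) otherwise.
for _ in range(50):
    learner.update("hungry", "eat", +1.0, "sated")
    learner.update("hungry", "wander", -1.0, "hungry")
print(learner.best_action("hungry"))  # -> eat
```

    The higher a learned Q value, the better the action in that state, which is exactly the selection criterion `best_action` implements.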

    Annotated Bibliography: Anticipation


    Normative Emotional Agents: a viewpoint paper

    Human social relationships imply conforming to the norms, behaviors and cultural values of a society, but also the socialization of emotions, learning how to interpret and display them. In multiagent systems, much progress has been made in the analysis and interpretation of both emotions and norms. Nonetheless, the relationship between emotions and norms has hardly been considered, and most normative agents do not consider emotions, or vice versa. In this article, we provide an overview of relevant aspects within the areas of normative agents and emotional agents. First we focus on the concept of norm, the different types of norms, their life cycle, and a review of multiagent normative systems. Secondly, we present the most relevant theories of emotions, the life cycle of an agent's emotions, and how emotions have been included through computational models in multiagent systems. Next, we present an analysis of proposals that integrate emotions and norms in multiagent systems. From this analysis, four relationships are detected between norms and emotions, which we analyze in detail, discussing how these relationships have been tackled in the reviewed proposals. Finally, we present a proposal for an abstract architecture of a Normative Emotional Agent that covers these four norm-emotion relationships. This work was supported by the Spanish Government project TIN2017-89156-R, the Generalitat Valenciana project PROMETEO/2018/002 and the Spanish Government PhD Grant PRE2018-084940. Argente, E.; Del Val, E.; Pérez-García, D.; Botti Navarro, V.J. (2022). Normative Emotional Agents: a viewpoint paper. IEEE Transactions on Affective Computing, 13(3), 1254-1273. https://doi.org/10.1109/TAFFC.2020.3028512