Motivations, Values and Emotions: 3 sides of the same coin
This position paper addresses the interrelationships between the three concepts of motivations, values, and emotions. Motivations prime actions, values serve to choose between motivations, emotions provide a common currency for values, and emotions implement motivations. While conceptually distinct, the three are so pragmatically intertwined that they differ primarily in the point of view from which they are considered. To make these points more transparent, we briefly describe the three in the context of a cognitive architecture, the LIDA model, for software agents and robots that models human cognition, including a developmental period. We also compare the LIDA model with other models of cognition, some involving learning and emotions. Finally, we conclude that artificial emotions will prove most valuable as implementers of motivations in situations requiring learning and development.
A model of emotional influence on memory processing.
To survive in a complex environment, agents must be able to encode information about the utility value of the objects they encounter. We propose a neuroscience-based model aiming to explain how a new memory becomes associated with an emotional response. The same theoretical framework also explains the effects of emotion on memory recall. The originality of our approach is to postulate the presence of two central processing units (CPUs): one computing only emotional information, and the other mainly concerned with cognitive processing. The emotional CPU, which is phylogenetically older, is assumed to modulate the cognitive CPU, which is more recent. The article first deals with the cognitive part of the model by highlighting the set of processes underlying memory recognition and storage. Then, building on this theoretical background, the emotional part highlights how the emotional response is computed and stored. The last section describes the interplay between the cognitive and emotional systems.
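The two-CPU idea above can be sketched in a few lines: an "emotional" module computes an arousal signal from a stimulus, and that signal modulates how strongly the "cognitive" module stores the memory trace, which in turn biases recall. The function names, the modulation formula, and the recall threshold are all illustrative assumptions, not part of the paper's actual model.

```python
def emotional_cpu(valence: float) -> float:
    """Map signed stimulus valence to a non-negative arousal level."""
    return abs(valence)

def cognitive_cpu(item: str, arousal: float, memory: dict) -> None:
    """Store the item; emotionally arousing items get stronger traces."""
    base_strength = 1.0
    memory[item] = base_strength * (1.0 + arousal)  # emotional modulation

def recall(memory: dict, threshold: float = 1.5) -> list:
    """Recall favours strong (emotionally tagged) traces."""
    return [item for item, strength in memory.items() if strength >= threshold]

memory = {}
cognitive_cpu("neutral fact", emotional_cpu(0.0), memory)       # strength 1.0
cognitive_cpu("frightening event", emotional_cpu(-0.9), memory)  # strength 1.9
print(recall(memory))  # → ['frightening event']
```

The key design choice mirrored here is that the emotional pathway never stores content itself; it only scales what the cognitive pathway writes, matching the modulation relationship described in the abstract.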
Artificial consciousness and the consciousness-attention dissociation
Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also to reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems; these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.
Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory
Perception and expression of emotion are key factors in the success of dialogue systems and conversational agents. However, this problem has not been studied in large-scale conversation generation so far. In this paper, we propose the Emotional Chatting Machine (ECM), which can generate responses that are appropriate not only in content (relevant and grammatical) but also in emotion (emotionally consistent). To the best of our knowledge, this is the first work that addresses the emotion factor in large-scale conversation generation. ECM addresses the factor using three new mechanisms that respectively (1) model the high-level abstraction of emotion expressions by embedding emotion categories, (2) capture the change of implicit internal emotion states, and (3) use explicit emotion expressions with an external emotion vocabulary. Experiments show that the proposed model can generate responses appropriate not only in content but also in emotion.

Comment: Accepted in AAAI 201
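The three mechanisms listed in the abstract can be illustrated in miniature: an emotion-category embedding conditions generation, an internal emotion state decays as emotional content is expressed, and an external emotion vocabulary supplies explicit emotion words. The vocabularies, the decay rule, and the gating logic below are illustrative assumptions for exposition only; ECM itself is a neural sequence-to-sequence model, not a lookup sketch like this.

```python
# (1) emotion-category embeddings conditioning generation (toy vectors)
EMOTION_EMBEDDING = {"happy": [1.0, 0.0], "sad": [0.0, 1.0]}
# (3) external emotion vocabulary of explicit emotion words
EXTERNAL_VOCAB = {"happy": ["wonderful", "delighted"], "sad": ["gloomy", "tearful"]}
GENERIC_VOCAB = ["the", "weather", "is", "today"]

def generate(category: str, length: int = 6, state: float = 1.0):
    """Emit tokens, drawing on the external emotion vocabulary while the
    internal emotion state (2) remains high, and generic words otherwise."""
    tokens = []
    generic_i = emo_i = 0
    for _ in range(length):
        if state > 0.5 and emo_i < len(EXTERNAL_VOCAB[category]):
            tokens.append(EXTERNAL_VOCAB[category][emo_i])
            emo_i += 1
            state *= 0.5  # expressing emotion depletes the internal state
        else:
            tokens.append(GENERIC_VOCAB[generic_i % len(GENERIC_VOCAB)])
            generic_i += 1
    return tokens, state

tokens, final_state = generate("happy")
print(tokens)  # emotion word first, then generic filler
```

The depleting internal state is the interesting part: it gives the generator a built-in reason to stop repeating emotion words, which is the role the abstract attributes to mechanism (2).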
Integration of psychological models in the design of artificial creatures
Artificial creatures form an increasingly important component of interactive computer games. Examples of such creatures exist which can interact with each other and with the game player and learn from their experiences. However, we argue, the design of the underlying architecture and algorithms has to a large extent overlooked knowledge from psychology and the cognitive sciences. We explore the integration of observations from studies of motivational systems and emotional behaviour into the design of artificial creatures. An initial implementation of our ideas using the 'sim agent' toolkit illustrates that physiological models can be used as the basis for creatures with animal-like behavioural attributes. The current aim of this research is to increase the 'realism' of artificial creatures in interactive game-play, but it may have wider implications for the development of AI.
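The physiological/motivational idea above can be sketched as a homeostatic loop: drives drift over time, and the creature selects the behaviour that addresses its most pressing drive. The drive names, drift rates, and winner-take-all selection are illustrative assumptions; they are not the 'sim agent' toolkit's implementation.

```python
class Creature:
    """Toy creature whose actions are selected by physiological drives."""

    def __init__(self):
        self.drives = {"hunger": 0.2, "fatigue": 0.1}
        self.behaviours = {"hunger": "forage", "fatigue": "rest"}

    def tick(self) -> None:
        # Physiological drift: drives grow each simulated time step.
        self.drives["hunger"] += 0.3
        self.drives["fatigue"] += 0.1

    def act(self) -> str:
        # Winner-take-all selection on the strongest drive.
        strongest = max(self.drives, key=self.drives.get)
        self.drives[strongest] = 0.0  # acting satisfies the drive
        return self.behaviours[strongest]

c = Creature()
c.tick()               # hunger -> 0.5, fatigue -> 0.2
print(c.act())         # → 'forage'
```

Because satisfying one drive resets it while the others keep drifting, the creature naturally alternates between behaviours over time, which is one simple way physiological models yield the animal-like behaviour the abstract describes.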