
    Motivations, Values and Emotions: 3 sides of the same coin

    This position paper examines the interrelationships between the three concepts of motivations, values, and emotions. Motivations prime actions, values serve to choose between motivations, emotions provide a common currency for values, and emotions implement motivations. While conceptually distinct, the three are so pragmatically intertwined that they differ primarily in the point of view from which they are considered. To make these points more transparent, we briefly describe the three in the context of a cognitive architecture, the LIDA model, for software agents and robots that models human cognition, including a developmental period. We also compare the LIDA model with other models of cognition, some involving learning and emotions. Finally, we conclude that artificial emotions will prove most valuable as implementers of motivations in situations requiring learning and development.

    Robot pain: a speculative review of its functions

    Given the scarce bibliography dealing explicitly with robot pain, this chapter enriches its review with related research on robot behaviours and capacities in which pain could play a role. It is shown that all such roles, ranging from punishment to intrinsic motivation and planning knowledge, can be formulated within the unified framework of reinforcement learning.
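    The chapter's unifying claim, that pain-like signals can be cast as reinforcement-learning punishment, can be illustrated with a minimal tabular Q-learning sketch. The corridor environment, reward values, and hyperparameters below are invented for illustration and are not taken from the chapter; the "painful" state simply carries a large negative reward, so learning assigns it a lower value than its neighbours while the agent still traverses it when it is the only route to the goal.

```python
import random

# Hedged sketch: "pain" as a punishment (negative reward) signal in
# tabular Q-learning. States 0..4 form a corridor; the goal is state 4
# and state 2 is "painful" (reward -5 instead of the usual -1 step cost).
# All numbers here are invented for illustration.

def q_learning_with_pain(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.1):
    random.seed(0)  # deterministic for reproducibility
    q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != 4:
            # epsilon-greedy action selection over actions {-1, +1}
            if random.random() < epsilon:
                a = random.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda a_: q[(s, a_)])
            s2 = min(max(s + a, 0), 4)
            # goal reward, pain punishment, or small step cost
            r = 10.0 if s2 == 4 else (-5.0 if s2 == 2 else -1.0)
            best_next = 0.0 if s2 == 4 else max(q[(s2, a_)] for a_ in (-1, 1))
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q
```

    After training, moving toward the goal retains a higher value than moving away even at the painful state, showing pain acting as a graded cost rather than an absolute barrier.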

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    How to Knit Your Own Markov Blanket

    Hohwy (Hohwy 2016, Hohwy 2017) argues there is a tension between the free energy principle and leading depictions of mind as embodied, enactive, and extended (so-called ‘EEE’ cognition). The tension is traced to the importance, in free energy formulations, of a conception of mind and agency that depends upon the presence of a ‘Markov blanket’ demarcating the agent from the surrounding world. In what follows I show that the Markov blanket considerations do not, in fact, lead to the kinds of tension that Hohwy depicts. On the contrary, they actively favour the EEE story. This is because the Markov property, as exemplified in biological agents, picks out neither a unique nor a stationary boundary. It is this multiplicity and mutability, rather than the absence of agent-environment boundaries as such, that EEE cognition celebrates.
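    For readers unfamiliar with the construct the debate turns on: in a Bayesian network, a node's Markov blanket is its parents, its children, and its children's other parents; conditioned on the blanket, the node is independent of everything else in the graph. A minimal sketch of that standard definition follows (the example graph used in the test is invented, not drawn from the paper):

```python
# Compute the Markov blanket of a node in a DAG: its parents, its
# children, and the children's other parents ("co-parents").
# Conditioned on this set, the node is independent of the rest of
# the network -- the statistical boundary the paper discusses.

def markov_blanket(node, parents):
    """`parents` maps each node to the set of its parents in a DAG."""
    children = {n for n, ps in parents.items() if node in ps}
    co_parents = {p for c in children for p in parents[c]} - {node}
    return set(parents.get(node, set())) | children | co_parents
```

    The paper's point can be read off the definition: which blanket you get depends on which node (or node set) you choose, so nothing fixes a single, permanent agent-environment boundary.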

    Towards an artificial therapy assistant: Measuring excessive stress from speech

    The measurement of (excessive) stress is still a challenging endeavor. Most tools rely on either introspection or expert opinion and are, therefore, often less reliable or a burden on the patient. An objective method could relieve these problems and, consequently, assist diagnostics. Speech was considered an excellent candidate for an objective, unobtrusive measure of emotion. True stress was successfully induced using two storytelling sessions performed by 25 patients suffering from a stress disorder. When reading either a happy or a sad story, different stress levels were reported using the Subjective Unit of Distress (SUD). A linear regression model consisting of the high-frequency energy, pitch, and zero crossings of the speech signal was able to explain 70% of the variance in the subjectively reported stress. The results demonstrate the feasibility of an objective measurement of stress in speech. As such, the foundation for an Artificial Therapeutic Agent is laid, capable of assisting therapists through an objective measurement of experienced stress.
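    The modeling step the abstract describes, ordinary least squares from three speech features to SUD scores with variance explained reported as R², can be sketched as follows. The feature extraction shown (zero-crossing rate) and the function names are illustrative assumptions; the abstract does not specify how its features were computed.

```python
import numpy as np

# Hedged sketch of the abstract's approach: regress subjective stress
# (SUD) on speech features such as high-frequency energy, pitch, and
# zero-crossing rate, then report variance explained (R^2).
# Function names and any data are invented for illustration.

def zero_crossing_rate(signal):
    """Fraction of adjacent sample pairs whose signs differ."""
    signs = np.sign(signal)
    return float(np.mean(signs[:-1] != signs[1:]))

def fit_stress_model(features, sud_scores):
    """OLS fit: `features` is (n, k), `sud_scores` is (n,).
    Returns (coefficients incl. intercept, R^2)."""
    X = np.column_stack([np.ones(len(features)), features])  # intercept term
    coef, *_ = np.linalg.lstsq(X, sud_scores, rcond=None)
    predictions = X @ coef
    ss_res = np.sum((sud_scores - predictions) ** 2)
    ss_tot = np.sum((sud_scores - np.mean(sud_scores)) ** 2)
    return coef, 1.0 - ss_res / ss_tot
```

    On the paper's data this kind of model reportedly reached R² ≈ 0.70; the sketch only shows the shape of the computation, not the reported result.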

    Does modularity undermine the pro‐emotion consensus?

    There is a growing consensus that emotions contribute positively to human practical rationality. While arguments that defend this position often appeal to the modularity of emotion-generation mechanisms, these arguments are also susceptible to the criticism, e.g. by Jones (2006), that emotional modularity supports pessimism about the prospects of emotions contributing positively to practical rationality here and now. This paper aims to respond to this criticism by demonstrating how models of emotion processing can accommodate the sorts of cognitive influence required to make the pro-emotion position plausible whilst exhibiting key elements of modularity.