
    Reasoning about Emotional Agents

    In this paper we are concerned with reasoning about agents with emotions. To be more precise: we aim at a logical account of emotional agents. The very topic may already raise some eyebrows: reasoning and rationality seem the opposite of emotion, and reasoning about emotions, or a logic of emotional agents, seems a contradiction in terms. However, emotions and rationality are known to be more interconnected than one may suspect. There is psychological evidence that having emotions may help one to perform reasoning and tasks for which rationality seems to be the only factor [1]. Moreover, work by e.g. Sloman [5] shows that one may design agent-based systems in which the agents show some kind of emotions and, even more importantly, display behaviour dependent on their emotional state. It is exactly in this sense that we look at emotional agents: artificial systems designed in such a manner that emotions play a role. In psychology, too, emotions are viewed as a structuring mechanism: they are held to help human beings choose from a myriad of possible actions in response to what happens in our environment.
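    As a rough illustration of the kind of design described here, the following Python sketch (entirely ours; the emotions, thresholds, and action names are invented) shows an agent whose action selection depends on its emotional state:

        # Minimal sketch of an emotional agent: appraisal updates the emotional
        # state, and that state in turn structures action selection.
        from dataclasses import dataclass, field

        @dataclass
        class EmotionalAgent:
            # Hypothetical state: each emotion has an intensity in [0, 1].
            emotions: dict = field(default_factory=lambda: {"fear": 0.0, "joy": 0.0})

            def appraise(self, event: str) -> None:
                """Crude appraisal rule: events raise emotion intensities."""
                if event == "threat":
                    self.emotions["fear"] = min(1.0, self.emotions["fear"] + 0.5)
                elif event == "reward":
                    self.emotions["joy"] = min(1.0, self.emotions["joy"] + 0.5)

            def act(self) -> str:
                """Emotion structures choice: fear prunes the action space first."""
                if self.emotions["fear"] > 0.4:
                    return "flee"        # emotion short-circuits slow deliberation
                if self.emotions["joy"] > 0.4:
                    return "explore"
                return "deliberate"      # fall back to full rational planning

        agent = EmotionalAgent()
        agent.appraise("threat")
        print(agent.act())  # -> "flee"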

    Adaptive Rationality in Strategic Interaction: Do Emotions Regulate Thinking about Others?

    Forming beliefs or expectations about others’ behavior is fundamental to strategy, as it co-determines the outcomes of interactions in and across organizations. In the game-theoretic conception of rationality, agents reason iteratively about each other to form expectations about behavior. According to prior scholarship, actual strategists fall short of this ideal, and attempts to understand the underlying cognitive processes of forming expectations about others are in their infancy. We propose that emotions help regulate iterative reasoning, that is, strategists’ tendency to reflect not only on what others think, but also on what others think about their own thinking. Drawing on a controlled experiment, we find that a negative emotion (fear) deepens the tendency to engage in iterative reasoning compared to a positive emotion (amusement), and that a neutral emotional state yields even deeper levels of iterative reasoning. We tentatively interpret these early findings and speculate about the broader link between emotions and expectations in the context of strategic management. Extending the view of emotion regulation as a capability, emotions may be building blocks of rational heuristics for strategic interaction, enabling interactive decision-making when strategists have little experience with the environment.
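    Iterative reasoning of this kind is commonly formalized as level-k thinking. The sketch below (our illustration, using the standard "guess 2/3 of the average" game rather than anything from the paper) shows how each additional level of reasoning about others changes a player's choice:

        # Level-k reasoning in the "guess 2/3 of the average" game:
        # a level-0 player guesses an anchor; a level-k player best-responds
        # to a population assumed to reason at level k-1.
        def level_k_guess(k: int, anchor: float = 50.0, factor: float = 2 / 3) -> float:
            guess = anchor
            for _ in range(k):
                guess *= factor  # best response to level-(k-1) guesses
            return guess

        for k in range(4):
            print(f"level-{k} guess: {level_k_guess(k):.1f}")
        # level-0: 50.0, level-1: 33.3, level-2: 22.2, level-3: 14.8
        # In the paper's terms, fear would correspond to a deeper level (higher k)
        # than amusement, with neutral states deepest of all.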

    Artificial morality: Making of the artificial moral agents

    Artificial Morality is a new, emerging interdisciplinary field that centres on the idea of creating artificial moral agents (AMAs) by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. The demand for moral machines arises from changes in everyday practice, where artificial systems are frequently used in a variety of situations, from home help and elderly care to banking and court algorithms. It is therefore important to create reliable and responsible machines based on the same ethical principles that society demands from people. Creating such agents raises new challenges. There are philosophical questions about a machine’s potential to be an agent, or moral agent, in the first place. Then comes the problem of the social acceptance of such machines, regardless of their theoretical agency status; efforts to resolve this problem have suggested that otherwise cold moral machines need additional psychological (emotional and cognitive) competence. What makes the endeavour of developing AMAs even harder is the complexity of the technical, engineering aspect of their creation. Implementation approaches such as top-down, bottom-up, and hybrid aim to find the best way of developing fully moral agents, but each encounters its own problems throughout this effort.
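    To make the implementation approaches concrete, here is a minimal Python sketch of the top-down idea only (the rule set and action names are invented for illustration; a real AMA would need far richer representations):

        # Top-down approach: explicit ethical rules vet candidate actions.
        FORBIDDEN = {"deceive_user", "withhold_care"}  # hypothetical rule set

        def morally_permissible(action: str) -> bool:
            """An action is permissible unless an explicit rule forbids it."""
            return action not in FORBIDDEN

        def choose_action(candidates: list) -> str:
            """Pick the first candidate that passes the moral filter."""
            for action in candidates:
                if morally_permissible(action):
                    return action
            return "defer_to_human"  # no ethically functional option remains

        print(choose_action(["deceive_user", "remind_medication"]))
        # -> "remind_medication"

    A bottom-up approach would instead learn such judgements from examples, and a hybrid approach combines explicit rules with learned dispositions.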

    Psychopathy, Agency, and Practical Reason

    Philosophers have urged that considerations about the psychopath’s capacity for practical rationality can help to advance metaethical debates. These debates include the role of rational faculties in moral judgment and action, the relationship between moral judgment and moral motivation, and the capacities required for morally responsible agency. I discuss how the psychopath’s capacity for practical reason features in these debates, and I identify several takeaway lessons from the relevant literature. Specifically, I show how the insights contained therein can illuminate the complex structure of practical rationality, inform our standards for an adequate theory of practical reason, and frame our thinking about the significance of rational capacities in moral theory and social practice.

    Building machines that learn and think about morality

    Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine learning. We also discuss how work in embodied and situated cognition could provide a valuable perspective on future research.
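    The dual-process parallel can be made concrete in a few lines. The toy environment, rewards, and learning rate below are our own invention, not Lake et al.'s code; the point is only the contrast between cached, habit-like values and explicit, deliberative planning:

        import random

        REWARD = {("s0", "a0"): 0.0, ("s0", "a1"): 1.0}  # toy one-step environment

        # Model-free: cache action values from experience (habit, System 1).
        q = {sa: 0.0 for sa in REWARD}
        alpha = 0.1  # learning rate
        for _ in range(1000):
            sa = random.choice(list(REWARD))
            q[sa] += alpha * (REWARD[sa] - q[sa])  # incremental value update

        # Model-based: plan by looking ahead in an explicit model (System 2).
        def plan(state):
            return max(("a0", "a1"), key=lambda a: REWARD[(state, a)])

        print(max(q, key=q.get))  # model-free choice after training: ('s0', 'a1')
        print(plan("s0"))         # model-based choice by one-step lookahead: 'a1'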

    Artificial consciousness and the consciousness-attention dissociation

    Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and to reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems; these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Ultimately, we are far from achieving artificial consciousness.

    A conceptual framework for interactive virtual storytelling

    This paper presents a framework for an interactive storytelling system. It integrates five components: a management centre, an evaluation centre, intelligent virtual agents, an intelligent virtual environment, and users, making possible interactive solutions in which communication among these components is conducted in a rational and intelligent way. The environment plays an important role in providing heuristic information for agents by communicating with the management centre. The main idea is to heuristically guide the behaviour of intelligent agents so as to guarantee both unexpectedness and consistent themes.
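    A minimal sketch of how the five components might be wired together follows; the interfaces and the scoring rule are our assumptions, not the paper's specification:

        class EvaluationCentre:
            def score(self, state: dict) -> float:
                # Hypothetical metric: reward unexpectedness, weighted by how
                # well the current plot keeps to the intended theme.
                return state.get("unexpectedness", 0.0) * state.get("theme_consistency", 1.0)

        class Environment:
            def heuristics(self, state: dict) -> list:
                # The environment supplies heuristic cues for the agents.
                return ["reveal_clue"] if state.get("tension", 0.0) < 0.5 else ["raise_stakes"]

        class VirtualAgent:
            def act(self, hint: str) -> str:
                return f"agent performs: {hint}"

        class ManagementCentre:
            """Mediates among user input, environment heuristics, agents,
            and the evaluation centre."""
            def __init__(self):
                self.env = Environment()
                self.agent = VirtualAgent()
                self.evaluator = EvaluationCentre()

            def step(self, state: dict, user_input: str) -> str:
                hint = self.env.heuristics(state)[0]  # environment guides the agent
                action = self.agent.act(hint)
                state["score"] = self.evaluator.score(state)
                return action

        centre = ManagementCentre()
        print(centre.step({"tension": 0.2, "theme_consistency": 1.0}, "open the door"))
        # -> "agent performs: reveal_clue"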