
    Artificial morality: Making of the artificial moral agents

    Abstract: Artificial Morality is a new, emerging interdisciplinary field that centres on the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This demand for moral machines arises from changes in everyday practice, where artificial systems are frequently used in a variety of situations, from home help and elderly care to banking and court algorithms. It is therefore important to create reliable and responsible machines based on the same ethical principles that society demands from people. New challenges appear in creating such agents. There are philosophical questions about a machine's potential to be an agent, or moral agent, in the first place. Then comes the problem of social acceptance of such machines, regardless of their theoretical agency status. Efforts to resolve this problem have led to suggestions that cold moral machines need additional psychological (emotional and cognitive) competence. What makes the endeavour of developing AMAs even harder is the complexity of the technical, engineering aspect of their creation. Implementation approaches such as top-down, bottom-up and hybrid approaches aim to find the best way of developing fully moral agents, but each encounters its own problems along the way.
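To make the contrast between the implementation approaches mentioned above more tangible, here is a deliberately toy sketch in which the top-down component is a hand-written rule filter, the bottom-up component is a learned acceptability score (stubbed as a lookup table), and a hybrid approach combines the two. All rules, actions and scores are hypothetical illustrations, not proposals from the abstract.

```python
# Toy illustration of top-down, bottom-up and hybrid approaches to moral
# competence. Everything here is a hypothetical sketch, not a method from
# the abstract.

def violates_explicit_rule(action):
    """Top-down: check an action against hand-written ethical constraints."""
    forbidden = {"deceive user", "withhold medication"}
    return action in forbidden

def learned_acceptability(action):
    """Bottom-up: stand-in for a score learned from examples of ethically
    acceptable behaviour (here just a lookup table)."""
    scores = {"remind about medication": 0.9, "deceive user": 0.2}
    return scores.get(action, 0.5)

def hybrid_choice(candidate_actions):
    """Hybrid: filter by explicit rules, then rank by the learned score."""
    allowed = [a for a in candidate_actions if not violates_explicit_rule(a)]
    return max(allowed, key=learned_acceptability) if allowed else None

print(hybrid_choice(["deceive user", "remind about medication"]))
```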

    Taking Turing by Surprise? Designing Digital Computers for morally-loaded contexts

    There is much to learn from what Turing hastily dismissed as Lady Lovelace's objection. Digital computers can indeed surprise us. Just like a piece of art, algorithms can be designed in such a way as to lead us to question our understanding of the world, or our place within it. Some humans do lose the capacity to be surprised in that way. It might be fear, or it might be the comfort of ideological certainties. As lazy normative animals, we do need to be able to rely on authorities to simplify our reasoning: that is OK. Yet the growing sophistication of systems designed to free us from the constraints of normative engagement may take us past a point of no return. What if, through lack of normative exercise, our moral muscles became so atrophied as to leave us unable to question our social practices? This paper makes two distinct normative claims: 1. Decision-support systems should be designed with a view to regularly jolting us out of our moral torpor. 2. Without the depth of habit to somatically anchor model certainty, a computer's experience of something new is very different from that which, in humans, gives rise to non-trivial surprises. This asymmetry has key repercussions for the shape of ethical agency in artificial moral agents. The worry is not just that they would be likely to leap morally ahead of us, unencumbered by habits. The main reason to doubt that the moral trajectories of humans and autonomous systems might remain compatible stems from the asymmetry in the mechanisms underlying moral change. Whereas in humans surprises will continue to play an important role in waking us to the need for moral change, cognitive processes will rule when it comes to machines. This asymmetry will translate into increasingly different moral outlooks, to the point of likely unintelligibility. The latter prospect is enough to doubt the desirability of autonomous moral agents.

    Enabling Robots to Communicate their Objectives

    The overarching goal of this work is to efficiently enable end-users to correctly anticipate a robot's behavior in novel situations. Since a robot's behavior is often a direct result of its underlying objective function, our insight is that end-users need to have an accurate mental model of this objective function in order to understand and predict what the robot will do. While people naturally develop such a mental model over time through observing the robot act, this familiarization process may be lengthy. Our approach reduces this time by having the robot model how people infer objectives from observed behavior, and then it selects those behaviors that are maximally informative. The problem of computing a posterior over objectives from observed behavior is known as Inverse Reinforcement Learning (IRL), and has been applied to robots learning human objectives. We consider the problem where the roles of human and robot are swapped. Our main contribution is to recognize that unlike robots, humans will not be exact in their IRL inference. We thus introduce two factors to define candidate approximate-inference models for human learning in this setting, and analyze them in a user study in the autonomous driving domain. We show that certain approximate-inference models lead to the robot generating example behaviors that better enable users to anticipate what it will do in novel situations. Our results also suggest, however, that additional research is needed in modeling how humans extrapolate from examples of robot behavior.
    Comment: RSS 201
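The inference described above can be made concrete with a small sketch: a Bayesian update over a discrete set of candidate objectives, where the human is modelled as a noisily rational (Boltzmann) observer rather than an exact IRL learner. The candidate objectives, feature vectors and rationality parameter beta below are illustrative assumptions, not the study's actual setup.

```python
# Minimal sketch: posterior over candidate objectives given an observed
# behaviour, under a Boltzmann (approximately rational) observer model.
# All names and numbers are illustrative assumptions, not the paper's setup.
import numpy as np

# Candidate objective functions, as weights over trajectory features
# (e.g. [efficiency, safety margin] in a driving domain).
candidate_objectives = np.array([
    [1.0, 0.0],   # cares only about efficiency
    [0.0, 1.0],   # cares only about safety
    [0.5, 0.5],   # balanced trade-off
])

def boltzmann_likelihood(shown, alternatives, weights, beta=5.0):
    """P(the shown behaviour is judged best | objective weights).
    beta -> infinity recovers exact (error-free) inference."""
    rewards = alternatives @ weights
    return np.exp(beta * (shown @ weights)) / np.sum(np.exp(beta * rewards))

# Feature counts of the demonstrated behaviour and of the alternatives it is
# implicitly compared against (the first alternative is the demonstration).
shown = np.array([0.9, 0.6])
alternatives = np.array([[0.9, 0.6], [1.0, 0.1], [0.3, 0.9]])

prior = np.full(len(candidate_objectives), 1.0 / len(candidate_objectives))
likelihood = np.array([boltzmann_likelihood(shown, alternatives, w)
                       for w in candidate_objectives])
posterior = prior * likelihood
posterior /= posterior.sum()
print("posterior over candidate objectives:", posterior)
```

Under an observer model of this kind, an informative demonstration is one that makes the posterior concentrate on the robot's true objective.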

    Towards a Relational Understanding of the Performance Ecosystem

    This article seeks to form a deeper understanding of the performance ecosystem by drawing parallels with Bourriaud's Relational Aesthetics and Guattari's conception of subjectivity as outlined in Chaosmosis. Through an examination of participation within performance, and a recognition of the mutability of the roles of performer, listener, instrument and environment in the creation of the music event, this article considers the place of subjectivity, the capacity for self-creation, in the formation of a group aesthetic. Such a concept places the creation of meaning not within the individual participant but rather within the relationship between participants in a situation, a relationship that recognises the interaction between individuals, societies and institutions in its production. Such a discussion helps further our understanding of the performance ecosystem as a conceptual tool.

    A macroscopic analytical model of collaboration in distributed robotic systems

    In this article, we present a macroscopic analytical model of collaboration in a group of reactive robots. The model consists of a series of coupled differential equations that describe the dynamics of group behavior. After presenting the general model, we analyze in detail a case study of collaboration, the stick-pulling experiment, studied experimentally and in simulation by Ijspeert et al. [Autonomous Robots, 11, 149-171]. The robots' task is to pull sticks out of their holes, and it can be successfully achieved only through the collaboration of two robots. There is no explicit communication or coordination between the robots. Unlike microscopic simulations (sensor-based or using a probabilistic numerical model), in which computational time scales with the robot group size, the macroscopic model is computationally efficient, because its solutions are independent of robot group size. Our analysis reproduces several qualitative conclusions of Ijspeert et al.: namely, the different dynamical regimes for different values of the ratio of robots to sticks, the existence of optimal control parameters that maximize system performance as a function of group size, and the transition from superlinear to sublinear performance as the number of robots is increased.
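As an illustration of what a macroscopic rate-equation model of this kind looks like, the sketch below numerically integrates a deliberately simplified single-equation version of the stick-pulling dynamics: one state variable for the number of gripping (waiting) robots, with hypothetical encounter and give-up rates. It is not the authors' exact system of coupled equations, but it shows why solution cost does not grow with group size: the group size enters only as a parameter.

```python
# Sketch: numerically integrating a simplified rate equation for stick-pulling
# collaboration. The equation and parameters are illustrative simplifications,
# not the paper's exact macroscopic model.
import numpy as np
from scipy.integrate import solve_ivp

N_ROBOTS = 6       # total robots (enters only as a parameter)
M_STICKS = 4       # total sticks
ALPHA = 0.02       # rate at which a searching robot encounters a given stick
TAU = 50.0         # mean time a gripping robot waits before giving up

def rate_equation(t, y):
    """y[0] = expected number of robots currently gripping (waiting)."""
    n_grip = y[0]
    n_search = N_ROBOTS - n_grip
    # Searching robots start gripping when they find an unoccupied stick...
    start_grip = ALPHA * n_search * max(M_STICKS - n_grip, 0.0)
    # ...and gripping ends either by successful collaboration (a searcher
    # finds an occupied stick) or by giving up after roughly TAU seconds.
    end_grip = ALPHA * n_search * n_grip + n_grip / TAU
    return [start_grip - end_grip]

sol = solve_ivp(rate_equation, t_span=(0.0, 500.0), y0=[0.0], max_step=1.0)
print("steady-state number of gripping robots:", sol.y[0, -1])
```

The collaboration rate in this toy version is the ALPHA * n_search * n_grip term, which is what exhibits the superlinear-to-sublinear transition as the robot-to-stick ratio grows.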

    Motivations, Values and Emotions: 3 sides of the same coin

    This position paper speaks to the interrelationships between the three concepts of motivations, values and emotions. Motivations prime actions, values serve to choose between motivations, emotions provide a common currency for values, and emotions implement motivations. While conceptually distinct, the three are so pragmatically intertwined that they differ primarily according to the point of view we take. To make these points more transparent, we briefly describe the three in the context of a cognitive architecture, the LIDA model, for software agents and robots that models human cognition, including a developmental period. We also compare the LIDA model with other models of cognition, some involving learning and emotions. Finally, we conclude that artificial emotions will prove most valuable as implementers of motivations in situations requiring learning and development.
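A toy sketch may make the stated relationship more concrete: motivations prime candidate actions, and emotional valence serves as the common currency through which their values are compared during action selection. The data structures and numbers below are hypothetical and are not drawn from the LIDA model's actual implementation.

```python
# Toy sketch: motivations prime actions, and emotional valence acts as the
# common currency by which their values are compared. Hypothetical example,
# not the LIDA model's implementation.
from dataclasses import dataclass

@dataclass
class Motivation:
    name: str
    primed_action: str
    emotional_valence: float   # common currency for comparing values

def select_action(motivations):
    """Choose the action primed by the motivation with the highest
    emotion-weighted value."""
    best = max(motivations, key=lambda m: m.emotional_valence)
    return best.primed_action

agenda = [
    Motivation("recharge", "go to charging dock", emotional_valence=0.4),
    Motivation("assist user", "fetch requested item", emotional_valence=0.7),
]
print(select_action(agenda))  # -> "fetch requested item"
```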