    Trusting Intelligent Machines: Deepening Trust Within Socio-Technical Systems

    Intelligent machines have reached capabilities that go beyond what a human being can fully comprehend without a sufficiently detailed understanding of the underlying mechanisms. The moves chosen in the game of Go by DeepMind's AlphaGo Zero [1] are an impressive example of an artificial intelligence system computing results that even a human expert in the game can hardly retrace [2]. But this is, quite literally, a toy example. In reality, intelligent algorithms are encroaching more and more into our everyday lives, be it through algorithms that recommend products for us to buy or whole systems such as driverless vehicles. We are delegating ever more aspects of our daily routines to machines, and this trend looks set to continue in the future. Indeed, continued economic growth is set to depend on it. The nature of human-computer interaction in the world that the digital transformation is creating will require (mutual) trust between humans and intelligent, or seemingly intelligent, machines. But what does it mean to trust an intelligent machine? How can trust be established between human societies and intelligent machines?

    Methods, Models, and the Evolution of Moral Psychology

    Why are we good? Why are we bad? Questions regarding the evolution of morality have spurred an astoundingly large interdisciplinary literature. Some significant subset of this body of work addresses questions regarding our moral psychology: how did humans evolve the psychological properties which underpin our systems of ethics and morality? Here I do three things. First, I discuss some methodological issues and defend particularly effective methods for addressing many research questions in this area. Second, I give an in-depth example, describing how an explanation can be given for the evolution of guilt, one of the core moral emotions, using the methods advocated here. Last, I lay out which sorts of strategic scenarios our moral psychology generally evolved to 'solve', and thus which models are the most useful in further exploring this evolution.

    Artificial virtuous agents in a multi‐agent tragedy of the commons

    Although virtue ethics has repeatedly been proposed as a suitable framework for the development of artificial moral agents (AMAs), it has proven difficult to approach from a computational perspective. In this work, we present the first technical implementation of artificial virtuous agents (AVAs) in moral simulations. First, we review previous conceptual and technical work in artificial virtue ethics and describe a functionalistic path to AVAs based on dispositional virtues, bottom-up learning, and top-down eudaimonic reward. We then provide the details of a technical implementation in a moral simulation based on a tragedy of the commons scenario. The experimental results show how the AVAs learn to tackle cooperation problems while exhibiting core features of their theoretical counterpart, including moral character, dispositional virtues, learning from experience, and the pursuit of eudaimonia. Ultimately, we argue that virtue ethics provides a compelling path toward morally excellent machines and that our work provides an important starting point for such endeavors.
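
    The architecture this abstract describes (bottom-up learning guided by a top-down eudaimonic reward) can be pictured with a short sketch. The Python below is not the authors' implementation; it is a minimal illustration assuming a tabular Q-learning harvester in a commons game, whose reward blends personal payoff with the health of the shared resource. All names and values (VirtuousHarvester, eudaimonia_weight, the action set, the state encoding) are hypothetical.

```python
import random
from collections import defaultdict

# Minimal sketch (illustrative, not the paper's code): a commons harvester
# that learns bottom-up via tabular Q-learning, while a top-down "eudaimonic"
# reward blends its personal payoff with the health of the shared resource.

class VirtuousHarvester:
    def __init__(self, actions=("restrain", "harvest"), alpha=0.1,
                 gamma=0.9, epsilon=0.1, eudaimonia_weight=0.5):
        self.q = defaultdict(float)        # Q-value per (state, action) pair
        self.actions = actions
        self.alpha = alpha                 # learning rate
        self.gamma = gamma                 # discount factor
        self.epsilon = epsilon             # exploration probability
        self.w = eudaimonia_weight         # weight placed on the common good

    def choose(self, state):
        # Epsilon-greedy action selection over the learned Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, personal_payoff, commons_health, next_state):
        # Top-down eudaimonic reward: personal gain tempered by the commons.
        reward = (1 - self.w) * personal_payoff + self.w * commons_health
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error
```

    The single weight on the common good is just the simplest way to render a "top-down" moral signal; the dispositional virtues the abstract mentions would plausibly correspond to richer state features and reward terms.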

    Empathy and the Evolutionary Emergence of Guilt

    Guilt poses a unique evolutionary problem. Unlike other dysphoric emotions, it is not immediately clear what its adaptive significance is. One can imagine thriving despite, or even because of, a lack of guilt. In this paper, we review solutions offered by Scott James, Richard Joyce, and Robert Frank and show that, although their solutions have merit, none adequately solves the puzzle. We offer an alternative solution, one that emphasizes the role of empathy and post-transgression behavior in the evolution of guilt. Our solution, we contend, offers a better account of why guilt evolved to play its distinctive social role.

    Social manifestation of guilt leads to stable cooperation in multi-agent systems

    Inspired by psychological and evolutionary studies, we present here theoretical models wherein agents have the potential to express guilt, with the aim of studying the role of this emotion in the promotion of pro-social behaviour. To achieve this goal, analytical and numerical methods from evolutionary game theory are employed to identify the conditions under which enhanced cooperation emerges within the context of the iterated prisoner's dilemma. Guilt is modelled explicitly as two features: a counter that keeps track of the number of transgressions, and a threshold that dictates when alleviation (through, for instance, apology and self-punishment) is required for an emotional agent. Such an alleviation introduces an effect on the payoff of the agent experiencing guilt. We show that when the system consists of agents that resolve their guilt without considering the co-player's attitude towards guilt alleviation, then cooperation does not emerge. In that case, those guilt-prone agents are easily dominated by agents expressing no guilt or having no incentive to alleviate the guilt they experience. When, on the other hand, the guilt-prone focal agent requires that guilt only be alleviated when guilt alleviation is also manifested by a defecting co-player, then cooperation may thrive. This observation remains consistent for a generalised model, as is discussed in this article. In summary, our analysis provides important insights into the design of multi-agent and cognitive agent systems, where the inclusion of guilt modelling can improve agents' cooperative behaviour and overall benefit.
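
    As a rough illustration of the model described above (a transgression counter plus an alleviation threshold, with alleviation costing payoff), the Python sketch below encodes both the unconditional and the "social" alleviation variants in an iterated prisoner's dilemma. This is an assumption-laden reading of the abstract, not the article's code; the payoff matrix, parameter values, and the GuiltProneAgent class are all illustrative.

```python
# Minimal sketch of the guilt model described above (illustrative, not the
# article's code). Guilt is a counter of transgressions plus a threshold;
# crossing the threshold triggers alleviation (e.g. apology or
# self-punishment), modelled here as a payoff cost.

# Iterated prisoner's dilemma payoff to the focal agent for (my, theirs).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

class GuiltProneAgent:
    def __init__(self, threshold=2, alleviation_cost=1.0, social=True):
        self.counter = 0              # unresolved transgressions so far
        self.threshold = threshold    # guilt level that demands alleviation
        self.cost = alleviation_cost  # payoff lost when alleviating guilt
        self.social = social          # condition alleviation on the co-player
        self.payoff = 0.0

    def act(self):
        # Defect opportunistically until accumulated guilt binds behaviour.
        return "D" if self.counter < self.threshold else "C"

    def update(self, my_move, their_move, co_player_alleviates):
        self.payoff += PAYOFF[(my_move, their_move)]
        if my_move == "D":
            self.counter += 1
        if self.counter >= self.threshold:
            # Unconditional agents always alleviate; "social" agents do so
            # only when a defecting co-player also manifests alleviation.
            if not self.social or (their_move == "D" and co_player_alleviates):
                self.payoff -= self.cost
                self.counter = 0
```

    In this reading, social=True corresponds to the variant for which the authors report that cooperation may thrive, while social=False agents pay the alleviation cost regardless of the co-player, matching the abstract's observation that unconditional alleviators are easily dominated.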

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence ('AI') and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data, and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics. Although AI was initially allowed to develop largely without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics, and the law.