
    On the legal responsibility of autonomous machines


    Autonomous Weapons and Human Responsibilities

    Although remote-controlled robots flying over the Middle East and Central Asia now dominate reports on new military technologies, robots that are capable of detecting, identifying, and killing enemies on their own are quietly but steadily moving from the theoretical to the practical. The enormous difficulty of assigning responsibility to humans and states for the actions of these machines grows with their increasing autonomy. These developments implicate serious legal, ethical, and societal concerns. This Article focuses on the accountability of states and the underlying human responsibilities for autonomous weapons under International Humanitarian Law, or the Law of Armed Conflict. After reviewing the evolution of autonomous weapon systems and the diminishing human involvement in these systems along a continuum of autonomy, this Article argues that the elusive search for individual culpability for the actions of autonomous weapons foreshadows fundamental problems in assigning responsibility to states for the actions of these machines. It further argues that the central legal requirement relevant to determining accountability, especially for violations of the most important international legal obligations protecting the civilian population in armed conflicts, is human judgment. Access to effective human judgment already appears to be emerging as the deciding factor in establishing practical restrictions and framing legal concerns with respect to the deployment of the most advanced autonomous weapons.

    Ethics of Artificial Intelligence

    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve, and how we can control these. After presenting the background to the field (1), this article introduces the main debates (2), first on the ethical issues that arise with AI systems as objects, i.e., tools made and used by humans; here, the main sections are privacy (2.1), manipulation (2.2), opacity (2.3), bias (2.4), autonomy & responsibility (2.6), and the singularity (2.7). Then we look at AI systems as subjects, i.e., cases where ethics is for the AI systems themselves, in machine ethics (2.8) and artificial moral agency (2.9). Finally, we look at future developments and the concept of AI (3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, analyse how these play out with current technologies, and finally consider what policy consequences may be drawn.

    Machine Performance and Human Failure: How Shall We Regulate Autonomous Machines?


    Generating Rembrandt: Artificial Intelligence, Copyright, and Accountability in the 3A Era--The Human-like Authors are Already Here--A New Model

    Artificial intelligence (AI) systems are creative, unpredictable, independent, autonomous, rational, evolving, capable of data collection, communicative, efficient, accurate, and have free choice among alternatives. Similar to humans, AI systems can autonomously create and generate creative works. The use of AI systems in the production of works, either for personal or manufacturing purposes, has become common in the 3A era of automated, autonomous, and advanced technology. Despite this progress, there is a deep and common concern in modern society that AI technology will become uncontrollable. There is therefore a call for social and legal tools for controlling AI systems' functions and outcomes. This Article addresses the questions of the copyrightability of artworks generated by AI systems: ownership and accountability. The Article debates who should enjoy the benefits of copyright protection and who should be responsible for the infringement of rights and damages caused by AI systems that independently produce creative works. Subsequently, this Article presents the AI Multi-Player paradigm, arguing against the imposition of these rights and responsibilities on the AI systems themselves or on the different stakeholders, mainly the programmers who develop such systems. Most importantly, this Article proposes the adoption of a new model of accountability for works generated by AI systems: the AI Work Made for Hire (WMFH) model, which views the AI system as a creative employee or independent contractor of the user. Under this proposed model, ownership, control, and responsibility would be imposed on the humans or legal entities that use AI systems and enjoy their benefits. This model accurately reflects the human-like features of AI systems; it is justified by the theories behind copyright protection; and it serves as a practical solution to assuage the fears behind AI systems. In addition, this model unveils the powers behind the operation of AI systems; hence, it efficiently imposes accountability on clearly identifiable persons or legal entities. Since AI systems are copyrightable algorithms, this Article also reflects on accountability for AI systems in other legal regimes, such as tort or criminal law, and in various industries using these systems.

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.

    Responsible Autonomy

    As intelligent systems increasingly make decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal, and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems. (IJCAI 2017, International Joint Conference on Artificial Intelligence.)