
    Autonomous weapons systems, killer robots and human dignity

    One of the several reasons given in calls for the prohibition of autonomous weapons systems (AWS) is that they are against human dignity (Asaro in Int Rev Red Cross 94(886):687–709, 2012; Docherty in Shaking the foundations: the human rights implications of killer robots, Human Rights Watch, New York, 2014; Heyns in S Afr J Hum Rights 33(1):46–71, 2017; Ulgen in Human dignity in an age of autonomous weapons: are we in danger of losing an ‘elementary consideration of humanity’?, 2016). However, there have been criticisms of the reliance on human dignity in arguments against AWS (Birnbacher in Autonomous weapons systems: law, ethics, policy, Cambridge University Press, Cambridge, 2016; Pop in Autonomous weapons systems: a threat to human dignity?, 2018; Saxton in (Un)dignified killer robots? The problem with the human dignity argument, 2016). This paper critically examines the relationship between human dignity and AWS. Three main types of objection to AWS are identified: (i) arguments based on technology and the ability of AWS to conform to international humanitarian law; (ii) deontological arguments based on the need for human judgement and meaningful human control, including arguments based on human dignity; (iii) consequentialist arguments about their effects on global stability and the likelihood of going to war. An account is provided of the claims made about human dignity and AWS, of the criticisms of these claims, and of the several meanings of ‘dignity’. It is concluded that although there are several ways in which AWS can be said to be against human dignity, they are not unique in this respect. There are other weapons, and other technologies, that also compromise human dignity. Given this, and the ambiguities inherent in the concept, it is wiser to draw on several types of objections in arguments against AWS, and not to rely exclusively on human dignity.

    Of Killer Robots and Dictates of Public Conscience

    Law student Pedro Rogerio Borges de Carvalho addresses the inclusion of autonomous weapons systems in modern conflicts and how a decades-old clause could prevent their takeover.

    Autonomy in Weapons Systems. The Military Application of Artificial Intelligence as a Litmus Test for Germany’s New Foreign and Security Policy

    The future international security landscape will be critically impacted by the military use of artificial intelligence (AI) and robotics. With the advent of autonomous weapon systems (AWS) and a currently unfolding transformation of warfare, we have reached a turning point and are facing a number of grave new legal, ethical, and political concerns. In light of this, the Task Force on Disruptive Technologies and 21st Century Warfare, convened by the Heinrich Böll Foundation, argues that meaningful human control over weapon systems and the use of force must be retained. In their report, the task force authors offer recommendations to that effect to the German government and the German armed forces.

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems intended for use in the security sector, including autonomous weapons systems.

    Lethal Autonomous Weapons and Human-in-the-Chain

    Lethal autonomous weapon systems, or LAWS, are weapons that can select a target with the help of sensors and artificial intelligence and attack with little to no human intervention [1]. LAWS offer several economic, political, and social benefits, but they also carry risks and costs. Currently, there are no laws regulating these weapon systems, but most stakeholders are lobbying for a change in policy. This policy brief discusses three potential policy scenarios: (1) no new policies to regulate LAWS, (2) a complete ban on LAWS, and (3) strict regulations governing the use of LAWS. We recommend allowing the deployment of LAWS with strict regulations around their use and a mandated degree of human control.

    The Future of Wars: Artificial Intelligence (AI) and Lethal Autonomous Weapon Systems (LAWS)

    Lethal Autonomous Weapon Systems (LAWS) are a special class of weapons systems that, once activated, can identify and engage a target without further human intervention. Semi-autonomous weapons are already in use today, but the transfer of the decision to kill to machines inevitably raises novel ethical, legal, and political concerns. This paper examines the current ethical debate concerning the use of LAWS during wartime and outlines the potential security benefits and risks associated with the development of LAWS and other autonomous artificial intelligence (AI) technology. Allowing moral considerations to play a role in the development of AI weapons systems is crucial to upholding the principles of international humanitarian law. Depending on its degree of autonomy, a weapon can pose distinct advantages and disadvantages that must be considered before the technology is deployed in dynamic combat settings. The transformative potential of LAWS in warfare cannot be ignored.

    Arguments for Banning Autonomous Weapon Systems: A Critique

    Autonomous Weapon Systems (AWS) are the next logical advancement in military technology. There is significant concern, though, that by allowing such systems on the battlefield, we are collectively abdicating our moral responsibility. In this thesis, I will examine two arguments that advocate a total ban on the use of AWS. I call these the “Responsibility” and the “Agency” arguments. After presenting these arguments, I provide my own objections and demonstrate why the arguments fail to convince. I then argue that the use of AWS is a rational choice in the evolution of warfare. I conclude my thesis by providing a framework upon which future international regulations regarding AWS could be built.

    Toward a normative model of Meaningful Human Control over weapons systems

    The notion of meaningful human control (MHC) has gathered overwhelming consensus and interest in the autonomous weapons systems (AWS) debate. By shifting the focus of this debate to MHC, one sidesteps recalcitrant definitional issues about the autonomy of weapons systems and profitably moves the normative discussion forward. Some delegations participating in discussions at the Group of Governmental Experts on Lethal Autonomous Weapons Systems meetings endorsed the notion of MHC with the proviso that one size of human control does not fit all weapons systems and uses thereof. Building on this broad suggestion, we propose a “differentiated”—but also “principled” and “prudential”—framework for MHC over weapons systems. The need for a differentiated approach—namely, an approach acknowledging that the extent of normatively required human control depends on the kind of weapons systems used and the contexts of their use—is supported by highlighting major drawbacks of proposed uniform solutions. Within the wide space of differentiated MHC profiles, distinctive ethical and legal reasons are offered for principled solutions that invariably assign to humans the following control roles: (1) “fail-safe actor,” contributing to preventing the weapon's action from resulting in indiscriminate attacks in breach of international humanitarian law; (2) “accountability attractor,” securing legal conditions for international criminal law (ICL) responsibility ascriptions; and (3) “moral agency enactor,” ensuring that decisions affecting the life, physical integrity, and property of people involved in armed conflicts be exclusively taken by moral agents, thereby alleviating the human dignity concerns associated with the autonomous performance of targeting decisions. The prudential character of our framework is expressed by means of a rule imposing by default the most stringent levels of human control on weapons targeting. The default rule is motivated by epistemic uncertainties about the behaviors of AWS. Designated exceptions to this rule are admitted only in the framework of an international agreement among states, which expresses the shared conviction that lower levels of human control suffice to preserve the fail-safe actor, accountability attractor, and moral agency enactor requirements for those explicitly listed exceptions. Finally, we maintain that this framework affords an appropriate normative basis for both national arms review policies and binding international regulations on human control of weapons systems.