2,008 research outputs found

    Autonomous weapons systems, killer robots and human dignity

    One of the several reasons given in calls for the prohibition of autonomous weapons systems (AWS) is that they are against human dignity (Asaro in Int Rev Red Cross 94(886):687–709, 2012; Docherty in Shaking the foundations: the human rights implications of killer robots, Human Rights Watch, New York, 2014; Heyns in S Afr J Hum Rights 33(1):46–71, 2017; Ulgen in Human dignity in an age of autonomous weapons: are we in danger of losing an ‘elementary consideration of humanity’? 2016). However, there have been criticisms of the reliance on human dignity in arguments against AWS (Birnbacher in Autonomous weapons systems: law, ethics, policy, Cambridge University Press, Cambridge, 2016; Pop in Autonomous weapons systems: a threat to human dignity? 2018; Saxton in (Un)dignified killer robots? The problem with the human dignity argument, 2016). This paper critically examines the relationship between human dignity and AWS. Three main types of objection to AWS are identified: (i) arguments based on technology and the ability of AWS to conform to international humanitarian law; (ii) deontological arguments based on the need for human judgement and meaningful human control, including arguments based on human dignity; (iii) consequentialist arguments about their effects on global stability and the likelihood of going to war. An account is provided of the claims made about human dignity and AWS, of the criticisms of these claims, and of the several meanings of ‘dignity’. It is concluded that although there are several ways in which AWS can be said to be against human dignity, they are not unique in this respect. There are other weapons, and other technologies, that also compromise human dignity. Given this, and the ambiguities inherent in the concept, it is wiser to draw on several types of objections in arguments against AWS, and not to rely exclusively on human dignity.

    Should we campaign against sex robots?

    In September 2015 a well-publicised Campaign Against Sex Robots (CASR) was launched. Modelled on the longer-standing Campaign to Stop Killer Robots, the CASR opposes the development of sex robots on the grounds that the technology is being developed with a particular model of female-male relations (the prostitute-john model) in mind, and that this will prove harmful in various ways. In this chapter, we consider carefully the merits of campaigning against such a technology. We make three main arguments. First, we argue that the particular claims advanced by the CASR are unpersuasive, partly due to a lack of clarity about the campaign’s aims and partly due to substantive defects in the main ethical objections put forward by the campaign’s founder(s). Second, broadening our inquiry beyond the arguments proffered by the campaign itself, we argue that it would be very difficult to endorse a general campaign against sex robots unless one embraced a highly conservative attitude towards the ethics of sex, which is likely to be unpalatable to those who are active in the campaign. In making this argument we draw upon lessons from the campaign against killer robots. Finally, we conclude by suggesting that although a generalised campaign against sex robots is unwarranted, there are legitimate concerns that one can raise about the development of sex robots.

    Of Killer Robots and Dictates of Public Conscience

    Law student Pedro Rogerio Borges de Carvalho addresses the inclusion of autonomous weapons systems in modern conflicts and how a decades-old clause could prevent their takeover.

    Autonomy in Weapons Systems. The Military Application of Artificial Intelligence as a Litmus Test for Germany’s New Foreign and Security Policy

    The future international security landscape will be critically shaped by the military use of artificial intelligence (AI) and robotics. With the advent of autonomous weapon systems (AWS) and a currently unfolding transformation of warfare, we have reached a turning point and are facing a number of grave new legal, ethical and political concerns. In light of this, the Task Force on Disruptive Technologies and 21st Century Warfare, convened by the Heinrich Böll Foundation, argues that meaningful human control over weapon systems and the use of force must be retained. In their report, the task force authors offer recommendations to the German government and the German armed forces to that effect.

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.

    Lethal Autonomous Weapons and Human-in-the-Chain

    Lethal autonomous weapon systems, or LAWS, are weapons that can select a target with the help of sensors and artificial intelligence and attack with little to no human intervention [1]. There are several economic, political, and social benefits to LAWS, but there are also risks and costs. Currently, there are no laws regulating these weapon systems, but most stakeholders are lobbying for a change in policy. This policy brief discusses three potential policy states: (1) no new policies to regulate LAWS, (2) a complete ban on LAWS, and (3) strict regulations on the use of LAWS. We recommend allowing the deployment of LAWS with strict regulations around their use and a mandated amount of human control.

    Peter Asaro Vs. the Killer Robots

    Philosopher and computer scientist Peter Asaro ’94 wants world leaders and ordinary citizens to consider the dangers of programming war machines to decide who lives or dies.

    KILLER ROBOTS IN CONFLICT: The Morality of Artificial Intelligence in Warfare

    In light of the fast pace of technological advancement in warfare, this paper examines the moral implications of the use of artificial intelligence in the weaponry industry. Specifically, it provides an interdisciplinary perspective on the application of lethal autonomous weapon systems (LAWS) in conflict. The concepts of techno-moral implications of Swierstra (2015) and techno-moral boundaries of Kamphof (2017) are applied to the case of LAWS in warfare and provide insights into future changes of morals in war. The key results of this method suggest that LAWS in warfare threaten to erase moral virtues and cause a shift to a less humane reality of war.

    Arguments for Banning Autonomous Weapon Systems: A Critique

    Autonomous Weapon Systems (AWS) are the next logical advancement for military technology. There is significant concern, though, that by allowing such systems on the battlefield, we are collectively abdicating our moral responsibility. In this thesis, I examine two arguments that advocate for a total ban on the use of AWS, which I call the “Responsibility” and the “Agency” arguments. After presenting these arguments, I provide my own objections and demonstrate why these arguments fail to convince. I then provide an argument as to why the use of AWS is a rational choice in the evolution of warfare. I conclude my thesis by providing a framework upon which future international regulations regarding AWS could be built.