1,173 research outputs found

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.

    License to Kill: An Analysis of the Legality of Fully Autonomous Drones in the Context of International Use of Force Law

    We live in a world of constant technological change, and with this change come unknown effects and consequences. This is even truer with weapons and warfare. Indeed, as the means and methods of warfare rapidly modify and transform, the effects and consequences on the laws of war are unknown. This Article addresses one such development in weapon and warfare technology—Fully Autonomous Weapons or “Killer Robots”—and discusses the inevitable use of these weapons within the current international law framework. Recognizing the inadequacy of the current legal framework, this Article proposes a regulation policy to mitigate the risks associated with Fully Autonomous Weapons. But the debate should not end here; States and the U.N. must work together to adopt a legal framework that keeps pace with the advancement of technology. This Article starts that discussion.

    The Future of Wars: Artificial Intelligence (AI) and Lethal Autonomous Weapon Systems (LAWS)

    Lethal Autonomous Weapon Systems (LAWS) are a special class of weapons systems that, once activated, can identify and engage a target without further human intervention. Semi-autonomous weapons are in use today, but the transfer of the decision to kill to machines inevitably raises novel ethical, legal, and political concerns. This paper examines the current ethical debate concerning LAWS use during wartime and outlines the potential security benefits and risks associated with the development of LAWS and other autonomous artificial intelligence (AI) technology. Allowing moral considerations to play a role in the development of AI weapons systems is crucial to upholding the principles of international humanitarian law. Depending on the degree of autonomy that a weapon has, it can pose distinct advantages and disadvantages that must be considered prior to deployment of the technology in dynamic combat settings. The transformative potential of LAWS in warfare cannot be ignored.

    Should we campaign against sex robots?

    In September 2015 a well-publicised Campaign Against Sex Robots (CASR) was launched. Modelled on the longer-standing Campaign to Stop Killer Robots, the CASR opposes the development of sex robots on the grounds that the technology is being developed with a particular model of female-male relations (the prostitute-john model) in mind, and that this will prove harmful in various ways. In this chapter, we consider carefully the merits of campaigning against such a technology. We make three main arguments. First, we argue that the particular claims advanced by the CASR are unpersuasive, partly due to a lack of clarity about the campaign’s aims and partly due to substantive defects in the main ethical objections put forward by the campaign’s founder(s). Second, broadening our inquiry beyond the arguments proffered by the campaign itself, we argue that it would be very difficult to endorse a general campaign against sex robots unless one embraced a highly conservative attitude towards the ethics of sex, which is likely to be unpalatable to those who are active in the campaign. In making this argument we draw upon lessons from the campaign against killer robots. Finally, we conclude by suggesting that although a generalised campaign against sex robots is unwarranted, there are legitimate concerns that one can raise about the development of sex robots.

    When Robots Rule the Waves?

    By virtue of the distinctive character of war at sea, a number of unique and complex ethical questions are likely to arise regarding the application of autonomous unmanned underwater vehicles and unmanned surface vehicles.

    Killer Robots - Autonomous Weapons and Their Compliance with IHL

    The pursuit of weapons which distance the soldier from the actual battlefield has been going on ever since the transition from waging war with short blades to waging war with bow and arrow. Today, that ambition has nearly reached completion with the ever-increasing number of unmanned, remote-controlled vehicles that are rapidly becoming the most common and prominent method of waging wars. Political incentives of cutting the costs of warfare and sparing the lives of soldiers create the last push towards full autonomy. The emergence of increasingly autonomous weapons (AWs) has already generated a heated debate on the legality of these weapons, and two very polarized sides can easily be discerned. The purpose of this thesis is to examine and analyze this debate, to look into the arguments put forth regarding the legality or illegality of autonomous weapons, and to examine where the positions stand in the debate. Focus is on the three fundamental principles of International Humanitarian Law (IHL): distinction, proportionality and precaution, and I discuss the arguments in both directions. Proponents often claim that AWs will be able to comply with IHL, given the development of sensors, algorithms, software and artificial intelligence (AI), which would allow the machine to satisfactorily distinguish between civilians and combatants, carry out proportionality assessments and take the required precautions in its actions. Opponents instead argue that the development of AI has overpromised before, that sensors could never distinguish between civilians and combatants on a contemporary battlefield, and that proportionality and precaution assessments require a contextual understanding that only humans are capable of. The fundamental disagreement seems to lie in the uncertainty of the development of the software and technology, and in the capability of machines to perform as well as, or better than, humans.
The issue of accountability is also examined, in terms of what happens to responsibility for breaches of IHL when we have assigned the task of targeting and firing, essentially the life-and-death decision, to a machine. Different propositions, such as placing accountability on the commander, programmer, manufacturer or even the machine itself, are discussed. Issues relating to the moral and ethical aspects of changing the agents of war from humans to robots are also examined, along with the possible consequences this might entail, both from a separate moral perspective and as part of the legality assessment, in terms of what would happen to the applicability of IHL if we changed the agents of war. After having examined the debate on the legality of AWs, some concluding remarks are drawn on what we are to do with the debate in the near future, where I present some of the more prominently discussed ways forward in terms of handling the emergence of these weapons. Finally, I end with some of my own reflections on what I have found in my analysis of the current debate, and what I believe are the more important aspects to continue discussing in the ongoing debate on the legality of autonomous weapons.

    The viability of 'embedded Ethics' in robotic military systems without humans in the decision loop

    Presentation at the "Zagreb Applied Ethics Conference 2017: The Ethics of Robotics and Artificial Intelligence", Matica hrvatska, Zagreb, Croatia, 5-7 June 2017. The social regulation of robotic systems with some elements of inbuilt artificial intelligence, capable of interacting with the physical world without human control, poses challenges of extraordinary complexity, in particular when their characteristics make them suitable for use in military operations as autonomous devices under specific conditions. My purpose is to conduct case-study research on the viability of some elements of "embedded Ethics" in different devices, with built-in sensors and a variable range of functionality, starting with Autonomous Weapons Systems (AWS). Based on a review of recent literature and prototypes, the expected results should give a clearer perspective on the viability of 'embedded Ethics' instructions in the programming of intelligent robotic systems, including those intended for military use. As a preliminary conclusion, the heterogeneity of designs, lethal capacity and degrees of functional complexity in highly unpredictable operational contexts reinforces the importance of preserving human intervention in the decision loop, when the lapse for the sequence of decisions makes it possible. [Additional references available in: http://sl.ugr.es/zaec2017] Supported by R+D Project [ref. FFI2016-79000-P]: "Artificial Intelligence and moral bio-enhancement. Ethical aspects" (IP: F.D. Lara). State Program for the Promotion of Scientific and Technical Research of Excellence, Subprogram of Knowledge Generation, Oct. 2016 - Sept. 2019.

    How international humanitarian law will constrain the use of autonomous weapon systems in the conduct of hostilities

    This thesis will assess International Humanitarian Law (IHL) Additional Protocol 1 (AP 1) compliance issues that may arise in the use of Autonomous Weapon Systems (AWSs) in the conduct of hostilities. The focus of this assessment will be on the use of AWSs to launch kinetic attacks. The basis for an assessment of AWSs will be identical to that of conventional weapons: AP 1 requires weapon systems first to be found in compliance with IHL weapons law before being subject to targeting law. Novel compliance issues arise from the use of autonomy in weapon systems. Algorithmically determined autonomy used to ‘decide’ to launch kinetic attacks raises questions of human control of a weapon system. AP 1 creates obligations on a human's decision to use force and the resulting kinetic attack; this is altered by the use of autonomy that controls the weapon system. The focus of any IHL evaluation must therefore be on the computer that uses algorithmically determined autonomy to control AWSs. The type of algorithm running the weapon systems that this thesis will focus on is machine learning, which uses heuristics to provide the capability to improve an AWS's performance over time. Setting the conditions for a constructive dialogue on AWSs, an AP 1 assessment of the lawfulness of AWSs will discuss both general issues and the additional issues that might arise in the use of autonomous weapon systems that improve their performance over time. The use of algorithmically determined autonomy in kinetic attacks raises several controversies that must be assessed for weapons law and targeting law compliance. The Observe, Orient, Decide, Act (OODA) Loop will be used to analyse whether human decision-making is being completely removed, or merely displaced, from the targeting decision-making process.
The operational context of how AWSs will be used will be assessed in temporal and geographic terms to better understand how technology has led to the displacement of human decision-making in weapon systems. Ultimately, this thesis will inform the reader of the legality and use of weapon systems as they shift from largely electro-mechanical platforms directly controlled by humans to increasingly cyber-physical systems controlled by algorithms.

    Robotics and Military Operations

    In the wake of two extended wars, Western militaries find themselves looking to the future while confronting amorphous nonstate threats and shrinking defense budgets. The 2015 Kingston Conference on International Security (KCIS) examined how robotics and autonomous systems that enhance soldier effectiveness may offer attractive investment opportunities for developing a more efficient force capable of operating effectively in the future environment. This monograph offers three chapters derived from the KCIS, explores the drivers influencing strategic choices associated with these technologies, and offers preliminary policy recommendations geared to advance a comprehensive technology investment strategy. In addition, the publication offers insight into the ethical challenges and potential positive moral implications of using robots on the modern battlefield.