Double elevation: Autonomous weapons and the search for an irreducible law of war
What should be the role of law in response to the spread of artificial intelligence in war? Fuelled by both public and private investment, military technology is accelerating towards increasingly autonomous weapons, as well as the merging of humans and machines. Contrary to much of the contemporary debate, this is not a paradigm change; it is the intensification of a central feature in the relationship between technology and war: double elevation, above one's enemy and above oneself. Elevation above one's enemy aspires to spatial, moral, and civilizational distance. Elevation above oneself reflects a belief in rational improvement that sees humanity as the cause of inhumanity and de-humanization as our best chance for humanization. The distance of double elevation is served by the mechanization of judgement. To the extent that judgement is seen as reducible to algorithm, law becomes the handmaiden of mechanization. In response, neither a focus on questions of compatibility nor a call for a 'ban on killer robots' helps in articulating a meaningful role for law. Instead, I argue that we should turn to a long-standing philosophical critique of artificial intelligence, which highlights not the threat of omniscience, but that of impoverished intelligence. Therefore, if there is to be a meaningful role for law in resisting double elevation, it should be law encompassing subjectivity, emotion and imagination, law irreducible to algorithm, a law of war that appreciates situated judgement in the wielding of violence for the collective.
An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications
We propose a multi-step evaluation schema designed to help procurement agencies and others to examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.
The AI Commander Problem: Ethical, Political, and Psychological Dilemmas of Human-Machine Interactions in AI-enabled Warfare
Debating Autonomous Weapon Systems, Their Ethics, and Their Regulation Under International Law
An international public debate over the law and ethics of autonomous weapon systems (AWS) has been underway since 2012, pitting those who urge legal regulation of AWS under existing principles and requirements of the international law of armed conflict against opponents who instead favor a preemptive international treaty ban on all such weapons. This Chapter provides an introduction to this international debate, offering the main arguments on each side. These include disputes over defining an AWS, the morality and law of automated targeting and target selection by machine, and the interaction of humans and machines in the context of lethal weapons of war. Although the Chapter concludes that a categorical ban on AWS is unjustified morally and legally – favoring the law of armed conflict’s existing case-by-case legal evaluation – it offers an exposition of arguments on each side of the AWS issue.
Autonomy and Precautions in the Law of Armed Conflict
Already a controversial topic, the amount of human control required in the employment of autonomous weapons—including autonomous cyber capabilities—remains the subject of continuing legal debate and broader discussion. These discussions, particularly those taking place among States that are Parties to the 1980 Certain Conventional Weapons Convention, reveal a complete lack of consensus on the requirement of human control and serve to distract from the more important question with respect to autonomy in armed conflict: under what conditions could autonomous weapons “select” and “attack” targets in a manner that complies with the law of armed conflict (LOAC)?
This article analyzes the specific LOAC rules on precautions in attack, as codified in Article 57 of Additional Protocol I, and asserts that these rules do not require human judgment in targeting decisions. Rather, these rules prescribe a particular analysis that must be completed by those who plan or decide upon an attack prior to exercising force, including decisions made by autonomous systems without meaningful human control. To the extent that autonomous weapons and weapons systems using autonomous functions can be designed and employed in such a way as to comply with all required precautions, they would not violate the LOAC. A key feature of determining the ability of such weapons and weapons systems to meet these requirements must be a rigorous weapons review process.
Focus on the Human-Machine Relations in LAWS
The report finds that the leading characteristic of human-machine interaction should be that of human control and machine dependence on humans in the execution of the targeting cycle. The control exercised by the operator must be sufficient to reflect the operator's intention for the purpose of establishing the legal accountability and ethical responsibility for all ensuing acts.
Lethal Autonomous Weapons Systems: The Overlooked Importance of Administrative Accountability
The rise of lethal autonomous weapons systems creates numerous problems for legal regimes meant to ensure public accountability for unlawful uses of force. In particular, international humanitarian law has long relied on enforcement through individual criminal responsibility, which is complicated by autonomous weapons that fragment responsibility for decisions to deploy violence. Accordingly, there may often be no human being with the requisite level of intent to trigger individual responsibility under existing doctrine. In response, perhaps international criminal law could be reformed to account for such issues. Or, in the alternative, greater emphasis on other forms of accountability, such as tort liability and state responsibility, might be a useful supplement.
But largely absent from this debate is discussion of an alternative form of accountability that often gets overlooked or dismissed as inconsequential, one that we might term “administrative accountability.” This article provides a close look at this type of accountability and its potential. Such accountability might take the form of administrative procedures, inquiries, sanctions, and reforms that can be deployed within the military or the administrative state more broadly to respond to an incident in which a violation of international humanitarian law may have occurred. These procedures might result in after-the-fact sanctions on individuals who may be implicated in harms even if they would not be deemed criminally responsible or even negligent within a tort law framework. They also might dictate organizational reforms of bureaucratic structures that affect systems of hierarchical or other types of control that are forward-looking in their focus. Administrative accountability may be particularly useful in the case of autonomous systems because the restrictions of criminal law, such as the intent requirement for most crimes, may not apply in many circumstances. Administrative accountability, in contrast, is far more flexible both in the process by which it unfolds and in the remedies available, offering the prospect of both individual sanctions and broader organizational reforms.
Obviously, such accountability depends on the willingness of actors within the administrative bureaucracy to pursue these mechanisms. And at times, criminal accountability or tort liability may be more appropriate. But at the very least, the potential for administrative accountability should be part of any discussion of accountability for uses of autonomous and semi-autonomous weaponry. Moreover, because administrative bureaucracies are not monolithic, the mere creation of administrative procedures to investigate and impose non-criminal discipline for violations of international norms can create a cadre of experts within the government who internalize these values and foster a culture of broader compliance.