Do Autonomous Weapon Systems (AWS) qualify as moral or rational agents? This paper argues that combatants on the battlefield are required, by the demands of behavior interpretation, to approach a sophisticated AWS with the “Combatant’s Stance” — the ascription of the mental states required to understand the system’s strategic behavior on the battlefield. However, the fact that an AWS must be engaged with the combatant’s stance does not entail that other persons are relieved of criminal or moral responsibility for war crimes committed by autonomous weapons. This article argues that military commanders can and should be held responsible for perpetrating war crimes through an AWS, regardless of the moral status of the AWS as a culpable or non-culpable agent. In other words, a military commander can be liable for the acts of the machine independent of whatever conclusions we draw from the fact that combatants — even artificial ones — must approach each other with the combatant’s stance. The basic framework for this liability was established at Nuremberg and in subsequent tribunals, which focused on how a criminal defendant can be responsible for allowing a metaphorical “machine” — such as a concentration camp — to commit an international crime. The novelty of this technological development is that the law must shift from the metaphor of the “cog in the machine” to a literal machine. Nonetheless, this article also concludes that there is one area in which international criminal law is ill-suited to addressing a military commander’s responsibility for unleashing an AWS that commits a war crime: many of these cases will turn on the commander’s recklessness, and international criminal law has unfortunately struggled to develop a coherent theoretical and practical program for prosecuting crimes of recklessness.