The Killer Robots Are Here: Legal and Policy Implications
In little over a year, the possibility of a complete ban on autonomous weapon systems—known colloquially as "killer robots"—has evolved from a proposal in an NGO report to the subject of an international meeting with representatives from over eighty states. However, no one has yet put forward a coherent definition of autonomy in weapon systems from a law of armed conflict perspective, which often results in the conflation of legal, ethical, policy, and political arguments. This Article therefore proposes that an "autonomous weapon system" be defined as "a weapon system that, based on conclusions derived from gathered information and preprogrammed constraints, is capable of independently selecting and engaging targets."
Applying this definition, and contrary to the nearly universal consensus, it quickly becomes apparent that autonomous weapon systems are not weapons of the future: they exist and have already been integrated into states' armed forces. The fact that such weaponry is currently being used with little critique has a number of profound implications. First, it undermines pro-ban arguments based on the premise that autonomous weapon systems are inherently unlawful. Second, it significantly reduces the likelihood that a complete ban would be successful, as states will be unwilling to voluntarily relinquish otherwise lawful and uniquely effective weaponry.
But law is not doomed to follow technology: if used proactively, law can channel the development and use of autonomous weapon systems. This Article concludes that intentional international regulation is needed, now, and suggests how such regulation may be designed to incorporate beneficial legal limitations and humanitarian protections.
Cyborg Justice and the Risk of Technological-Legal Lock-In
Although Artificial Intelligence (AI) is already of use to litigants and legal practitioners, we must be cautious and deliberate in incorporating AI into the common law judicial process. Human beings and machine systems process information and reach conclusions in fundamentally different ways, with AI being particularly ill-suited for the rule application and value balancing required of human judges. Nor will "cyborg justice"—hybrid human/AI judicial systems that attempt to marry the best of human and machine decisionmaking and minimize the drawbacks of both—be a panacea. While such systems would ideally maximize the strengths of human and machine intelligence, they might also magnify the drawbacks of both. They also raise distinct teaming risks associated with overtrust, undertrust, and interface design errors, as well as second-order structural side effects. One such side effect is "technological-legal lock-in." Translating rules and decisionmaking procedures into algorithms grants them a new kind of permanency, which creates an additional barrier to legal evolution. In augmenting the common law's extant conservative bent, hybrid human/AI judicial systems risk fostering legal stagnation and an attendant loss of judicial legitimacy.
A Meaningful Floor for Meaningful Human Control
To the extent there is any consensus among States, ban advocates, and ban skeptics regarding the regulation of autonomous weapon systems (AWS), it is grounded in the idea that all weaponry should be subject to meaningful human control. This intuitively appealing principle is immensely popular, and numerous States have explicitly declared their support for it or questioned the lawfulness of weapons that operate without such control. Lack of opposition has led some to conclude that it is either a newly developed customary norm or a preexisting, recently exposed rule of customary international law, already binding on all States.
But this broad support comes at a familiar legislative cost: there is no consensus as to what meaningful human control actually requires. State X might define meaningful human control to require informed human approval of each possible action of a given weapon system (maintaining a human being "in the loop"); State Y might understand it as the ability of a human operator to oversee and veto a weapon system's actions (having a human being "on the loop"); and State Z might view the original programming alone as providing sufficiently meaningful human control (allowing human beings to be "off the loop"). As the Czech Republic noted, in voicing its belief that "the decision to end somebody's life must remain under meaningful human control, . . . [t]he challenging part is to establish what precisely 'meaningful human control' would entail."
This paper describes attempts to clarify what factors are relevant to meaningful human control, discusses benefits associated with retaining imprecision in a standard intended to regulate new technology through international consensus, and argues that the standard's vagueness should be limited by an interpretive floor. Meaningful human control as a regulatory concept can usefully augment existing humanitarian norms governing targeting—namely, that all attacks meet the treaty and customary international law requirements of distinction, proportionality, and feasible precautions. However, it should not be interpreted to conflict with these norms nor be prioritized in a way that undermines existing humanitarian protections.
Consent is Not Enough: Why States Must Respect the Intensity Threshold in Transnational Conflict
It is widely accepted that a state cannot treat a struggle with an organized non-state actor as an armed conflict until the violence crosses a minimum threshold of intensity. For instance, during the recent standoff at the Oregon wildlife refuge, the U.S. government could have lawfully used force pursuant to its domestic law enforcement and human rights obligations, but President Obama could not have ordered a drone strike on the protesters. The reason for this uncontroversial rule is simpleânot every riot or civil disturbance should be treated like a war.
But what if President Obama had invited Canada to bomb the protesters—once the United States consented, would all bets be off? Can an intervening state use force that would be illegal for the host state to use itself? The silence on this issue is dangerous, in no small part because these once-rare conflicts are now commonplace. States are increasingly using force against organized non-state actors outside of the states' own territories—usually, though not always, with the consent of the host state. What constrains the scope of the host state's consent? And can the intervening state always presume that consent is valid?
This Article argues that a host state's authority to consent is limited and that intervening states cannot treat consent as a blank check. Accordingly, even in consent-based interventions, the logic and foundational norms of the international legal order require both consent-giving and consent-receiving states to independently evaluate what legal regime governs—this will often turn on whether the intensity threshold has been met. If a non-international armed conflict exists, the actions of the intervening state are governed by international humanitarian law; if not, its actions are governed instead by its own and the host state's human rights obligations.
Implementing War Torts
Under the law of armed conflict, no entity is accountable for lawful acts in war that cause harm, and accountability mechanisms for unlawful acts (like war crimes) rarely create a right to compensation for victims. Accordingly, states now regularly create bespoke institutions, like the proposed International Claims Commission for Ukraine, to resolve mass claims associated with international crises. While helpful for specific and politically popular populations, these one-off institutions have limited jurisdiction and thus limited effect. Creating an international "war torts" regime—which would establish a route to compensation for civilians harmed in armed conflict—would better address this accountability gap for all wartime victims.

This Article is the first attempt to map out the questions and considerations that must be navigated to construct a war torts regime. With the overarching aim of increasing the likelihood of victim compensation, it considers (1) the respective benefits of international tribunals, claims commissions, victims' funds, domestic courts, and hybrid systems as institutional homes; (2) appropriate claimants and defendants; and (3) the elements of a war torts claim, including the necessary level and type of harm, the preferable liability and causation standards, possible substantive and procedural affirmative defenses, and potential remedies.

Domestic law has long recognized that justice often requires a tort remedy as well as criminal liability; it is past time for international law to do so as well. By describing how to begin implementing a new war torts regime to complement the law of state responsibility and international criminal law, this Article provides a blueprint for building a comprehensive legal accountability regime for all civilian harms in armed conflict.
Constitutional Convergence and Customary International Law
In Getting to Rights: Treaty Ratification, Constitutional Convergence, and Human Rights Practice, Zachary Elkins, Tom Ginsburg, and Beth Simmons study the effects of post-World War II human rights texts on domestic constitutions, with a particular focus on the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights (ICCPR). After analyzing 680 constitutional systems compiled by the Comparative Constitutions Project to create a list of seventy-four constitutionally protected rights, the authors evaluate whether countries incorporate internationally codified human rights into their domestic constitutions, whether ratification of international agreements affects the probability of rights incorporation, and whether such incorporation increases the likelihood that countries enforce rights in practice.
After tabulating the data and running random-effects models, the authors find "a significant upward shift in the similarity to the [Universal Declaration] among constitutions written after 1948," leading them to conclude that the Universal Declaration acted as a "template" from which constitutional drafters could select rights. They also demonstrate—after controlling for the era and a state's prior constitutional tradition—that states that ratified the ICCPR were more likely to include its codified rights in their post-1966 constitutions than non-ratifying states. Finally, relying on Freedom House's civil liberties index, the authors conclude that human rights agreement ratification and constitutional incorporation are correlated with improved human rights practice on the ground.
The Internet of Torts: Expanding Civil Liability Standards to Address Corporate Remote Interference
Thanks to the proliferation of internet-connected devices that constitute the "Internet of Things" ("IoT"), companies can now remotely and automatically alter or deactivate household items. In addition to empowering industry at the expense of individuals, this remote interference can cause property damage and bodily injury when an otherwise operational car, alarm system, or implanted medical device abruptly ceases to function.
Even as the potential for harm escalates, contract and tort law work in tandem to shield IoT companies from liability. Exculpatory clauses limit civil remedies, IoT devicesâ bundled object/service nature thwarts implied warranty claims, and contractual notice of remote interference precludes common law tort suits. Meanwhile, absent a better understanding of how IoT-enabled injuries operate and propagate, judges are likely to apply products liability and negligence standards narrowly, in ways that curtail corporate liability.
But this is hardly the first time a new technology has altered social and power relations between industries and individuals, creating a potential liability inflection point. As before, we must decide what to incentivize and whom to protect, with an awareness that the choices we make now will shape future assumptions about IoT companies' obligations and consumer rights. Accordingly, this Article proposes reforms to contract and tort law to expand corporate liability and minimize foreseeable consumer injury.
War Torts: Accountability for Autonomous Weapons
Unlike conventional weapons or remotely operated drones, autonomous weapon systems can independently select and engage targets. As a result, they may take actions that look like war crimes—the sinking of a cruise ship, the destruction of a village, the downing of a passenger jet—without any individual acting intentionally or recklessly. Absent such willful action, no one can be held criminally liable under existing international law.
Criminal law aims to prohibit certain actions, and individual criminal liability allows for the evaluation of whether someone is guilty of a moral wrong. Given that a successful ban on autonomous weapon systems is unlikely (and possibly even detrimental), what is needed is a complementary legal regime that holds states accountable for the injurious wrongs that are the side effects of employing these uniquely effective but inherently unpredictable and dangerous weapons. Just as the Industrial Revolution fostered the development of modern tort law, autonomous weapon systems highlight the need for "war torts": serious violations of international humanitarian law that give rise to state responsibility.