Autonomous weapon systems and international humanitarian law: a reply to the critics
In November 2012, Human Rights Watch, in collaboration with the International Human Rights Clinic at Harvard Law School, released Losing Humanity: The Case against Killer Robots.[2] Human Rights Watch is among the most sophisticated of human rights organizations working in the field of international humanitarian law. Its reports are deservedly influential and have often helped shape application of the law during armed conflict. Although this author and the organization have occasionally crossed swords,[3] we generally find common ground on key issues. This time, we have not.
“Robots” is a colloquial rendering for autonomous weapon systems. Human Rights Watch’s position on them is forceful and unambiguous: “[F]ully autonomous weapons would not only be unable to meet legal standards but would also undermine essential non-legal safeguards for civilians.”[4] Therefore, they “should be banned and . . . governments should urgently pursue that end.”[5] In fact, if the systems cannot meet the legal standards cited by Human Rights Watch, then they are already unlawful as such under customary international law irrespective of any policy or treaty law ban on them.[6]
Unfortunately, Losing Humanity obfuscates the ongoing legal debate over autonomous weapon systems. A principal flaw in the analysis is a blurring of the distinction between international humanitarian law’s prohibitions on weapons per se and those on the unlawful use of otherwise lawful weapons.[7] Only the former render a weapon illegal as such. To illustrate, a rifle is lawful, but may be used unlawfully, as in shooting a civilian. By contrast, under customary international law, biological weapons are unlawful per se; this is so even if they are used against lawful targets, such as the enemy’s armed forces. The practice of inappropriately conflating these two different strands of international humanitarian law has plagued debates over other weapon systems, most notably unmanned combat aerial systems such as the armed Predator. In addition, some of the report’s legal analysis fails to take account of likely developments in autonomous weapon systems technology or is based on unfounded assumptions as to the nature of the systems. Simply put, much of Losing Humanity is either counter-factual or counter-normative.
This Article is designed to infuse granularity and precision into the legal debates surrounding such weapon systems and their use in the future “battlespace.” It suggests that whereas some conceivable autonomous weapon systems might be prohibited as a matter of law, the use of others will be unlawful only when employed in a manner that runs contrary to international humanitarian law’s prescriptive norms. This Article concludes that Losing Humanity’s recommendation to ban the systems is insupportable as a matter of law, policy, and operational good sense. Human Rights Watch’s analysis sells international humanitarian law short by failing to appreciate how the law tackles the very issues about which the organization expresses concern. Perhaps the most glaring weakness in the recommendation is the extent to which it is premature. No such weapons have even left the drawing board. To ban autonomous weapon systems altogether based on speculation as to their future form is to forfeit any potential uses of them that might minimize harm to civilians and civilian objects when compared to other systems in military arsenals.
Research on Deception in Defense of Information Systems
This paper appeared in the Command and Control Research and Technology Symposium, San Diego, CA, June 2004.

Our research group has been broadly studying the use of deliberate deception by software to foil attacks on information systems. Deception can provide a second line of defense when access controls have been breached or against insider attacks; the thousands of new attacks discovered every year that subvert access controls suggest that such a second line of defense is desperately needed. We have developed a number of demonstration systems, including a fake directory system intended to waste the time of spies, a Web information resource that delays suspicious requests, a modified file-download utility that pretends to succumb to a buffer overflow, and a tool for systematically modifying an operating system to insert deceptive responses. We are also developing an associated theory of deception that can be used to analyze and create offensive and defensive deceptions, with special attention to reasoning about time using temporal logic. We conclude with some discussion of the legal implications of deception by computers.

Approved for public release; distribution is unlimited.
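One of the demonstration systems described above, the Web resource that delays suspicious requests, can be sketched in a few lines. This is an illustrative reconstruction, not the paper's actual tool: the suspicious-path list, delay parameters, and class name are all assumptions. The idea is simply to stall probable attackers with growing delays rather than refuse them outright, so they waste time on a server that still looks plausible.

```python
import time

# Assumed examples of request paths that an attacker, but rarely a
# legitimate user, would probe.
SUSPICIOUS_PATHS = {"/etc/passwd", "/admin", "/.git/config"}

class DeceptiveDelayer:
    """Illustrative sketch of a deceptive delaying handler (hypothetical)."""

    def __init__(self, base_delay=0.5, growth=2.0, max_delay=30.0):
        self.base_delay = base_delay
        self.growth = growth
        self.max_delay = max_delay
        self.strikes = {}  # client address -> count of suspicious requests

    def delay_for(self, client, path):
        """Return seconds to stall before answering this request."""
        if path not in SUSPICIOUS_PATHS:
            return 0.0
        n = self.strikes.get(client, 0) + 1
        self.strikes[client] = n
        # Exponentially growing delay, capped so the server stays believable.
        return min(self.base_delay * (self.growth ** (n - 1)), self.max_delay)

    def handle(self, client, path):
        """Stall, then serve decoy content instead of an error."""
        time.sleep(self.delay_for(client, path))
        return "200 OK (decoy content)"
```

The cap on the delay is the deceptive part of the design: an ever-growing delay would reveal the ruse, whereas a bounded one mimics an overloaded but functioning server.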
Recommended from our members
"Out of the loop": autonomous weapon systems and the law of armed conflict
The introduction of autonomous weapon systems into the “battlespace” will profoundly influence the nature of future warfare. This reality has begun to draw the attention of the international legal community, with increasing calls for an outright ban on the use of autonomous weapon systems in armed conflict. This Article is intended to help infuse granularity and precision into the legal debates surrounding such weapon systems and their future uses. It suggests that whereas some conceivable autonomous weapon systems might be prohibited as a matter of law, the use of others will be unlawful only when employed in a manner that runs contrary to the law of armed conflict’s prescriptive norms governing the “conduct of hostilities.” This Article concludes that an outright ban of autonomous weapon systems is insupportable as a matter of law, policy, and operational good sense. Indeed, proponents of a ban underestimate the extent to which the law of armed conflict, including its customary law aspect, will control autonomous weapon system operations. Some autonomous weapon systems that might be developed would already be unlawful per se under existing customary law, irrespective of any treaty ban. The use of certain others would be severely limited by that law.

Furthermore, an outright ban is premature since no such weapons have even left the drawing board. Critics typically either fail to take account of likely developments in autonomous weapon systems technology or base their analysis on unfounded assumptions about the nature of the systems. From a national security perspective, passing on the opportunity to develop these systems before they are fully understood would be irresponsible. Perhaps even more troubling is the prospect that banning autonomous weapon systems altogether based on speculation as to their future form could forfeit their potential use in a manner that would minimize harm to civilians and civilian objects when compared to non-autonomous weapon systems.
Cyber Security Active Defense: Playing with Fire or Sound Risk Management
“Banks Remain the Top Target for Hackers, Report Says” is the title of an April 2013 American Banker article. Yet no new comprehensive U.S. cyber legislation has been enacted since 2002, and neither the legislative history nor the statutory language of the Computer Fraud and Abuse Act (CFAA) or the Electronic Communications Privacy Act (ECPA) makes reference to the Internet. Courts have nevertheless filled in the gaps, sometimes with surprising results.
When is Cyber Defense a Crime? Evaluating Active Cyber Defense Measures Under the Budapest Convention
As cyberattacks increase in frequency and intensity around the globe, private actors have turned to more innovative cyber defense strategies. For many, this involves considering the use of cutting-edge active cyber defense measures—that is, tactics beyond merely erecting firewalls and installing antivirus software that permit cyber defenders to detect and respond to threats in real time. The legality of such measures under international law is a subject of intense debate because of definitional uncertainty surrounding what qualifies as an “active” cyber defense measure. This Comment argues that active defense measures that do not rise to the level of a cybercrime are permissible under international law. Accordingly, it analyzes the Budapest Convention, the only binding international instrument related to cybercrime, and uses its definition of illegal conduct under international law to construct a “stoplight framework” to guide cyber defenders in their actions. Ultimately, this Comment concludes that cyber defenders have a “green light” to use purely passive measures, such as monitoring one’s own network traffic, because these measures are highly unlikely to involve conduct the Budapest Convention criminalizes. Active-passive measures, such as attaching code to intruders that tracks them back to their home base, can in some cases be justified under exceptions to the Convention; accordingly, cyber defenders should proceed with caution. Finally, outright active defense measures nearly always rise to the level of offensive conduct under the Budapest Convention, and should not be used. This analysis provides needed clarity as to the legality of conduct in cyberspace, and gives cyber defenders the guideposts they need to confidently innovate in today’s complex cyber landscape.
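The three-tier "stoplight framework" above lends itself to a simple classification sketch. This is a hypothetical illustration of the Comment's taxonomy, not a legal tool: the specific measures listed and their tier assignments are assumptions drawn loosely from the examples in the abstract.

```python
from enum import Enum

class Light(Enum):
    """The Comment's three tiers, paraphrased from the abstract."""
    GREEN = "purely passive: highly unlikely to violate the Budapest Convention"
    YELLOW = "active-passive: possibly justified under Convention exceptions; proceed with caution"
    RED = "outright active: nearly always offensive conduct; do not use"

# Assumed example measures mapped to the framework's tiers.
MEASURES = {
    "monitor own network traffic": Light.GREEN,
    "attach tracking code that traces an intruder home": Light.YELLOW,
    "hack back to delete stolen data": Light.RED,
}

def classify(measure: str) -> Light:
    # Unknown or novel measures default to caution rather than permission.
    return MEASURES.get(measure, Light.YELLOW)
```

Defaulting unclassified measures to yellow rather than green mirrors the Comment's underlying logic: where the Convention's application is uncertain, a defender should proceed with caution.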
Optimizing Lawful Responses to Cyber Intrusions
Cyber intrusions are rarely met with the most effective possible response, less for technical than legal reasons. Different rogue actors (terrorists, criminals, spies, etc.) are governed by overlapping but separate domestic and international legal regimes. Each of these regimes has unique limitations, but also offers unique opportunities for evidence collection, intelligence gathering, and use of force. We propose a framework which automates the mechanistic aspects of the decision-making process, with human intervention for only those legal questions that necessitate human judgment and official responsibility. The basis of our framework is a pair of decision trees, one executable solely by the threatened system, the other by the attorneys responsible for the lawful pursuit of the intruders. These parallel decision trees are interconnected, and contain pre-distilled legal resources for making an objective, principled determination at each decision point. We offer an open-source development strategy for realizing and maintaining the framework.
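The paired decision trees described above can be sketched as a single tree whose nodes are flagged either machine-executable or attorney-executable. This is an illustrative sketch only: the node questions, the machine/human split, and the leaf actions are invented for the example and do not reproduce the paper's actual trees.

```python
class Node:
    """One decision point; a leaf carries an action instead of children."""
    def __init__(self, question, yes=None, no=None, action=None, needs_human=False):
        self.question = question        # predicate evaluated at this point
        self.yes, self.no = yes, no     # child nodes
        self.action = action            # leaf: recommended lawful response
        self.needs_human = needs_human  # True => defer to the attorneys' tree

def evaluate(node, facts, human_answers=None):
    """Walk the tree: mechanistic questions are answered automatically from
    `facts`; questions flagged needs_human consult the attorneys' answers."""
    human_answers = human_answers or {}
    while node.action is None:
        source = human_answers if node.needs_human else facts
        node = node.yes if source.get(node.question, False) else node.no
    return node.action

# Assumed fragment: triaging an intrusion before any response is taken.
tree = Node(
    "intrusion_confirmed",  # mechanistic: the threatened system decides
    yes=Node(
        "attribution_to_state_actor",  # legal judgment: attorneys decide
        needs_human=True,
        yes=Node(None, action="escalate under international-law track"),
        no=Node(None, action="preserve evidence for criminal referral"),
    ),
    no=Node(None, action="continue monitoring"),
)
```

The `needs_human` flag is where the two parallel trees interconnect: the system's tree runs unattended until it reaches such a node, at which point control passes to the attorneys' tree and their recorded determination.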
Cyber Force: The International Legal Implications of the Communications Security Establishment's Expanded Mandate under Bill C-59
Canada is about to join the ranks of Russia, China, Iran, and North Korea: countries with a declared policy and authorized program of state-sponsored cyber attacks. In the summer of 2017, the Liberal Government introduced Bill C-59, An Act Respecting National Security Matters. The bill, if passed, represents the most significant overhaul to Canadian national security institutions since the establishment of the Canadian Security Intelligence Service (CSIS) as a separate organization from the Royal Canadian Mounted Police (RCMP) in 1984. One component of this sweeping reform is the introduction of The Communications Security Establishment Act (CSE Act or the Act). Through the passage of this Act, Canada’s signals intelligence agency, the Communications Security Establishment (CSE or the Establishment), will, for the first time, be constituted under its own legislation. The CSE Act institutes greater oversight and review requirements for this super-secret agency, while also dramatically expanding the Establishment’s current tripartite mandate to include defensive cyber operations, active cyber operations, and the provision of technical and operational assistance to the Canadian Armed Forces (CAF).
Autonomous Cyber Capabilities Below and Above the Use of Force Threshold: Balancing Proportionality and the Need for Speed
Protecting the cyber domain requires speedy responses. Mustering that speed will be a task reserved for autonomous cyber agents—software that chooses particular actions without prior human approval. Unfortunately, autonomous agents also suffer from marked deficits, including bias, unintelligibility, and a lack of contextual judgment. Those deficits pose serious challenges for compliance with international law principles such as proportionality.
In the jus ad bellum, jus in bello, and the law of countermeasures, compliance with proportionality reduces harm and the risk of escalation. Autonomous agent flaws will impair their ability to make the fine-grained decisions that proportionality entails. However, a broad prohibition on deployment of autonomous agents is not an adequate answer to autonomy’s deficits. Unduly burdening victim states’ responses to the use of force, the conduct of armed conflict, and breaches of the non-intervention principle will cede the initiative to first movers that violate international law. Stability requires a balance that acknowledges the need for speed in victim state responses while ensuring that those responses remain within reasonable bounds.
The approach taken in this Article seeks to accomplish that goal by requiring victim states to observe feasible precautions in the use of force and countermeasures, as well as the conduct of armed conflict. Those precautions are reconnaissance, coordination, repair, and review. Reconnaissance entails efforts to map an adversary’s network in advance of any incursion by that adversary. Coordination requires the interaction of multiple systems, including one or more that will keep watch on the primary agent. A victim state must also assist through provision of patches and other repairs of third-party states’ networks. Finally, planners must regularly review autonomous agents’ performance and make modifications where appropriate.
These precautions will not ensure compliance with the principle of proportionality for all autonomous cyber agents. But they will both promote compliance and provide victim states with a limited safe harbor: a reasonable margin of appreciation for effects that would otherwise violate the duty of proportionality. That balance will preserve stability in the cyber domain and international law.