    War Torts: Accountability for Autonomous Weapons

    Unlike conventional weapons or remotely operated drones, autonomous weapon systems can independently select and engage targets. As a result, they may take actions that look like war crimes—the sinking of a cruise ship, the destruction of a village, the downing of a passenger jet—without any individual acting intentionally or recklessly. Absent such willful action, no one can be held criminally liable under existing international law. Criminal law aims to prohibit certain actions, and individual criminal liability allows for the evaluation of whether someone is guilty of a moral wrong. Given that a successful ban on autonomous weapon systems is unlikely (and possibly even detrimental), what is needed is a complementary legal regime that holds states accountable for the injurious wrongs that are the side effects of employing these uniquely effective but inherently unpredictable and dangerous weapons. Just as the Industrial Revolution fostered the development of modern tort law, autonomous weapon systems highlight the need for “war torts”: serious violations of international humanitarian law that give rise to state responsibility.

    From automation to autonomous systems: A legal phenomenology with problems of accountability


    Legal Fictions and the Essence of Robots: Thoughts on Essentialism and Pragmatism in the Regulation of Robotics

    The purpose of this paper is to offer some critical remarks on the so-called pragmatist approach to the regulation of robotics. To this end, the article mainly reviews the work of Jack Balkin and Joanna Bryson, who have taken up such an approach with interestingly similar outcomes. Moreover, special attention will be paid to the discussion concerning the legal fiction of ‘electronic personality’. This will help shed light on the opposition between essentialist and pragmatist methodologies. After a brief introduction (1.), in 2. I introduce the main points of the methodological debate which opposes pragmatism and essentialism in the regulation of robotics, and I examine how legal fictions are framed from a pragmatist, functional perspective. Since this approach entails a neat separation of ontological analysis and legal reasoning, in 3. I discuss whether considerations on robots’ essence are actually bracketed when the pragmatist approach is endorsed. Finally, in 4. I address the problem of the social valence of legal fictions in order to suggest a possible limit of the pragmatist approach. My conclusion (5.) is that in the specific case of regulating robotics it may be very difficult to separate ontological considerations from legal reasoning—and vice versa—on both an epistemological and a social level. This calls for great caution in the recourse to anthropomorphic legal fictions.

    Can Siri 10.0 Buy Your Home? The Legal and Policy Based Implications of Artificial Intelligent Robots Owning Real Property

    This Article addresses whether strong artificial intelligent robots (“AIs”) should receive real property rights. More than a resource, real property promotes self-respect in natural persons such as human beings. Because of this distinction, this Article argues for limited real property rights for AIs. In developing this proposition, it examines three hypotheticals of a strong AI robot in various forms of real property ownership. The first hypothetical determines whether an AI could work as an agent in real property transactions. As robots currently act as agents in various capacities, the groundwork exists for an AI to enter this role. The second hypothetical considers whether an AI could own property in a manner similar to a corporation. In this instance, an AI would own the property in its name, but generate wealth for its shareholders and have oversight by natural persons. Because corporations can acquire property as artificial persons, AIs could likewise meet similar legal requirements; as such, the law should extend these ownership rights to AIs. The third hypothetical delves into whether an AI should own property outright like a natural person. After describing potential reasons for this approach, this Article explains why legal and policy-based arguments weigh against this extension of property rights to AIs. Instead, any possibility of an AI owning property like a natural person should come from Congress, not the courts.

    Employed Algorithms: A Labor Model of Corporate Liability for AI

    The workforce is digitizing. Leading consultancies estimate that algorithmic systems will replace 45 percent of human-held jobs by 2030. One feature that algorithms share with the human employees they are replacing is their capacity to cause harm. Even today, corporate algorithms discriminate against loan applicants, manipulate stock markets, collude over prices, and cause traffic deaths. Ordinarily, corporate employers would be responsible for these injuries, but the rules for assessing corporate liability arose at a time when only humans could act on behalf of corporations. Those rules apply awkwardly, if at all, to silicon. Some corporations have already discovered this legal loophole and are rapidly automating business functions to limit their own liability risk. This Article seeks a way to hold corporations accountable for the harms of their digital workforce: some algorithms should be treated, for liability purposes, as corporate employees. Drawing on existing functional characterizations of employment, the Article defines the concept of an “employed algorithm” as one over which a corporation exercises substantial control and from which it derives substantial benefits. If a corporation employs an algorithm that causes criminal or civil harm, the corporation should be liable just as if the algorithm were a human employee. Plaintiffs and prosecutors could then leverage existing, employee-focused liability rules to hold corporations accountable when the digital workforce transgresses.