Emergent AI, Social Robots and the Law: Security, Privacy and Policy Issues
The rapid growth of AI systems has implications for a wide variety of fields. It can prove to be a boon to disparate fields such as healthcare, education, global logistics and transportation, to name a few. However, these systems will also bring forth far-reaching changes in employment, the economy and security. As AI systems gain acceptance and become more commonplace, certain critical questions arise: What are the legal and security ramifications of the use of these new technologies? Who can use them, and under what circumstances? How safe are these systems? Should their commercialization be regulated? What are the privacy issues associated with the use of these technologies? What are the ethical considerations? Who has responsibility for the large amounts of data that are collected and manipulated by these systems? Could these systems fail? What is the recourse if there is a system failure? These questions are but a small subset of possible questions in this key emerging field. In this paper, we focus primarily on the legal questions that relate to the security, privacy, ethical, and policy considerations that emerge from one of these types of technologies, namely social robots. We begin with a history of the field, then go deeper into legal issues, the associated issues of security, privacy and ethics, and consider some solutions to these issues. Finally, we conclude with a look at the future as well as a modest proposal for future research addressing some of the challenges listed.
War Torts: Accountability for Autonomous Weapons
Unlike conventional weapons or remotely operated drones, autonomous weapon systems can independently select and engage targets. As a result, they may take actions that look like war crimes (the sinking of a cruise ship, the destruction of a village, the downing of a passenger jet) without any individual acting intentionally or recklessly. Absent such willful action, no one can be held criminally liable under existing international law.
Criminal law aims to prohibit certain actions, and individual criminal liability allows for the evaluation of whether someone is guilty of a moral wrong. Given that a successful ban on autonomous weapon systems is unlikely (and possibly even detrimental), what is needed is a complementary legal regime that holds states accountable for the injurious wrongs that are the side effects of employing these uniquely effective but inherently unpredictable and dangerous weapons. Just as the Industrial Revolution fostered the development of modern tort law, autonomous weapon systems highlight the need for "war torts": serious violations of international humanitarian law that give rise to state responsibility.
Legal Fictions and the Essence of Robots: Thoughts on Essentialism and Pragmatism in the Regulation of Robotics
The purpose of this paper is to offer some critical remarks on the so-called pragmatist approach to the regulation of robotics. To this end, the article mainly reviews the work of Jack Balkin and Joanna Bryson, who have taken up such approach with interestingly similar outcomes. Moreover, special attention will be paid to the discussion concerning the legal fiction of "electronic personality". This will help shed light on the opposition between essentialist and pragmatist methodologies. After a brief introduction (1.), in 2. I introduce the main points of the methodological debate which opposes pragmatism and essentialism in the regulation of robotics and I examine how legal fictions are framed from a pragmatist, functional perspective. Since this approach entails a neat separation of ontological analysis and legal reasoning, in 3. I discuss whether considerations on robots' essence are actually put into brackets when the pragmatist approach is endorsed. Finally, in 4. I address the problem of the social valence of legal fictions in order to suggest a possible limit of the pragmatist approach. My conclusion (5.) is that in the specific case of regulating robotics it may be very difficult to separate ontological considerations from legal reasoning, and vice versa, both on an epistemological and social level. This calls for great caution in the recourse to anthropomorphic legal fictions.
Can Siri 10.0 Buy Your Home? The Legal and Policy Based Implications of Artificial Intelligent Robots Owning Real Property
This Article addresses whether strong artificial intelligent robots ("AIs") should receive real property rights. More than a resource, real property promotes self-respect in natural persons such as human beings. Because of this distinction, this Article argues for limited real property rights for AIs. In developing this proposition, it examines three hypotheticals of a strong AI robot in various forms of real property ownership. The first hypothetical determines whether an AI could work as an agent in real property transactions. As robots currently act as agents in various capacities, the groundwork exists for an AI to enter this role. The second hypothetical considers whether an AI could own property in a manner similar to a corporation. In this instance, an AI would own the property in its name, but generate wealth for its shareholders and have oversight by natural persons. Because corporations can acquire property as artificial persons, AIs could likewise meet similar legal requirements. As such, the law should allow such ownership rights to AIs. The third hypothetical delves into whether an AI should own property outright like a natural person. After describing potential reasons for this approach, this Article explains why legal and policy-based arguments weigh against this extension of property rights to AIs. Instead, any possibility of an AI owning property like a natural person should come from Congress, not the courts.
Employed Algorithms: A Labor Model of Corporate Liability for AI
The workforce is digitizing. Leading consultancies estimate that algorithmic systems will replace 45 percent of human-held jobs by 2030. One feature that algorithms share with the human employees they are replacing is their capacity to cause harm. Even today, corporate algorithms discriminate against loan applicants, manipulate stock markets, collude over prices, and cause traffic deaths. Ordinarily, corporate employers would be responsible for these injuries, but the rules for assessing corporate liability arose at a time when only humans could act on behalf of corporations. Those rules apply awkwardly, if at all, to silicon. Some corporations have already discovered this legal loophole and are rapidly automating business functions to limit their own liability risk.
This Article seeks a way to hold corporations accountable for the harms of their digital workforce: some algorithms should be treated, for liability purposes, as corporate employees. Drawing on existing functional characterizations of employment, the Article defines the concept of an "employed algorithm" as one over which a corporation exercises substantial control and from which it derives substantial benefits. If a corporation employs an algorithm that causes criminal or civil harm, the corporation should be liable just as if the algorithm were a human employee. Plaintiffs and prosecutors could then leverage existing, employee-focused liability rules to hold corporations accountable when the digital workforce transgresses.