1,434 research outputs found

    Legal Fictions and the Essence of Robots: Thoughts on Essentialism and Pragmatism in the Regulation of Robotics

    The purpose of this paper is to offer some critical remarks on the so-called pragmatist approach to the regulation of robotics. To this end, the article mainly reviews the work of Jack Balkin and Joanna Bryson, who have taken up this approach with interestingly similar outcomes. Moreover, special attention will be paid to the discussion concerning the legal fiction of ‘electronic personality’. This will help shed light on the opposition between essentialist and pragmatist methodologies. After a brief introduction (1.), in 2. I introduce the main points of the methodological debate which opposes pragmatism and essentialism in the regulation of robotics, and I examine how legal fictions are framed from a pragmatist, functional perspective. Since this approach entails a neat separation of ontological analysis and legal reasoning, in 3. I discuss whether considerations on robots' essence are actually put into brackets when the pragmatist approach is endorsed. Finally, in 4. I address the problem of the social valence of legal fictions in order to suggest a possible limit of the pragmatist approach. My conclusion (5.) is that in the specific case of regulating robotics it may be very difficult to separate ontological considerations from legal reasoning, and vice versa, on both an epistemological and a social level. This calls for great caution in the recourse to anthropomorphic legal fictions.

    Ethics of Artificial Intelligence

    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and used by humans; here, the main sections are privacy (2.1), manipulation (2.2), opacity (2.3), bias (2.4), autonomy & responsibility (2.6) and the singularity (2.7). Then we look at AI systems as subjects, i.e. when ethics is for the AI systems themselves, in machine ethics (2.8) and artificial moral agency (2.9). Finally, we look at future developments and the concept of AI (3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, analyse how this plays out with current technologies, and finally consider what policy consequences may be drawn.

    Techno-elicitation: Regulating behaviour through the design of robots

    A Value-Sensitive Design Approach to Intelligent Agents

    This chapter presents the design methodology known as Value-Sensitive Design (VSD) and explores its potential application to the field of artificial intelligence research and design. It discusses the imperative of adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes involved in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design methodology, it currently provides a framework that has the potential to embed stakeholder values and incorporate current design methods. The reader should take away the importance of a proactive design approach to intelligent agents.

    The Artificially Intelligent Trolley Problem: Understanding Our Criminal Law Gaps in a Robot Driven World

    Not only is Artificial Intelligence (AI) present everywhere in people's lives, but the technology is also now capable of making unpredictable decisions in novel situations. AI poses issues for the United States' traditional criminal law system because this system emphasizes the importance of mens rea in determining criminal liability. When AI makes unpredictable decisions that lead to crimes, it will be impractical to determine what mens rea to ascribe to the human agents associated with the technology, such as AI's creators, owners, and users. To solve this issue, the United States' legal system must hold AI's creators, owners, and users strictly liable for their AI's actions and also create standards that can provide these agents immunity from strict liability. Although other legal scholars have proposed solutions that fit within the United States' traditional criminal law system, these proposals fail to strike the right balance between encouraging AI's development and holding someone criminally liable when AI causes harm. This Note illuminates this issue by exploring an artificially intelligent trolley problem. In this problem, an AI-powered self-driving car must decide between running over and killing five pedestrians or swerving out of the way and killing its one passenger; ultimately, the AI decides to kill the five pedestrians. This Note explains why the United States' traditional criminal law system would struggle to hold the self-driving car's owner, programmers, and creator liable for the AI's decision, because of the numerous human agents this problem brings into the criminal liability equation, the impracticality of determining these agents' mens rea, and the difficulty of satisfying the purposes of criminal punishment. Looking past the artificially intelligent trolley problem, these issues extend to most criminal laws that require a mens rea element. Criminal law serves as a powerful method of regulating new technologies, and it is essential that the United States' criminal law system adapts to solve the issues that AI poses.

    Autonomous Systems as Legal Agents: Directly by the Recognition of Personhood or Indirectly by the Alchemy of Algorithmic Entities

    The clinical manifestations of platelet dense (δ) granule defects are easy bruising, as well as epistaxis and bleeding after delivery, tooth extractions and surgical procedures. The observed symptoms may be explained either by a decreased number of granules or by a defect in the uptake/release of granule contents. We have developed a method to study platelet dense granule storage and release. The uptake of the fluorescent marker, mepacrine, into the platelet dense granule was measured using flow cytometry. The platelet population was identified by size and by binding of a phycoerythrin-conjugated antibody against GPIb. Cells within the discrimination frame were analysed for green (mepacrine) fluorescence. Both resting platelets and platelets previously stimulated with collagen and the thrombin receptor agonist peptide SFLLRN were analysed for mepacrine uptake. By subtracting the value for mepacrine uptake after stimulation from the value for uptake without stimulation for each individual, the platelet dense granule release capacity could be estimated. Whole blood samples from 22 healthy individuals were analysed. Mepacrine incubation without previous stimulation gave mean fluorescence intensity (MFI) values of 83±6 (mean ± 1 SD, range 69–91). The difference in MFI between resting and stimulated platelets was 28±7 (range 17–40). Six members of a family, of whom one had a known δ-storage pool disease, were analysed. The two members (mother and son) who had prolonged bleeding times also had MFI values disparate from the normal population in this analysis. The values of one daughter, with mild bleeding problems but a normal bleeding time, were in the lower part of the reference interval.
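    The release-capacity calculation described in this abstract (resting mepacrine MFI minus post-stimulation MFI, compared against the healthy-donor ranges reported above) can be sketched in Python as follows. This is an illustrative assumption, not code from the original study; the donor values are invented, and only the reference ranges (69–91 for resting MFI, 17–40 for the MFI difference) come from the abstract.

    # Hypothetical sketch of the dense-granule release-capacity calculation.
    # Donor MFI values below are invented; the reference ranges are taken from
    # the abstract's 22 healthy individuals (resting MFI 69-91, delta MFI 17-40).

    RESTING_REF = (69.0, 91.0)   # mepacrine MFI, unstimulated platelets
    DELTA_REF = (17.0, 40.0)     # MFI drop after collagen + SFLLRN stimulation

    def release_capacity(mfi_resting: float, mfi_stimulated: float) -> float:
        # Estimated release capacity = resting MFI - post-stimulation MFI.
        return mfi_resting - mfi_stimulated

    def flag(value: float, ref: tuple) -> str:
        lo, hi = ref
        return "within reference" if lo <= value <= hi else "outside reference"

    if __name__ == "__main__":
        # Illustrative individuals (not real patient data): (resting, stimulated)
        samples = {"donor A": (85.0, 55.0), "donor B": (72.0, 64.0)}
        for name, (resting, stimulated) in samples.items():
            delta = release_capacity(resting, stimulated)
            print(f"{name}: resting MFI {resting} ({flag(resting, RESTING_REF)}), "
                  f"release capacity {delta} ({flag(delta, DELTA_REF)})")

    Under these assumptions, a donor whose resting MFI is normal but whose MFI difference falls below the reference range would be flagged as having reduced release capacity, mirroring the family members with prolonged bleeding times described above.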