2,215 research outputs found

    Hybrid Cloud Model Checking Using the Interaction Layer of HARMS for Ambient Intelligent Systems

    Soon, humans will be co-living with and taking advantage of the help of multi-agent systems in a broader way than at present. Such systems will involve machines or devices of many varieties, including robots, and these kinds of solutions will adapt to the special needs of each individual. Of concern to this research effort, however, is that such systems may encounter situations that were not foreseen before execution time. Two outcomes could then materialize: the system either keeps working without corrective measures, which could lead to an entirely different end, or stops working completely. Both results should be avoided, especially in cases where the end user depends on high-level guidance provided by the system, as in ambient intelligence applications. This dissertation worked towards two specific goals. First, to ensure that the system will always work, independently of which of the agents performs the different tasks needed to accomplish a larger objective. Second, to provide initial steps towards autonomous survivable systems that can change their future actions in order to achieve the original final goals. To that end, the use of the third layer of the HARMS model was proposed to ensure the indistinguishability of the actors accomplishing each task and sub-task, regardless of the intrinsic complexity of the activity. Additionally, a framework was proposed that applies model checking methodology at run time to provide possible solutions to issues encountered during execution, as part of the survivability feature of the system's final goals.
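    The abstract does not spell out the framework's mechanics, but the core idea — re-checking at run time whether the goal remains reachable when an actor fails, regardless of which agent performs a task — can be illustrated with a minimal reachability check. All names (states, agents, edges) below are hypothetical, not taken from the dissertation.

    ```python
    from collections import deque

    def reachable_goal(transitions, start, goal):
        """BFS over the state graph: can `goal` still be reached from `start`?"""
        seen, queue = {start}, deque([start])
        while queue:
            state = queue.popleft()
            if state == goal:
                return True
            for nxt in transitions.get(state, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    # Task progress modeled as states; each edge lists the interchangeable
    # agents able to perform that step (illustrative names only).
    edges = {
        ("idle", "sensed"):    ["sensor_1", "robot_A"],
        ("sensed", "planned"): ["robot_A"],
        ("planned", "done"):   ["robot_A", "robot_B"],
    }

    def check_after_failure(edges, failed_agent, start, goal):
        """Run-time survivability check: drop one actor, re-verify the goal."""
        trans = {}
        for (src, dst), agents in edges.items():
            if any(a != failed_agent for a in agents):
                trans.setdefault(src, []).append(dst)
        return reachable_goal(trans, start, goal)
    ```

    In this toy model, losing `sensor_1` is survivable because `robot_A` can substitute for it, while losing `robot_A` is not, since the planning step has no alternative actor — exactly the kind of condition a run-time check would surface so the system can adapt its future actions.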

    Punishing Artificial Intelligence: Legal Fiction or Science Fiction

    Whether causing flash crashes in financial markets, purchasing illegal drugs, or running over pedestrians, AI is increasingly engaging in activity that would be criminal for a natural person, or even an artificial person like a corporation. We argue that criminal law falls short in cases where an AI causes certain types of harm and there are no practically or legally identifiable upstream criminal actors. This Article explores potential solutions to this problem, focusing on holding AI directly criminally liable where it is acting autonomously and irreducibly. Conventional wisdom holds that punishing AI is incongruous with basic criminal law principles such as the capacity for culpability and the requirement of a guilty mind. Drawing on analogies to corporate and strict criminal liability, as well as familiar imputation principles, we show how a coherent theoretical case can be constructed for AI punishment. AI punishment could result in general deterrence and expressive benefits, and it need not run afoul of negative limitations such as punishing in excess of culpability. Ultimately, however, punishing AI is not justified, because it might entail significant costs and it would certainly require radical legal changes. Modest changes to existing criminal laws that target persons, together with potentially expanded civil liability, are a better solution to AI crime.

    Securing Communication Within the HARMS Model for Use with Firefighting Robots

    Humans and robots must work together in increasingly complex networks to achieve a common goal. In this research, firefighting robots are part of a larger, decentralized system of humans, agents, robots, machines, and sensors (HARMS). Although communication in a HARMS model has been utilized in previous research, this new study examines the security considerations of the communications layer of the HARMS model. A network attack known as a man-in-the-middle attack is successfully demonstrated in this paper. A secure communications protocol is then proposed to provide confidentiality and authentication of HARMS actors. This research applies to any system that utilizes a HARMS network, including firefighting robots, to help ensure malicious entities cannot exploit communications between system actors. Instead, system actors that confirm their identity can communicate securely in a decentralized way for indistinguishable task completion. The results of this experiment indicate that secure communication can prevent man-in-the-middle attacks with only minor differences in operation.
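    The abstract does not specify the proposed protocol, but the authentication half of its goal — letting actors detect a man-in-the-middle who alters messages — can be sketched with a shared-key message authentication code. This is a minimal illustration using Python's standard library, not the paper's actual protocol; confidentiality would additionally require encryption, which is omitted here, and all identifiers are hypothetical.

    ```python
    import hashlib
    import hmac
    import json
    import secrets

    def sign_message(shared_key: bytes, sender: str, payload: dict) -> dict:
        """Attach an HMAC tag so receivers can verify origin and integrity."""
        nonce = secrets.token_hex(8)  # fresh per message, to hinder replay
        body = json.dumps({"sender": sender, "nonce": nonce, "payload": payload},
                          sort_keys=True)
        tag = hmac.new(shared_key, body.encode(), hashlib.sha256).hexdigest()
        return {"body": body, "tag": tag}

    def verify_message(shared_key: bytes, msg: dict) -> bool:
        """Reject any message a man-in-the-middle has altered."""
        expected = hmac.new(shared_key, msg["body"].encode(),
                            hashlib.sha256).hexdigest()
        # Timing-safe comparison, so the check itself leaks nothing useful.
        return hmac.compare_digest(expected, msg["tag"])
    ```

    An interceptor who rewrites the body cannot produce a matching tag without the shared key, so the tampered message fails verification on arrival.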

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.

    Responsibility and AI: Council of Europe Study DGI(2019)05


    Safety Intelligence and Legal Machine Language: Do We Need the Three Laws of Robotics?

    In this chapter we will describe a legal framework for Next Generation Robots (NGRs) that has safety as its central focus. The framework is offered in response to the current lack of clarity regarding robot safety guidelines, despite the development and impending release of tens of thousands of robots into workplaces and homes around the world. We also describe

    Future Work

    The Industrial Revolution. The Digital Age. These revolutions radically altered the workplace and society. We may be on the cusp of a new era—one that will rival or even surpass these historic disruptions. Technology such as artificial intelligence, robotics, virtual reality, and cutting-edge monitoring devices are developing at a rapid pace. These technologies have already begun to infiltrate the workplace and will continue to do so at ever increasing speed and breadth. This Article addresses the impact of these emerging technologies on the workplace of the present and the future. Drawing upon interviews with leading technologists, the Article explains the basics of these technologies, describes their current applications in the workplace, and predicts how they are likely to develop in the future. It then examines the legal and policy issues implicated by the adoption of technology in the workplace—most notably job losses, employee classification, privacy intrusions, discrimination, safety and health, and impacts on disabled workers. These changes will surely strain a workplace regulatory system that is ill-equipped to handle them. What is unclear is whether the strain will be so great that the system breaks, resulting in a new paradigm of work. Whether we are on the brink of a workplace revolution or of a more modest evolution, emerging technology will exacerbate the inadequacies of our current workplace laws. This Article discusses possible legislative and judicial reforms designed to ameliorate these problems and stave off the possibility of a collapse that would leave a critical mass of workers without any meaningful protection, power, or voice. The most far-reaching of these options is a proposed “Law of Work” that would address the wide-ranging and interrelated issues posed by these new technologies via a centralized regulatory scheme. This proposal, as well as other more narrowly focused reforms, highlights the major impacts of technology on our workplace laws, underscores both the current and future shortcomings of those laws, and serves as a foundation for further research and discussion on the future of work.

    Tackling problems, harvesting benefits: A systematic review of the regulatory debate around AI

    How to integrate an emerging and all-pervasive technology such as AI into the structures and operations of our society is a question of contemporary politics, science, and public debate. It has produced a considerable amount of international academic literature from different disciplines. This article analyzes the academic debate around the regulation of artificial intelligence (AI). The systematic review comprises a sample of 73 peer-reviewed journal articles published between January 1st, 2016, and December 31st, 2020. The analysis concentrates on societal risks and harms, questions of regulatory responsibility, and possible adequate policy frameworks, including risk-based and principle-based approaches. The main interests are proposed regulatory approaches and instruments. Various forms of intervention, such as bans, approvals, standard-setting, and disclosure, are presented. The assessments in the included papers indicate the complexity of the field, reflecting its immaturity and a remaining lack of clarity. By presenting a structured analysis of the academic debate, we contribute both empirically and conceptually to a better understanding of the nexus of AI and regulation and the underlying normative decisions. A comparison of the scientific proposals with the proposed European AI regulation illustrates the specific approach of the regulation, as well as its strengths and weaknesses.