34 research outputs found

    Artificial Intelligence and the Internet of Things

    "Through algorithms and artificial intelligence (AI), objects and digital services now demonstrate new skills they did not have before, right up to replacing human activity through pre-programming or by making their own decisions. As part of the internet of things, AI applications are already widely used today, for example in language processing, image recognition and the tracking and processing of data. This policy brief illustrates the potential negative and positive impacts of AI and reviews related policy strategies adopted by the UK, US, EU, as well as Canada and China. Based on an ethical approach that considers the role of AI from a democratic perspective and considering the public interest, the authors make policy recommendations that help to strengthen the positive impact of AI and to mitigate its negative consequences.

    A checklist for safe robot swarms

    Practical Challenges in Explicit Ethical Machine Reasoning

    We examine implemented systems for ethical machine reasoning with a view to identifying the practical challenges (as opposed to the philosophical challenges) posed by the area. We identify a need for complex ethical machine reasoning not only to be multi-objective, proactive, and scrutable, but also to draw on heterogeneous evidential reasoning. We also argue that, in many cases, it needs to operate in real time and be verifiable. We propose a general architecture involving a declarative ethical arbiter which draws upon multiple evidential reasoners, each responsible for a particular ethical feature of the system's environment. We claim that this architecture enables some separation of concerns among the practical challenges that ethical machine reasoning poses.
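
    The arbiter-plus-reasoners architecture this abstract describes lends itself to a short illustration. The Python below is a minimal sketch under invented assumptions: the reasoner names, the Evidence fields, and the veto threshold are all hypothetical, not taken from the paper.

```python
# Hypothetical sketch, not the authors' implementation: a declarative
# arbiter consults several evidential reasoners, each responsible for one
# ethical feature of the environment.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Evidence:
    feature: str       # ethical feature this reasoner monitors
    violated: bool     # does the candidate action violate this feature?
    confidence: float  # strength of the evidence, in [0, 1]

def privacy_reasoner(action: str, env: dict) -> Evidence:
    # Invented rule: recording while bystanders are present violates privacy.
    return Evidence("privacy", action == "record" and env.get("bystanders", False), 0.9)

def safety_reasoner(action: str, env: dict) -> Evidence:
    # Invented rule: moving fast near a human violates safety.
    return Evidence("safety", action == "move_fast" and env.get("human_nearby", False), 0.95)

def ethical_arbiter(action: str, env: dict,
                    reasoners: list[Callable[[str, dict], Evidence]],
                    threshold: float = 0.5) -> bool:
    """Permit the action only if no reasoner reports a sufficiently
    confident violation; returning the evidence would make it scrutable."""
    evidence = [reason(action, env) for reason in reasoners]
    return not any(e.violated and e.confidence >= threshold for e in evidence)

if __name__ == "__main__":
    permitted = ethical_arbiter("move_fast", {"human_nearby": True},
                                [privacy_reasoner, safety_reasoner])
    print(permitted)  # False: the safety reasoner vetoes the action
```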

    An Abstract Architecture for Explainable Autonomy in Hazardous Environments

    Autonomous robotic systems are being proposed for use in hazardous environments, often to reduce the risks to human workers. In the immediate future, it is likely that human workers will continue to use and direct these autonomous robots, much like other computerised tools but with more sophisticated decision-making. Therefore, one important area on which to focus engineering effort is ensuring that users trust the system. Recent literature suggests that explainability is closely related to how trustworthy a system is. Like safety and security properties, explainability should be designed into a system rather than added afterwards. This paper presents an abstract architecture that supports an autonomous system explaining its behaviour (explainable autonomy), providing a design template for implementing explainable autonomous systems. We present a worked example of how our architecture could be applied in the civil nuclear industry, where both workers and regulators need to trust the system’s decision-making capabilities.
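
    One plausible reading of such a design template is a planner that logs the rule behind every decision so a separate explainer component can answer "why?" afterwards. The sketch below assumes exactly that; the rules, sensor names, and class structure are invented for illustration and are not the paper's architecture.

```python
# Illustrative sketch only: a rule-based planner records which rule fired,
# and a separate explainer renders that record for a human. All names and
# rules are invented; the paper's architecture is more general.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str   # what the robot chose to do
    rule: str     # the rule that triggered the action
    inputs: dict  # the sensor readings the rule matched against

class Planner:
    # (rule name, condition over sensors, resulting action)
    RULES: list[tuple[str, Callable[[dict], bool], str]] = [
        ("radiation_level > 50", lambda s: s["radiation_level"] > 50, "retreat"),
        ("valve_pressure > 7", lambda s: s["valve_pressure"] > 7, "close_valve"),
    ]

    def decide(self, sensors: dict) -> Decision:
        for name, condition, action in self.RULES:
            if condition(sensors):
                return Decision(action, name, sensors)
        return Decision("continue_patrol", "default", sensors)

class Explainer:
    """Turns a logged Decision into a human-readable explanation,
    e.g. for a plant operator or a regulator auditing the system."""
    def explain(self, d: Decision) -> str:
        return f"Chose '{d.action}' because rule '{d.rule}' fired on {d.inputs}."

if __name__ == "__main__":
    decision = Planner().decide({"radiation_level": 62, "valve_pressure": 3})
    print(Explainer().explain(decision))
```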

    An ‘Ethical Black Box’, Learning From Disagreement in Shared Control Systems

    Shared control, where a human user cooperates with an algorithm to operate a device, has the potential to greatly expand access to powered mobility, but it also raises unique ethical challenges. A shared-control wheelchair may perform actions that do not reflect its user’s intent in order to protect their safety, causing frustration or distrust in the process. Unlike physical accidents, there is currently no framework for investigating or adjudicating these events, which limits our ability to improve the shared-control algorithm’s user experience. In this paper we suggest a system based on the idea of an ‘ethical black box’ that records the sensor context of sub-critical disagreements and collision risks, allowing human investigators to examine them in retrospect and assess whether the algorithm took control from the user without justification.
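
    A minimal sketch of such a recorder is shown below, assuming the wheelchair exposes the user's commanded velocity alongside the velocity the algorithm actually executed; the field names, threshold, and JSONL format are assumptions made for this example, not the paper's design.

```python
# Hypothetical recorder for sub-critical disagreements in shared control;
# the interface and thresholds are assumptions, not the paper's design.
import json
import time

class EthicalBlackBox:
    def __init__(self, path: str, disagreement_threshold: float = 0.3):
        self.path = path
        self.threshold = disagreement_threshold

    def log_step(self, user_cmd: float, executed_cmd: float, sensors: dict) -> None:
        """Append the full sensor context whenever the executed command
        diverges from the user's commanded velocity beyond the threshold,
        so investigators can later judge whether the takeover was justified."""
        if abs(user_cmd - executed_cmd) >= self.threshold:
            record = {
                "t": time.time(),
                "user_cmd": user_cmd,
                "executed_cmd": executed_cmd,
                "sensors": sensors,
            }
            with open(self.path, "a") as f:
                f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    ebb = EthicalBlackBox("disagreements.jsonl")
    # The algorithm slows the chair near an obstacle despite the user's command.
    ebb.log_step(user_cmd=1.0, executed_cmd=0.2,
                 sensors={"lidar_min_dist_m": 0.4, "joystick_forward": 1.0})
```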

    Ethical governance is essential to building trust in robotics and artificial intelligence systems

    This paper explores the question of ethical governance for robotics and artificial intelligence (AI) systems. We outline a roadmap, linking a number of elements including ethics, standards, regulation, responsible research and innovation, and public engagement, as a framework to guide ethical governance in robotics and AI. We argue that ethical governance is essential to building public trust in robotics and AI, and conclude by proposing five pillars of good ethical governance. This article is part of the theme issue 'Governing artificial intelligence: ethical, legal, and technical opportunities and challenges'.

    Confounding Complexity of Machine Action: A Hobbesian Account of Machine Responsibility

    In this article, the core concepts in Thomas Hobbes’s framework of representation and responsibility are applied to the question of machine responsibility, and in particular to the responsibility gap and the retribution gap. The method is philosophical analysis, applying theories from political theory to the ethics of technology. A veil of complexity creates the illusion that machine actions belong to a mysterious and unpredictable domain, and some argue that this unpredictability absolves designers of responsibility. Such a move would create a moral hazard related both to (a) strategically increasing unpredictability and to (b) taking more risk if responsible humans do not have to bear the costs of the risks they create. Hobbes’s theory allows for the clear and arguably fair attribution of action while allowing for necessary development and innovation. Innovation will be allowed as long as it is compatible with social order and provided the beneficial effects outweigh concerns about increased risk. Questions of responsibility are here considered to be political questions.