
    The meaning of the EPSRC principles of robotics

    In revisiting the Principles of Robotics (as we do in this special issue), it is important to carefully consider their full meaning – their history, the intentions behind them, and their actual societal impact to date. Here I address first the meaning of the document as a whole, then of its constituent parts. Further, I describe the nature of policy, and use the Principles as a case study to discuss how government and academia can interact in constructing policy. I defend the Principles and their main themes: that commercially manufactured robots should not be responsible parties under the law, and that users should not be deceived about robots' capacities or moral status. This perspective allows for the incorporation of robots immediately into UK society and law – the objective of the Principles. The Principles were not designed for every conceivable robot, but rather serve in part as design specifications for robots to be incorporated as legal products into British society.

    Robot Mindreading and the Problem of Trust

    This paper raises three questions regarding the attribution of beliefs, desires, and intentions to robots. The first is whether humans in fact engage in robot mindreading. If they do, this raises a second question: does robot mindreading foster trust towards robots? Both of these questions are empirical, and I show that the available evidence is insufficient to answer them. Now, if we assume that the answer to both questions is affirmative, a third and more important question arises: should developers and engineers promote robot mindreading in view of their stated goal of enhancing transparency? My worry here is that by attempting to make robots more mind-readable, they are abandoning the project of understanding automatic decision processes. Features that enhance mind-readability are prone to make the factors that determine automatic decisions even more opaque than they already are. And current strategies to eliminate opacity do not enhance mind-readability. The last part of the paper discusses different ways to analyze this apparent trade-off and suggests that a possible solution must adopt tolerable degrees of opacity that depend on pragmatic factors connected to the level of trust required for the intended uses of the robot.

    Reflections on the EPSRC Principles of Robotics from the New Far-Side of the Law

    The thought-provoking EPSRC Principles of Robotics stem largely from reflection on the extent to which robots can affect our lives. These comments highlight the fact that, while the principles may address the present technological challenges to a good extent, they appear less immediately suited to future technological and conceptual challenges. The first part of the paper is dedicated to the search for a definition of what a robot is. Such a definition should offer the basic conceptual platform on which a normative endeavour, aiming to regulate robots in society, should be based. Concluding that the Principles offer no clear yet flexible insight into such a (meta-)definition, one that would allow the parameters of informed technological imagination and of envisaged social transformation to be taken into account, the second half of the paper highlights a number of regulatory points of tension. Such tensions, it is argued, stem largely from the absence of an appropriate conceptual platform, and they negatively influence the extent to which the principles can be effective in guiding social, ethical, legal and scientific conduct.

    The EPSRC's policy of responsible innovation from a trading zones perspective

    Responsible innovation (RI) is gathering momentum as an academic and policy debate linking science and society. Advocates of RI in research policy argue that scientific research should be opened up at an early stage so that many actors and issues can steer innovation trajectories. If this is done, they suggest, new technologies will be more responsible in different ways, better aligned with what society wants, and mistakes of the past will be avoided. This paper analyses the dynamics of RI in policy and practice and makes recommendations for future development. More specifically, we draw on the theory of ‘trading zones’ developed by Peter Galison and use it to analyse two related processes: (i) the development and inclusion of RI in research policy at the UK’s Engineering and Physical Sciences Research Council (EPSRC); (ii) the implementation of RI in relation to the Stratospheric Particle Injection for Climate Engineering (SPICE) project. Our analysis reveals an RI trading zone composed of three quasi-autonomous traditions of the research domain – applied science, social science and research policy. It also shows how language and expertise are linking and coordinating these traditions in ways shaped by local conditions and the wider context of research. Building on such insights, we argue that a sensible goal for RI policy and practice at this stage is better local coordination of those involved, and we suggest how this might be achieved.

    Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics

    This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.

    Does your electronic butler owe you a duty of confidentiality?

    As artificial intelligence (AI) advances, the legal issues have not progressed in step, and the principles that do exist have become outdated in a relatively short time. Privacy is a major concern, and the myriad devices that store data for wide-ranging purposes risk breaches of privacy. Treating such a breach as a design defect or technical fault does not reflect the complexities of legal liability that apply to robotics. Where advanced levels of AI are involved, such as with the electronic butlers and carers used increasingly to assist vulnerable and ageing populations, the question of whether a robot owes a duty of confidentiality to the person for whom it is caring is becoming ever more pertinent. This question is considered in detail, and it is concluded that a duty may be owed in some cases. After a brief introduction (I.), the article picks up on aspects of legal agency and AI (II.) and examines robots as social beings (III.), their relationship to duty (IV.), and their capacity as "extended cognition" (V.). These aspects are then brought into context with issues of data protection (VI.) and the general relationship between civil law, ethics and robotics (VII.) before conclusions (VIII.) are drawn.

    Towards Verifiably Ethical Robot Behaviour

    Ensuring that autonomous systems work ethically is both complex and difficult. However, the idea of having an additional 'governor' that assesses the options the system has, and prunes them to select the most ethical choices, is well understood. Recent work has produced such a governor consisting of a 'consequence engine' that assesses the likely future outcomes of actions and then applies a Safety/Ethical logic to select actions. Although this is appealing, it is impossible to be certain that the most ethical options are actually taken. In this paper we extend and apply a well-known agent verification approach to our consequence engine, allowing us to verify the correctness of its ethical decision-making.
    Comment: Presented at the 1st International Workshop on AI and Ethics, Sunday 25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the workshop proceedings published by AAA
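    The governor architecture this abstract describes (generate candidate actions, predict their outcomes with a consequence engine, then apply a safety/ethical logic to prune them) can be sketched in a few lines. This is a minimal illustrative sketch under assumed semantics: the grid world, action set, and lexicographic ordering of concerns below are inventions for exposition, not the implementation verified in the paper.

    ```python
    # Sketch of an ethical 'governor': a consequence engine predicts the
    # outcome of each candidate action, and a safety/ethical logic ranks
    # the outcomes. World model and rules are illustrative assumptions.

    def consequence_engine(action, world):
        """Predict the future state if the robot takes `action` (a move
        delta) in a toy grid world containing a human and a hole."""
        robot, human, hole = world["robot"], world["human"], world["hole"]
        new_pos = (robot[0] + action[0], robot[1] + action[1])
        return {
            "robot_in_hole": new_pos == hole,
            # The human falls in unless the robot occupies (blocks) the hole.
            "human_in_hole": human == hole and new_pos != hole,
            "moved": new_pos != robot,
        }

    def safety_ethical_logic(outcome):
        """Rank an outcome; lower tuples are better. Harm to the human
        outranks harm to the robot, which outranks mere inaction."""
        return (outcome["human_in_hole"], outcome["robot_in_hole"],
                not outcome["moved"])

    def govern(actions, world):
        """Prune the candidate actions down to the most ethical choice."""
        return min(actions,
                   key=lambda a: safety_ethical_logic(consequence_engine(a, world)))
    ```

    With a human about to fall into a hole at (1, 1), the governor selects the move that blocks the hole even though that harms the robot, since human harm is ranked strictly worse.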

    ShapeStacks: Learning Vision-Based Physical Intuition for Generalised Object Stacking

    Physical intuition is pivotal for intelligent agents performing complex tasks. In this paper we investigate the passive acquisition of an intuitive understanding of physical principles, as well as the active utilisation of this intuition, in the context of generalised object stacking. To this end, we provide a simulation-based dataset featuring 20,000 stack configurations composed of a variety of elementary geometric primitives, richly annotated regarding semantics and structural stability. We train visual classifiers for binary stability prediction on the ShapeStacks data and scrutinise their learned physical intuition. Due to the richness of the training data, our approach also generalises favourably to real-world scenarios, achieving state-of-the-art stability prediction on a publicly available benchmark of block towers. We then leverage the physical intuition learned by our model to actively construct stable stacks, and we observe the emergence of an intuitive notion of stackability, an inherent object affordance, induced by the active stacking task. Our approach performs well even in challenging conditions where it considerably exceeds the stack height observed during training, or in cases where initially unstable structures must be stabilised via counterbalancing.
    Comment: revised version to appear at ECCV 201
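    The binary stability label that such a dataset annotates can be illustrated with a classical geometric check: a tower of blocks is statically stable when, for every block, the centre of mass of everything above it lies within that block's support area. The snippet below is a hand-written 2D heuristic assuming equal-mass, axis-aligned blocks; it stands in for the ground-truth notion only, not for the vision-based classifier the paper trains.

    ```python
    # Heuristic 2D stability check for a tower of axis-aligned blocks.
    # Each block is (x_center, width), listed bottom to top, equal masses.
    # Illustrative assumption: the paper instead learns stability from images.

    def is_stable(blocks):
        """Return True if, for every block, the combined centre of mass of
        all blocks above it rests within that block's horizontal extent."""
        for i in range(len(blocks) - 1):
            above = blocks[i + 1:]
            com = sum(x for x, _ in above) / len(above)  # equal masses
            x, width = blocks[i]
            if not (x - width / 2 <= com <= x + width / 2):
                return False
        return True
    ```

    A slightly offset tower passes the check, while a block overhanging past the supporting edge fails it, which mirrors the stable/unstable annotation a simulator would assign.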