
    Regulating Highly Automated Robot Ecologies: Insights from Three User Studies

    Highly automated robot ecologies (HARE), or societies of independent autonomous robots or agents, are rapidly becoming an important part of much of the world's critical infrastructure. As with human societies, regulation, wherein a governing body designs rules and processes for the society, plays an important role in ensuring that HARE meet societal objectives. However, to date, a careful study of interactions between a regulator and HARE is lacking. In this paper, we report on three user studies which give insights into how to design systems that allow people, acting as the regulatory authority, to effectively interact with HARE. As in the study of political systems in which governments regulate human societies, our studies analyze how interactions between HARE and regulators are impacted by regulatory power and individual (robot or agent) autonomy. Our results show that regulator power, decision support, and adaptive autonomy can each diminish the social welfare of HARE, and hint at how these seemingly desirable mechanisms can be designed so that they become part of successful HARE.
    Comment: 10 pages, 7 figures, to appear in the 5th International Conference on Human Agent Interaction (HAI-2017), Bielefeld, Germany

    Walking Humanoids for Robotics Research

    We present three humanoid robots intended as platforms for research in robotics and in cognitive development in robotic systems. The 'priscilla' robot is a 180 cm full-scale humanoid; the mid-size prototype, called 'elvis', is about 70 cm tall. The smallest humanoid is the 'elvina' type, about 28 cm tall. Two instances of 'elvina' have been built to enable experiments with cooperating humanoids. The underlying ideas and conceptual principles, such as anthropomorphism, embodiment, and mechanisms for learning and adaptivity, are introduced as well.

    Robot Mindreading and the Problem of Trust

    This paper raises three questions regarding the attribution of beliefs, desires, and intentions to robots. The first is whether humans in fact engage in robot mindreading. If they do, a second question arises: does robot mindreading foster trust towards robots? Both of these questions are empirical, and I show that the available evidence is insufficient to answer them. If we assume that the answer to both questions is affirmative, a third and more important question arises: should developers and engineers promote robot mindreading in view of their stated goal of enhancing transparency? My worry here is that by attempting to make robots more mind-readable, they are abandoning the project of understanding automatic decision processes. Features that enhance mind-readability are prone to make the factors that determine automatic decisions even more opaque than they already are. And current strategies to eliminate opacity do not enhance mind-readability. The last part of the paper discusses different ways to analyze this apparent trade-off and suggests that a possible solution must adopt tolerable degrees of opacity that depend on pragmatic factors connected to the level of trust required for the intended uses of the robot.