
    Would You Obey an Aggressive Robot: A Human-Robot Interaction Field Study

    © 2018 IEEE. Social robots have the potential to be of tremendous utility in healthcare, search and rescue, surveillance, transport, and military applications. In many of these applications, social robots need to advise and direct humans to follow important instructions. In this paper, we present the results of a human-robot interaction field experiment conducted using a PR2 robot to explore key factors involved in human obedience to social robots. The paper focuses on how the degree of human obedience to a robot's instructions relates to the perceived aggression and authority of the robot's behavior. We implemented several social cues to exhibit and convey both authority and aggressiveness in the robot's behavior. In addition, we analyzed the impact of other factors, such as the perceived anthropomorphism, safety, intelligence, and responsibility of the robot's behavior, on participants' compliance with the robot's instructions. The results suggest that the degree of aggression participants perceived in the robot's behavior did not have a significant impact on their decision to follow its instructions. We provide possible explanations for our findings and identify new research questions that will help clarify the role of robot authority in human-robot interaction and can help guide the design of robots that are required to provide advice and instructions.

    Real Virtuality: A Code of Ethical Conduct. Recommendations for Good Scientific Practice and the Consumers of VR-Technology

    The goal of this article is to present a first list of ethical concerns that may arise from research and personal use of virtual reality (VR) and related technology, and to offer concrete recommendations for minimizing those risks. Many of the recommendations call for focused research initiatives. In the first part of the article, we discuss the relevant evidence from psychology that motivates our concerns. In Section “Plasticity in the Human Mind,” we cover some of the main results suggesting that one’s environment can influence one’s psychological states, as well as recent work on inducing illusions of embodiment. Then, in Section “Illusions of Embodiment and Their Lasting Effect,” we discuss recent evidence indicating that immersion in VR can have psychological effects that last after leaving the virtual environment. In the second part of the article, we turn to the risks and recommendations. We begin, in Section “The Research Ethics of VR,” with the research ethics of VR, covering six main topics: the limits of experimental environments, informed consent, clinical risks, dual-use, online research, and a general point about the limitations of a code of conduct for research. Then, in Section “Risks for Individuals and Society,” we turn to the risks of VR for the general public, covering four main topics: long-term immersion, neglect of the social and physical environment, risky content, and privacy. We offer concrete recommendations for each of these 10 topics, summarized in Table 1.

    Robot friendship: Can a robot be a friend?


    Attribution of Autonomy and its Role in Robotic Language Acquisition

    © The Author(s) 2021. Licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). The false attribution of autonomy and related concepts to artificial agents that lack the attributed levels of the respective characteristic is problematic in many ways. In this article we contrast this view with a positive viewpoint that emphasizes the potential role of such false attributions in the context of robotic language acquisition. By adding emotional displays and congruent body behaviors to a child-like humanoid robot’s behavioral repertoire, we were able to bring naïve human tutors to engage in so-called intent interpretations. In developmental psychology, intent interpretations can be hypothesized to play a central role in the acquisition of emotion, volition, and similar autonomy-related words. The aforementioned experiments originally targeted the acquisition of linguistic negation. However, participants also produced other affect- and motivation-related words with high frequency and, as a consequence, these entered the robot’s active vocabulary. We analyze participants’ non-negative emotional and volitional speech and contrast it with participants’ speech in a non-affective baseline scenario. Implications of these findings for robotic language acquisition in particular, and for artificial intelligence and robotics more generally, are also discussed.

    Virtual reality for safe testing and development in collaborative robotics: challenges and perspectives

    Collaborative robots (cobots) could help humans with tasks that are mundane, dangerous, or where direct human contact carries risk. Yet collaboration between humans and robots is severely limited by concerns about the safety and comfort of human operators. In this paper, we outline the use of extended reality (XR) as a way to test and develop collaboration with robots. We focus on virtual reality (VR) for simulating collaboration scenarios and on the use of cobot digital twins. This is specifically useful in situations that are difficult or even impossible to test safely in real life, such as dangerous scenarios. We describe using XR simulations as a means to evaluate collaboration with robots without putting humans at risk. We show how an XR setting enables combining human behavioral data, subjective self-reports, and biosignals that indicate human comfort, stress, and cognitive load during collaboration. Several works demonstrate that XR can be used to train human operators and provide them with augmented reality (AR) interfaces to enhance their performance with robots. We also provide a first attempt at what could become the basis for a human–robot collaboration testing framework, specifically for designing and testing factors affecting human–robot collaboration. The use of XR has the potential to change the way we design and test cobots, and train cobot operators, in a range of applications: from industry, through healthcare, to space operations.

    Robot Rights? Let's Talk about Human Welfare Instead

    The 'robot rights' debate, and its related question of 'robot responsibility', invokes some of the most polarized positions in AI ethics. While some advocate for granting robots rights on a par with human beings, others, in stark opposition, argue that robots are not deserving of rights but are objects that should be our slaves. Grounded in post-Cartesian philosophical foundations, we argue not just for denying robots 'rights', but for denying that robots, as artifacts emerging out of and mediating human being, are the kinds of things that could be granted rights in the first place. Once we see robots as mediators of human being, we can understand how the 'robot rights' debate is focused on first-world problems, at the expense of urgent ethical concerns such as machine bias, machine-elicited human labour exploitation, and erosion of privacy, all of which impact society's least privileged individuals. We conclude that, if human being is our starting point and human welfare is the primary concern, the negative impacts emerging from machinic systems, as well as the lack of responsibility taken by the people designing, selling, and deploying such machines, remain the most pressing ethical discussion in AI. Accepted to the AIES 2020 conference in New York, February 2020; the final version of this paper will appear in Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society.

    That Does Not Compute: Unpacking the Fembot in American Science Fiction.

    M.A. Thesis, University of Hawaiʻi at Mānoa, 2017.

    E.A.I. Anxiety: Technopanic and Post-Human Potential

    Robots have been a part of the imagination of Western culture for centuries. The possibility of automation and artificial life has inspired the curiosity of thinkers like Leonardo da Vinci, who once designed a mechanical knight. It wasn't until the 19th century that automated machinery became a reality. The confrontation between human and automation has inspired a fear, referred to as technopanic, that has been exacerbated in tandem with the evolution of technology. This thesis seeks to uncover the historical precedents for these fears. I explore three modes of knowledge (philosophy, economics, and film theory) to examine the agendas behind the messages on the topic of artificial life, specifically robots. I then advocate for an alternative philosophy called post-humanism. I argue that what is needed to alleviate the fears and anxieties of Western culture is a shift in how humanity views itself and its relation to the natural world. By structuring my thesis in this way, I first identify the roots of Western humanity's anthropocentric ontology, second explore the economic implications of automation, third analyze the cultural anticipations of artificial life in Western media, and finally offer an alternative attitude and ethic as a way out of the pre-established judgments that do little to protect Western culture from E.A.I.

    Autonomous Reboot: the challenges of artificial moral agency and the ends of Machine Ethics

    Ryan Tonkens (2009) has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian-inspired recipe, both "rational" and "free," while also satisfying the perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical nihilism" due to the reduction of morality to mechanical causation, and for redoubled efforts toward a comprehensive vision of human ethics to guide machine ethicists on the issue of moral agency. Options thus present themselves: reinterpret traditional ethics in a way that affords a comprehensive account of moral agency inclusive of both artificial and natural agents, "muddle through" regardless, or give up on the possibility. This paper pursues the first option, meets Tonkens' "challenge," and addresses Wallach's concerns through Beavers' proposed means, by "landscaping" traditional moral theory to arrive at the required comprehensive and inclusive account, one that at once draws into question the stated goals of Machine Ethics itself.
