5,741 research outputs found

    Theory of Robot Communication: II. Befriending a Robot over Time

    Building on theories of Computer-Mediated Communication (CMC), Human-Robot Interaction, and Media Psychology (i.e., the Theory of Affective Bonding), this paper proposes an explanation of how, over time, people experience the mediated or simulated aspects of interaction with a social robot. In two simultaneously running loops, a more reflective process is balanced against a more affective process. If human interference is detected behind the machine, Robot-Mediated Communication commences, which largely follows CMC assumptions; if human interference remains undetected, Human-Robot Communication comes into play, taking the robot for an autonomous social actor. The more emotionally aroused a robot user is, the more likely they are to develop an affective relationship with what actually is a machine. The main contribution of this paper is an integration of Computer-Mediated Communication, Human-Robot Communication, and Media Psychology, outlining a full theory of robot communication connected to friendship formation that accounts for communicative features, modes of processing, and psychophysiology.
    Comment: Hoorn, J. F. (2018). Theory of robot communication: II. Befriending a robot over time. arXiv:cs, 2502572(v1), 1-2
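
    The abstract's dual-route claim can be read as a simple branch: treat the robot as a medium when a human sender is detected behind it, otherwise as an autonomous social actor, with emotional arousal modulating bonding. The sketch below is purely illustrative; the names (InteractionState, communication_mode) and the numeric arousal scale are assumptions, not the authors' formalization.

```python
from dataclasses import dataclass

@dataclass
class InteractionState:
    human_interference_detected: bool  # does the user perceive a human "behind" the robot?
    emotional_arousal: float           # assumed scale: 0.0 (calm) .. 1.0 (highly aroused)

def communication_mode(state: InteractionState) -> str:
    """Return the communication route suggested by the abstract's branching logic."""
    if state.human_interference_detected:
        # CMC-like processing: the robot is treated as a medium for a human sender.
        return "Robot-Mediated Communication"
    # Otherwise the robot is taken for an autonomous social actor.
    return "Human-Robot Communication"

def affective_bonding_likelihood(state: InteractionState) -> float:
    """Toy monotone mapping: higher arousal -> higher likelihood of an affective bond."""
    return max(0.0, min(1.0, state.emotional_arousal))

if __name__ == "__main__":
    s = InteractionState(human_interference_detected=False, emotional_arousal=0.8)
    print(communication_mode(s), affective_bonding_likelihood(s))
```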

    RoboChain: A Secure Data-Sharing Framework for Human-Robot Interaction

    Robots have the potential to revolutionize the way we interact with the world around us. One of their greatest potentials lies in the domain of mobile health, where they can be used to facilitate clinical interventions. However, to accomplish this, robots need access to our private data in order to learn from these data and improve their interaction capabilities. Furthermore, to enhance this learning process, sharing knowledge among multiple robot units is the natural next step. However, to date, there is no well-established framework that allows for such data sharing while preserving the privacy of the users (e.g., hospital patients). To this end, we introduce RoboChain - the first learning framework for secure, decentralized and computationally efficient data and model sharing among multiple robot units installed at multiple sites (e.g., hospitals). RoboChain builds upon and combines the latest advances in open data access, blockchain technologies, and machine learning. We illustrate this framework using the example of a clinical intervention conducted in a private network of hospitals. Specifically, we lay out the system architecture that allows multiple robot units, conducting interventions at different hospitals, to perform efficient learning without compromising data privacy.
    Comment: 7 pages, 6 figures
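
    The abstract does not specify RoboChain's internals, but the general idea it describes (sharing model updates rather than raw patient data, with a tamper-evident, decentralized record of what was shared) can be sketched roughly as below. All names (UpdateRecord, SharingLedger) and fields are hypothetical illustrations under that assumption, not the RoboChain API.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field
from typing import List

@dataclass
class UpdateRecord:
    """A model update announced by one robot unit, without exposing raw patient data."""
    unit_id: str
    model_hash: str        # digest of the (aggregated or encrypted) model parameters
    prev_hash: str         # hash of the previous record, forming a tamper-evident chain
    timestamp: float = field(default_factory=time.time)

    def digest(self) -> str:
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

class SharingLedger:
    """Append-only ledger that hospitals/robot units could replicate among themselves."""
    def __init__(self) -> None:
        self.records: List[UpdateRecord] = []

    def append(self, unit_id: str, model_bytes: bytes) -> UpdateRecord:
        prev = self.records[-1].digest() if self.records else "genesis"
        rec = UpdateRecord(unit_id=unit_id,
                           model_hash=hashlib.sha256(model_bytes).hexdigest(),
                           prev_hash=prev)
        self.records.append(rec)
        return rec

    def verify(self) -> bool:
        """Check that no record has been altered or re-ordered."""
        prev = "genesis"
        for rec in self.records:
            if rec.prev_hash != prev:
                return False
            prev = rec.digest()
        return True

if __name__ == "__main__":
    ledger = SharingLedger()
    ledger.append("hospital_A_robot_1", b"weights-after-local-training")
    ledger.append("hospital_B_robot_3", b"weights-after-local-training-v2")
    print(ledger.verify())  # True while the chain is intact
```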

    Averting Robot Eyes

    Home robots will cause privacy harms. At the same time, they can provide beneficial services—as long as consumers trust them. This Essay evaluates potential technological solutions that could help home robots keep their promises, avert their eyes, and otherwise mitigate privacy harms. Our goals are to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms. We posit that home robots will raise privacy problems of three basic types: (1) data privacy problems; (2) boundary management problems; and (3) social/relational problems. Technological design can ward off, if not fully prevent, a number of these harms. We propose five principles for home robots and privacy design: data minimization, purpose specifications, use limitations, honest anthropomorphism, and dynamic feedback and participation. We review current research into privacy-sensitive robotics, evaluating which technological solutions are feasible and where the harder problems lie. We close by contemplating legal frameworks that might encourage the implementation of such design, while also recognizing the potential costs of regulation at these early stages of the technology.
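
    As a rough illustration of the first three proposed principles (data minimization, purpose specification, use limitation), one can imagine a policy table that grants a home robot only the sensor streams its declared purpose needs. The policy contents and names below (ALLOWED_STREAMS, SensorRequest) are invented for illustration and are not drawn from the essay.

```python
from dataclasses import dataclass
from typing import Dict, Set

# Hypothetical policy table: which sensor streams each declared purpose may use.
ALLOWED_STREAMS: Dict[str, Set[str]] = {
    "navigation": {"lidar", "bumper"},
    "video_call": {"camera", "microphone"},
}

@dataclass
class SensorRequest:
    purpose: str        # purpose specification: the robot must declare why it wants data
    streams: Set[str]   # the streams it is asking for

def minimized_streams(request: SensorRequest) -> Set[str]:
    """Grant only the streams the declared purpose actually needs (data minimization);
    anything outside the declared purpose is denied by default (use limitation)."""
    allowed = ALLOWED_STREAMS.get(request.purpose, set())
    return request.streams & allowed

if __name__ == "__main__":
    req = SensorRequest(purpose="navigation", streams={"lidar", "camera", "microphone"})
    print(minimized_streams(req))  # {'lidar'} -- the camera and microphone stay averted
```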

    Implicit Attitudes Towards Robots Predict Explicit Attitudes, Semantic Distance Between Robots and Humans, Anthropomorphism, and Prosocial Behavior: From Attitudes to Human–Robot Interaction

    How people behave towards others relies, to a large extent, on the prior attitudes that they hold towards them. In Human–Robot Interaction, individual attitudes towards robots have mostly been investigated via explicit reports, which can be biased by various conscious processes. In the present study, we introduce an implicit measure of attitudes towards robots. The task uses semantic priming to evaluate whether participants consider humans and robots to be similar or different. Our results demonstrate a link between the implicit semantic distance between humans and robots and explicit attitudes towards robots, explicit semantic distance between robots and humans, perceived robot anthropomorphism, and pro/anti-social behavior towards a robot in a real-life, interactive scenario. Specifically, attenuated semantic distance between humans and robots in the implicit task predicted more positive explicit attitudes towards robots, attenuated explicit semantic distance between humans and robots, attribution of anthropomorphic characteristics, and, consequently, future prosocial behavior towards a robot. Crucially, the implicit measure of attitudes towards robots (implicit semantic distance) was a better predictor of future behavior towards the robot than the explicit measure (self-reported attitudes). Cumulatively, the current results highlight a new approach to measuring implicit attitudes towards robots and offer a starting point for further investigations of the implicit processing of robots.
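
    The abstract does not spell out how the priming scores are computed. One conventional way to derive an implicit semantic-distance proxy from such a task is a reaction-time difference between cross-category and within-category prime-target pairs; the sketch below assumes that design, and the trial encoding and function name are illustrative only.

```python
from statistics import mean
from typing import List, Tuple

# Each trial: (prime_category, target_category, reaction_time_ms)
Trial = Tuple[str, str, float]

def priming_effect(trials: List[Trial]) -> float:
    """Implicit 'semantic distance' proxy: mean RT on human-prime/robot-target trials
    minus mean RT on human-prime/human-target trials. Smaller (more attenuated) values
    suggest that robots and humans are represented as more similar."""
    cross = [rt for prime, target, rt in trials if prime == "human" and target == "robot"]
    within = [rt for prime, target, rt in trials if prime == "human" and target == "human"]
    return mean(cross) - mean(within)

if __name__ == "__main__":
    demo = [("human", "human", 540.0), ("human", "robot", 610.0),
            ("human", "human", 555.0), ("human", "robot", 590.0)]
    print(priming_effect(demo))  # positive value -> larger implicit distance
```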

    Implications from Responsible Human-Robot Interaction with Anthropomorphic Service Robots for Design Science

    Accelerated by the COVID-19 pandemic, anthropomorphic service robots are continuously penetrating various domains of our daily lives. With this development, the need for an interdisciplinary approach to responsibly designing human-robot interaction (HRI), with particular attention to human dignity, privacy, compliance, and transparency, is growing. This paper contributes to design science by developing a new artifact, i.e., an interdisciplinary framework for designing responsible HRI with anthropomorphic service robots, which covers the three design science research cycles. Furthermore, we propose a multi-method approach for applying this interdisciplinary framework. Thereby, our findings offer implications for designing HRI in a responsible manner.