31,870 research outputs found

    Excuse Me, Something Is Unfair! - Implications of Perceived Fairness of Service Robots

    Fairness is an important aspect for individuals and teams, and this also applies to human-robot interaction (HRI). In particular, when intelligent robots provide services to multiple humans, people may feel treated unfairly by them. Most work in this area deals with fair algorithms, task allocation, and decision support. This work takes a different, little-explored, human-centered perspective on fairness in human-robot teams. We present an experiment in which a service robot was responsible for distributing resources among competing team members, and investigate how different distribution strategies influence perceived fairness and the perception of the robot. Our study shows that humans may perceive technically efficient algorithms as unfair, especially when they personally experience negative consequences. This also had a negative impact on how the robot was perceived, which should be considered in the design of future robots.
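    The abstract does not specify the distribution strategies the robot used; the following sketch (all names, rates, and quantities are assumptions for illustration) contrasts a throughput-maximizing allocation with an equal-share one, the kind of gap that efficient algorithms open up and that team members can perceive as unfair:

```python
# Hypothetical sketch only: the paper's actual strategies are not given in
# the abstract. A throughput-optimal allocator starves slower members, while
# an equal split sacrifices total output for perceived fairness.

def efficient_allocation(rates, total):
    """Give all resources to the fastest member (maximizes team throughput)."""
    best = max(range(len(rates)), key=lambda i: rates[i])
    return [total if i == best else 0 for i in range(len(rates))]

def equal_allocation(rates, total):
    """Split resources evenly, ignoring individual efficiency."""
    return [total / len(rates)] * len(rates)

def throughput(rates, alloc):
    """Total work produced under a given allocation."""
    return sum(r * a for r, a in zip(rates, alloc))

rates = [3.0, 1.0, 1.0]  # member 0 works three times as fast as the others
print(efficient_allocation(rates, 9))  # [9, 0, 0] -> members 1 and 2 get nothing
print(equal_allocation(rates, 9))      # [3.0, 3.0, 3.0]
print(throughput(rates, efficient_allocation(rates, 9)))  # 27.0
print(throughput(rates, equal_allocation(rates, 9)))      # 15.0
```

    The efficient strategy nearly doubles throughput here, yet two of three members receive nothing, which matches the study's finding that technically efficient distributions can be experienced as unfair.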

    Problems and solutions in middle size robot soccer: a review

    This paper reviews current scientific and technological problems encountered in building and programming middle-size soccer robots, and examines solutions and solution trends presented by different teams. The topics covered are the perceptual systems of individual robots (in particular with respect to object location), communication between robot players, decision making with regard to game strategy and behaviour generation, and, finally, actuation. Together these give a broad view of the current state of the art of middle-size soccer robots.

    Trust-Preserved Human-Robot Shared Autonomy enabled by Bayesian Relational Event Modeling

    Shared autonomy is a flexible framework that empowers robots to operate across a spectrum of autonomy levels, allowing efficient task execution with minimal human oversight. However, humans may be intimidated by robots' autonomous decision-making due to perceived risks and a lack of trust. This paper proposes a trust-preserved shared autonomy strategy that allows robots to seamlessly adjust their autonomy level, striving to optimize team performance and enhance acceptance among human collaborators. By extending the relational event modeling framework with Bayesian learning techniques, it enables dynamic inference of human trust based solely on time-stamped relational events communicated within human-robot teams. Taking a longitudinal perspective on trust development and calibration in human-robot teams, the proposed strategy enables robots to actively establish, maintain, and repair human trust rather than merely passively adapt to it. We validate the effectiveness of the proposed approach through a user study on a human-robot collaborative search and rescue scenario. Objective and subjective evaluations demonstrate its merits in both task execution and user acceptability over a baseline approach that does not consider the preservation of trust.
    Comment: Submitted to RA-
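    The paper's Bayesian relational event model is considerably richer than the abstract can convey; as a rough intuition only (names, thresholds, and event encoding are assumptions, not the authors' method), trust inference from time-stamped events can be reduced to a Beta-Bernoulli update that a robot could gate its autonomy level on:

```python
# Illustrative reduction, NOT the paper's model: trust is tracked as a Beta
# posterior updated from time-ordered positive/negative relational events.
from dataclasses import dataclass

@dataclass
class TrustEstimate:
    alpha: float = 1.0  # pseudo-count of trust-building events
    beta: float = 1.0   # pseudo-count of trust-damaging events

    def update(self, positive: bool) -> None:
        if positive:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def mean(self) -> float:
        """Posterior mean of trust under a Beta(alpha, beta) belief."""
        return self.alpha / (self.alpha + self.beta)

# (timestamp, was_positive) events communicated within the team, assumed data
events = [(0.5, True), (1.2, True), (2.0, False), (3.1, True)]
trust = TrustEstimate()
for _, positive in sorted(events):  # process in temporal order
    trust.update(positive)

# the robot raises its autonomy level only while inferred trust stays high
autonomy = "high" if trust.mean > 0.6 else "assistive"
print(round(trust.mean, 2), autonomy)  # 0.67 high
```

    The longitudinal aspect shows up in the sequential update: a trust-damaging event lowers the posterior, which can demote the robot to a more assistive mode until subsequent positive events repair the estimate.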

    Secure Multi-Robot Adaptive Information Sampling with Continuous, Periodic and Opportunistic Connectivity

    Multi-robot teams are an increasingly popular approach to gathering information over large geographic areas, with applications in precision agriculture, natural-disaster aftermath surveying, and pollution tracking. In a coordinated multi-robot information sampling scenario, robots share their collected information with one another to form better predictions. These teams are often assembled from untrusted devices, making verification of the integrity of the collected samples an important challenge. Furthermore, such robots often operate under continuous, periodic, or opportunistic connectivity and are limited in their energy budget and computational power. In this thesis, we study how to secure the information shared in a multi-robot network against integrity attacks, and what integrating such techniques costs. We propose a blockchain-based information sharing protocol that allows robots to reject fake data injected by a malicious entity. However, optimal information sampling is a resource-intensive technique, as are popular blockchain-based consensus protocols, so we also study the protocol's impact on the execution time of the sampling algorithm, which in turn affects the energy spent. We propose algorithms that build on blockchain technology to address the data integrity problem while taking into account the robots' limited resources and communication. We evaluate the proposed algorithms in terms of the trade-offs between data integrity, model accuracy, and time consumption under continuous, periodic, and opportunistic connectivity.
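    The thesis's protocol and consensus mechanism are not detailed in the abstract; the following minimal hash chain (structure, field names, and sample data are assumptions) shows only the core integrity idea a blockchain-based scheme builds on, namely that tampering with any shared sample invalidates every later link:

```python
# Minimal sketch under assumptions: a hash chain over shared samples. A real
# blockchain protocol adds consensus, signatures, and replication on top.
import hashlib
import json

def block_hash(prev_hash: str, sample: dict) -> str:
    """Hash the previous block's hash together with a canonical sample encoding."""
    payload = prev_hash + json.dumps(sample, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, sample: dict) -> None:
    """Link a new sample onto the chain."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"sample": sample, "hash": block_hash(prev, sample)})

def verify(chain: list) -> bool:
    """Recompute every link; any modified sample breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["hash"] != block_hash(prev, block["sample"]):
            return False
        prev = block["hash"]
    return True

chain = []
append(chain, {"robot": "r1", "x": 3.2})  # assumed sample format
append(chain, {"robot": "r2", "x": 2.9})
print(verify(chain))           # True
chain[0]["sample"]["x"] = 99.0 # malicious data injection
print(verify(chain))           # False
```

    The per-sample hashing is also where the thesis's cost trade-off appears: every verification pass spends computation and therefore energy, which matters under the robots' tight resource budgets.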

    Effects of spatial ability on multi-robot control tasks

    Working with large teams of robots is a complex and demanding task for any operator, and individual differences in spatial ability could significantly affect performance. In the present study, we examine data from two earlier experiments to investigate the effects of perspective-taking ability on performance in an urban search and rescue (USAR) task using a realistic simulation and alternate displays. We evaluated participants' spatial ability using a standard measure of spatial orientation and examined differences in accuracy and speed in locating victims, as well as in perceived workload. Our findings show that operators with higher spatial ability experienced less workload and marked victims more precisely. An interaction was also found: with the experimental image-queue display, participants with low spatial ability marked victims significantly more accurately than with the traditional streaming-video display. Copyright 2011 by Human Factors and Ergonomics Society, Inc. All rights reserved.