46 research outputs found

    Excuse Me, Something Is Unfair! - Implications of Perceived Fairness of Service Robots

    Fairness is an important aspect for individuals and teams. This also applies to human-robot interaction (HRI). Especially when intelligent robots provide services to multiple humans, those humans may feel treated unfairly by the robots. Most work in this area deals with fair algorithms, task allocation, and decision support. This work focuses on a different, little-explored perspective: fairness in HRI viewed from a human-centered standpoint in human-robot teams. We present an experiment in which a service robot was responsible for distributing resources among competing team members. We investigated how different distribution strategies influence perceived fairness and the perception of the robot. Our study shows that humans may perceive technically efficient algorithms as unfair, especially if they personally experience negative consequences. This also had a negative impact on the human perception of the robot, which should be considered in the design of future robots.
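The tension the abstract describes, between a technically efficient distribution and one people perceive as fair, can be illustrated with a toy sketch. The two strategies and the scoring below are hypothetical illustrations, not the study's actual experimental conditions.

```python
# Toy contrast of two resource-distribution strategies (hypothetical
# illustration; not the study's actual conditions).

def efficient_allocation(demands, supply):
    """Serve the smallest demands first so the most requests fit
    within the supply; members with large demands may get nothing."""
    served, remaining = [], supply
    for name, amount in sorted(demands.items(), key=lambda kv: kv[1]):
        if amount <= remaining:
            served.append(name)
            remaining -= amount
    return served

def round_robin_allocation(demands, supply, unit=1):
    """Hand out one unit at a time in a fixed cycle, so every member
    receives something before anyone receives more."""
    received = {name: 0 for name in demands}
    remaining = supply
    while remaining >= unit:
        progressed = False
        for name, amount in demands.items():
            if received[name] < amount and remaining >= unit:
                received[name] += unit
                remaining -= unit
                progressed = True
        if not progressed:
            break
    return received
```

With demands `{"a": 1, "b": 2, "c": 5}` and a supply of 4, the efficient strategy serves only `a` and `b` in full, leaving `c` with nothing, while round-robin gives `c` a partial share; the former maximizes completed requests, the latter trades efficiency for an outcome members may perceive as fairer.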

    What-is and How-to for Fairness in Machine Learning: A Survey, Reflection, and Perspective

    Algorithmic fairness has attracted increasing attention in the machine learning community. Various definitions have been proposed in the literature, but the differences and connections among them are not clearly addressed. In this paper, we review and reflect on the various fairness notions previously proposed in the machine learning literature and attempt to draw connections to arguments in moral and political philosophy, especially theories of justice. We also consider fairness inquiries from a dynamic perspective and examine the long-term impact induced by current predictions and decisions. In light of the differences among the characterized fairness notions, we present a flowchart that encompasses the implicit assumptions and expected outcomes of different types of fairness inquiries on the data generating process, on the predicted outcome, and on the induced impact, respectively. This paper demonstrates the importance of matching the mission (which kind of fairness one would like to enforce) and the means (which spectrum of fairness analysis is of interest, and what the appropriate analyzing scheme is) to fulfill the intended purpose.
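Two of the group-fairness notions such surveys typically contrast, demographic parity and equalized odds, can be computed in a few lines. This is a generic sketch of the standard definitions, not the paper's specific formalization:

```python
# Minimal sketch of two standard group-fairness metrics on binary
# predictions with a binary group attribute (generic definitions).

def demographic_parity_gap(preds, groups):
    """|P(yhat=1 | g=0) - P(yhat=1 | g=1)|: difference in positive
    prediction rates between the two groups."""
    rate = {}
    for g in (0, 1):
        group_preds = [p for p, gg in zip(preds, groups) if gg == g]
        rate[g] = sum(group_preds) / len(group_preds)
    return abs(rate[0] - rate[1])

def equalized_odds_gap(preds, labels, groups):
    """Worst-case between-group difference in the true positive rate
    (conditioning on y=1) and the false positive rate (y=0)."""
    def cond_rate_gap(y):
        r = {}
        for g in (0, 1):
            p = [pr for pr, l, gg in zip(preds, labels, groups)
                 if gg == g and l == y]
            r[g] = sum(p) / len(p)
        return abs(r[0] - r[1])
    return max(cond_rate_gap(1), cond_rate_gap(0))
```

A classifier can satisfy one notion and badly violate the other, which is exactly why the paper argues that the choice of fairness notion must be matched to the intended purpose.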

    Exploring Diversity and Fairness in Machine Learning

    With algorithms, artificial intelligence, and machine learning becoming ubiquitous in our society, we need to start thinking about the implications and ethical concerns of new machine learning models. Two types of biases that impact machine learning models are social injustice bias (bias created by society) and measurement bias (bias created by unbalanced sampling). Biases against groups of individuals found in machine learning models can be mitigated through the use of diversity and fairness constraints. This dissertation introduces models that help humans make decisions by enforcing diversity and fairness constraints. This work starts with a call to action. Bias is rife in hiring, and since algorithms are being used by multiple companies to filter applicants, we need to pay special attention to this application. Inspired by this hiring application, I introduce new multi-armed bandit frameworks to help assign human resources in the hiring process while enforcing diversity through a submodular utility function. These frameworks increase diversity while using fewer resources compared to the original admission decisions of the Computer Science graduate program at the University of Maryland. Moving beyond hiring, I present a contextual multi-armed bandit algorithm that enforces group fairness by learning a societal bias term and correcting for it. This algorithm is tested on two real-world datasets and shows marked improvement over other in-use algorithms. Additionally, I examine fairness in traditional machine learning domain adaptation. I provide the first theoretical analysis of this setting and test the resulting model on two real-world datasets. Finally, I explore extensions to my core work, delving into suicidality, comprehension of fairness definitions, and student evaluations.
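The core mechanism mentioned here, enforcing diversity through a submodular utility, can be sketched with the classic greedy algorithm on a coverage objective. The candidate names and skill tags are hypothetical, and this is a generic coverage utility, not the dissertation's actual function:

```python
def greedy_diverse_selection(candidates, k):
    """Greedily pick up to k candidates maximizing coverage of tags.

    Coverage (size of the union of tags) is monotone submodular, so
    greedy selection achieves the classic (1 - 1/e) approximation.
    candidates: dict name -> set of tags (hypothetical attributes).
    """
    chosen, covered = [], set()
    pool = dict(candidates)
    for _ in range(min(k, len(pool))):
        # Pick the candidate with the largest marginal coverage gain.
        name = max(pool, key=lambda n: len(pool[n] - covered))
        if chosen and not (pool[name] - covered):
            break  # no remaining candidate adds new coverage
        covered |= pool.pop(name)
        chosen.append(name)
    return chosen, covered
```

Note how the second pick is driven by marginal gain rather than raw quality: a candidate whose tags are already covered contributes nothing, which is what pushes the selection toward diversity.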

    Route Planning and Operator Allocation in Robot Fleets

    In this thesis, we address various challenges related to optimal planning and task allocation in a robot fleet supervised by remote human operators. The overarching goal is to enhance the performance and efficiency of robot fleets by planning routes and scheduling operator assistance while accounting for limited human availability. The thesis consists of three main problems, each of which focuses on a specific aspect of the system. The first problem pertains to optimal planning for a robot in a collaborative human-robot team, where the human supervisor is intermittently available to assist the robot in completing its tasks faster. Specifically, we address the challenge of computing the fastest route between two configurations in an environment with time constraints on how long the robot can wait for assistance at intermediate configurations. We consider the application of robot navigation in a city environment, where different routes can have distinct speed limits and different time constraints on how long a robot is allowed to wait. Our proposed approach utilizes the concepts of budget and critical departure times, enabling optimal solutions and enhanced scalability compared to existing methods. Extensive comparisons with baseline algorithms on a city road network demonstrate its effectiveness and ability to achieve high-quality solutions. Furthermore, we extend the problem to the multi-robot case, where the challenge lies in prioritizing robots when multiple service requests arrive simultaneously. To address this challenge, we present a greedy algorithm that efficiently prioritizes service requests in a batch and performs remarkably well compared to the optimal solution. The next problem focuses on allocating human operators to robots in a fleet, considering each robot's specified route and the potential for failures and getting stuck.
Conventional techniques used to solve such problems face scalability issues due to the exponential growth of state and action spaces with the number of robots and operators. To overcome these, we derive conditions for a technical requirement called indexability, thereby enabling the use of the Whittle index heuristic. Our key insight is to leverage the structure of the value function of individual robots, resulting in conditions that can be easily verified separately for each state of each robot. We apply these conditions to two types of transitions commonly seen in supervised robot fleets. Through numerical simulations, we demonstrate the efficacy of the Whittle index policy as a near-optimal scalable approach that outperforms existing scalable methods. Finally, we investigate the impact of interruptions on human supervisors overseeing a fleet of robots. Human supervisors in such systems are primarily responsible for monitoring robots, but can also be assigned secondary tasks. These tasks can act as interruptions and can be categorized as either intrinsic, i.e., directly related to the monitoring task, or extrinsic, i.e., unrelated to it. Through a user study involving 39 participants, the findings reveal that task performance remains relatively unaffected by interruptions and is primarily dependent on the number of robots being monitored. However, extrinsic interruptions led to a significant increase in perceived workload, creating challenges in switching between tasks. These results highlight the importance of managing user workload by limiting extrinsic interruptions in such supervision systems. Overall, this thesis contributes to the field of robot planning and operator allocation in collaborative human-robot teams. By incorporating human assistance, addressing scalability challenges, and understanding the impact of interruptions, we aim to enhance the performance and usability of robot fleets.
Our work introduces optimal planning methods and efficient allocation strategies, enabling the seamless operation of robot fleets in real-world scenarios. Additionally, we provide valuable insights into user workload, shedding light on the interactions between humans and robots in such systems. We hope that our research promotes the widespread adoption of robot fleets and facilitates their integration into various domains, ultimately driving advancements in the field.
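The allocation step built on the Whittle index heuristic reduces, at decision time, to ranking robots by a per-robot index and giving the limited operators to the highest-ranked robots. The sketch below shows only that final ranking step; the index values are placeholder inputs, since actually computing them requires the model-specific indexability conditions the thesis establishes:

```python
def whittle_assign(indices, num_operators):
    """Assign operators to the robots with the highest Whittle indices.

    indices: dict robot_id -> precomputed index for that robot's
    current state (placeholder numbers here; deriving them is the
    hard, model-specific part of the Whittle index approach).
    Returns the set of robot ids that receive an operator this step.
    """
    ranked = sorted(indices, key=indices.get, reverse=True)
    return set(ranked[:num_operators])
```

Because each index depends only on one robot's own state, the per-step cost grows roughly as the sort over robots rather than with the joint state space, which is what makes the policy scale where exact methods do not.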

    A Survey of Multi-Agent Human-Robot Interaction Systems

    This article presents a survey of literature in the area of Human-Robot Interaction (HRI), specifically on systems containing more than two agents (i.e., having multiple humans and/or multiple robots). We identify three core aspects of "multi-agent" HRI systems that are useful for understanding how these systems differ from dyadic systems and from one another. These are the team structure, the interaction style among agents, and the system's computational characteristics. Under these core aspects, we present five attributes of HRI systems, namely team size, team composition, interaction model, communication modalities, and robot control. These attributes are used to characterize and distinguish one system from another. We populate the resulting categories with examples from recent literature along with a brief discussion of their applications, and analyze how these attributes differ from the case of dyadic human-robot systems. We summarize key observations from the current literature, and identify challenges and promising areas for future research in this domain. In order to realize the vision of robots being part of society and interacting seamlessly with humans, there is a need to expand research on multi-human, multi-robot systems. Not only do these systems require coordination among several agents, they also involve multi-agent and indirect interactions that are absent from dyadic HRI systems. Adding multiple agents to HRI systems requires advanced interaction schemes, behavior understanding, and control methods to allow natural interactions among humans and robots. In addition, research on human behavioral understanding in mixed human-robot teams also requires more attention. This will help formulate and implement effective robot control policies in HRI systems with large numbers of heterogeneous robots and humans, a team composition reflecting many real-world scenarios.

    Gaining Insight into Determinants of Physical Activity using Bayesian Network Learning

    BNAIC/BeneLearn 202