    Quantitative Measures of Regret and Trust in Human-Robot Collaboration Systems

    Human-robot collaboration (HRC) systems integrate the strengths of both humans and robots to improve joint system performance. In this thesis, we focus on social human-robot interaction (sHRI) factors, in particular regret and trust. Humans experience regret during decision-making under uncertainty when they feel that a better result could have been obtained had they chosen differently. This thesis proposes a framework to quantitatively measure regret. We embed quantitative regret analysis into Bayesian sequential decision-making (BSD) algorithms for HRC shared-vision tasks in both domain search and assembly. The BSD method has been used for robot decision-making, but it has been shown to differ considerably from human decision-making patterns; regret theory, by contrast, qualitatively models humans' rational decision-making behavior under uncertainty. Moreover, it has been shown that the joint performance of a team improves if all members share the same decision-making logic. Trust plays a critical role in determining the level of a human's acceptance, and hence utilization, of a robot. A dynamic-network-based trust model combined with a time-series trust model is first implemented in a multi-robot motion-planning task with a human in the loop. In this model, however, the trust estimate for each robot is independent, which fails to capture the correlated trust that arises in multi-robot collaboration. To address this issue, the above model is extended to interdependent multi-robot Dynamic Bayesian Networks
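
    As an illustration of the regret measure discussed above, the following is a minimal sketch (not the thesis's actual formulation) of how expected regret could be computed for each candidate action over a discrete belief about the uncertain state; a regret-aware decision rule then picks the action with the smallest expected shortfall relative to the best action in each state. The function names, utilities, and belief values are hypothetical.

        import numpy as np

        def expected_regret(utilities, belief):
            """Expected regret of each action under a discrete belief over states.

            utilities: (n_actions, n_states) array; utilities[a, s] is the payoff
                       of action a when the true state is s.
            belief:    (n_states,) probability vector over states.
            Regret in a state is the shortfall relative to the best action for that
            state; the expectation weights this shortfall by the belief.
            """
            utilities = np.asarray(utilities, dtype=float)
            belief = np.asarray(belief, dtype=float)
            best_per_state = utilities.max(axis=0)        # best achievable payoff in each state
            shortfall = best_per_state[None, :] - utilities
            return shortfall @ belief                      # expected regret per action

        # Hypothetical numbers: two actions, two states, belief 0.7 / 0.3.
        U = [[1.0, 0.2],
             [0.6, 0.9]]
        b = [0.7, 0.3]
        best_action = int(np.argmin(expected_regret(U, b)))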

    Human-Machine Collaborative Optimization via Apprenticeship Scheduling

    Coordinating agents to complete a set of tasks with intercoupled temporal and resource constraints is computationally challenging, yet human domain experts can solve these difficult scheduling problems using paradigms learned through years of apprenticeship. A process for manually codifying this domain knowledge within a computational framework is necessary to scale beyond the "single-expert, single-trainee" apprenticeship model. However, human domain experts often have difficulty describing their decision-making processes, which makes codifying this knowledge laborious. We propose a new approach for capturing domain-expert heuristics through a pairwise ranking formulation. Our approach is model-free and does not require enumerating or iterating through a large state space. We empirically demonstrate that this approach accurately learns multifaceted heuristics on a synthetic data set incorporating job-shop scheduling and vehicle routing problems, as well as on two real-world data sets consisting of demonstrations of experts solving a weapon-to-target assignment problem and a hospital resource allocation problem. We also demonstrate that policies learned from human scheduling demonstrations via apprenticeship learning can substantially improve the efficiency of a branch-and-bound search for an optimal schedule. We employ this human-machine collaborative optimization technique on a variant of the weapon-to-target assignment problem and demonstrate that it generates solutions substantially superior to those produced by human domain experts, at a rate up to 9.5 times faster than an optimization approach, and that it can be applied to optimally solve problems twice as complex as those solved by a human demonstrator. Comment: Portions of this paper were published in the Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) in 2016 and in the Proceedings of Robotics: Science and Systems (RSS) in 2016. The paper consists of 50 pages with 11 figures and 4 tables
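
    A minimal sketch of the pairwise-ranking idea, assuming each expert demonstration step records the features of the task the expert scheduled next along with the features of the alternatives available at that moment; the feature encoding, helper names, and use of scikit-learn's LogisticRegression are illustrative, not the paper's implementation.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def make_pairwise_examples(demonstrations):
            """Convert demonstration steps (chosen_features, [alt_features, ...]) into
            labeled feature differences: (chosen - alt) -> 1 and (alt - chosen) -> 0,
            so a linear classifier learns to rank the expert's choice above the rest."""
            X, y = [], []
            for chosen, alternatives in demonstrations:
                for alt in alternatives:
                    diff = np.asarray(chosen, float) - np.asarray(alt, float)
                    X.append(diff); y.append(1)
                    X.append(-diff); y.append(0)
            return np.asarray(X), np.asarray(y)

        def fit_scheduling_heuristic(demonstrations):
            """Learn a scoring function that imitates the expert's scheduling choices."""
            X, y = make_pairwise_examples(demonstrations)
            return LogisticRegression().fit(X, y)

        def pick_next_task(model, candidate_features):
            """Greedy policy: score every schedulable candidate and pick the highest-ranked."""
            scores = model.decision_function(np.asarray(candidate_features, float))
            return int(np.argmax(scores))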

    Human–agent team dynamics: a review and future research opportunities

    Humans teaming with intelligent autonomous agents is becoming indispensable in work environments. However, human–agent teams pose significant challenges because team dynamics are complex, arising from both the task-related and social aspects of human–agent interactions. To improve our understanding of human–agent team dynamics, in this article we conduct a systematic literature review. Drawing on Mathieu et al.'s (2019) teamwork model developed for all-human teams, we map the landscape of research on human–agent team dynamics, including structural features, compositional features, mediating mechanisms, and the interplay of these features and mechanisms. We reveal that research on human–agent team dynamics is still nascent, with a particular focus on information sharing, trust development, agents' human-likeness behaviors, shared cognitions, situation awareness, and function allocation. Gaps remain in many areas of team dynamics, such as team processes, adaptability, shared leadership, and team diversity. We offer various interdisciplinary pathways to advance research on human–agent teams

    Behavioral Effects in Consumer Evaluations of Recommendation Systems

    Towards an Expert System for the Analysis of Computer Aided Human Performance

    Trust-Based Control of (Semi)Autonomous Mobile Robotic Systems

    Despite great achievements in (semi)autonomous robotic systems, human participation remains essential, especially for decisions about the autonomy allocation of robots in complex and uncertain environments. However, human decisions may not be optimal due to limited cognitive capacity and subjective human factors. In human-robot interaction (HRI), trust is a major factor determining a human's use of autonomy. Over- or under-trust may lead to disproportionate autonomy allocation, resulting in decreased task performance and/or increased human workload. In this work, we develop automated decision-making aids that utilize computational trust models to help human operators achieve a more effective and unbiased allocation. The proposed decision aids resemble the way humans make autonomy allocation decisions but are unbiased; they aim to reduce human workload, improve overall performance, and achieve higher acceptance by the human. We consider two types of autonomy control schemes for (semi)autonomous mobile robotic systems. The first is a two-level control scheme that switches between manual and autonomous control modes. For this type, we propose automated decision aids based on a computational trust and self-confidence model. We provide analytical tools to investigate the steady-state effects of the proposed autonomy allocation scheme on robot performance and human workload. We also develop an autonomous decision-pattern correction algorithm using nonlinear model predictive control to help the human gradually adapt to a better allocation pattern. The second is a mixed-initiative bilateral teleoperation control scheme that requires mixing autonomous and manual control. For this type, we utilize computational two-way trust models. Here, mixed initiative is enabled by scaling the manual and autonomous control inputs with a function of the computational human-to-robot trust, while the haptic force feedback cue sent by the robot is dynamically scaled with a function of the computational robot-to-human trust to reduce the human's physical workload. Using the proposed control schemes, our human-in-the-loop tests show that the trust-based automated decision aids generally improve overall robot performance and reduce operator workload compared to a manual allocation scheme; the decision aids are also generally preferred and trusted by the participants. Finally, the trust-based control schemes are extended to single-operator-multi-robot applications: a theoretical control framework is developed for these applications, and the stability and convergence issues arising under the scheme that switches between robots are addressed via passivity-based measures
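
    A minimal sketch of the mixed-initiative idea described above, assuming scalar trust estimates in [0, 1] and simple linear scaling functions; the thesis's actual trust models and scaling functions are more elaborate, and the names and blending rule below are illustrative only.

        import numpy as np

        def blend_inputs(u_manual, u_auto, trust_h2r):
            """Mix manual and autonomous commands: the autonomous input is weighted by the
            (computational) human-to-robot trust and the manual input by its complement.
            Assumed linear blending; the thesis's scaling function may differ."""
            a = float(np.clip(trust_h2r, 0.0, 1.0))
            return a * np.asarray(u_auto, float) + (1.0 - a) * np.asarray(u_manual, float)

        def scale_haptic_cue(force, trust_r2h):
            """Scale the haptic force feedback cue by a function of robot-to-human trust to
            moderate the operator's physical workload (illustrative scaling only)."""
            return float(np.clip(trust_r2h, 0.0, 1.0)) * np.asarray(force, float)

        # Example: high human-to-robot trust shifts authority toward the autonomous input.
        u_cmd = blend_inputs(u_manual=[1.0, 0.0], u_auto=[0.2, 0.5], trust_h2r=0.8)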

    EVALUATING ARTIFICIAL INTELLIGENCE METHODS FOR USE IN KILL CHAIN FUNCTIONS

    Current naval operations require sailors to make time-critical, high-stakes decisions based on uncertain situational knowledge in dynamic operational environments. Recent tragic events have resulted in unnecessary casualties; they illustrate the decision complexity involved in naval operations and specifically highlight challenges within the OODA loop (Observe, Orient, Decide, Act). Kill chain decisions involving the use of weapon systems are a particularly stressing category within the OODA loop, with unexpected threats that are difficult to identify with certainty, shortened decision reaction times, and lethal consequences. An effective kill chain requires the proper setup and employment of shipboard sensors; the identification and classification of unknown contacts; the analysis of contact intentions based on kinematics and intelligence; an awareness of the environment; and decision analysis and resource selection. This project explored the use of automation and artificial intelligence (AI) to improve naval kill chain decisions. The team studied naval kill chain functions and developed evaluation criteria for each function to determine the efficacy of specific AI methods. The team then identified and studied AI methods and applied the evaluation criteria to map specific AI methods to specific kill chain functions. Approved for public release. Distribution is unlimited

    Structuring AI Teammate Communication: An Exploration of AI's Communication Strategies in Human-AI Teams

    In the past decades, artificial intelligence (AI) has been implemented in various domains, such as healthcare and the automotive industry, to support humans in their work. This application of AI has drawn increasing attention to human-AI teaming, in which AI collaborates closely with humans as a teammate. An AI teammate is expected to coordinate with humans by sharing task-related information, predicting other teammates' behaviors, and progressing team tasks accordingly. To complete these team activities effectively, AI teammates must communicate with humans, for example by sharing updates and checking team progress. Even though communication is a core element of teamwork that helps achieve effective coordination, how to design and structure human-AI communication in teaming environments remains unclear. Given the context-dependent nature of communication, research on human-AI team communication needs to focus on specific communication components, such as the proactivity of communication and the communication content. Accordingly, this dissertation explores how AI teammates' communication should be structured through three studies, each of which examines a critical component of effective AI communication: (1) communication proactivity, (2) communication content (explanation), and (3) communication approach (verbal vs. non-verbal). Together, these studies provide insights into how AI teammates' communication can be integrated into teamwork and how to design AI teammate communication in human-AI teaming.

    Study 1 explores communication proactivity and its impact on team processes and team performance. Communication proactivity here refers to whether an AI teammate proactively communicates with human teammates, i.e., proactively pushes information to them. Experimental analysis shows that AI teammates' proactive communication plays a crucial role in shaping human perceptions, such as perceived teammate performance and satisfaction with the teammate. Importantly, as the human and the AI collaborate over time, teams with a non-proactively communicating AI teammate improve their team performance more than teams with a proactively communicating AI. This study identifies the positive impact of proactive AI communication at the initial stage of task coordination, as well as the potential need for flexibility in AI communication proactivity (i.e., once a coordination pattern between human and AI teammates has formed, the AI can communicate non-proactively).

    Study 2 examines communication content by focusing on AI explanations and their impact on human perceptions in teaming environments. Results indicate that AI explanations, as part of communication content, do not always positively affect human trust in human-AI teaming; rather, their impact depends on the specific collaboration scenario. AI explanations facilitate trust in the AI teammate when they explain why the AI disobeys a human's orders, but hinder trust when they explain why the AI lies to humans. In addition, an AI that explained why it ignored the human teammate's injury was perceived as more effective than an AI that provided no such explanation. These findings emphasize the context-dependent character of AI communication content, with a focus on AI explanations of the AI's actions.

    Study 3 investigates the AI's communication approach, manipulated as verbal vs. non-verbal communication. Results indicate that AI teammates' verbal versus non-verbal communication does not affect human trust in the AI teammate but does facilitate the maintenance of humans' situation awareness during task coordination. In addition, an AI using non-verbal communication is perceived as having lower communication quality and lower performance. Importantly, an AI with non-verbal communication yields better team performance in human-human-AI teams than in human-AI-AI teams, whereas an AI with verbal communication yields better team performance in human-AI-AI teams than in human-human-AI teams.

    Together, these three studies address multiple research gaps in human-AI team communication and provide a holistic view of the design and structure of AI communication by examining three specific aspects of communication in human-AI teaming. In addition, each study proposes practical design implications for AI communication in human-AI teams, which will assist AI designers and developers in creating better AI teammates that support humans in teaming environments