158 research outputs found

    Use of Robotic Arm as a Tool in Manipulation Under Uncertainty for Dual Arm Systems


    Optimal Fidelity Selection for Improved Performance in Human-in-the-Loop Queues for Underwater Search

    In the context of human-supervised autonomy, we study the problem of optimal fidelity selection for a human operator performing an underwater visual search task. Human performance depends on various cognitive factors such as workload and fatigue. We perform human experiments in which participants perform two tasks simultaneously: a primary task, which is subject to evaluation, and a secondary task to estimate their workload. The primary task requires participants to search for underwater mines in videos, while the secondary task involves a simple visual test in which they respond when a green light displayed on the side of their screens turns red. Videos arrive as a Poisson process and are stacked in a queue to be serviced by the human operator. The operator can choose to watch each video at either normal or high fidelity, with normal-fidelity videos playing at three times the speed of high-fidelity ones. Participants receive rewards for their accuracy in mine detection for each primary task and penalties based on the number of videos waiting in the queue. We treat the operator's workload as a hidden state and model the workload dynamics as an Input-Output Hidden Markov Model (IOHMM). We use a Partially Observable Markov Decision Process (POMDP) to learn an optimal fidelity selection policy, where the objective is to maximize the total reward. Our results demonstrate improved performance when videos are serviced according to the optimal fidelity policy compared to a baseline in which humans choose the fidelity level themselves.
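
    The queueing setup described in this abstract lends itself to a toy simulation. The sketch below is a minimal illustration only, not the authors' model or learned policy: the transition matrices, accuracy values, arrival rate, penalty weight, and the threshold policies are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical IOHMM workload dynamics: hidden workload in {0: low, 1: high}.
# Transition probabilities depend on the chosen fidelity (the IOHMM "input");
# every number below is an illustrative assumption, not taken from the paper.
P = {
    "normal": np.array([[0.9, 0.1], [0.3, 0.7]]),  # fast playback eases workload
    "high":   np.array([[0.6, 0.4], [0.1, 0.9]]),  # slow playback builds workload
}
ACC = {"normal": [0.7, 0.5], "high": [0.9, 0.75]}  # detection accuracy by workload
SERVICE = {"normal": 1, "high": 3}                 # high fidelity takes 3x as long
ARRIVAL_RATE = 0.4                                 # Poisson video arrivals per step
QUEUE_PENALTY = 0.05                               # cost per waiting video per step

def simulate(policy, horizon=2000):
    """Total reward of `policy` (a map from queue length to a fidelity)."""
    workload, queue, total, t = 0, 0, 0.0, 0
    while t < horizon:
        queue += rng.poisson(ARRIVAL_RATE)              # new videos this step
        if queue == 0:
            t += 1
            continue
        a = policy(queue)
        dur = SERVICE[a]
        queue += rng.poisson(ARRIVAL_RATE * (dur - 1))  # arrivals during service
        total += float(rng.random() < ACC[a][workload]) # 1 if the mine is found
        total -= QUEUE_PENALTY * queue * dur            # queueing penalty
        workload = int(rng.choice(2, p=P[a][workload])) # hidden workload update
        queue -= 1
        t += dur
    return total

# Fixed threshold policies standing in for the learned POMDP policy:
use_high_when_short = lambda q: "high" if q <= 2 else "normal"
always_high = lambda q: "high"
print(simulate(use_high_when_short), simulate(always_high))
```

    Comparing the two stand-in policies hints at the trade-off the paper optimizes: high fidelity improves per-video accuracy but lets the queue, and with it the waiting penalty, grow.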

    BWIBots: A platform for bridging the gap between AI and human–robot interaction research

    Recent progress in both AI and robotics has enabled the development of general-purpose robot platforms capable of executing a wide variety of complex, temporally extended service tasks in open environments. This article introduces a novel, custom-designed multi-robot platform for research on AI, robotics, and especially human–robot interaction for service robots. Called BWIBots, the robots were designed as part of the Building-Wide Intelligence (BWI) project at the University of Texas at Austin. The article begins with a description of, and justification for, the hardware and software design decisions underlying the BWIBots, with the aim of informing the design of such platforms in the future. It then presents an overview of various research contributions that have enabled the BWIBots to better (a) execute action sequences to complete user requests, (b) efficiently ask questions to resolve user requests, (c) understand human commands given in natural language, and (d) understand human intention from afar. The article concludes with a look forward toward future research opportunities and applications enabled by the BWIBot platform.

    Improving Robotic Decision-Making in Unmodeled Situations

    Existing methods of autonomous robotic decision-making are often fragile when faced with inaccurate or incompletely modeled distributions of uncertainty, also known as ambiguity. While decision-making under ambiguity is a field of study that has been gaining interest, many existing methods tend to be computationally challenging, require many assumptions about the nature of the problem, and often require much prior knowledge. They therefore do not scale well to complex real-world problems, where fulfilling all of these requirements is often impractical if not impossible. The research described in this dissertation investigates novel approaches to robotic decision-making that are resilient to ambiguity and are not subject to as many of these requirements as most existing methods. The novel frameworks described in this research incorporate physical feedback, diversity, and local swarm interactions, three factors that are hypothesized to be key in creating resilience to ambiguity. These three factors are inspired by examples of robots that demonstrate resilience to ambiguity, ranging from simple vibrobots to decentralized robotic swarms. The proposed decision-making methods, based around a proposed framework known as Ambiguity Trial and Error (AT&E), are tested for both single robots and robotic swarms in several simulated robotic foraging case studies and a real-world robotic foraging experiment. A novel method for transferring swarm resilience properties back to single-agent decision-making is also explored. The results from the case studies show that the proposed methods demonstrate resilience to varying types of ambiguity, both stationary and non-stationary, while not requiring accurate modeling and assumptions, large amounts of prior training data, or computationally expensive decision-making policy solvers. Conclusions about these novel methods are then drawn from the simulation and experiment results, and future research directions leveraging the lessons learned from this research are discussed.
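
    The abstract does not spell out AT&E's mechanics, so the following is only a generic trial-and-error loop meant to illustrate two of the ingredients it names: behavioral diversity and reliance on direct physical feedback rather than a model of the disturbance. The behavior names, success probabilities, and update rule are all invented for the example.

```python
import random

random.seed(1)

# Hypothetical stand-in for AT&E-style trial and error: an agent keeps a
# pool of diverse foraging behaviors and retains whichever ones keep
# succeeding, without modeling the disturbance ("ambiguity") explicitly.
BEHAVIOURS = ["spiral_search", "wall_follow", "random_walk", "gradient_climb"]

def env_success(behaviour, t):
    """Unmodeled, non-stationary environment (illustrative only):
    which behavior works best flips halfway through the run."""
    best = "spiral_search" if t < 500 else "wall_follow"
    return random.random() < (0.8 if behaviour == best else 0.2)

scores = {b: 1.0 for b in BEHAVIOURS}   # optimistic initial estimates
successes = 0
for t in range(1000):
    # Mostly exploit the current best behavior, occasionally try another;
    # diversity is what lets the agent recover after the environment shifts.
    if random.random() < 0.1:
        b = random.choice(BEHAVIOURS)
    else:
        b = max(scores, key=scores.get)
    ok = env_success(b, t)
    successes += ok
    # Exponential moving average: recent feedback dominates, so stale
    # estimates decay quickly after a non-stationary change.
    scores[b] = 0.9 * scores[b] + 0.1 * float(ok)

print(f"successes: {successes}/1000")
```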

    The Assistive Multi-Armed Bandit

    Learning the preferences implicit in the choices humans make is a well-studied problem in both economics and computer science. However, most work assumes that humans act (noisily) optimally with respect to their preferences. Such approaches can fail when people are themselves learning about what they want. In this work, we introduce the assistive multi-armed bandit, in which a robot assists a human playing a bandit task to maximize cumulative reward. In this problem, the human does not know the reward function but can learn it through the rewards received from arm pulls; the robot only observes which arms the human pulls, not the reward associated with each pull. We offer necessary and sufficient conditions for successfully assisting the human in this framework. Surprisingly, better human performance in isolation does not necessarily lead to better performance when assisted by the robot: a human policy can do better by effectively communicating its observed rewards to the robot. We conduct proof-of-concept experiments that support these results. We see this work as contributing towards a theory behind algorithms for human-robot interaction. (Comment: Accepted to HRI 201)
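
    To make the interaction structure concrete, here is a toy simulation of the protocol: the human pulls arms and is the only party to observe rewards, while the robot sees only the pull history and decides which arm is actually executed. The epsilon-greedy human and frequency-following robot are placeholder policies, not the paper's; the Bernoulli arm means and horizon are likewise invented.

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_MEANS = [0.2, 0.5, 0.8]      # hypothetical Bernoulli arm means
N_ARMS, HORIZON = len(TRUE_MEANS), 500

# Human side: observes rewards and acts epsilon-greedily on empirical means.
counts = np.zeros(N_ARMS)
sums = np.zeros(N_ARMS)

def human_pull():
    if rng.random() < 0.2:                        # occasional exploratory pull
        return int(rng.integers(N_ARMS))
    means = np.where(counts > 0, sums / np.maximum(counts, 1), 1.0)
    return int(np.argmax(means))                  # optimistic for unseen arms

# Robot side: sees only which arms the human pulls, never the rewards.
pull_counts = np.zeros(N_ARMS)

def robot_choose():
    # Treat the human's pull frequencies as a noisy preference signal.
    return int(np.argmax(pull_counts))

total = 0.0
for _ in range(HORIZON):
    h = human_pull()                  # the human's (advice) pull
    pull_counts[h] += 1
    arm = robot_choose()              # the robot decides what gets executed
    r = float(rng.random() < TRUE_MEANS[arm])
    total += r
    counts[arm] += 1                  # only the human's belief sees the reward
    sums[arm] += r

print(f"average reward: {total / HORIZON:.2f}")
```

    Even in this crude sketch, the robot's performance is bounded by how informatively the human's pulls encode what the human has observed, which is the communication effect the abstract highlights.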

    Towards Trust-Aware Human-Automation Interaction: An Overview of the Potential of Computational Trust Models

    Several computational models have been proposed to quantify trust and its relationship to other system variables. However, these models are still under-utilised in human-machine interaction settings due to the gap between modellers’ intent to capture a phenomenon and the requirements for employing the models in a practical context. Our work amalgamates insights from the system modelling, trust, and human-autonomy teaming literature to address this gap. We explore the potential of computational trust models in the development of trust-aware systems by investigating three research questions: (1) At which stages of development can trust models be used by designers? (2) How can trust models contribute to trust-aware systems? (3) Which factors should be incorporated within trust models to enhance the models’ effectiveness and usability? We conclude with future research directions.
