7 research outputs found

    Robotic Assistance in Coordination of Patient Care

    Get PDF
    We conducted a study to investigate trust in and dependence upon robotic decision support among nurses and doctors on a labor and delivery floor. There is evidence that suggestions provided by embodied agents engender inappropriate degrees of trust and reliance among humans. This concern is a critical barrier that must be addressed before fielding intelligent hospital service robots that take initiative to coordinate patient care. Our experiment was conducted with nurses and physicians, and evaluated the subjects' levels of trust in and dependence on high- and low-quality recommendations issued by robotic versus computer-based decision support. The support, generated through action-driven learning from expert demonstration, was shown to produce high-quality recommendations that were accepted by nurses and physicians at a compliance rate of 90%. Rates of Type I and Type II errors were comparable between robotic and computer-based decision support. Furthermore, embodiment appeared to benefit performance, as indicated by a higher degree of appropriate dependence after the quality of recommendations changed over the course of the experiment. These results support the notion that a robotic assistant may be able to safely and effectively assist in patient care. Finally, we conducted a pilot demonstration in which a robot assisted resource nurses on a labor and delivery floor at a tertiary care center. National Science Foundation (U.S.) (Grant 2388357

    Dialogue management using reinforcement learning

    Get PDF
    Dialogue is widely used for verbal communication in human-robot interaction, for example with assistant robots in hospitals. However, such robots are usually limited to predetermined dialogue, making it difficult for them to understand new words for a new desired goal. In this paper, we discuss conversation in Indonesian on entertainment, motivation, emergency, and helping, using a knowledge-growing method. We provided mp3 audio for music, fairy tale, comedy, and motivation requests. The execution time for these requests was 3.74 ms on average. In an emergency situation, the patient is able to ask the robot to call the nurse; the robot records the complaint of pain and informs the nurse. From 7 emergency reports, all complaints were successfully saved to the database. In a helping conversation, the robot walks to pick up the patient's belongings. When the robot does not understand the patient's utterance, it asks until it understands. Through these asking conversations, the knowledge base expands from 2 to 10 entries, with learning execution time growing from 1405 ms to 3490 ms. SARSA reached steady state faster because of higher cumulative rewards, and both Q-learning and SARSA achieved the desired goal within 200 episodes. We conclude that the RL method overcomes the robot's knowledge limitation in achieving new dialogue goals for patient assistance
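
    The abstract compares SARSA and Q-learning for reaching new dialogue goals. Below is a minimal tabular sketch contrasting the two update rules it refers to; the toy dialogue states, actions, and rewards are illustrative assumptions, not the paper's environment.

import random

# Toy "dialogue" MDP (states, actions, rewards are illustrative assumptions).
STATES = ["greeting", "clarify", "goal_reached"]
ACTIONS = ["ask_again", "answer"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(state, action):
    """Toy transition: asking again moves toward the goal; a correct answer ends the dialogue."""
    if action == "answer" and state == "clarify":
        return "goal_reached", 1.0, True
    if action == "ask_again":
        return "clarify", -0.1, False   # small cost for asking again
    return "greeting", -0.1, False

def epsilon_greedy(Q, state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def run(algorithm, episodes=200):
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state, done = "greeting", False
        action = epsilon_greedy(Q, state)
        while not done:
            next_state, reward, done = step(state, action)
            next_action = epsilon_greedy(Q, next_state)
            if algorithm == "sarsa":
                # On-policy: bootstrap on the action actually taken next.
                target = reward + GAMMA * Q[(next_state, next_action)] * (not done)
            else:
                # Q-learning: bootstrap on the greedy action instead.
                best = max(Q[(next_state, a)] for a in ACTIONS)
                target = reward + GAMMA * best * (not done)
            Q[(state, action)] += ALPHA * (target - Q[(state, action)])
            state, action = next_state, next_action
    return Q

if __name__ == "__main__":
    print(run("sarsa"))
    print(run("q_learning"))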

    Human-Machine Collaborative Optimization via Apprenticeship Scheduling

    Full text link
    Coordinating agents to complete a set of tasks with intercoupled temporal and resource constraints is computationally challenging, yet human domain experts can solve these difficult scheduling problems using paradigms learned through years of apprenticeship. A process for manually codifying this domain knowledge within a computational framework is necessary to scale beyond the "single-expert, single-trainee" apprenticeship model. However, human domain experts often have difficulty describing their decision-making processes, causing the codification of this knowledge to become laborious. We propose a new approach for capturing domain-expert heuristics through a pairwise ranking formulation. Our approach is model-free and does not require enumerating or iterating through a large state space. We empirically demonstrate that this approach accurately learns multifaceted heuristics on a synthetic data set incorporating job-shop scheduling and vehicle routing problems, as well as on two real-world data sets consisting of demonstrations of experts solving a weapon-to-target assignment problem and a hospital resource allocation problem. We also demonstrate that policies learned from human scheduling demonstrations via apprenticeship learning can substantially improve the efficiency of a branch-and-bound search for an optimal schedule. We employ this human-machine collaborative optimization technique on a variant of the weapon-to-target assignment problem. We demonstrate that this technique generates solutions substantially superior to those produced by human domain experts at a rate up to 9.5 times faster than an optimization approach, and can be applied to optimally solve problems twice as complex as those solved by a human demonstrator. Comment: Portions of this paper were published in the Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) in 2016 and in the Proceedings of Robotics: Science and Systems (RSS) in 2016. The paper consists of 50 pages with 11 figures and 4 tables
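
    The pairwise ranking formulation can be sketched as follows: at each demonstrated decision point, the task the expert scheduled is paired against each task left unscheduled, and a classifier is trained on feature differences. The feature representation and the synthetic demonstrations below are illustrative assumptions, not the paper's data.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_decision_point(n_tasks=5, n_features=4):
    """Random task features; here the 'expert' picks the task with the highest
    hidden linear score, standing in for a real scheduling demonstration."""
    X = rng.normal(size=(n_tasks, n_features))
    hidden_w = np.array([1.5, -0.5, 0.8, 0.2])
    chosen = int(np.argmax(X @ hidden_w))
    return X, chosen

def pairwise_examples(X, chosen):
    """One positive (chosen - alternative) and one mirrored negative example
    per unscheduled alternative at this decision point."""
    diffs, labels = [], []
    for j in range(len(X)):
        if j == chosen:
            continue
        diffs.append(X[chosen] - X[j]); labels.append(1)
        diffs.append(X[j] - X[chosen]); labels.append(0)
    return diffs, labels

# Build a training set from many demonstrated decision points.
D, y = [], []
for _ in range(200):
    X, chosen = make_decision_point()
    d, l = pairwise_examples(X, chosen)
    D.extend(d); y.extend(l)

ranker = LogisticRegression().fit(np.array(D), np.array(y))

# At test time, score each candidate task and schedule the highest-ranked one.
X_test, expert_choice = make_decision_point()
scores = X_test @ ranker.coef_.ravel()
print("policy picks task", int(np.argmax(scores)), "| expert picked", expert_choice)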

    An Introduction to Causal Inference Methods for Observational Human-Robot Interaction Research

    Full text link
    Quantitative methods in Human-Robot Interaction (HRI) research have primarily relied upon randomized, controlled experiments in laboratory settings. However, such experiments are not always feasible when external validity, ethical constraints, and ease of data collection are of concern. Furthermore, as consumer robots become increasingly available, increasing amounts of real-world data will be available to HRI researchers, which prompts the need for quantitative approaches tailored to the analysis of observational data. In this article, we present an alternative approach to quantitative research for HRI researchers using methods from causal inference that can enable researchers to identify causal relationships in observational settings where randomized, controlled experiments cannot be run. We highlight different scenarios that HRI research with consumer household robots may involve to contextualize how methods from causal inference can be applied to observational HRI research. We then provide a tutorial summarizing key concepts from causal inference using a graphical model perspective and link to code examples throughout the article, which are available at https://gitlab.com/causal/causal_hri. Our work paves the way for further discussion on new approaches towards observational HRI research while providing a starting point for HRI researchers to add causal inference techniques to their analytical toolbox. Comment: 28 pages
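
    A minimal illustration of the kind of analysis the tutorial covers is backdoor adjustment on simulated observational data. The scenario (a household confounder influencing both robot use and an outcome) and all effect sizes below are assumptions for illustration; the article's own examples live in the linked repository.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

Z = rng.binomial(1, 0.5, n)                   # confounder (e.g. tech affinity) - assumed
X = rng.binomial(1, 0.2 + 0.6 * Z)            # "treatment": household uses the robot
Y = 2.0 * X + 3.0 * Z + rng.normal(size=n)    # outcome; true effect of X is 2.0

# Naive difference in means is confounded by Z.
naive = Y[X == 1].mean() - Y[X == 0].mean()

# Backdoor adjustment: average within-stratum contrasts, weighted by P(Z = z).
adjusted = sum(
    (Y[(X == 1) & (Z == z)].mean() - Y[(X == 0) & (Z == z)].mean()) * (Z == z).mean()
    for z in (0, 1)
)

print(f"naive estimate:    {naive:.2f}")    # biased upward by the confounder
print(f"adjusted estimate: {adjusted:.2f}") # close to the true effect of 2.0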

    Are human-like robots trusted like humans? An investigation into the effect of anthropomorphism on trust in robots measured by expected value as reflected by feedback related negativity and P300

    Get PDF
    Robots are becoming more prevalent in industry and society. However, in order to ensure their effective use, trust must be calibrated correctly. Anthropomorphism is one factor that is important to trust in robots (Hancock et al., 2011). Questionnaires and investment games have been used to investigate the impact of anthropomorphism on trust; however, these methods have led to disparate findings. Neurophysiological methods have also been used as an implicit measure of trust. Feedback related negativity (FRN) and P300 are event-related potential (ERP) components which have been associated with processes involved in trust, such as outcome evaluation. This study uses the trust game (Berg et al., 1995), along with questionnaires and ERP data, to investigate trust and expectations towards three agents varying in anthropomorphism: a human, an anthropomorphic robot, and a computer. The behavioural and self-reported findings suggest that the human is perceived as the most trustworthy and that there is no difference between the robot and the computer. The ERP data revealed a robot-driven difference in FRN and P300 activation, which suggests that robots violated expectations more so than a human or a computer. The present findings are explained in terms of the perfect automation schema and trustworthiness and dominance perceptions. Future research into the impact of voice pitch on dominance and trustworthiness, and into the impact of trust violations, is suggested in order to gain a more holistic picture of the impact of anthropomorphism on trust
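
    For reference, a single round of the trust game (Berg et al., 1995) used as the behavioural paradigm can be sketched as below; the 10-unit endowment and threefold multiplier are the classic protocol's standard parameters, assumed here rather than taken from the study.

# Sketch of one trust-game round: the investor sends part of an endowment,
# it is multiplied, and the trustee (human, robot, or computer) returns a share.
ENDOWMENT = 10
MULTIPLIER = 3

def trust_game_round(amount_sent: int, fraction_returned: float):
    """Return (investor_payoff, trustee_payoff) for one round."""
    assert 0 <= amount_sent <= ENDOWMENT
    transferred = amount_sent * MULTIPLIER
    returned = transferred * fraction_returned
    investor_payoff = ENDOWMENT - amount_sent + returned
    trustee_payoff = transferred - returned
    return investor_payoff, trustee_payoff

# Trust is read off the amount sent; expectation violations (the focus of the
# FRN/P300 analysis) correspond to returns that differ from what was expected.
print(trust_game_round(amount_sent=5, fraction_returned=0.5))   # (12.5, 7.5)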