168 research outputs found

    Autonomy Infused Teleoperation with Application to BCI Manipulation

    Full text link
    Robot teleoperation systems face a common set of challenges, including latency, low-dimensional user commands, and asymmetric control inputs. User control with Brain-Computer Interfaces (BCIs) exacerbates these problems through especially noisy and erratic low-dimensional motion commands due to the difficulty of decoding neural activity. We introduce a general framework to address these challenges through a combination of computer vision, user intent inference, and arbitration between the human input and autonomous control schemes. Adjustable levels of assistance allow the system to balance the operator's capabilities and feelings of comfort and control while compensating for a task's difficulty. We present experimental results demonstrating significant performance improvement using the shared-control assistance framework on adapted rehabilitation benchmarks with two subjects, implanted with intracortical brain-computer interfaces, controlling a seven-degree-of-freedom robotic manipulator as a prosthetic. Our results further indicate that shared assistance mitigates perceived user difficulty and even enables successful performance on previously infeasible tasks. We showcase the extensibility of our architecture with applications to quality-of-life tasks such as opening a door, pouring liquids from containers, and manipulating novel objects in densely cluttered environments.
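
    The arbitration this abstract describes is often realized as a blend between the operator's command and an autonomous policy's command, weighted by intent-inference confidence. Below is a minimal sketch of such a linear blending rule; the function and parameter names (`arbitrate`, `max_assist`) are illustrative assumptions, not the paper's actual interface, and the paper's arbitration scheme may be more general.

```python
import numpy as np

def arbitrate(u_human: np.ndarray, u_robot: np.ndarray, confidence: float,
              max_assist: float = 0.8) -> np.ndarray:
    """Blend the operator's command with the autonomous policy's command.

    `confidence` is the intent-inference confidence in [0, 1]; `max_assist`
    caps how much control the robot may take, preserving the operator's
    feeling of control (the "adjustable level of assistance").
    """
    alpha = max_assist * confidence          # assistance level actually applied
    return alpha * u_robot + (1.0 - alpha) * u_human

# Example: a noisy 3-DoF velocity command nudged toward the inferred goal.
u_human = np.array([0.10, -0.02, 0.05])      # decoded (noisy) BCI command
u_robot = np.array([0.08, 0.01, 0.00])       # autonomous command toward goal
print(arbitrate(u_human, u_robot, confidence=0.6))
```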

    Learn and Transfer Knowledge of Preferred Assistance Strategies in Semi-autonomous Telemanipulation

    Full text link
    Enabling robots to provide effective assistance while still accommodating the operator's commands during telemanipulation of an object is very challenging: the robot's assistive actions are not always intuitive for human operators, and human behaviors and preferences are sometimes ambiguous for the robot to interpret. Although various assistance approaches have been developed to improve control quality from different optimization perspectives, the problem remains of determining an approach that satisfies both the fine motion constraints of the telemanipulation task and the preferences of the operator. To address these problems, we developed a novel preference-aware assistance knowledge learning approach. An assistance preference model learns what assistance is preferred by a human, and a stagewise model-updating method ensures learning stability while dealing with the ambiguity of human preference data. Such preference-aware assistance knowledge enables a teleoperated robot hand to provide more active yet preferred assistance toward manipulation success. We also developed knowledge transfer methods to transfer the preference knowledge across different robot hand structures, avoiding extensive robot-specific training. Experiments were conducted in which a 3-finger hand and a 2-finger hand, respectively, were telemanipulated to use, move, and hand over a cup. The results demonstrate that the methods enabled the robots to effectively learn the preference knowledge and allowed knowledge transfer between robots with less training effort.
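
    One plausible reading of the stagewise updating idea is to buffer ambiguous preference feedback and commit a model update only when a stage's feedback is consistent enough. The sketch below illustrates that pattern under stated assumptions; all names and thresholds are hypothetical, and the paper's preference model is learned and considerably richer.

```python
import numpy as np

class PreferenceModel:
    """Toy preference model over candidate assistance strategies.

    Feedback accumulates in a buffer and is only committed when it is
    sufficiently consistent -- one simple way to keep learning stable
    under ambiguous human preference data.
    """

    def __init__(self, strategies, lr=0.2, batch=5, agreement=0.6):
        self.scores = {s: 0.0 for s in strategies}
        self.lr, self.batch, self.agreement = lr, batch, agreement
        self.buffer = []                      # (strategy, +1 / -1) feedback

    def observe(self, strategy, liked: bool):
        self.buffer.append((strategy, 1.0 if liked else -1.0))
        if len(self.buffer) >= self.batch:
            self._commit_stage()

    def _commit_stage(self):
        for s in self.scores:
            fb = [v for (st, v) in self.buffer if st == s]
            # Commit only if this stage's feedback on s is mostly consistent.
            if fb and abs(np.mean(fb)) >= self.agreement:
                self.scores[s] += self.lr * np.mean(fb)
        self.buffer.clear()

    def preferred(self):
        return max(self.scores, key=self.scores.get)

model = PreferenceModel(["passive", "corrective", "proactive"])
for _ in range(5):
    model.observe("proactive", liked=True)
print(model.preferred())                      # -> "proactive"
```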

    Intent-Recognition-Based Traded Control for Telerobotic Assembly over High-Latency Telemetry

    Get PDF
    As we deploy robotic manipulation systems into unstructured real-world environments, the tasks those robots are expected to perform grow very quickly in complexity. These tasks require a greater number of possible actions, more variable environmental conditions, and larger varieties of objects and materials to be manipulated. This in turn leads to a greater number of ways in which elements of a task can fail. When the cost of task failure is high, such as in surgery or on-orbit robotic interventions, effective and efficient task recovery is essential. Despite ever-advancing capabilities, however, the current and near-future state of the art in fully autonomous robotic manipulation is still insufficient for many tasks in these critical applications. Thus, successful application of robotic manipulation in many domains still necessitates a human operator directly teleoperating the robots over some communications infrastructure. Any such infrastructure incurs some unavoidable round-trip telemetry latency depending on the distances involved and the type of remote environment. While direct teleoperation is appropriate when a human operator is physically close to the robots being controlled, there are still many applications in which such proximity is infeasible. In applications that require a robot to be far from its human operator, this latency can approach the timescale of the relevant task dynamics, and performing the task with direct telemanipulation becomes increasingly difficult, if not impossible. For example, round-trip delays for ground-controlled on-orbit robotic manipulation can reach multiple seconds depending on the infrastructure used and the location of the remote robot. The goal of this thesis is to advance the state of the art in semi-autonomous telemanipulation under multi-second round-trip communications latency between a human operator and a remote robot, in order to enable more telerobotic applications. We propose a new intent-recognition-based traded control (IRTC) approach which automatically infers operator intent and executes task elements which the human operator would otherwise be unable to perform. What makes our approach more powerful than current approaches is that we prioritize preserving the operator's direct manual interaction with the remote environment, trading control over to an autonomous subsystem only when the operator-local intent recognition system automatically determines what the operator is trying to accomplish. This enables operators to perform unstructured and a priori unplanned actions in order to quickly recover from critical task failures. Furthermore, this thesis describes a methodology for introducing and improving semi-autonomous control in critical applications. Specifically, this thesis reports (1) the demonstration of a prototype system for IRTC-based grasp assistance in the context of transatlantic telemetry delays, (2) the development of a systems framework for IRTC in semi-autonomous telemanipulation, and (3) an evaluation of the usability and efficacy of that framework with an increasingly complex assembly task. The results from our human subjects experiments show that, when incorporated with sufficient lower-level capabilities, IRTC is a promising approach to extend the reach and capabilities of on-orbit telerobotics and future in-space operations.
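
    The core control-trading decision described here can be pictured as a per-cycle dispatcher: forward the operator's command by default, and hand a task element to an autonomous routine only once the recognizer's confidence crosses a threshold. The sketch below is a minimal illustration under those assumptions; the types, threshold value, and callables are hypothetical stand-ins, not the thesis's actual system interfaces.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Intent:
    action: str        # e.g. "grasp" or "insert"
    confidence: float  # recognizer's posterior belief in [0, 1]

def traded_control_step(intent: Intent,
                        operator_cmd,
                        autonomous_routines: Dict[str, Callable[[], None]],
                        send_direct: Callable[[object], None],
                        threshold: float = 0.85) -> str:
    """One cycle of intent-recognition-based traded control.

    Direct teleoperation is the default; control is traded to an autonomous
    routine (which runs at the remote site, unaffected by round-trip latency)
    only once the operator-local recognizer is sufficiently confident.
    """
    if intent.confidence >= threshold and intent.action in autonomous_routines:
        autonomous_routines[intent.action]()
        return "autonomous"
    send_direct(operator_cmd)
    return "direct"

# Toy usage with stand-in callables.
mode = traded_control_step(
    Intent(action="grasp", confidence=0.91),
    operator_cmd=[0.0, 0.1, 0.0],
    autonomous_routines={"grasp": lambda: print("executing grasp routine")},
    send_direct=lambda cmd: print("forwarding command", cmd),
)
print(mode)  # -> "autonomous"
```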

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Get PDF
    Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotic systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interaction and collaboration; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotics research, offering insights into the recent state of the art and prospects for improvement.

    TELESIM: A Modular and Plug-and-Play Framework for Robotic Arm Teleoperation using a Digital Twin

    Full text link
    We present TELESIM, a modular and plug-and-play framework for direct teleoperation of a robotic arm using a digital twin as the interface between the user and the robotic system. We tested TELESIM by conducting a user survey with 37 participants on two different robots using two different control modalities: a virtual reality controller and a finger-mapping hardware controller with different grasping systems. Users were asked to teleoperate the robot to pick and place 3 cubes in a tower, and to repeat this task as many times as possible in 10 minutes, with only 5 minutes of training beforehand. Our experimental results show that most users were able to succeed by building a tower of at least 3 cubes regardless of the control modality or robot used, demonstrating the user-friendliness of TELESIM.
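
    At its simplest, a digital-twin teleoperation interface reduces to the real arm continuously tracking the pose of the twin the user manipulates. The sketch below illustrates that loop only; the `Pose`, `StubTwin`, and `StubRobot` types are assumptions for the example and do not reflect TELESIM's actual API.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Pose:
    xyz: tuple = (0.0, 0.0, 0.0)
    quat: tuple = (0.0, 0.0, 0.0, 1.0)

@dataclass
class StubTwin:
    pose: Pose = field(default_factory=Pose)
    def end_effector_pose(self) -> Pose:
        return self.pose             # in a real system, read from the sim

@dataclass
class StubRobot:
    def set_target_pose(self, pose: Pose) -> None:
        print("tracking", pose.xyz)  # a real driver would command the arm

def mirror_twin_to_robot(twin, robot, rate_hz=50.0, steps=3):
    """Core digital-twin teleoperation loop: the user manipulates the twin,
    and the physical arm continuously tracks the twin's end-effector pose."""
    for _ in range(steps):
        robot.set_target_pose(twin.end_effector_pose())
        time.sleep(1.0 / rate_hz)

mirror_twin_to_robot(StubTwin(), StubRobot())
```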

    Human to robot hand motion mapping methods: review and classification

    Get PDF
    In this article, the variety of approaches proposed in the literature to address the problem of mapping human to robot hand motions is summarized and discussed. We attempt to organize into macro-categories the great quantity of published methods, which are often difficult to view from a general perspective due to their differing fields of application, specific algorithms, terminology, and declared mapping goals. First, a brief historical overview is given of the emergence of the human-to-robot hand mapping problem as a conceptual and analytical challenge that remains open today. The survey then focuses on a classification of modern mapping methods into six categories: direct joint, direct Cartesian, task-oriented, dimensionality-reduction-based, pose-recognition-based, and hybrid mappings. For each category, the general view that connects the related studies is provided, and representative references are highlighted. Finally, a concluding discussion along with the authors' views on desirable future trends is reported. This work was supported in part by the European Commission's Horizon 2020 Framework Programme with the project REMODEL under Grant 870133 and in part by the Spanish Government under Grant PID2020-114819GB-I00.
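
    The first category in this taxonomy, direct joint mapping, is the most straightforward to illustrate: each human joint angle is normalized within its range of motion and rescaled to the corresponding robot joint's range. Below is a minimal sketch of that idea; the joint correspondences and limits are illustrative assumptions only.

```python
import numpy as np

def direct_joint_mapping(human_angles, human_lims, robot_lims):
    """Direct joint mapping: normalize each human joint angle within its
    range of motion, then rescale it to the matching robot joint's range.
    Assumes a one-to-one joint correspondence, which real hands rarely have.
    """
    human_angles = np.asarray(human_angles, dtype=float)
    h_lo, h_hi = np.asarray(human_lims, dtype=float).T
    r_lo, r_hi = np.asarray(robot_lims, dtype=float).T
    t = np.clip((human_angles - h_lo) / (h_hi - h_lo), 0.0, 1.0)
    return r_lo + t * (r_hi - r_lo)

# E.g. two finger flexion joints: human range 0-90 deg, robot range 0-60 deg.
print(direct_joint_mapping(
    human_angles=[45.0, 90.0],
    human_lims=[(0.0, 90.0), (0.0, 90.0)],
    robot_lims=[(0.0, 60.0), (0.0, 60.0)],
))  # -> [30. 60.]
```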

    Sharing and Trading in a Human-Robot System

    Get PDF

    Adaptive Shared Autonomy between Human and Robot to Assist Mobile Robot Teleoperation

    Get PDF
    Teleoperation of mobile robots is widely used when it is impractical or infeasible for a human to be present, yet human decision-making is still required. On the one hand, controlling the robot without assistance is stressful and error-prone for the human due to time delay and a lack of situational awareness; on the other hand, despite recent achievements, a fully autonomous robot cannot yet execute tasks independently based on current models of perception and control. Therefore, both the human and the robot must remain in the control loop to simultaneously contribute intelligence to task execution. This means that the human should share autonomy with the robot during operation. The challenge, however, lies in how best to coordinate these two sources of intelligence, human and robot, to ensure safe and efficient task execution in teleoperation. This thesis therefore proposes a novel strategy. It models the user's intent as a contextual task to complete an action primitive, and provides the operator with appropriate motion assistance upon recognizing the task. In this way, the robot intelligently copes with ongoing tasks based on contextual information, relieves the operator's workload, and improves task performance. To implement this strategy and to account for the uncertainties in sensing and processing environmental information and user input (i.e., the contextual information), a probabilistic shared-autonomy framework is introduced to recognize, with uncertainty measures, the contextual task the operator is performing with the robot, and to offer the operator appropriate task-execution assistance according to these measures. Since the way the operator performs a task is implicit, it is not trivial to manually model the motion patterns of task execution, so a series of data-driven approaches is used to derive the patterns of different task executions from human demonstrations and to adapt to the operator's needs in an intuitive way over the long term. The practicality and scalability of the proposed approaches are demonstrated through extensive experiments both in simulation and on a real robot. With the proposed approaches, the operator can be actively and appropriately assisted by increasing the robot's cognitive capability and autonomy flexibility.
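
    The recognition step of such a probabilistic shared-autonomy framework is commonly a Bayesian belief update over candidate contextual tasks, with assistance scaled by the resulting confidence. The sketch below shows one such filtering step under stated assumptions; the task names and likelihood values are invented for illustration, and the thesis's learned models are considerably richer.

```python
import numpy as np

def update_task_belief(belief, likelihoods):
    """One Bayesian filtering step over contextual-task hypotheses.

    `belief` is the prior probability of each candidate task; `likelihoods`
    gives p(observed user input | task) for the latest input, e.g. from a
    model learned from human demonstrations. Returns the normalized posterior.
    """
    posterior = np.asarray(belief) * np.asarray(likelihoods)
    return posterior / posterior.sum()

# Three candidate action primitives; the user input fits "pass_doorway" best.
tasks = ["pass_doorway", "dock_station", "follow_corridor"]
belief = np.array([1 / 3, 1 / 3, 1 / 3])
belief = update_task_belief(belief, likelihoods=[0.7, 0.1, 0.2])
assist_level = belief.max()      # scale assistance by recognition confidence
print(dict(zip(tasks, belief.round(2))), assist_level.round(2))
```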