
    Intent-Recognition-Based Traded Control for Telerobotic Assembly over High-Latency Telemetry

    As we deploy robotic manipulation systems into unstructured real-world environments, the tasks those robots are expected to perform grow rapidly in complexity. These tasks require a greater number of possible actions, more variable environmental conditions, and a wider variety of objects and materials to be manipulated. This in turn multiplies the ways in which elements of a task can fail. When the cost of task failure is high, such as in the case of surgery or on-orbit robotic interventions, effective and efficient task recovery is essential. Despite ever-advancing capabilities, however, the current and near-future state of the art in fully autonomous robotic manipulation is still insufficient for many tasks in these critical applications. Thus, successful application of robotic manipulation in many domains still necessitates a human operator to directly teleoperate the robots over some communications infrastructure. However, any such infrastructure incurs some unavoidable round-trip telemetry latency depending on the distances involved and the type of remote environment. While direct teleoperation is appropriate when a human operator is physically close to the robots being controlled, there are still many applications in which such proximity is infeasible. In applications which require a robot to be far from its human operator, this latency can approach the timescale of the relevant task dynamics, and performing the task with direct telemanipulation can become increasingly difficult, if not impossible. For example, round-trip delays for ground-controlled on-orbit robotic manipulation can reach multiple seconds depending on the infrastructure used and the location of the remote robot.
The goal of this thesis is to advance the state of the art in semi-autonomous telemanipulation under multi-second round-trip communications latency between a human operator and a remote robot, in order to enable more telerobotic applications. We propose a new intent-recognition-based traded control (IRTC) approach which automatically infers operator intent and executes task elements which the human operator would otherwise be unable to perform. What makes our approach more powerful than current approaches is that we prioritize preserving the operator's direct manual interaction with the remote environment, trading control over to an autonomous subsystem only when the operator-local intent recognition system automatically determines what the operator is trying to accomplish. This enables operators to perform unstructured and a priori unplanned actions in order to quickly recover from critical task failures. Furthermore, this thesis describes a methodology for introducing and improving semi-autonomous control in critical applications. Specifically, this thesis reports (1) the demonstration of a prototype system for IRTC-based grasp assistance in the context of transatlantic telemetry delays, (2) the development of a systems framework for IRTC in semi-autonomous telemanipulation, and (3) an evaluation of the usability and efficacy of that framework with an increasingly complex assembly task. The results from our human subjects experiments show that, when incorporated with sufficient lower-level capabilities, IRTC is a promising approach to extend the reach and capabilities of on-orbit telerobotics and future in-space operations.
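The trading logic described above, in which the operator keeps direct control until the intent recognizer becomes confident, can be sketched as follows. This is a minimal illustration only; the function name, the dictionary-based intent posterior, and the 0.9 confidence threshold are assumptions for the sketch, not the thesis's actual interface.

```python
# Minimal sketch of intent-recognition-based traded control (IRTC).
# The names, the dict-based intent posterior, and the threshold value
# are illustrative assumptions, not the thesis's actual design.

def traded_control_step(operator_cmd, intent_posterior, threshold=0.9):
    """Pass the operator's command through until the recognizer is confident
    about the intended action, then trade control to the autonomous subsystem."""
    intent, confidence = max(intent_posterior.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return "autonomous", intent   # autonomy executes the inferred task element
    return "manual", operator_cmd     # operator retains direct manual control

# Example: the recognizer is 95% sure the operator is reaching to grasp.
mode, action = traded_control_step("jog +x", {"grasp": 0.95, "retract": 0.05})
```

Under multi-second round-trip latency, the point of this structure is that the recognized task element can then execute locally at the remote site, outside the delayed command loop.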

    A task learning mechanism for the telerobots

    Telerobotic systems have attracted growing attention because of their superiority in dangerous or unknown interaction tasks. It is very challenging to exploit such systems to implement complex tasks in an autonomous way. In this paper, we propose a task learning framework to represent the manipulation skill demonstrated by a remotely controlled robot. A Gaussian mixture model is utilized to encode and parametrize the smooth task trajectory according to the observations from the demonstrations. After encoding the demonstrated trajectory, a new task trajectory is generated based on the variability information of the learned model. Experimental results have demonstrated the feasibility of the proposed method.
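As a rough illustration of the encode-and-regenerate step, the sketch below fits a Gaussian mixture model over (time, position) pairs of a synthetic one-dimensional demonstration and regenerates a smooth trajectory via Gaussian mixture regression. The sine-wave demonstration and the component count are assumptions for the sketch; the paper's actual task trajectories are multi-DOF.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for a demonstrated trajectory: noisy 1-D motion over time.
t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

# Encode the (time, position) observations with a GMM, as the abstract describes.
gmm = GaussianMixture(n_components=5, random_state=0).fit(np.column_stack([t, x]))

def gmr(gmm, t_query):
    """Conditional mean E[x | t] under the joint GMM: the regenerated trajectory."""
    out = np.zeros_like(t_query)
    for i, tq in enumerate(t_query):
        # responsibility of each component for this time step (constants cancel)
        w = gmm.weights_ * np.array([
            np.exp(-0.5 * (tq - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
            for m, c in zip(gmm.means_, gmm.covariances_)])
        w /= w.sum()
        # per-component conditional mean: mu_x + Sigma_xt / Sigma_tt * (t - mu_t)
        cond = [m[1] + c[1, 0] / c[0, 0] * (tq - m[0])
                for m, c in zip(gmm.means_, gmm.covariances_)]
        out[i] = np.dot(w, cond)
    return out

x_new = gmr(gmm, t)  # smooth task trajectory reproduced from the learned model
```

The component covariances also carry the variability information the abstract mentions, which can be used to modulate how strictly the new trajectory follows the demonstration.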

    Model-Augmented Haptic Telemanipulation: Concept, Retrospective Overview, and Current Use Cases

    Certain telerobotic applications, including telerobotics in space, pose particularly demanding challenges to both technology and humans. Traditional bilateral telemanipulation approaches often cannot be used in such applications due to technical and physical limitations such as long and varying delays, packet loss, and limited bandwidth, as well as high reliability, precision, and task duration requirements. To close this gap, we research model-augmented haptic telemanipulation (MATM), which uses two kinds of models: a remote model that enables shared autonomous functionality of the teleoperated robot, and a local model that generates assistive augmented haptic feedback for the human operator. Several technological methods that form the backbone of the MATM approach have already been successfully demonstrated in completed telerobotic space missions. On this basis, we have applied our approach in more recent research to applications in the fields of orbital robotics, telesurgery, caregiving, and telenavigation. In the course of this work, we have advanced the aspects of the approach that were of particular importance for each respective application, especially shared autonomy and haptic augmentation. This overview paper discusses the MATM approach in detail, presents the latest research results of the various technologies encompassed within this approach, provides a retrospective of DLR's telerobotic space missions, demonstrates the broad application potential of MATM based on the aforementioned use cases, and outlines lessons learned and open challenges.
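As one illustration of the "local model" idea, a virtual-fixture-style spring can render assistive haptic feedback that guides the operator toward a pose predicted by the local environment model, so the guidance does not have to wait on delayed telemetry. The function name, stiffness, and saturation limit below are assumptions for the sketch, not DLR's implementation.

```python
import numpy as np

# Sketch of local-model haptic assistance: a spring-like guidance force pulls
# the operator's hand toward a target pose predicted by the local model.
# Gains and limits are illustrative assumptions, not values from the paper.

def assistive_feedback(operator_pos, model_target, stiffness=200.0, f_max=10.0):
    """Guidance force (N) from the local environment model toward the target."""
    f = stiffness * (np.asarray(model_target) - np.asarray(operator_pos))
    norm = np.linalg.norm(f)
    if norm > f_max:
        f *= f_max / norm  # saturate the rendered force for operator safety
    return f

# 1 cm offset toward the modeled target yields a gentle 2 N guidance force.
f = assistive_feedback([0.0, 0.0, 0.0], [0.01, 0.0, 0.0])
```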

    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robotic-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality-enhanced telerobotic platform for intuitive remote teleoperation applications in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. This research aims to remove human workers from the harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the implementation of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation. The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, 3D-printed parallel gripper, and customized mobile base, which is envisaged to be controlled by non-skilled operators who are physically separated from the robot working space through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques, such as 3D printing and laser cutting. Robot Operating System (ROS) and Unity 3D are employed in the developing process to enable the embedded system to intuitively control the robotic system and ensure the implementation of immersive and natural human-robot interactive teleoperation. This research presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation.
An imitation-based velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, and enables spatial velocity-based control of the robot tool center point (TCP). The proposed system allows precise manipulation of end-effector position and orientation to readily adjust the corresponding velocity of maneuvering. A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is presented. The proposed 3D immersive telerobotic schemes provide the users with depth perception through the merging of multiple 3D/2D views of the remote environment via MR subspace. The mobile manipulator platform can be effectively controlled by non-skilled operators who are physically separated from the robot working space through a velocity-based imitative motion mapping approach. Finally, this thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot working space. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints to guide the operator’s hand movements following the conical guidance to effectively align the welding torch for welding and constrain the welding operation within a collision-free area. Overall, this thesis presents a complete tele-robotic application space technology using mixed reality and immersive elements to effectively translate the operator into the robot’s space in an intuitive and natural manner. The results are thus a step forward in cost-effective and computationally effective human-robot interaction research and technologies.
The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts.
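The velocity-centric motion mapping described above can be sketched in a few lines: tracked hand displacement is differenced into a velocity, scaled by an imitation gain, and clamped before being commanded to the robot TCP. The gain and speed limit are illustrative assumptions, not the thesis's tuned parameters.

```python
import numpy as np

# Sketch of a velocity-centric motion mapping: operator hand motion in the
# MR subspace is scaled into a bounded Cartesian TCP velocity command.
# Gain and clamp values are illustrative assumptions.

def hand_to_tcp_velocity(hand_pos, hand_pos_prev, dt, gain=0.5, v_max=0.25):
    """Map tracked hand motion to a bounded Cartesian TCP velocity (m/s)."""
    v_hand = (np.asarray(hand_pos) - np.asarray(hand_pos_prev)) / dt
    v_tcp = gain * v_hand              # imitative, velocity-based scaling
    speed = np.linalg.norm(v_tcp)
    if speed > v_max:                  # safety clamp on the commanded speed
        v_tcp *= v_max / speed
    return v_tcp

# 2 cm of hand motion over 0.1 s maps to a 0.1 m/s TCP command at gain 0.5.
v = hand_to_tcp_velocity([0.02, 0.0, 0.0], [0.0, 0.0, 0.0], dt=0.1)
```

Commanding velocity rather than absolute pose is what lets the operator "readjust" their hand freely in the MR subspace without dragging the end-effector along.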

    Robotics in Biomedical and Healthcare Engineering


    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot operating under its own control, with partial or no human involvement. There are several important advantages of automation in surgery, which include increasing precision of care due to sub-millimeter robot control, real-time utilization of biosignals for interventional care, improvements to surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also present new capabilities for interventions that are too difficult for humans or that go beyond their skills. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    09341 Abstracts Collection -- Cognition, Control and Learning for Robot Manipulation in Human Environments

    From 16.08.2009 to 21.08.2009, the Dagstuhl Seminar 09341 "Cognition, Control and Learning for Robot Manipulation in Human Environments" was held in Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    TOWARD INTELLIGENT WELDING BY BUILDING ITS DIGITAL TWIN

    To meet the increasing requirements for individualized, efficient, and high-quality production, traditional manufacturing processes are evolving into smart manufacturing with support from information technology advancements including cyber-physical systems (CPS), the Internet of Things (IoT), big industrial data, and artificial intelligence (AI). The prerequisite for integrating these advanced information technologies is to digitalize manufacturing processes so that they can be analyzed, controlled, and interfaced with other digitalized components. The digital twin was developed as a general framework for doing this by building digital replicas of physical entities. This work takes welding manufacturing as the case study to accelerate its transition to intelligent welding by building its digital twin, and contributes to digital twin research in two aspects: (1) increasing the information analysis and reasoning ability by integrating deep learning; (2) enhancing the human user's ability to operate on physical welding manufacturing via digital twins by integrating human-robot interaction (HRI). Firstly, a digital twin of pulsed gas tungsten arc welding (GTAW-P) is developed by integrating deep learning to offer strong feature extraction and analysis ability. In this system, direct information including weld pool images, arc images, welding current, and arc voltage is collected by cameras and arc sensors. Indirect information determining the welding quality, i.e., weld joint top-side bead width (TSBW) and back-side bead width (BSBW), is computed by a traditional image processing method and a deep convolutional neural network (CNN), respectively. Based on that, the weld joint geometrical size is controlled to meet quality requirements under various welding conditions.
In the meantime, this digital twin is visualized to offer a graphical user interface (GUI) that gives human users effective and intuitive perception of the physical welding processes. Secondly, to enhance the human ability to operate on the physical welding processes via digital twins, HRI is integrated using virtual reality (VR) as the interface, which transmits information bidirectionally, i.e., relaying human commands to the welding robots and visualizing the digital twin to human users. Six welders, skilled and unskilled, tested this system by completing the same welding job; they demonstrated different operation patterns and produced different welding quality. To differentiate their skill levels (skilled or unskilled) from their demonstrated operations, a data-driven approach, FFT-PCA-SVM, combining fast Fourier transform (FFT), principal component analysis (PCA), and a support vector machine (SVM), is developed and achieves 94.44% classification accuracy. The robots can also act as assistants that help human welders complete welding tasks by recognizing and executing the intended welding operations. This is done by a human intention recognition algorithm based on a hidden Markov model (HMM), and the welding experiments show that the developed robot-assisted welding helps to improve welding quality. To further exploit the robots' advantages, i.e., movement accuracy and stability, the robot's role is upgraded from assistant to collaborator, completing a subtask independently, i.e., torch weaving and automatic seam tracking in weaving GTAW. The other subtask, i.e., moving the welding torch along the weld seam, is completed by the human user, who can adjust the travel speed to control the heat input and ensure good welding quality. In this way, the advantages of humans (intelligence) and robots (accuracy and stability) are combined under a human-robot collaboration framework.
The developed digital twin for welding manufacturing helps to promote next-generation intelligent welding and, after small modifications, can easily be applied to other similar manufacturing processes, including painting, spraying, and additive manufacturing.
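The FFT-PCA-SVM skill classifier described above lends itself to a compact sketch: frequency-magnitude features, PCA dimensionality reduction, and an SVM, applied here to synthetic "steady" versus "tremulous" motion traces standing in for the welders' demonstrations. The data and pipeline parameters are assumptions; only the FFT-PCA-SVM structure comes from the abstract.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
# Synthetic stand-ins: "skilled" traces are steady, low-frequency motion;
# "unskilled" traces add a higher-frequency tremor component.
skilled = np.array([np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=t.size)
                    for _ in range(20)])
unskilled = np.array([np.sin(2 * np.pi * t)
                      + 0.4 * np.sin(2 * np.pi * 8 * t + rng.uniform(0, 6))
                      + 0.05 * rng.normal(size=t.size)
                      for _ in range(20)])
X = np.vstack([skilled, unskilled])
y = np.array([1] * 20 + [0] * 20)  # 1 = skilled, 0 = unskilled

# FFT magnitude features -> PCA reduction -> SVM classifier
fft_features = FunctionTransformer(lambda A: np.abs(np.fft.rfft(A, axis=1)))
clf = make_pipeline(fft_features, PCA(n_components=10), StandardScaler(), SVC())
clf.fit(X, y)
acc = clf.score(X, y)  # training accuracy on the toy data
```

The magnitude spectrum discards the random tremor phase, so the tremor frequency bin alone cleanly separates the two classes here; the paper's 94.44% figure refers to its own experimental data, not this toy setup.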

    Safe Haptics-enabled Patient-Robot Interaction for Robotic and Telerobotic Rehabilitation of Neuromuscular Disorders: Control Design and Analysis

    Motivation: Current statistics show that the population of seniors and the incidence rate of age-related neuromuscular disorders are rapidly increasing worldwide. Improving medical care is likely to increase the survival rate but will result in even more patients in need of Assistive, Rehabilitation and Assessment (ARA) services for extended periods, which will place a significant burden on the world's healthcare systems. In many cases, the only alternative is limited and often delayed outpatient therapy. The situation is worse for patients in remote areas. One potential solution is to develop technologies that provide efficient and safe means of in-hospital and in-home kinesthetic rehabilitation. In this regard, Haptics-enabled Interactive Robotic Neurorehabilitation (HIRN) systems have been developed. Existing Challenges: Although there are specific advantages to the use of HIRN technologies, several technical and control challenges remain, e.g., (a) absence of direct physical interaction between therapists and patients; (b) questionable adaptability and flexibility considering the sensorimotor needs of patients; (c) limited accessibility in remote areas; and (d) guaranteeing patient-robot interaction safety while maximizing system transparency, especially when high control effort is needed for severely disabled patients, when the robot is to be used in a patient's home, or when the patient experiences involuntary movements. These challenges have provided the motivation for this research. Research Statement: In this project, a novel haptics-enabled telerobotic rehabilitation framework is designed, analyzed and implemented that can be used as a new paradigm for delivering motor therapy, giving therapists direct kinesthetic supervision over the robotic rehabilitation procedure. The system also allows for remote and ultimately in-home kinesthetic rehabilitation.
To guarantee interaction safety while maximizing the performance of the system, a new framework for designing stabilizing controllers is developed, initially based on small-gain theory and then completed using strong passivity theory. The proposed control framework takes into account knowledge about the variable biomechanical capability of the patient's limb(s) to absorb interaction forces and mechanical energy. The technique is generalized for use with classical rehabilitation robotic systems to realize patient-robot interaction safety while enhancing performance. In the next step, the proposed telerobotic system is studied as a modality of training for classical HIRN systems. The goal is to first model and then regenerate the prescribed kinesthetic supervision of an expert therapist. To broaden the population of patients who can use the technology and HIRN systems, a new control strategy is designed for patients experiencing involuntary movements. As the last step, the outcomes of the proposed theoretical and technological developments are translated into assistive mechatronic tools for patients with force and motion control deficits. This study shows that proper augmentation of haptic inputs can not only enhance the transparency and safety of robotic and telerobotic rehabilitation systems, but can also assist patients with force and motion control deficiencies.
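As a taste of the passivity reasoning involved, the sketch below implements a simple time-domain passivity observer, a classical tool for checking whether an interaction port has generated net energy. It illustrates the kind of safety condition the abstract describes; it is not the authors' controller, and the signals are made up.

```python
# Sketch of a time-domain passivity observer for a haptic interaction port.
# Passivity condition: the accumulated energy integral of f * v must stay
# non-negative, i.e. the port never generates net energy toward the patient.

def passivity_observer(forces, velocities, dt):
    """Return the running energy integral at the port; all entries >= 0
    means the port has remained passive over the observed window."""
    energy, history = 0.0, []
    for f, v in zip(forces, velocities):
        energy += f * v * dt   # instantaneous power = force x velocity
        history.append(energy)
    return history

# A damper-like port (force aligned against motion it resists) only
# dissipates, so the observed energy never goes negative.
E = passivity_observer([1.0, 2.0, 1.0], [1.0, 0.5, 0.25], dt=0.01)
```

A stabilizing controller built on this idea injects damping only when the observer detects that the port is about to turn active, which is one way to trade transparency against guaranteed safety.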