32 research outputs found

    The classification and new trends of shared control strategies in telerobotic systems: A survey

    Shared control, which permits a human operator and an autonomous controller to share control of a telerobotic system, can reduce the operator's workload and/or improve performance during task execution. Because it combines human intelligence with the superior power and precision of robots, the shared control architecture spans a wide spectrum of telerobotic systems. Although various shared control strategies have been proposed, a systematic overview that teases out the relations among them is still absent. This survey therefore aims to provide a big picture of existing shared control strategies. To achieve this, we propose a categorization method and classify shared control strategies into three categories, according to how control is shared between human operators and autonomous controllers: Semi-Autonomous Control (SAC), State-Guidance Shared Control (SGSC), and State-Fusion Shared Control (SFSC). The typical scenarios for each category are listed, and the advantages, disadvantages, and open issues of each are discussed. Finally, based on this overview, new trends in shared control strategies, including “autonomy from learning” and “autonomy-levels adaptation,” are summarized and discussed.
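    The three categories differ mainly in where human and autonomous commands are combined. As an illustrative sketch (not taken from the survey itself), a state-fusion strategy is often written as a weighted blend of the two command signals under an adaptable authority factor; the function and parameter names below are hypothetical:

    ```python
    import numpy as np

    def sfsc_blend(u_human, u_auto, alpha):
        """Blend human and autonomous velocity commands.

        alpha in [0, 1] is the human-authority factor: alpha = 1 gives pure
        teleoperation, alpha = 0 gives full autonomy. This is a generic
        illustration of state-fusion shared control; the surveyed strategies
        differ chiefly in how alpha is chosen and adapted online.
        """
        alpha = float(np.clip(alpha, 0.0, 1.0))
        return alpha * np.asarray(u_human, float) + (1.0 - alpha) * np.asarray(u_auto, float)

    # Example: operator steers one way while the controller pulls toward its goal
    u = sfsc_blend([1.0, 0.5], [-0.2, 0.8], alpha=0.6)
    ```

    "Autonomy-levels adaptation" in this picture amounts to scheduling `alpha` online, e.g. from task state or operator intent estimates.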

    Towards Reuse and Recycling of Lithium-ion Batteries: Tele-robotics for Disassembly of Electric Vehicle Batteries

    Disassembly of electric vehicle batteries is a critical stage in the recovery, recycling and re-use of high-value battery materials, but it is complicated by limited standardisation and design complexity, compounded by the uncertainty and safety issues arising from varying end-of-life condition. Telerobotics presents an avenue for semi-autonomous robotic disassembly that addresses these challenges. However, the quality and realism of the user's haptic interaction with the environment is thought to be important for precise, contact-rich and safety-critical tasks. To investigate this proposition, we demonstrate the disassembly of a Nissan Leaf 2011 module stack as the basis for a comparative study between a traditional asymmetric haptic-'cobot' master-slave framework and identical master and slave cobots, using task completion time and success rate as metrics. Across a range of disassembly tasks we demonstrate a time reduction of 22%-57% using identical cobots, yet this improvement arises chiefly from an expanded workspace and 1:1 positional mapping, and comes with a 10-30% reduction in first-attempt success rate. For unbolting and grasping, the realism of force feedback was comparatively less important than the directional information encoded in the interaction; however, 1:1 force mapping strengthened environmental tactile cues for vacuum pick-and-place and contact cutting tasks. (21 pages, 12 figures; submitted to Frontiers in Robotics and AI, Human-Robot Interaction.)

    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robot-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality-enhanced telerobotic platform for intuitive remote teleoperation in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. The research aims to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the implementation of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation. The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, a 3D-printed parallel gripper, and a customized mobile base, and is envisaged to be controlled by non-skilled operators who are physically separated from the robot workspace through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques, such as 3D printing and laser cutting. Robot Operating System (ROS) and Unity 3D are employed in the development process to enable intuitive control of the robotic system and to ensure immersive and natural human-robot interactive teleoperation. This research presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation.
An imitation-based velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, enabling spatial velocity-based control of the robot tool center point (TCP). The proposed system allows precise manipulation of end-effector position and orientation and ready adjustment of the corresponding maneuvering velocity. A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is presented. The proposed 3D immersive telerobotic schemes provide users with depth perception through the merging of multiple 3D/2D views of the remote environment via the MR subspace. The mobile manipulator platform can be effectively controlled by non-skilled operators who are physically separated from the robot workspace through a velocity-based imitative motion mapping approach. Finally, this thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot workspace. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints to guide the operator’s hand movements along a conical guidance path, effectively aligning the welding torch for welding and constraining the welding operation within a collision-free area. Overall, this thesis presents a complete telerobotic application-space technology that uses mixed reality and immersive elements to effectively translate the operator into the robot’s space in an intuitive and natural manner. The results are thus a step forward in cost-effective and computationally efficient human-robot interaction research and technologies.
The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts.
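    The velocity-centric mapping described above can be pictured as rate control: hand displacement from a reference pose commands TCP velocity, so holding the hand off-centre drives the tool at a steady speed. The following is a minimal sketch of that idea; the gains, deadband, and velocity limit are illustrative values, not those of the thesis platform:

    ```python
    import numpy as np

    def hand_to_tcp_velocity(hand_pos, ref_pos, gain=1.5, deadband=0.01, v_max=0.25):
        """Map operator hand offset (m) to a TCP linear-velocity command (m/s).

        Rate control: displacement of the tracked hand from a reference pose
        sets the commanded velocity. A deadband rejects tracker jitter near
        the rest pose, and the command is saturated at v_max for safety.
        All parameter values here are hypothetical.
        """
        offset = np.asarray(hand_pos, float) - np.asarray(ref_pos, float)
        if np.linalg.norm(offset) < deadband:   # ignore jitter near the rest pose
            return np.zeros(3)
        v = gain * offset
        speed = np.linalg.norm(v)
        if speed > v_max:                       # saturate the speed for safety
            v *= v_max / speed
        return v
    ```

    A 1:1 positional mapping, by contrast, would command pose directly; rate control trades immediacy for an effectively unlimited workspace from a small hand-tracking volume.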

    Expert-in-the-Loop Multilateral Telerobotics for Haptics-Enabled Motor Function and Skills Development

    Among medical robotics applications are Robotics-Assisted Mirror Rehabilitation Therapy (RAMRT) and Minimally-Invasive Surgical Training (RAMIST) that extensively rely on motor function development. Haptics-enabled expert-in-the-loop motor function development for such applications is made possible through multilateral telerobotic frameworks. While several studies have validated the benefits of haptic interaction with an expert in motor learning, contradictory results have also been reported. This emphasizes the need for further in-depth studies on the nature of human motor learning through haptic guidance and interaction. The objective of this study was to design and evaluate expert-in-the-loop multilateral telerobotic frameworks with stable and human-safe control loops that enable adaptive “hand-over-hand” haptic guidance for RAMRT and RAMIST. The first prerequisite for such frameworks is active involvement of the patient or trainee, which requires the closed-loop system to remain stable in the presence of an adaptable time-varying dominance factor. To this end, a wave-variable controller is proposed in this study for conventional trilateral teleoperation systems such that system stability is guaranteed in the presence of a time-varying dominance factor and communication delay. Similar to other wave-variable approaches, the controller is initially developed for the Velocity-force Domain (VD) based on the well-known passivity assumption on the human arm in VD. The controller can be applied straightforwardly to the Position-force Domain (PD), eliminating position-error accumulation and position drift, provided that passivity of the human arm in PD is addressed. However, the latter has been ignored in the literature. Therefore, in this study, passivity of the human arm in PD is investigated using mathematical analysis, experimentation as well as user studies involving 12 participants and 48 trials. 
The results, in conjunction with the proposed wave-variables, can be used to guarantee closed-loop PD stability of the supervised trilateral teleoperation system in its classical format. The classic dual-user teleoperation architecture does not, however, fully satisfy the requirements for properly imparting motor function (skills) in RAMRT (RAMIST). Consequently, the next part of this study focuses on designing novel supervised trilateral frameworks for providing motor learning in RAMRT and RAMIST, each customized according to the requirements of the application. The framework proposed for RAMRT includes the following features: a) therapist-in-the-loop mirror therapy; b) haptic feedback to the therapist from the patient side; c) assist-as-needed therapy realized through an adaptive Guidance Virtual Fixture (GVF); and d) real-time task-independent and patient-specific motor-function assessment. Closed-loop stability of the proposed framework is investigated using a combination of the Circle Criterion and the Small-Gain Theorem. The stability analysis addresses the instabilities caused by: a) communication delays between the therapist and the patient, facilitating haptics-enabled tele- or in-home rehabilitation; and b) the integration of the time-varying nonlinear GVF element into the delayed system. The platform is experimentally evaluated on a trilateral rehabilitation setup consisting of two Quanser rehabilitation robots and one Quanser HD2 robot. The framework proposed for RAMIST includes the following features: a) haptics-enabled expert-in-the-loop surgical training; b) adaptive expertise-oriented training, realized through a Fuzzy Interface System, which actively engages the trainees while providing them with appropriate skills-oriented levels of training; and c) task-independent skills assessment. Closed-loop stability of the architecture is analyzed using the Circle Criterion in the presence and absence of haptic feedback of tool-tissue interactions. 
In addition to the time-varying elements of the system, the stability analysis also addresses communication delays, facilitating tele-surgical training. The platform is implemented on a dual-console surgical setup consisting of the classic da Vinci surgical system (Intuitive Surgical, Inc., Sunnyvale, CA), integrated with the da Vinci Research Kit (dVRK) motor controllers, and the dV-Trainer master console (Mimic Technology Inc., Seattle, WA). In order to save the expert's (therapist's) time, dual-console architectures can also be expanded to accommodate simultaneous training (rehabilitation) for multiple trainees (patients). As a first step in this direction, the last part of this thesis focuses on the development of a multi-master/single-slave telerobotic framework, along with controller design and closed-loop stability analysis in the presence of communication delays. The various parts of this study are supported by a number of experimental implementations and evaluations. The outcomes of this research include multilateral telerobotic testbeds for further studies on the nature of human motor learning and retention through haptic guidance and interaction. They also enable investigation of the impact of communication time delays on supervised haptics-enabled motor function improvement through tele-rehabilitation and mentoring.
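    The wave-variable machinery referenced in this abstract follows a classical transformation from the teleoperation literature (shown here in its standard form, not as the thesis's specific controller): velocity and force are encoded into wave variables before transmission, which keeps the delayed communication channel passive for any constant delay.

    ```python
    import math

    def to_wave(velocity, force, b):
        """Classical wave-variable encoding.

        b > 0 is the characteristic wave impedance. The forward wave u and
        backward wave v satisfy u**2 - v**2 == 2 * force * velocity, so the
        channel's power flow is expressed purely in wave coordinates and a
        constant-delay channel remains passive.
        """
        u = (b * velocity + force) / math.sqrt(2.0 * b)
        v = (b * velocity - force) / math.sqrt(2.0 * b)
        return u, v

    def from_wave(u, v, b):
        """Invert the encoding to recover (velocity, force)."""
        velocity = (u + v) / math.sqrt(2.0 * b)
        force = math.sqrt(b / 2.0) * (u - v)
        return velocity, force
    ```

    The thesis's contribution sits on top of this layer: proving that stability survives a time-varying dominance factor, and extending the guarantees from the velocity-force to the position-force domain.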

    Dynamic virtual reality user interface for teleoperation of heterogeneous robot teams

    This research investigates the possibility of improving current teleoperation control for heterogeneous robot teams using modern Human-Computer Interaction (HCI) techniques such as Virtual Reality. It proposes a dynamic teleoperation Virtual Reality User Interface (VRUI) framework to improve the current approach to teleoperating heterogeneous robot teams.

    Monitoring companion for industrial robotic processes

    For system integrators, optimizing complex industrial robotic applications (e.g. robotised welding) is a difficult and time-consuming task. The procedure becomes tedious, and often very hard to complete, when the operator cannot access the robotic system once it is in operation, perhaps because the installation is far away or because of the operational environment. In these circumstances, as an alternative to physically visiting the installation site, the system integrator may rely on additional nearby sensors to remotely acquire the necessary process information. While it is hard to completely replace this trial-and-error approach, it is possible to provide a way to gather process information more effectively that can be used across several robotic installations. This thesis investigates the use of a "monitoring robot" in addition to the task robot(s) that belong to the industrial process to be optimized. The monitoring robot can be equipped with several different sensors and can be moved into close proximity to any installed task robot, so that it can collect information from that process during and/or after operation without interfering. The thesis reviews related work in industry and in the field of teleoperation to identify the most important challenges in remote monitoring and teleoperation. From the background investigation it is clear that two very important issues are: i) the nature of the teleoperator's interface; and ii) the efficiency of the shared control between the human operator and the monitoring system. In order to investigate these two issues efficiently it was necessary to create experimental scenarios that operate independently of any application scenario, so an abstract problem domain was created. This way the monitoring system's control and interface can be evaluated in a context that presents challenges typical of a remote monitoring task without being application-domain specific.
The validity of the proposed approach can therefore be assessed from a generic and, consequently, more powerful and widely applicable perspective. The monitoring framework developed in this thesis is described, covering both the shared control design choices based on virtual fixtures (VF) and the implementation in a 3D visualization environment. The monitoring system is evaluated in a usability study with user participants. The study assesses the system's performance, along with its acceptance and ease of use, in a static monitoring task, accompanied by user-filled TLX questionnaires. Since future work will apply this system in real robotic welding scenarios, this thesis finally reports some preliminary work in such an application.
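    A guidance virtual fixture of the kind underpinning this shared control is commonly realised by passing commanded motion along a preferred direction while attenuating the orthogonal component. The following is a generic sketch of that standard formulation (the thesis's actual VF design may differ, and the parameter names are illustrative):

    ```python
    import numpy as np

    def guidance_vf(cmd, preferred_dir, compliance=0.2):
        """Soft guidance virtual fixture.

        Passes the component of the operator's command along the preferred
        direction unchanged and scales the orthogonal component by
        `compliance` in [0, 1]: 0 gives a hard fixture (motion constrained
        to the guidance line), 1 disables the fixture entirely.
        """
        d = np.asarray(preferred_dir, float)
        d = d / np.linalg.norm(d)
        cmd = np.asarray(cmd, float)
        along = np.dot(cmd, d) * d      # component along the guidance direction
        ortho = cmd - along             # component fighting the fixture
        return along + compliance * ortho
    ```

    Tuning `compliance` sets the balance of authority between the operator and the monitoring system, which is precisely the shared-control efficiency question the usability study probes.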

    Robot manipulator skill learning and generalising through teleoperation

    Robot manipulators have been widely used for simple, repetitive, and accuracy-demanding tasks in industrial plants, such as pick and place, assembly and welding, but they are still hard to deploy in human-centred environments for dexterous manipulation tasks, such as medical examination and robot-assisted healthcare. These tasks involve not only motion planning and control but also the compliant interaction behaviour of robots, e.g. simultaneous motion control, force regulation and impedance adaptation under dynamic and unknown environments. Recently, with the development of collaborative robots (cobots) and machine learning, robot skill learning and generalisation have attracted increasing attention from the robotics, machine learning and neuroscience communities. Nevertheless, learning complex and compliant manipulation skills, such as manipulating deformable objects, scanning the human body and folding clothes, is still challenging for robots. On the other hand, teleoperation, also known as remote operation or telerobotics, has been a research area since the 1950s, with applications including space exploration, telemedicine, marine vehicles and emergency response. One of its advantages is combining the precise control of robots with human intelligence to perform dexterous and safety-critical tasks from a distance. In addition, telepresence allows remote operators to feel the actual interaction between the robot and the environment, including vision, sound and haptic feedback. Especially with the development of various augmented reality (AR), virtual reality (VR) and wearable devices, intuitive and immersive teleoperation has received increasing attention from the robotics and computer science communities. Thus, various human-robot collaboration (HRC) interfaces based on the above technologies were developed to integrate robot control and telemanipulation by human operators, allowing robots to learn skills from human beings.
    In this context, robot skill learning can benefit teleoperation by automating repetitive and tedious tasks, while teleoperated demonstration and interaction by human teachers allow the robot to learn progressively and interactively. Therefore, in this dissertation, we study human-robot skill transfer and generalisation through intuitive teleoperation interfaces for contact-rich manipulation tasks, including medical examination, manipulating deformable objects, grasping soft objects and composite layup in manufacturing. The introduction, motivation and objectives of this thesis are presented in Chapter 1. In Chapter 2, a literature review on manipulation skill acquisition through teleoperation is carried out, and the motivation and objectives of this thesis are discussed subsequently. Overall, the main content of this thesis has three parts. Part 1 (Chapter 3) introduces the development and controller design of teleoperation systems with multimodal feedback, which is the foundation of this project for robot learning from human demonstration and interaction. In Part 2 (Chapters 4, 5, 6 and 7), we studied primitive skill library theory, a behaviour tree-based modular method, and a perception-enhanced method to improve the generalisation capability of learning from human demonstrations, and several applications were employed to evaluate the effectiveness of these methods. In Part 3 (Chapter 8), we studied deep multimodal neural networks to encode manipulation skills, especially multimodal perception information; this part conducted physical experiments on robot-assisted ultrasound scanning applications. Chapter 9 summarises the contributions and potential directions of this thesis. Keywords: Learning from demonstration; Teleoperation; Multimodal interface; Human-in-the-loop; Compliant control; Human-robot interaction; Robot-assisted sonography.

    Robotic manipulators for single access surgery

    This thesis explores the development of cooperative robotic manipulators for enhancing surgical precision and patient outcomes in single-access surgery and, specifically, Transanal Endoscopic Microsurgery (TEM). During these procedures, surgeons manipulate a heavy set of instruments via a mechanical clamp inserted into the patient’s body through a surgical port, resulting in imprecise movements, increased patient risks, and increased operating time. Therefore, an articulated robotic manipulator with passive joints is initially introduced, featuring built-in position and force sensors in each joint and electronic joint brakes for instant lock/release capability. The articulated manipulator concept is further improved with motorised joints, evolving into an active tool holder. The joints allow the incorporation of advanced robotic capabilities such as ultra-lightweight gravity compensation and hands-on kinematic reconfiguration, which can optimise the placement of the tool holder in the operating theatre. Due to the enhanced sensing capabilities, the application of the active robotic manipulator was further explored in conjunction with advanced image guidance approaches such as endomicroscopy. Recent advances in probe-based optical imaging, such as confocal endomicroscopy, are making inroads into clinical use. However, the challenging manipulation of imaging probes hinders their practical adoption. Therefore, a combination of the fully cooperative robotic manipulator with a high-speed scanning endomicroscopy instrument is presented, simplifying the incorporation of optical biopsy techniques into routine surgical workflows. Finally, another embodiment of a cooperative robotic manipulator is presented as an input interface to control a highly articulated robotic instrument for TEM.
This master-slave interface alleviates the drawbacks of traditional master-slave devices, e.g., the need for clutching mechanisms to compensate for the mismatch between slave and master workspaces, and the lack of intuitive manipulation feedback (e.g. joint limits) to the user. To address these drawbacks, a joint-space robotic manipulator is proposed that emulates the kinematic structure of the flexible robotic instrument under control.