157 research outputs found

    Haptic guidance for microrobotic intracellular injection

    The ability of a bio-operator to utilise a haptic device to manipulate a microrobot for intracellular injection offers immense benefits. One significant benefit is that the bio-operator can receive haptic guidance while performing the injection process. To address this, this paper investigates the use of haptic virtual fixtures for cell injection and proposes a novel force field virtual fixture. The guidance force felt by the bio-operator is determined by force field analysis within the virtual fixture. The proposed force field virtual fixture assists the bio-operator during intracellular injection by limiting the micropipette tip's motion to a conical volume and recommending the desired path for optimal injection. A virtual fixture plane is also introduced to prevent the bio-operator from moving the micropipette tip beyond the deposition target inside the cell. Simulation results demonstrate the operation of the guidance system.
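    A minimal sketch of how such a conical force field fixture could be evaluated, assuming the cone apex sits at the deposition target, its axis points back along the desired injection path, and the guidance force both pulls the tip toward the axis and resists crossing the target plane. The function name, gains, and geometry conventions below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def cone_guidance_force(tip, apex, axis, half_angle, k_wall=50.0, k_path=5.0):
    """Illustrative guidance force for a conical virtual fixture.

    tip, apex  : 3-vectors, micropipette tip and cone apex (deposition target)
    axis       : unit vector from the apex back along the desired approach path
    half_angle : cone half-angle in radians
    """
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    r = np.asarray(tip, dtype=float) - np.asarray(apex, dtype=float)
    d_axial = float(np.dot(r, axis))        # distance along the cone axis
    radial = r - d_axial * axis             # deviation perpendicular to the axis
    d_radial = float(np.linalg.norm(radial))

    force = -k_path * radial                # gentle pull toward the desired path
    max_radial = max(d_axial, 0.0) * np.tan(half_angle)
    if d_radial > max_radial and d_radial > 1e-9:
        # Tip has reached the cone wall: add a stiffer restoring force.
        force += -k_wall * (d_radial - max_radial) * (radial / d_radial)
    if d_axial < 0.0:
        # Virtual fixture plane: resist motion past the deposition target.
        force += -k_wall * d_axial * axis
    return force
```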

    Multi-robot cooperative platform: a task-oriented teleoperation paradigm

    This thesis proposes the study and development of a teleoperation system based on multi-robot cooperation under the task-oriented teleoperation paradigm: the Multi-Robot Cooperative Paradigm (MRCP). In standard teleoperation, the operator uses master devices to control the remote slave robot arms, which reproduce the desired movements and perform the task. With the developed work, the operator can virtually manipulate an object, and MRCP automatically generates the arm commands needed to perform the task; the operator does not have to resolve situations arising from possible restrictions of the slave arms. The research carried out is therefore aimed at improving the accuracy of teleoperation tasks in complex environments, particularly in the field of robot-assisted minimally invasive surgery. This field requires patient safety, and its workspace imposes many restrictions on teleoperation. MRCP can be defined as a platform composed of several robots that cooperate automatically to perform a teleoperated task, creating a robotic system with increased capacity (workspace volume, accessibility, dexterity, ...). The cooperation is based on transferring the task between robots when necessary to enable smooth task execution. The MRCP control evaluates the suitability of each robot to continue with the ongoing task and the optimal time to execute a task transfer between the currently selected robot and the best candidate to continue with the task. From the operator's point of view, MRCP provides an interface that enables teleoperation through the task-oriented paradigm: operator orders are translated into task actions instead of robot orders. This thesis is structured as follows. The first part reviews current solutions for the teleoperation of complex tasks and compares them with those proposed in this research. The second part presents and reviews in depth the different evaluation criteria used to determine the suitability of each robot to continue with the execution of a task, considering the configuration of the robots and emphasizing the criteria of dexterity and manipulability. The study then reviews the control algorithms required to enable task-oriented telemanipulation; the proposed teleoperation paradigm is transparent to the operator. The thesis then presents and analyses several experimental results using MRCP in the field of minimally invasive surgery. These experiments study the effectiveness of MRCP in various tasks requiring the cooperation of two hands, using a representative task: a suture performed with a minimally invasive surgery technique. The analysis is done in terms of execution time, economy of movement, quality, and patient safety (potential damage produced by undesired interaction between the tools and the vital tissues of the patient). The final part of the thesis proposes the implementation of different virtual aids and restrictions (guided teleoperation based on haptic, visual, and audio feedback, protection of restricted workspace regions, etc.) using the task-oriented teleoperation paradigm. A framework is defined for implementing and applying a basic set of virtual aids and constraints within a virtual simulator for laparoscopic abdominal surgery. The set of experiments carried out has validated the developed work and revealed the influence of virtual aids on the learning process of laparoscopic techniques.
It has also demonstrated improved learning curves, which paves the way for its implementation as a methodology for training new surgeons.
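    The thesis's emphasis on dexterity and manipulability as transfer criteria can be pictured with a small sketch: the Yoshikawa manipulability index scores each candidate robot at its current configuration, and the task is handed over only when another robot is clearly better suited, with a hysteresis margin to avoid oscillating transfers. The threshold, margin, and function names are illustrative assumptions rather than the thesis's actual control law.

```python
import numpy as np

def manipulability(jacobian):
    """Yoshikawa manipulability index w = sqrt(det(J J^T))."""
    J = np.asarray(jacobian, dtype=float)
    return float(np.sqrt(max(np.linalg.det(J @ J.T), 0.0)))

def select_robot(current, candidates, jacobians, margin=0.2):
    """Keep the current robot unless a candidate is clearly better suited.

    current    : index of the robot now holding the task
    candidates : indices of robots able to reach the task point
    jacobians  : dict mapping robot index -> Jacobian at its current pose
    margin     : hysteresis to avoid transferring the task back and forth
    """
    scores = {i: manipulability(jacobians[i]) for i in candidates}
    best = max(scores, key=scores.get)
    if best != current and scores[best] > (1.0 + margin) * scores.get(current, 0.0):
        return best       # transfer the task to the better-suited robot
    return current        # otherwise the current robot keeps the task
```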

    Learn and Transfer Knowledge of Preferred Assistance Strategies in Semi-autonomous Telemanipulation

    Enabling robots to provide effective assistance while still accommodating the operator's commands for telemanipulation of an object is very challenging, because the robot's assistive actions are not always intuitive for human operators, and human behaviors and preferences are sometimes ambiguous for the robot to interpret. Although various assistance approaches are being developed to improve control quality from different optimization perspectives, the problem remains of determining an appropriate approach that satisfies both the fine motion constraints of the telemanipulation task and the preference of the operator. To address these problems, we developed a novel preference-aware assistance knowledge learning approach. An assistance preference model learns what assistance is preferred by a human, and a stagewise model updating method ensures learning stability while dealing with the ambiguity of human preference data. Such preference-aware assistance knowledge enables a teleoperated robot hand to provide more active yet preferred assistance toward manipulation success. We also developed knowledge transfer methods to transfer the preference knowledge across different robot hand structures to avoid extensive robot-specific training. Experiments were conducted in which a 3-finger hand and a 2-finger hand were telemanipulated to use, move, and hand over a cup. Results demonstrated that the methods enabled the robots to effectively learn the preference knowledge and allowed knowledge transfer between robots with less training effort.
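    Neither the preference model nor the stagewise update rule is specified in the abstract, so the following is only a generic sketch of the idea under stated assumptions: a logistic preference score over features of a candidate assistance action, with weight updates committed in stages and only when the buffered feedback within a stage is consistent enough, so that ambiguous human feedback does not destabilize learning. All class and parameter names are hypothetical.

```python
import numpy as np

class PreferenceModel:
    """Toy stand-in for a preference-aware assistance model."""

    def __init__(self, n_features, lr=0.1, stage_size=10, agreement=0.7):
        self.w = np.zeros(n_features)
        self.lr = lr
        self.stage_size = stage_size
        self.agreement = agreement
        self._stage = []          # buffered (features, label) feedback pairs

    def score(self, features):
        """Probability that this assistance action is preferred."""
        return 1.0 / (1.0 + np.exp(-float(np.dot(self.w, features))))

    def add_feedback(self, features, preferred):
        """preferred: True if the operator accepted this assistance action."""
        self._stage.append((np.asarray(features, dtype=float),
                            1.0 if preferred else 0.0))
        if len(self._stage) >= self.stage_size:
            labels = [y for _, y in self._stage]
            consistency = max(np.mean(labels), 1.0 - np.mean(labels))
            if consistency >= self.agreement:    # stage is consistent enough
                for x, y in self._stage:         # commit a batch gradient step
                    self.w += self.lr * (y - self.score(x)) * x
            self._stage = []                     # ambiguous stages are discarded
```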

    Model-Augmented Haptic Telemanipulation: Concept, Retrospective Overview, and Current Use Cases

    Certain telerobotic applications, including telerobotics in space, pose particularly demanding challenges to both technology and humans. Traditional bilateral telemanipulation approaches often cannot be used in such applications due to technical and physical limitations such as long and varying delays, packet loss, and limited bandwidth, as well as high reliability, precision, and task duration requirements. In order to close this gap, we research model-augmented haptic telemanipulation (MATM), which uses two kinds of models: a remote model that enables shared autonomous functionality of the teleoperated robot, and a local model that aims to generate assistive augmented haptic feedback for the human operator. Several technological methods that form the backbone of the MATM approach have already been successfully demonstrated in accomplished telerobotic space missions. On this basis, we have applied our approach in more recent research to applications in the fields of orbital robotics, telesurgery, caregiving, and telenavigation. In the course of this work, we have advanced specific aspects of the approach that were of particular importance for each respective application, especially shared autonomy and haptic augmentation. This overview paper discusses the MATM approach in detail, presents the latest research results of the various technologies encompassed within this approach, provides a retrospective of DLR's telerobotic space missions, demonstrates the broad application potential of MATM based on the aforementioned use cases, and outlines lessons learned and open challenges.
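    As a way to picture the two-model split, here is a minimal sketch of the local, operator-side step only, under assumptions not stated in the paper: the operator's commanded velocity is blended with a velocity suggested by the local task model, and the disagreement between them is rendered back to the operator as an assistive force. The blending rule, gains, and names are illustrative.

```python
import numpy as np

def matm_local_step(v_operator, v_model, alpha=0.5, k_haptic=20.0):
    """One illustrative control step on the operator (local) side.

    v_operator : velocity commanded through the haptic device
    v_model    : velocity suggested by the local task model
    alpha      : blending factor (0 = pure teleoperation, 1 = pure autonomy)
    k_haptic   : stiffness used to render the model's suggestion haptically
    """
    v_op = np.asarray(v_operator, dtype=float)
    v_md = np.asarray(v_model, dtype=float)
    v_command = (1.0 - alpha) * v_op + alpha * v_md
    # Assistive force pulls the operator's hand toward the model's suggestion.
    f_assist = k_haptic * (v_md - v_op)
    return v_command, f_assist
```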

    Intelligent Transduction for Response Synthesis in Telemanipulation


    Motion space reduction in a haptic model of violin and viola bowing

    A haptic model of bowing the violin and viola is presented that focuses on just the geometry of the contact point between the bow hair and the string, giving a simplified description centred on the aspects the performer thinks about consciously. The model allows artificial constraints to be imposed on the bow motion, giving the player physical feedback if one dimension of the contact point becomes incorrect while allowing full movement in the other dimensions.
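    A minimal sketch of this kind of per-dimension constraint, under assumptions not taken from the paper: the contact point is reduced to a few named scalar coordinates, and a corrective force is generated only along a coordinate that leaves its allowed range, leaving the other dimensions free. The coordinate names, ranges, and stiffness are illustrative.

```python
def constraint_force(state, limits, stiffness=30.0):
    """Corrective force per contact-point dimension.

    state  : dict of contact-point coordinates, e.g.
             {"bow_bridge_distance": 0.03, "tilt": 0.1, "skew": -0.02}
    limits : dict mapping the same keys to (low, high) allowed ranges
    Returns a dict of forces; zero for dimensions that are within range,
    so the player keeps full freedom of movement there.
    """
    forces = {}
    for key, value in state.items():
        low, high = limits[key]
        if value < low:
            forces[key] = stiffness * (low - value)    # push back up into range
        elif value > high:
            forces[key] = stiffness * (high - value)   # push back down into range
        else:
            forces[key] = 0.0
    return forces
```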

    Human-robot interaction for telemanipulation by small unmanned aerial systems

    This dissertation investigated the human-robot interaction (HRI) for the Mission Specialist role in a telemanipulating unmanned aerial system (UAS). The emergence of commercial unmanned aerial vehicle (UAV) platforms transformed the civil and environmental engineering industries through applications such as surveying, remote infrastructure inspection, and construction monitoring, which normally use UAVs for visual inspection only. Recent developments, however, suggest that performing physical interactions in dynamic environments will be important tasks for future UAS, particularly in applications such as environmental sampling and infrastructure testing. In all domains, the availability of a Mission Specialist to monitor the interaction and intervene when necessary is essential for successful deployments. Additionally, manual operation is the default mode for safety reasons; therefore, understanding Mission Specialist HRI is important for all small telemanipulating UAS in civil engineering, regardless of system autonomy and application. A 5-subject exploratory study and a 36-subject experimental study were conducted to evaluate variations of a dedicated, mobile Mission Specialist interface for aerial telemanipulation from a small UAV. The Shared Roles Model was used to model the UAS human-robot team, and the Mission Specialist and Pilot roles were informed by the current state of practice for manipulating UAVs. Three interface camera view designs were tested using a within-subjects design: an egocentric view (perspective from the manipulator), an exocentric view (perspective from the UAV), and a mixed egocentric-exocentric view. The experimental trials required Mission Specialist participants to complete a series of tasks with physical, visual, and verbal requirements. Results from these studies found that subjects who preferred the exocentric condition performed tasks 50% faster when using their preferred interface; however, interface preferences did not affect performance for participants who preferred the mixed condition. This led to a second finding: participants who preferred the exocentric condition were distracted by the egocentric view during the mixed condition, likely because of cognitive tunneling, and the data suggest tradeoffs between performance improvements and attentional costs when adding information in the form of multiple views to the Mission Specialist interface. Based on this empirical evaluation of multiple camera views, the exocentric view is recommended for use in a dedicated Mission Specialist telemanipulation interface. Contributions of this thesis include: i) conducting the first focused HRI study of aerial telemanipulation, ii) development of an evaluative model for telemanipulation performance, iii) creation of new recommendations for aerial telemanipulation interfacing, and iv) contribution of code, hardware designs, and system architectures to the open-source UAV community. The evaluative model provides a detailed framework, a complement to the abstraction of the Shared Roles Model, that can be used to measure the effects of changes in the system, environment, operators, and interfacing factors on performance. The practical contributions of this work will expedite the use of manipulating UAV technologies by scientists, researchers, and stakeholders, particularly those in civil engineering, who will directly benefit from improved manipulating UAV performance.

    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robot-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality-enhanced telerobotic platform for intuitive remote teleoperation applications in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. This research aims to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the implementation of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation. The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, a 3D-printed parallel gripper, and a customized mobile base, which is envisaged to be controlled by non-skilled operators who are physically separated from the robot working space through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques, such as 3D printing and laser cutting. Robot Operating System (ROS) and Unity 3D are employed in the development process to enable the embedded system to intuitively control the robotic system and ensure the implementation of immersive and natural human-robot interactive teleoperation. This research presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation. An imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, and it enables spatial velocity-based control of the robot tool center point (TCP). The proposed system allows precise manipulation of end-effector position and orientation while readily adjusting the corresponding velocity of maneuvering. A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is also presented. The proposed 3D immersive telerobotic schemes provide the users with depth perception through the merging of multiple 3D/2D views of the remote environment via the MR subspace. The mobile manipulator platform can be effectively controlled by non-skilled operators who are physically separated from the robot working space through a velocity-based imitative motion mapping approach. Finally, this thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot working space. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints that guide the operator's hand movements along a conical guidance region to effectively align the welding torch for welding and constrain the welding operation within a collision-free area.
    Overall, this thesis presents a complete telerobotic application technology that uses mixed reality and immersive elements to effectively translate the operator into the robot's space in an intuitive and natural manner. The results are thus a step forward in cost-effective and computationally efficient human-robot interaction research and technologies. The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts.
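    A minimal sketch of a velocity-centric mapping of this general kind, under illustrative assumptions: the operator's tracked hand twist in the MR subspace is deadbanded, scaled, and clamped into a spatial velocity command for the robot TCP. The scale factors, deadband, and limits below are not taken from the thesis.

```python
import numpy as np

def hand_to_tcp_velocity(hand_vel, lin_scale=0.5, ang_scale=0.8,
                         deadband=0.01, v_max=0.25, w_max=1.0):
    """Map a tracked hand twist to a robot TCP velocity command.

    hand_vel : 6-vector [vx, vy, vz, wx, wy, wz] measured in the MR subspace
    Returns a clamped 6-vector suitable for spatial velocity control of the TCP.
    """
    v = np.asarray(hand_vel, dtype=float).copy()
    v[np.abs(v) < deadband] = 0.0                 # ignore tracking jitter
    v[:3] *= lin_scale                            # scale linear motion
    v[3:] *= ang_scale                            # scale angular motion
    v[:3] = np.clip(v[:3], -v_max, v_max)         # limit linear speed
    v[3:] = np.clip(v[3:], -w_max, w_max)         # limit angular speed
    return v
```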

    Evaluation of Haptic and Visual Cues for Repulsive or Attractive Guidance in Nonholonomic Steering Tasks.

    Remote control of vehicles is a difficult task for operators. Support systems that present additional task information may assist operators, but their usefulness is expected to depend on several factors such as 1) the nature of conveyed information, 2) what modality it is conveyed through, and 3) the task difficulty. In an exploratory experiment, these three factors were manipulated to quantify their effects on operator behavior. Subjects (n = 15) used a haptic manipulator to steer a virtual nonholonomic vehicle through abstract environments, in which obstacles needed to be avoided. Both a simple support conveying near-future predictions of the trajectory of the vehicle and a more elaborate support that continuously suggests the path to be taken were designed (factor 1). These types of information were offered either with visual or haptic cues (factor 2). These four support systems were tested in four different abstracted environments with a decreasing amount of allowed variability in realized trajectories (factor 3). The results show improvements for the simple support only when this information was presented visually, but not when offered haptically. For the elaborate support, equally large improvements for both modalities were found. This suggests that the elaborate support is better: additional information is key in improving performance in nonholonomic steering tasks.
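    The two kinds of support map naturally onto repulsive and attractive guidance fields. As a purely illustrative sketch (none of this comes from the paper): a repulsive cue grows as the vehicle's predicted position nears an obstacle, while an attractive cue pulls the steering input toward the continuously suggested path.

```python
import numpy as np

def repulsive_cue(pred_pos, obstacle, influence=1.0, k=2.0):
    """Repulsive guidance force from one obstacle on the predicted position."""
    diff = np.asarray(pred_pos, dtype=float) - np.asarray(obstacle, dtype=float)
    dist = float(np.linalg.norm(diff))
    if dist >= influence or dist < 1e-9:
        return np.zeros_like(diff)          # outside the obstacle's influence radius
    # Grows from zero at the influence radius as the obstacle gets closer.
    return k * (1.0 / dist - 1.0 / influence) * (diff / dist)

def attractive_cue(steer_angle, suggested_angle, k=1.5):
    """Attractive guidance torque pulling the steering input toward the suggested path."""
    return k * (suggested_angle - steer_angle)
```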