
    Prevalence of haptic feedback in robot-mediated surgery: a systematic review of literature

    © 2017 Springer-Verlag. This is a post-peer-review, pre-copyedit version of an article published in Journal of Robotic Surgery. The final authenticated version is available online at: https://doi.org/10.1007/s11701-017-0763-4
    With the successful uptake and inclusion of robotic systems in minimally invasive surgery, and with the increasing application of robotic surgery (RS) in numerous surgical specialities worldwide, there is now a need to develop and enhance the technology further. One such improvement is the implementation and amalgamation of haptic feedback technology into RS, which will permit the operating surgeon on the console to receive haptic information on the type of tissue being operated on. The main advantage is to allow the operating surgeon to feel and control the amount of force applied to different tissues during surgery, thus minimising the risk of tissue damage due to both the direct and indirect effects of excessive tissue force or tension being applied during RS. We performed a two-rater systematic review to identify the latest developments and potential avenues for improving the application and implementation of haptic feedback technology to the operating surgeon on the console during RS. This review provides a summary of technological enhancements in RS, considering different stages of work, from proof of concept to cadaver tissue testing, surgery in animals, and finally real implementation in surgical practice. We identify that, at the time of this review, while there is unanimous agreement regarding the need for haptic and tactile feedback, there are no solutions or products available that address this need. There is scope and need for new developments in haptic augmentation for robot-mediated surgery, with the aim of further improving patient care and robotic surgical technology.
    Peer reviewed

    Planning hand-arm grasping motions with human-like appearance

    © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    Finalist for the IROS Best Application Paper Award at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
    This paper addresses the problem of obtaining human-like motions on hand-arm robotic systems performing pick-and-place actions. The focus is set on the coordinated movements of the robotic arm and the anthropomorphic mechanical hand with which the arm is equipped. For this, human movements performing different grasps are captured and mapped to the robot in order to compute the human hand synergies. These synergies are used to reduce the complexity of the planning phase by reducing the dimension of the search space. In addition, the paper proposes a sampling-based planner, which guides the motion planning following the synergies. The introduced approach is tested in an application example and thoroughly compared with other state-of-the-art planning algorithms, obtaining better results.
    Peer reviewed. Award-winning. Postprint (author's final draft)
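    The synergy-based dimensionality reduction summarized above can be illustrated with a minimal sketch: principal component analysis over recorded joint angles yields a low-dimensional "synergy" basis in which the planner searches. This is an assumption-laden illustration (the data layout, joint count, and function names are hypothetical), not the paper's actual pipeline.

```python
import numpy as np

def compute_synergies(grasp_data, n_synergies=2):
    """PCA over recorded hand joint angles.

    grasp_data: (n_samples, n_joints) array of joint angles from
    captured human grasps (hypothetical data layout).
    Returns the mean posture and the first n_synergies principal
    directions, which span the reduced planning space.
    """
    mean = grasp_data.mean(axis=0)
    centered = grasp_data - mean
    # SVD of the centered data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:n_synergies]

def synergy_to_joints(mean, synergies, z):
    """Map a low-dimensional synergy coordinate z back to joint angles."""
    return mean + z @ synergies

# Example: 50 synthetic grasps of a 16-joint hand, planned in a 2-D synergy space.
rng = np.random.default_rng(0)
grasps = rng.normal(size=(50, 16))
mean, S = compute_synergies(grasps, n_synergies=2)
q = synergy_to_joints(mean, S, np.array([0.3, -0.1]))
print(q.shape)  # (16,)
```

    A sampling-based planner would then draw samples of z (2-D here) instead of full joint configurations (16-D), which is what makes the reduced search space tractable.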

    Vision-Based Autonomous Control in Robotic Surgery

    Robotic surgery has completely changed surgical procedures. Enhanced dexterity, ergonomics, motion scaling, and tremor filtering are well-known advantages introduced with respect to classical laparoscopy. In the past decade, robotics has played a fundamental role in Minimally Invasive Surgery (MIS), in which the da Vinci robotic system (Intuitive Surgical Inc., Sunnyvale, CA) is the most widely used system for robot-assisted laparoscopic procedures. Robots also have great potential in microsurgical applications, where human limits are crucial and sub-millimetric surgical gestures could benefit enormously from motion scaling and tremor compensation. However, surgical robots still lack advanced assistive control methods that could notably support the surgeon's activity and perform surgical tasks autonomously for a high quality of intervention. In this scenario, images are the main feedback the surgeon can use to operate correctly in the surgical site. Therefore, in view of the increasing autonomy in surgical robotics, vision-based techniques play an important role and can be obtained by extending computer vision algorithms to surgical scenarios. Moreover, many surgical tasks could benefit from the application of advanced control techniques, allowing the surgeon to work under less stressful conditions and to perform surgical procedures with more accuracy and safety. The thesis starts from these topics, providing surgical robots with the ability to perform complex tasks and helping the surgeon to skillfully manipulate the robotic system to accomplish the above requirements. An increase in safety and a reduction in mental workload are achieved through the introduction of active constraints that can prevent the surgical tool from crossing a forbidden region and, similarly, generate constrained motion to guide the surgeon along a specific path or to accomplish autonomous robotic tasks.
    This leads to the development of a vision-based method for robot-aided dissection procedures, allowing the control algorithm to autonomously adapt to environmental changes during the surgical intervention using stereo image processing. Computer vision is exploited to define a surgical-tool collision avoidance method that uses Forbidden Region Virtual Fixtures by rendering a repulsive force to the surgeon. Advanced control techniques based on an optimization approach are developed, allowing multiple-task execution with task definitions encoded through Control Barrier Functions (CBFs) and enhancing the haptic-guided teleoperation system during suturing procedures. The proposed methods are tested on different robotic platforms, including the da Vinci Research Kit (dVRK) and a new microsurgical robotic platform. Finally, the integration of new sensors and instruments into surgical robots is considered, including a multi-functional tool for dexterous tissue manipulation and different visual sensing technologies.
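    The Forbidden Region Virtual Fixture idea mentioned in the abstract can be sketched in a few lines: when the tool tip enters an influence band around a forbidden region, a virtual spring pushes it back out along the outward normal. The spherical region, stiffness, and margin below are illustrative assumptions, not the thesis' actual geometry or gains.

```python
import numpy as np

def frvf_force(tip, center, radius, k=200.0, margin=0.02):
    """Repulsive force for a spherical forbidden-region virtual fixture.

    tip, center: 3-D positions (m); radius: forbidden-sphere radius (m);
    k: virtual spring stiffness (N/m); margin: influence band outside
    the sphere. All numeric values are illustrative.
    """
    offset = tip - center
    dist = np.linalg.norm(offset)
    penetration = (radius + margin) - dist
    if penetration <= 0 or dist == 0:
        return np.zeros(3)  # outside the influence band: no force rendered
    # Spring force along the outward normal, growing with penetration depth.
    return k * penetration * offset / dist

f = frvf_force(np.array([0.0, 0.0, 0.105]), np.zeros(3), 0.1)
print(f)  # pushes the tool along +z, away from the sphere
```

    In a haptic teleoperation loop this force would be rendered to the master device at each servo cycle, so the surgeon feels the forbidden region as a stiff wall before the tool can cross it.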

    Multi-point multi-hand haptic teleoperation of a mobile robot


    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotics systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interactions and collaborations; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotics research, offering insights into the current state of the art and prospects for improvement.

    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robotic-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality-enhanced telerobotic platform for intuitive remote teleoperation applications in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. This research aims to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the implementation of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation. The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, a 3D-printed parallel gripper, and a customized mobile base, and is envisaged to be controlled by non-skilled operators who are physically separated from the robot working space through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques, such as 3D printing and laser cutting. Robot Operating System (ROS) and Unity 3D are employed in the development process to enable the embedded system to intuitively control the robotic system and to ensure immersive and natural human-robot interactive teleoperation. This research presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation.
    An imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, and enables spatial velocity-based control of the robot tool center point (TCP). The proposed system allows precise manipulation of end-effector position and orientation and ready adjustment of the maneuvering velocity. A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is presented. The proposed 3D immersive telerobotic schemes provide the users with depth perception through the merging of multiple 3D/2D views of the remote environment via the MR subspace. The mobile manipulator platform can be effectively controlled by non-skilled operators who are physically separated from the robot working space through a velocity-based imitative motion mapping approach. Finally, this thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot working space. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints to guide the operator's hand movements following conical guidance, effectively aligning the welding torch for welding and constraining the welding operation within a collision-free area. Overall, this thesis presents a complete telerobotic application that uses mixed reality and immersive elements to effectively translate the operator into the robot's space in an intuitive and natural manner. The results are thus a step forward in cost-effective and computationally efficient human-robot interaction research and technologies.
    The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts.
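    The velocity-centric motion mapping described in this abstract can be sketched as follows: the commanded TCP velocity is proportional to the operator's hand displacement from a clutch-in reference point, saturated for safety. The gain, saturation limit, and function names are illustrative assumptions, not the thesis' actual tuning.

```python
import numpy as np

def map_hand_to_tcp_velocity(hand_pos, clutch_pos, gain=0.5, v_max=0.1):
    """Imitation-based, velocity-centric motion mapping (sketch).

    The TCP velocity is commanded proportionally to the operator's hand
    displacement from a clutch-in reference point, then saturated at
    v_max. gain (1/s) and v_max (m/s) are illustrative values.
    """
    v = gain * (hand_pos - clutch_pos)
    speed = np.linalg.norm(v)
    if speed > v_max:
        v *= v_max / speed  # saturate to keep the robot motion safe
    return v

# Hand held 10 cm from the clutch point commands a gentle TCP velocity along x.
v = map_hand_to_tcp_velocity(np.array([0.1, 0.0, 0.0]), np.zeros(3))
print(v)  # small velocity along +x, below the saturation limit
```

    Compared with a position-to-position mapping, this scheme lets a small, comfortable hand offset sustain a constant TCP velocity, which is why it suits large robot workspaces reached from a small MR interaction subspace.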

    Multi-robot cooperative platform: a task-oriented teleoperation paradigm

    This thesis proposes the study and development of a teleoperation system based on multi-robot cooperation under a task-oriented teleoperation paradigm: the Multi-Robot Cooperative Paradigm, MRCP. In standard teleoperation, the operator uses master devices to control the remote slave robot arms, which reproduce the desired movements and perform the task. With the developed work, the operator can virtually manipulate an object, and MRCP automatically generates the arm commands needed to perform the task; the operator does not have to resolve situations arising from possible restrictions of the slave arms. The research is therefore aimed at improving the accuracy of teleoperation tasks in complex environments, particularly in the field of robot-assisted minimally invasive surgery, which demands patient safety and whose workspace imposes many restrictions on teleoperation. MRCP can be defined as a platform composed of several robots that cooperate automatically to perform a teleoperated task, creating a robotic system with increased capacity (workspace volume, accessibility, dexterity, etc.). The cooperation is based on transferring the task between robots when necessary to enable smooth task execution. The MRCP control evaluates the suitability of each robot to continue with the ongoing task and the optimal time to execute a task transfer between the currently selected robot and the best candidate to continue with the task. From the operator's point of view, MRCP provides an interface that enables teleoperation through the task-oriented paradigm: operator orders are translated into task actions instead of robot orders. This thesis is structured as follows: the first part reviews current solutions for the teleoperation of complex tasks and compares them with those proposed in this research.
    The second part of the thesis presents and reviews in depth the different evaluation criteria used to determine the suitability of each robot to continue with the execution of a task, considering the configuration of the robots and emphasizing the criteria of dexterity and manipulability. The study reviews the control algorithms required to enable task-oriented telemanipulation; the proposed teleoperation paradigm is transparent to the operator. The thesis then presents and analyses several experimental results using MRCP in the field of minimally invasive surgery. These experiments study the effectiveness of MRCP in various tasks requiring the cooperation of two hands. A representative task is used: a suture performed with the minimally invasive surgery technique. The analysis is done in terms of execution time, economy of movement, quality, and patient safety (potential damage produced by undesired interaction between the tools and the vital tissues of the patient). The final part of the thesis proposes the implementation of different virtual aids and restrictions (guided teleoperation based on haptic, visual, and audio feedback; protection of restricted workspace regions; etc.) using the task-oriented teleoperation paradigm. A framework is defined for implementing and applying a basic set of virtual aids and constraints within a virtual simulator for laparoscopic abdominal surgery. The set of experiments has validated the developed work. The study revealed the influence of virtual aids on the learning process of laparoscopic techniques and demonstrated an improvement in learning curves, which paves the way for its adoption as a methodology for training new surgeons.
    Postprint (published version)
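    The dexterity/manipulability criterion for deciding which robot should continue a task can be sketched with the classical Yoshikawa manipulability measure, w = sqrt(det(J Jᵀ)): the robot whose current configuration is farthest from singularity is preferred. This is a minimal illustration with made-up Jacobians, not the thesis' full set of transfer criteria.

```python
import numpy as np

def manipulability(J):
    """Yoshikawa manipulability measure w = sqrt(det(J @ J.T)).

    J is the task Jacobian at the robot's current configuration;
    w approaches 0 near a singularity.
    """
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

def select_robot(jacobians):
    """Pick the robot whose current configuration is most dexterous.

    jacobians: dict mapping robot name to its task Jacobian
    (the names and matrices below are illustrative).
    """
    return max(jacobians, key=lambda name: manipulability(jacobians[name]))

J_a = np.array([[1.0, 0.0, 0.2], [0.0, 1.0, 0.1]])  # well-conditioned
J_b = np.array([[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]])  # near-singular
print(select_robot({"robot_a": J_a, "robot_b": J_b}))  # robot_a
```

    In an MRCP-style controller, such a score would be evaluated continuously for the active robot and the candidates, and a task transfer triggered when the candidate's score sufficiently exceeds the active robot's.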

    Evaluation of haptic guidance virtual fixtures and 3D visualization methods in telemanipulation—a user study

    © 2019, The Author(s). This work presents a user-study evaluation of various visual and haptic feedback modes on a real telemanipulation platform. Of particular interest is the potential for haptic guidance virtual fixtures and 3D-mapping techniques to enhance efficiency and awareness in a simple teleoperated valve-turn task. An RGB-Depth camera is used to gather real-time color and geometric data of the remote scene, and the operator is presented with either a monocular color video stream, a 3D-mapping voxel representation of the remote scene, or the ability to place a haptic guidance virtual fixture to help complete the telemanipulation task. The efficacy of the feedback modes is then explored experimentally through a user study, and the different modes are compared on the basis of objective and subjective metrics. Despite the simplicity of the task and the number of evaluation metrics, results show that the haptic virtual fixture resulted in significantly better collision avoidance than 3D visualization alone. Anticipated performance enhancements were also observed moving from 2D to 3D visualization. The remaining comparisons lead to exploratory inferences that inform future directions for focused and statistically significant studies.
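    The 3D-mapping voxel representation used as one of the feedback modes can be sketched by quantizing an RGB-D point cloud onto a regular grid and keeping one occupied-voxel center per cell. The voxel size and function name below are illustrative assumptions, not the study's actual mapping pipeline.

```python
import numpy as np

def voxelize(points, voxel_size=0.05):
    """Collapse a point cloud into occupied voxel centers (sketch).

    points: (n, 3) array of 3-D points from an RGB-D camera (m);
    voxel_size: grid resolution (m), an illustrative value.
    Returns the centers of the unique occupied voxels.
    """
    # Quantize each point to an integer grid cell, then deduplicate cells.
    keys = np.unique(np.floor(points / voxel_size).astype(int), axis=0)
    return (keys + 0.5) * voxel_size  # cell index -> voxel center

# Two nearby points fall in the same voxel; the third occupies another.
pts = np.array([[0.0, 0.0, 0.0], [0.01, 0.0, 0.0], [0.2, 0.0, 0.0]])
centers = voxelize(pts)
print(centers.shape)  # (2, 3)
```

    Rendering such voxel centers as cubes gives the operator a coarse but depth-rich view of the remote scene, which is the kind of 3D representation the study compares against the monocular video stream.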