
    Prevalence of haptic feedback in robot-mediated surgery: a systematic review of literature

    © 2017 Springer-Verlag. This is a post-peer-review, pre-copyedit version of an article published in Journal of Robotic Surgery. The final authenticated version is available online at: https://doi.org/10.1007/s11701-017-0763-4

    With the successful uptake of robotic systems in minimally invasive surgery and the increasing application of robotic surgery (RS) across surgical specialities worldwide, there is now a need to develop and enhance the technology further. One such improvement is the integration of haptic feedback technology into RS, which would allow the operating surgeon at the console to receive haptic information on the type of tissue being operated on. The main advantage is that the surgeon can feel and control the amount of force applied to different tissues during surgery, thus minimising the risk of tissue damage due to both the direct and indirect effects of excessive force or tension applied during RS. We performed a two-rater systematic review to identify the latest developments and potential avenues for improvement in delivering haptic feedback to the console surgeon during RS. This review summarises technological enhancements in RS at different stages of development, from proof of concept to cadaver tissue testing, surgery in animals, and finally implementation in surgical practice. We find that, at the time of this review, although there is unanimous agreement on the need for haptic and tactile feedback, no solutions or products are available that address this need. There is scope and need for new developments in haptic augmentation for robot-mediated surgery, with the aim of further improving patient care and robotic surgical technology. Peer reviewed

    sCAM: An Untethered Insertable Laparoscopic Surgical Camera Robot

    Fully insertable robotic imaging devices represent a promising future for minimally invasive laparoscopic vision. Emerging research efforts in this field have resulted in several proof-of-concept prototypes. One common drawback of these designs derives from their clumsy tethering wires, which not only cause operational interference but also reduce camera mobility. Meanwhile, these insertable laparoscopic cameras are manipulated without any pose information or haptic feedback, which results in open-loop motion control and raises concerns about surgical safety caused by inappropriate use of force.

    This dissertation proposes, implements, and validates an untethered insertable laparoscopic surgical camera (sCAM) robot. Contributions presented in this work include: (1) feasibility of an untethered fully insertable laparoscopic surgical camera, (2) camera-tissue interaction characterization and force sensing, (3) pose estimation, visualization, and feedback with the sCAM, and (4) robotic-assisted closed-loop laparoscopic camera control. Borrowing the principle of spherical motors, camera anchoring and actuation are achieved through transabdominal magnetic coupling in a stator-rotor manner. To avoid tethering wires, laparoscopic vision and control communication are realized over dedicated wireless links using onboard power. A non-invasive indirect approach is proposed to provide real-time measurement of the camera-tissue interaction force, which, assisted by camera-tissue interaction modeling, predicts the stress distribution over the tissue surface. Meanwhile, the camera pose is remotely estimated and visualized using complementary filtering based on onboard motion sensing. Facilitated by the force measurement and pose estimation, robotic-assisted closed-loop control has been realized in a double-loop control scheme with shared autonomy between surgeons and the robotic controller.

    The sCAM has brought robotic laparoscopic imaging one step further toward less invasiveness and more dexterity. Initial ex vivo test results have verified the functions of the implemented sCAM design and the proposed force measurement and pose estimation approaches, demonstrating the technical feasibility of a tetherless insertable laparoscopic camera. Robotic-assisted control has shown its potential to free surgeons from low-level, intricate camera-manipulation workload and to improve precision and intuitiveness in laparoscopic imaging.
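The complementary filtering mentioned for pose estimation can be sketched in one dimension as follows. This is a minimal illustrative example, not the sCAM implementation; the gain value and sensor inputs are assumptions:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyroscope rate (low noise, but drift-prone when integrated)
    with an accelerometer-derived angle (noisy, but drift-free).
    alpha close to 1 trusts the integrated gyro in the short term."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Example: a stationary sensor whose gyro reports a small constant bias.
angle = 0.0
for _ in range(100):
    angle = complementary_filter(angle, gyro_rate=0.01, accel_angle=0.0, dt=0.01)
# The accelerometer term bounds the estimate instead of letting it drift.
```

The single gain `alpha` sets the crossover between trusting integrated gyro motion (high frequency) and the absolute accelerometer reference (low frequency), which is why this filter is popular for lightweight onboard sensing.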

    Autonomous tissue retraction with a biomechanically informed logic based framework

    Autonomy in robot-assisted surgery is essential to reduce surgeons' cognitive load and eventually improve the overall surgical outcome. A key requirement for autonomy in a safety-critical scenario such as surgery lies in the generation of interpretable plans that rely on expert knowledge. Moreover, the Autonomous Robotic Surgical System (ARSS) must be able to reason about the dynamic and unpredictable anatomical environment, and quickly adapt the surgical plan in case of unexpected situations. In this paper, we present a modular Framework for Robot-Assisted Surgery (FRAS) in deformable anatomical environments. Our framework integrates a logic module for task-level interpretable reasoning, a biomechanical simulation that complements data from real sensors, and a situation awareness module for context interpretation. The framework's performance is evaluated on simulated soft-tissue retraction, a common surgical task to remove the tissue hiding a region of interest. Results show that the framework has the adaptability required to successfully accomplish the task, handling dynamic environmental conditions and possible failures, while guaranteeing the computational efficiency required in a real surgical scenario. The framework is made publicly available.
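A task-level logic module of the kind described can be sketched as a minimal condition-action planner. The predicate and action names below are illustrative assumptions, not the published FRAS rules:

```python
def plan_retraction(state):
    """Return the next action from simple condition-action rules.
    state: dict of boolean facts about the (simulated) anatomy.
    Failure recovery is checked first, mirroring the idea of
    adapting the plan when the environment misbehaves."""
    if state["excessive_force"]:
        return "release_and_regrasp"   # recover from a predicted failure
    if not state["tissue_grasped"]:
        return "grasp_tissue"
    if not state["roi_visible"]:
        return "retract_tissue"        # region of interest still hidden
    return "done"
```

Because each rule is an explicit condition over named facts, the resulting plan is interpretable: one can read off exactly why a given action was selected.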

    Smart Camera Robotic Assistant for Laparoscopic Surgery

    In the last decades, laparoscopic surgery has become a daily practice in operating rooms worldwide, and its evolution is tending towards less invasive techniques. In this scenario, robotics has found a wide field of application, from slave robotic systems that replicate the movements of the surgeon to autonomous robots able to assist the surgeon in certain maneuvers or to perform autonomous surgical tasks. However, these systems require the direct supervision of the surgeon, and their capacity for making decisions and adapting to dynamic environments is very limited. This PhD dissertation presents the design and implementation of a smart camera robotic assistant to collaborate with the surgeon in a real surgical environment. First, it presents the design of a novel camera robotic assistant able to augment the capabilities of current vision systems. This robotic assistant is based on an intra-abdominal camera robot, which is completely inserted into the patient's abdomen and can be freely moved along the abdominal cavity by means of magnetic interaction with an external magnet. To give the camera autonomy of motion, the external magnet is coupled to the end effector of a robotic arm, which controls the shift of the camera robot along the abdominal wall. This way, the proposed robotic assistant has six degrees of freedom, which provide a wider field of view than traditional vision systems as well as different perspectives of the operating area. On the other hand, the intelligence of the system is based on a cognitive architecture specially designed for autonomous collaboration with the surgeon in real surgical environments. The proposed architecture simulates the behavior of a human assistant, with a natural and intuitive human-robot interface for communication between the robot and the surgeon. The cognitive architecture also includes learning mechanisms to adapt the behavior of the robot to the different ways of working of surgeons, and to improve the robot's behavior through experience, much as a human assistant would. The theoretical concepts of this dissertation have been validated both through in-vitro experimentation in the medical robotics labs of the University of Malaga and through in-vivo experimentation with pigs at the IACE Center (Instituto Andaluz de Cirugía Experimental), performed by expert surgeons.

    Robotic manipulators for single access surgery

    This thesis explores the development of cooperative robotic manipulators for enhancing surgical precision and patient outcomes in single-access surgery and, specifically, Transanal Endoscopic Microsurgery (TEM). During these procedures, surgeons manipulate a heavy set of instruments via a mechanical clamp inserted into the patient's body through a surgical port, resulting in imprecise movements, increased patient risks, and increased operating time. Therefore, an articulated robotic manipulator with passive joints is initially introduced, featuring built-in position and force sensors in each joint and electronic joint brakes for instant lock/release capability. The articulated manipulator concept is further improved with motorised joints, evolving into an active tool holder. The joints allow the incorporation of advanced robotic capabilities such as ultra-lightweight gravity compensation and hands-on kinematic reconfiguration, which can optimise the placement of the tool holder in the operating theatre. Owing to the enhanced sensing capabilities, the application of the active robotic manipulator was further explored in conjunction with advanced image-guidance approaches such as endomicroscopy. Recent advances in probe-based optical imaging, such as confocal endomicroscopy, are making inroads into clinical use. However, the challenging manipulation of imaging probes hinders their practical adoption. Therefore, a combination of the fully cooperative robotic manipulator with a high-speed scanning endomicroscopy instrument is presented, simplifying the incorporation of optical biopsy techniques into routine surgical workflows. Finally, another embodiment of a cooperative robotic manipulator is presented as an input interface to control a highly articulated robotic instrument for TEM. This master-slave interface alleviates the drawbacks of traditional master-slave devices, e.g., the clutching mechanics needed to compensate for the mismatch between slave and master workspaces, and the lack of intuitive manipulation feedback (e.g., joint limits) to the user. To address those drawbacks, a joint-space robotic manipulator is proposed that emulates the kinematic structure of the flexible robotic instrument under control. Open Access
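The gravity-compensation capability mentioned above can be sketched for a single joint. Link mass, centre-of-mass distance, and the angle convention below are illustrative assumptions, not the thesis design:

```python
import math

def gravity_compensation_torque(mass_kg, com_dist_m, theta_rad, g=9.81):
    """Torque the motor must apply to cancel gravity on one link whose
    centre of mass sits com_dist_m from the joint axis; theta is the
    link angle measured from the horizontal."""
    return mass_kg * g * com_dist_m * math.cos(theta_rad)

# Link held horizontal (theta = 0) needs the full holding torque;
# a link hanging straight down (theta = -90 deg) needs essentially none.
tau_horizontal = gravity_compensation_torque(0.5, 0.2, 0.0)
tau_vertical = gravity_compensation_torque(0.5, 0.2, -math.pi / 2)
```

Applying this torque continuously makes the manipulator feel weightless to the operator, which is what enables the hands-on repositioning described in the text.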

    Gesture Recognition in Robotic Surgery: a Review

    OBJECTIVE: Surgical activity recognition is a fundamental step in computer-assisted interventions. This paper reviews the state of the art in methods for automatic recognition of fine-grained gestures in robotic surgery, focusing on recent data-driven approaches, and outlines the open questions and future research directions. METHODS: An article search was performed on 5 bibliographic databases with combinations of the following search terms: robotic, robot-assisted, JIGSAWS, surgery, surgical, gesture, fine-grained, surgeme, action, trajectory, segmentation, recognition, parsing. Selected articles were classified based on the level of supervision required for training and divided into different groups representing major frameworks for time-series analysis and data modelling. RESULTS: A total of 52 articles were reviewed. The research field is showing rapid expansion, with the majority of articles published in the last 4 years. Deep-learning-based temporal models with discriminative feature extraction and multi-modal data integration have demonstrated promising results on small surgical datasets. Currently, unsupervised methods perform significantly less well than supervised approaches. CONCLUSION: The development of large and diverse open-source datasets of annotated demonstrations is essential for the development and validation of robust solutions for surgical gesture recognition. While new strategies for discriminative feature extraction and knowledge transfer, or unsupervised and semi-supervised approaches, can mitigate the need for data and labels, they have not yet been demonstrated to achieve comparable performance. Important future research directions include detection and forecasting of gesture-specific errors and anomalies. SIGNIFICANCE: This paper is a comprehensive and structured analysis of surgical gesture recognition methods, aiming to summarize the status of this rapidly evolving field.
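Gesture-recognition pipelines of the kind surveyed typically pair a framewise classifier with a temporal model or smoothing step. A minimal sketch of the smoothing stage follows; the gesture labels and window size are illustrative assumptions:

```python
from collections import Counter

def smooth_gestures(frame_labels, window=5):
    """Majority-vote filter over a sliding window: removes spurious
    single-frame label flips in a predicted gesture sequence, a common
    post-processing step for framewise surgical gesture classifiers."""
    half = window // 2
    out = []
    for i in range(len(frame_labels)):
        segment = frame_labels[max(0, i - half): i + half + 1]
        out.append(Counter(segment).most_common(1)[0][0])
    return out

noisy = ["grasp"] * 4 + ["cut"] + ["grasp"] * 4   # one-frame glitch
smoothed = smooth_gestures(noisy)                  # glitch voted away
```

Temporal deep models (e.g., recurrent or temporal convolutional networks) learn this kind of temporal consistency directly rather than applying it as a fixed filter, which is one reason they outperform purely framewise methods on the datasets discussed.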

    Optically Sensorized Tendons for Articulate Robotic Needles

    This study proposes an optically sensorized tendon composed of a 195 µm diameter, high-strength, polarization-maintaining (PM) fiber Bragg grating (FBG) optical fiber, which resolves the cross-sensitivity issue of conventional FBGs. The bare fiber tendon is locally reinforced with a 250 µm diameter Kevlar bundle, enhancing the level of force transmission and enabling high-curvature tendon routing. The performance of the sensorized tendons is explored in terms of strength (higher than 13 N for the bare PM-FBG fiber tendon, up to 40 N for the Kevlar-reinforced tendon under tensile loading), strain sensitivity (0.127 percent strain per newton for the bare PM-FBG fiber tendon, 0.04 percent strain per newton for the Kevlar-reinforced tendon), temperature stability, and friction-independent sensing behavior. Subsequently, the tendon is instrumented within an 18 Ga articulate NiTi cannula and evaluated under static and dynamic loading conditions, and within phantoms of varying stiffness for tissue-stiffness estimation. The results from this series of experiments serve to validate the effectiveness of the proposed tendon as a bi-modal sensing and actuation component for robot-assisted minimally invasive surgical instruments.
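Given the reported strain sensitivities, tendon tension can be recovered from measured strain by inverting the linear calibration. The sketch below assumes operation in the linear regime and omits the FBG wavelength-to-strain conversion step:

```python
def tension_from_strain(strain_percent, sensitivity_percent_per_N):
    """Invert a linear strain-force calibration: F = strain / sensitivity.
    Sensitivities taken from the abstract above; the strain values
    are hypothetical measurements."""
    return strain_percent / sensitivity_percent_per_N

# Bare PM-FBG tendon: 0.127 % strain per newton
f_bare = tension_from_strain(1.27, 0.127)     # -> about 10 N
# Kevlar-reinforced tendon: 0.04 % strain per newton
f_kevlar = tension_from_strain(0.40, 0.04)    # -> about 10 N
```

Note the trade-off visible in the numbers: the Kevlar-reinforced tendon carries roughly three times the load but yields about a third of the strain signal per newton.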

    Automatic extraction of robotic surgery actions from text and kinematic data

    The latest generation of robotic systems is becoming increasingly autonomous due to technological advancements and artificial intelligence. The medical field, particularly surgery, is also interested in these technologies, because automation would benefit both surgeons and patients. While the research community is active in this direction, commercial surgical robots do not currently operate autonomously due to the risks involved in dealing with human patients: it is still considered safer to rely on the human surgeon's intelligence for decision-making. This means that robots must possess human-like intelligence, including various reasoning capabilities and extensive knowledge, to become more autonomous and credible. Indeed, as current research in the field demonstrates, one of the most critical aspects in developing autonomous systems is the acquisition and management of knowledge. In particular, a surgical robot must base its actions on solid procedural surgical knowledge to operate autonomously, safely, and expertly. This thesis investigates different possibilities for automatically extracting and managing knowledge from text and kinematic data. In the first part, we investigated the possibility of extracting procedural surgical knowledge from real intervention descriptions available in textbooks and academic papers on the robotic-surgical domain, by exploiting Transformer-based pre-trained language models. In particular, we released SurgicBERTa, a RoBERTa-based pre-trained language model for surgical-literature understanding. It has been used to detect procedural sentences in books and to extract procedural elements from them. Then, with some use cases, we explored the possibilities of translating written instructions into logical rules usable for robotic planning. Since not all the knowledge required for automating a procedure is written in texts, we introduce the concept of surgical commonsense, showing how it relates to different autonomy levels. In the second part of the thesis, we analyzed surgical procedures at a lower level of granularity, showing how each surgical gesture is associated with a given combination of kinematic data.
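Procedural-sentence detection of the kind SurgicBERTa performs requires the pretrained model itself; as a self-contained stand-in, the task can be sketched with a crude keyword baseline (the verb list is an illustrative assumption, and a fine-tuned classifier replaces this in the actual work):

```python
# Illustrative list of imperative surgical action verbs; the real system
# learns these cues from data rather than using a hand-written lexicon.
PROCEDURAL_VERBS = {"incise", "dissect", "retract", "clip", "suture",
                    "cauterize", "insert", "grasp"}

def is_procedural(sentence):
    """Flag a sentence as procedural if it contains a surgical action
    verb. SurgicBERTa replaces this heuristic with a fine-tuned
    Transformer classifier over the full sentence."""
    tokens = {t.strip(".,;:").lower() for t in sentence.split()}
    return bool(tokens & PROCEDURAL_VERBS)
```

The baseline illustrates the input/output contract of the task: sentence in, procedural/non-procedural label out, which is the first step before extracting the procedural elements (actions, instruments, anatomical targets) mentioned in the text.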