
    TIMS: A Tactile Internet-Based Micromanipulation System with Haptic Guidance for Surgical Training

    Microsurgery involves the dexterous manipulation of delicate tissue or fragile structures, such as small blood vessels and nerves, under a microscope. To address the imprecision of the human hand, robotic systems have been developed to assist surgeons in performing complex microsurgical tasks with greater precision and safety. However, the steep learning curve of robot-assisted microsurgery (RAMS) and the shortage of well-trained surgeons pose significant challenges to its widespread adoption. The development of a versatile training system for RAMS is therefore necessary and can bring tangible benefits to both surgeons and patients. In this paper, we present a Tactile Internet-Based Micromanipulation System (TIMS) built on a ROS-Django web-based architecture for microsurgical training. The system provides tactile feedback to operators via a wearable tactile display (WTD), while real-time data is transmitted over the internet through the ROS-Django framework. In addition, TIMS integrates haptic guidance to 'guide' trainees along a desired trajectory provided by expert surgeons; learning from demonstration based on Gaussian Process Regression (GPR) was used to generate this trajectory. User studies were conducted to verify the effectiveness of the proposed TIMS, comparing users' performance with and without tactile feedback and/or haptic guidance. Comment: 8 pages, 7 figures. For more details of this project, please view our website: https://sites.google.com/view/viewtims/hom
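The abstract above does not give the authors' GPR implementation; as a rough illustration of how GPR can average noisy expert demonstrations into a single reference trajectory for haptic guidance, the following is a minimal numpy sketch of the closed-form GP posterior mean (kernel choice, lengthscale, and the 1-D sine trajectory are all assumptions, not details from the paper):

```python
import numpy as np

def rbf(a, b, length=0.2, var=1.0):
    # squared-exponential kernel between two sets of 1-D inputs
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gpr_mean(t_train, y_train, t_query, noise=1e-3):
    # closed-form GP posterior mean: K_*^T (K + sigma^2 I)^{-1} y
    K = rbf(t_train, t_train) + noise * np.eye(len(t_train))
    Ks = rbf(t_train, t_query)
    return Ks.T @ np.linalg.solve(K, y_train)

# three noisy "expert demonstrations" of a toy 1-D trajectory y = sin(pi t)
rng = np.random.default_rng(0)
t = np.tile(np.linspace(0, 1, 20), 3)          # demos share the same time base
y = np.sin(np.pi * t) + 0.01 * rng.standard_normal(t.size)

t_query = np.linspace(0, 1, 50)
mean_traj = gpr_mean(t, y, t_query)            # smoothed reference trajectory
```

The posterior mean blends the three demonstrations into one smooth trajectory that a guidance controller could then track.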

    A Deep Learning Approach to Classify Surgical Skill in Microsurgery Using Force Data from a Novel Sensorised Surgical Glove

    Microsurgery serves as the foundation for numerous operative procedures. Given its highly technical nature, the assessment of surgical skill becomes an essential component of clinical practice and microsurgery education. The interaction forces between surgical tools and tissues play a pivotal role in surgical success, making them a valuable indicator of surgical skill. In this study, we employ six distinct deep learning architectures (LSTM, GRU, Bi-LSTM, CLDNN, TCN, Transformer) specifically designed for the classification of surgical skill levels. We use force data obtained from a novel sensorised surgical glove utilised during a microsurgical task. To enhance the performance of our models, we propose six data augmentation techniques. The proposed frameworks are accompanied by a comprehensive analysis, both quantitative and qualitative, including experiments conducted with two cross-validation schemes and interpretable visualizations of the network's decision-making process. Our experimental results show that CLDNN and TCN are the top-performing models, achieving impressive accuracy rates of 96.16% and 97.45%, respectively. This not only underscores the effectiveness of our proposed architectures, but also serves as compelling evidence that the force data obtained through the sensorised surgical glove contains valuable information regarding surgical skill.
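The six augmentation techniques the authors propose are not listed in the abstract. As a hedged sketch of the general idea, the snippet below applies three common time-series augmentations (jitter, magnitude scaling, and time warping) to a toy multi-axis force recording; the function names, parameters, and signal shapes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def jitter(x, sigma=0.01):
    # add small Gaussian noise to every force sample
    return x + rng.normal(0.0, sigma, x.shape)

def magnitude_scale(x, sigma=0.1):
    # scale each force channel by a random factor close to 1
    return x * rng.normal(1.0, sigma, (1, x.shape[1]))

def time_warp(x, knots=4, sigma=0.2):
    # resample each channel along a smoothly perturbed time axis
    n = x.shape[0]
    orig = np.arange(n)
    anchor = np.linspace(0, n - 1, knots)
    warped = anchor + rng.normal(0.0, sigma * n / knots, knots)
    warped[0], warped[-1] = 0, n - 1           # pin the endpoints
    new_t = np.interp(orig, anchor, np.sort(warped))
    return np.stack(
        [np.interp(new_t, orig, x[:, c]) for c in range(x.shape[1])], axis=1
    )

force = rng.standard_normal((200, 3))          # toy recording: 200 samples, 3 axes
augmented = time_warp(magnitude_scale(jitter(force)))
```

Such transforms expand a small force dataset while preserving its label, which is what makes deep classifiers trainable on limited surgical data.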

    Measures of Performance and Proficiency in Robotic-Assisted Surgery: A Systematic Review

    The first author received a research grant from RCS England and Health Education England from November 2021 until present to complete the study. Peer reviewed. Postprint.

    One-shot domain adaptation in video-based assessment of surgical skills

    Deep Learning (DL) has achieved automatic and objective assessment of surgical skills. However, DL models are data-hungry and restricted to their training domain, which prevents them from transitioning to new tasks where data is limited. Hence, domain adaptation is crucial to implement DL in real life. Here, we propose a meta-learning model, A-VBANet, that can deliver domain-agnostic surgical skill classification via one-shot learning. We develop the A-VBANet on five laparoscopic and robotic surgical simulators. Additionally, we test it on operating room (OR) videos of laparoscopic cholecystectomy. Our model successfully adapts with accuracies up to 99.5% in one-shot and 99.9% in few-shot settings for simulated tasks and 89.7% for laparoscopic cholecystectomy. For the first time, we provide a domain-agnostic procedure for video-based assessment of surgical skills. A significant implication of this approach is that it allows the use of data from surgical simulators to assess performance in the operating room. Comment: 12 pages (+9 pages of Supplementary Materials), 4 figures (+2 Supplementary Figures), 2 tables (+5 Supplementary Tables)
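A-VBANet's internals are not described in the abstract. As a loose, hedged illustration of one-shot adaptation in general, the sketch below adapts a classifier to a new domain from a single labelled clip per class by turning each support embedding into a class prototype and assigning queries to the nearest one (a prototypical-network-style scheme; the embeddings, labels, and geometry are invented toy data, not the paper's method):

```python
import numpy as np

def adapt_one_shot(support_embs, support_labels):
    # one labelled embedding per class from the new domain -> class prototypes
    classes = np.unique(support_labels)
    protos = np.stack([support_embs[support_labels == c].mean(0) for c in classes])
    return classes, protos

def classify(query_embs, classes, protos):
    # assign each query clip embedding to the nearest prototype (Euclidean)
    d = np.linalg.norm(query_embs[:, None, :] - protos[None, :, :], axis=2)
    return classes[d.argmin(1)]

rng = np.random.default_rng(1)
# toy embeddings: "novice" clips cluster near the origin, "expert" near (3, 3)
support = np.array([[0.1, 0.0], [3.0, 3.1]])
labels = np.array(["novice", "expert"])
classes, protos = adapt_one_shot(support, labels)

queries = np.vstack([rng.normal(0, 0.3, (5, 2)), rng.normal(3, 0.3, (5, 2))])
pred = classify(queries, classes, protos)
```

The appeal of this family of methods is exactly what the abstract claims: no gradient updates are needed in the new domain, only one labelled example per class.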

    From teleoperation to autonomous robot-assisted microsurgery: A survey

    Robot-assisted microsurgery (RAMS) has many benefits compared to traditional microsurgery. Microsurgical platforms with advanced control strategies, high-quality micro-imaging modalities and micro-sensing systems are worth developing to further enhance the clinical outcomes of RAMS. Within only a few decades, microsurgical robotics has evolved into a rapidly developing research field attracting increasing attention worldwide. Despite the appreciated benefits, significant challenges remain to be solved. In this review paper, the emerging concepts and achievements of RAMS are presented. We introduce the development trajectory of RAMS from teleoperation to autonomous systems, and highlight upcoming research opportunities that require joint efforts from both clinicians and engineers to pursue further outcomes for RAMS in the years to come.

    Motor learning induced neuroplasticity in minimally invasive surgery

    Technical skills in surgery have become more complex and challenging to acquire since the introduction of technological aids, particularly in the arena of Minimally Invasive Surgery (MIS). Additional challenges posed by reforms to surgical careers and increased public scrutiny have propelled the identification of methods to assess and acquire MIS technical skills. Although validated objective assessments have been developed to assess the motor skills requisite for MIS, they poorly capture the development of expertise. Motor skill learning is an internal process, only indirectly observable, that leads to relatively permanent changes in the central nervous system. Advances in functional neuroimaging permit direct interrogation of the evolving patterns of brain function associated with motor learning, owing to the property of neuroplasticity, and have been used on surgeons to identify the neural correlates of technical skill acquisition and the impact of new technology. However, significant gaps exist in understanding the neuroplasticity underlying the learning of complex bimanual MIS skills. In this thesis, the available evidence on applying functional neuroimaging to the assessment and enhancement of operative performance in surgery is synthesised. The purpose of this thesis was to evaluate frontal lobe neuroplasticity associated with learning a complex bimanual MIS skill using functional near-infrared spectroscopy, an indirect neuroimaging technique. Laparoscopic suturing and knot-tying, a technically challenging bimanual skill, is selected to demonstrate learning-related reorganisation of cortical behaviour within the frontal lobe, evidenced by shifts in activation from the prefrontal cortex (PFC), which subserves attention, to primary and secondary motor centres (premotor cortex, supplementary motor area and primary motor cortex), in which motor sequences are encoded and executed.
In the cross-sectional study, participants of varying expertise demonstrate frontal lobe neuroplasticity commensurate with motor learning. The longitudinal study tracks the evolution of cortical behaviour in novices in response to eight hours of distributed training over a fortnight. Despite novices achieving expert-like performance and stabilisation on the technical task, this study demonstrates that they displayed persistent PFC activity, establishing that for complex bimanual tasks, improvements in technical performance are not accompanied by a reduced reliance on attention to support performance. Finally, a least-squares support vector machine is used to classify expertise based on frontal lobe functional connectivity. The findings of this thesis demonstrate the value of interrogating cortical behaviour for assessing MIS skill development and credentialing. Open Access
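The thesis's least-squares SVM pipeline is only named, not specified, in the abstract. As a hedged sketch under assumed details, the snippet below derives functional-connectivity features as pairwise channel correlations, then fits a linear least-squares classifier on ±1 labels (for a linear kernel, the LS-SVM objective reduces to ridge regression); the channel count, signal model, and regularisation are all toy assumptions:

```python
import numpy as np

def connectivity_features(signals):
    # functional connectivity: pairwise channel correlations, upper triangle only
    c = np.corrcoef(signals)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

def lssvm_fit(X, y, gamma=1.0):
    # linear least-squares SVM ~ ridge regression on +/-1 class labels
    Xb = np.hstack([X, np.ones((len(X), 1))])          # append bias column
    A = Xb.T @ Xb + np.eye(Xb.shape[1]) / gamma
    return np.linalg.solve(A, Xb.T @ y)

def lssvm_predict(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

rng = np.random.default_rng(7)

def run(expert):
    # toy fNIRS run: 8 channels x 100 samples; "experts" share a common drive,
    # which raises inter-channel correlations
    base = rng.standard_normal((8, 100))
    if expert:
        base += rng.standard_normal((1, 100))
    return connectivity_features(base)

X = np.array([run(e) for e in [0] * 10 + [1] * 10])
y = np.array([-1.0] * 10 + [1.0] * 10)
w = lssvm_fit(X, y)
acc = (lssvm_predict(X, w) == y).mean()
```

On this synthetic data the two connectivity patterns are well separated, so the linear classifier fits them almost perfectly; real fNIRS connectivity is of course far noisier.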

    Surgical Skill Assessment Automation Based on Sparse Optical Flow Data

    Objective skill assessment with personalised feedback is a vital part of surgical training. Automated assessment solutions aim to replace traditional manual (expert opinion-based) assessment techniques, which predominantly require valuable time commitments from senior surgeons. Typically, either kinematic or visual input data can be employed to perform skill assessment. Minimally Invasive Surgery (MIS) benefits patients by using smaller incisions than open surgery, resulting in less pain and quicker recovery, but it increases the difficulty of the surgical task many-fold. Robot-Assisted Minimally Invasive Surgery (RAMIS) offers higher precision during surgery, while also improving the ergonomics for the performing surgeons. Kinematic data have been proven to correlate directly with the expertise of surgeons performing RAMIS procedures, but for traditional MIS such data are not readily available. Visual feature-based solutions are slowly catching up to the efficacy of kinematics-based solutions, but the best-performing methods usually depend on 3D visual features, which require stereo cameras and calibration data, neither of which is available in MIS. This paper introduces a general 2D image-based solution that can enable the creation and application of surgical skill assessment solutions in any training environment. The feature extraction techniques of a well-established kinematics-based skill assessment benchmark have been repurposed to evaluate the accuracy that the generated data can produce. We reached individual accuracy up to 95.74% and mean accuracy, averaged over 5 cross-validation trials, up to 83.54%. Additional related resources such as the source code, result and data files are publicly available on GitHub (https://github.com/ABC-iRobotics/VisDataSurgicalSkill).
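The paper's exact optical-flow pipeline is not given in the abstract. As a hedged, self-contained illustration of the underlying primitive, the snippet below implements a single-point Lucas-Kanade step in plain numpy (production systems would typically use OpenCV's `calcOpticalFlowPyrLK`; the synthetic blob frames and window size here are assumptions for demonstration):

```python
import numpy as np

def lucas_kanade(prev, curr, pt, win=7):
    # single-point Lucas-Kanade: solve [Ix Iy] v = -It over a local window
    y, x = pt
    h = win // 2
    p0 = prev[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    p1 = curr[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    Iy, Ix = np.gradient(p0)                   # spatial gradients (rows=y, cols=x)
    It = p1 - p0                               # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                                   # (dx, dy) displacement estimate

# synthetic frames: a Gaussian blob shifted 1 px to the right between frames
yy, xx = np.mgrid[0:64, 0:64]

def blob(cx):
    return np.exp(-((xx - cx) ** 2 + (yy - 32) ** 2) / 20.0)

dx, dy = lucas_kanade(blob(30), blob(31), (32, 30))
```

Tracking many such points per frame yields the sparse motion trajectories from which instrument-motion features for skill assessment can be derived.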

    Robotic Scene Segmentation with Memory Network for Runtime Surgical Context Inference

    Surgical context inference has recently garnered significant attention in robot-assisted surgery as it can facilitate workflow analysis, skill assessment, and error detection. However, runtime context inference is challenging since it requires timely and accurate detection of the interactions among the tools and objects in the surgical scene based on the segmentation of video data. On the other hand, existing state-of-the-art video segmentation methods are often biased against infrequent classes and fail to provide temporal consistency for segmented masks, which can negatively impact context inference and the accurate detection of critical states. In this study, we propose a solution to these challenges using a Space Time Correspondence Network (STCN). STCN is a memory network that performs binary segmentation and minimizes the effects of class imbalance. The use of a memory bank in STCN allows for the utilization of past image and segmentation information, thereby ensuring consistency of the masks. Our experiments using the publicly available JIGSAWS dataset demonstrate that STCN achieves superior segmentation performance for objects that are difficult to segment, such as needle and thread, and improves context inference compared to the state-of-the-art. We also demonstrate that segmentation and context inference can be performed at runtime without compromising performance. Comment: accepted at The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 202
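To make the memory-bank idea concrete, here is a minimal numpy sketch of an STCN-style memory readout: query-frame keys attend over stored memory keys via negative squared-L2 affinity (the similarity STCN is known for), and the softmax weights mix the stored value features. All tensor sizes are toy assumptions, and this omits the encoders and decoder of the actual network:

```python
import numpy as np

def memory_readout(query_keys, mem_keys, mem_values):
    # query_keys: (C, HW)   mem_keys: (C, T*HW)   mem_values: (Cv, T*HW)
    # affinity between every memory location and every query location (neg. L2)
    diff = mem_keys[:, :, None] - query_keys[:, None, :]
    affinity = -(diff ** 2).sum(0)             # (T*HW, HW)
    affinity -= affinity.max(0, keepdims=True)  # numerical stability
    w = np.exp(affinity)
    w /= w.sum(0, keepdims=True)               # softmax over the memory axis
    return mem_values @ w                      # value readout: (Cv, HW)

rng = np.random.default_rng(3)
C, Cv, HW, T = 16, 8, 32, 4                    # toy feature dims, 4 memory frames
q = rng.standard_normal((C, HW))
mk = rng.standard_normal((C, T * HW))
mv = rng.standard_normal((Cv, T * HW))
out = memory_readout(q, mk, mv)
```

Because every query pixel reads from all past frames' features, masks stay temporally consistent even for thin, infrequent classes such as needle and thread.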