536 research outputs found

    Can mixed reality technologies teach surgical skills better than traditional methods? A prospective randomised feasibility study

    Background Basic surgical skills teaching is often delivered with didactic audio-visual content, and new digital technologies may allow more engaging and effective ways of teaching to be developed. The Microsoft HoloLens 2 (HL2) is a multi-functional mixed reality headset. This prospective feasibility study sought to assess the device as a tool for enhancing technical surgical skills training. Methods A prospective randomised feasibility study was conducted. 36 novice medical students were trained to perform a basic arteriotomy and closure using a synthetic model. Participants were randomised to receive a structured surgical skills tutorial either via a bespoke mixed reality HL2 tutorial (n = 18) or via a standard video-based tutorial (n = 18). Proficiency was assessed by blinded examiners using a validated objective scoring system, and participant feedback was collected. Results The HL2 group showed significantly greater improvement in overall technical proficiency than the video group (mean improvement 10.1 vs. 6.89, p = 0.0076) and greater consistency in skill progression, with a significantly narrower spread of scores (SD 2.48 vs. 4.03, p = 0.026). Participant feedback indicated that the HL2 technology was more interactive and engaging, with minimal device-related problems. Conclusions This study demonstrates that mixed reality technology may provide a higher-quality educational experience, improved skill progression and greater consistency in learning than traditional teaching methods for basic surgical skills. Further work is required to refine, translate, and evaluate the scalability and applicability of the technology across a broad range of skills-based disciplines.
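As an illustration of the kind of between-group comparison reported in the Results, the sketch below contrasts two groups' improvement scores with Welch's t-test for the difference in means and Levene's test for the difference in spread. The simulated scores are placeholders, and the choice of tests is an assumption; the abstract does not state which statistics were used.

```python
# Sketch: comparing skill-improvement scores of two training groups.
# The data are placeholders; the tests (Welch's t-test for means,
# Levene's test for equality of variances) are assumptions -- the
# abstract does not state which statistics the authors used.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hl2_scores = rng.normal(10.1, 2.48, size=18)    # mixed reality group (n = 18)
video_scores = rng.normal(6.89, 4.03, size=18)  # video tutorial group (n = 18)

# Difference in mean improvement (does one method teach better?)
t_stat, p_mean = stats.ttest_ind(hl2_scores, video_scores, equal_var=False)

# Difference in spread (is skill progression more consistent?)
w_stat, p_var = stats.levene(hl2_scores, video_scores)

print(f"means: {hl2_scores.mean():.2f} vs {video_scores.mean():.2f}, p = {p_mean:.4f}")
print(f"SDs:   {hl2_scores.std(ddof=1):.2f} vs {video_scores.std(ddof=1):.2f}, p = {p_var:.4f}")
```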

    Gesture Recognition in Robotic Surgery: a Review

    OBJECTIVE: Surgical activity recognition is a fundamental step in computer-assisted interventions. This paper reviews the state of the art in methods for automatic recognition of fine-grained gestures in robotic surgery, focusing on recent data-driven approaches, and outlines open questions and future research directions. METHODS: An article search was performed on 5 bibliographic databases with combinations of the following search terms: robotic, robot-assisted, JIGSAWS, surgery, surgical, gesture, fine-grained, surgeme, action, trajectory, segmentation, recognition, parsing. Selected articles were classified based on the level of supervision required for training and divided into groups representing the major frameworks for time-series analysis and data modelling. RESULTS: A total of 52 articles were reviewed. The research field is expanding rapidly, with the majority of articles published in the last 4 years. Deep-learning-based temporal models with discriminative feature extraction and multi-modal data integration have demonstrated promising results on small surgical datasets. Currently, unsupervised methods perform significantly worse than supervised approaches. CONCLUSION: The development of large and diverse open-source datasets of annotated demonstrations is essential for the development and validation of robust solutions for surgical gesture recognition. While new strategies for discriminative feature extraction and knowledge transfer, or unsupervised and semi-supervised approaches, can mitigate the need for data and labels, they have not yet been demonstrated to achieve comparable performance. Important future research directions include the detection and forecasting of gesture-specific errors and anomalies. SIGNIFICANCE: This paper is a comprehensive and structured analysis of surgical gesture recognition methods that aims to summarize the status of this rapidly evolving field.
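A minimal sketch of the supervised temporal modelling that dominates the reviewed literature: a recurrent network labels every time step of a kinematic trajectory with a gesture class. The 76-dimensional features and 10 gesture classes echo the JIGSAWS dataset mentioned in the search terms, but the architecture and all dimensions are illustrative assumptions, not a specific method from the review.

```python
# Sketch of frame-wise surgical gesture recognition from robot kinematics.
# A bidirectional LSTM maps a kinematic sequence to a gesture label per
# time step. The 76-D features and 10 gesture classes mirror JIGSAWS;
# the model itself is an illustrative assumption.
import torch
import torch.nn as nn

class GestureSegmenter(nn.Module):
    def __init__(self, in_dim=76, hidden=128, n_gestures=10):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_gestures)

    def forward(self, x):                 # x: (batch, time, in_dim)
        h, _ = self.lstm(x)               # (batch, time, 2*hidden)
        return self.head(h)               # per-frame gesture logits

model = GestureSegmenter()
kinematics = torch.randn(4, 300, 76)      # 4 trials, 300 frames each
logits = model(kinematics)                # (4, 300, 10)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 10),
                             torch.randint(10, (4 * 300,)))
```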

    Artificial intelligence and automation in endoscopy and surgery

    Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient’s anatomy as well as events, activity and action logs of the surgical process. This detailed but difficult-to-interpret record of endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to build computer-assisted interventions that enable better navigation during procedures, automated image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
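As a concrete instance of the supervised deep learning described above, the sketch below fine-tunes a pretrained CNN to label individual endoscopic video frames, e.g. lesion present versus absent. The binary task and the ResNet-18 backbone are illustrative assumptions rather than a method from the Perspective.

```python
# Sketch: supervised frame-level interpretation of endoscopic video.
# A pretrained ResNet-18 is fine-tuned to a binary label per frame
# (e.g. lesion present / absent). Task and backbone are assumptions
# chosen for illustration.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # 2 classes

frames = torch.randn(8, 3, 224, 224)   # a batch of video frames
labels = torch.randint(0, 2, (8,))
logits = backbone(frames)
loss = nn.CrossEntropyLoss()(logits, labels)
loss.backward()                         # gradients for one fine-tuning step
```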

    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot acting on its own decisions, with partial or no human involvement. Automation in surgery offers several important advantages, including increased precision of care due to sub-millimeter robot control, real-time use of biosignals for interventional care, improved surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also offer new capabilities in interventions that are too difficult for, or go beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.
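The precision argument can be made concrete with a toy example: a one-dimensional proportional-derivative position loop that drives a tool tip toward a target until the error falls below a sub-millimeter tolerance. All gains, dynamics and tolerances are invented for illustration; real surgical robots rely on far richer models, sensing and safety layers.

```python
# Toy illustration of closed-loop precision: a 1-D PD controller drives
# a tool tip to a target until the error is below 0.5 mm. All numbers
# (gains, time step, tolerance) are illustrative assumptions.
kp, kd, dt = 8.0, 6.0, 0.01          # gains and control period (s)
pos, vel, target = 0.0, 0.0, 50.0    # positions in millimetres
prev_err = target - pos

for step in range(1000):
    err = target - pos
    cmd = kp * err + kd * (err - prev_err) / dt   # commanded acceleration
    vel += cmd * dt
    pos += vel * dt
    prev_err = err
    if abs(err) < 0.5:               # sub-millimetre tolerance reached
        print(f"reached tolerance in {step} steps, error {err:.3f} mm")
        break
```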

    Development of virtual ophthalmic surgical skills training

    Background: This study aims to assess whether ophthalmic surgical skills can be taught successfully online to a diverse international and interprofessional student group. Methods: A mixed-methods study involving 20 students and 5 instructors. Each student completed a pre-session and post-session questionnaire to assess their perceptions of online instruction. Changes in questionnaire responses were analysed using the Wilcoxon signed-rank test (SPSS 25). Semi-structured interviews were conducted to assess instructor perceptions of virtual surgical skills teaching, with thematic analysis undertaken in NVivo 12.0. Results: There was a 100% completion rate for the pre- and post-session questionnaires. Prior to the session, lack of instructor supervision and inability to provide constructive feedback were emergent themes among students. Regarding pre-session concerns about online delivery, 40% of students thought their view of the skills demonstration would be negatively impacted, 60% their level of supervision and 55% their interaction with instructors; following the session, only 10%, 15% and 5%, respectively, still held these views. All students were ‘satisfied’ or ‘very satisfied’ with the ‘Surgeon’s View’ camera angle and the use of breakout rooms. 75% perceived an improvement in their confidence in instrument handling, 80% in cable knot tying and 70% in suture tying. The overall student rating for the virtual surgical skills session was 8.85 (±1.19) out of 10 (10 being most satisfied). Conclusions: We demonstrate that successful delivery of a virtual ophthalmic surgical skills course is feasible. Virtual delivery allowed us to widen accessibility and participation, which has implications for the future reach of ophthalmic surgical teaching.
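The paired pre/post analysis named in the Methods can be sketched in a few lines: a Wilcoxon signed-rank test on matched questionnaire responses (here via SciPy rather than SPSS). Only the choice of test comes from the abstract; the Likert-scale responses below are placeholders.

```python
# Sketch: paired pre/post questionnaire analysis with the Wilcoxon
# signed-rank test, as named in the Methods (there via SPSS 25; here
# via SciPy). The 20 Likert-scale responses below are placeholders.
from scipy import stats

pre  = [2, 3, 2, 4, 3, 2, 3, 1, 2, 3, 2, 4, 3, 2, 3, 2, 1, 3, 2, 3]
post = [4, 4, 3, 5, 4, 3, 4, 3, 4, 4, 3, 5, 4, 4, 4, 3, 3, 4, 4, 4]

stat, p = stats.wilcoxon(pre, post)   # paired, non-parametric
print(f"W = {stat}, p = {p:.4f}")
```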

    Constrained Soft Tissue Simulation for Virtual Surgical Simulation

    Most surgical simulators employ a linear elastic model to simulate soft tissue material properties because of its computational efficiency and simplicity. However, soft tissues often have elaborate nonlinear material characteristics. Most prominently, soft tissues are compliant under small strains, but after initial deformation they strongly resist further deformation even under large forces. This material characteristic, referred to as nonlinear incompliance, is computationally expensive and numerically difficult to simulate. This paper presents a constraint-based finite-element algorithm that simulates nonlinear incompliant tissue materials efficiently for interactive applications such as virtual surgery. First, the algorithm models the material stiffness of soft tissues with a set of 3-D strain-limit constraints on the deformation strain tensors. By enforcing a large number of geometric constraints to achieve the material stiffness, the algorithm reduces the task of solving stiff equations of motion with a general numerical solver to iteratively resolving a set of constraints with a nonlinear Gauss–Seidel process. Second, because a Gauss–Seidel method processes constraints individually, a multiresolution hierarchy is used to speed up the global convergence of the large constrained system, accelerating the computation significantly and making interactive simulation possible at a high level of detail. Finally, the paper also presents a simple-to-build data-acquisition system that validates simulation results against ex vivo tissue measurements. An interactive virtual-reality-based simulation system is also demonstrated.
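A minimal sketch of the constraint-projection idea, under heavy simplification: the paper constrains 3-D strain tensors on a finite-element mesh and accelerates convergence with a multiresolution hierarchy, whereas the toy below merely forbids each edge of a particle chain from stretching more than 10% past its rest length and resolves the constraints with Gauss–Seidel sweeps.

```python
# Sketch of strain limiting via nonlinear Gauss-Seidel projection.
# The paper constrains 3-D strain tensors per element and uses a
# multiresolution hierarchy; this toy instead limits each edge of a
# 1-D particle chain to 10% strain past its rest length.
import numpy as np

rest = 1.0                 # rest length of each edge
limit = 1.10 * rest        # 10% strain limit
pos = np.array([0.0, 1.0, 2.0, 3.0, 4.8])     # last edge over-stretched

for sweep in range(50):                       # Gauss-Seidel sweeps
    for i in range(len(pos) - 1):             # process constraints in order
        d = pos[i + 1] - pos[i]
        if abs(d) > limit:                    # constraint violated
            corr = (abs(d) - limit) * np.sign(d) * 0.5
            pos[i]     += corr                # move both endpoints
            pos[i + 1] -= corr                # toward feasibility

strains = np.diff(pos) / rest - 1.0
print("edge strains after projection:", np.round(strains, 3))
```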

    Gesture Recognition and Control for Semi-Autonomous Robotic Assistant Surgeons

    The next stage of robotics development is to introduce autonomy and cooperation with human agents in tasks that require high precision and/or exert considerable physical strain. To guarantee the highest possible safety standards, the most conservative approach is to devise a deterministic automaton that performs identically for each operation; clearly, such an approach inevitably fails to adapt to changing environments or different human companions. In a surgical scenario, the greatest variability lies in the timing of the different actions performed within the same phases. This thesis explores the solutions adopted in pursuing automation in robotic minimally invasive surgery (R-MIS) and presents a novel cognitive control architecture that uses a multi-modal neural network, trained on a cooperative task performed by human surgeons, to produce an action segmentation that supplies the required action timing, while maintaining full phase-execution control via a deterministic Supervisory Controller and full execution safety via a velocity-constrained Model-Predictive Controller.
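The layered architecture described above can be caricatured in a few lines: a learned segmenter proposes the next action, a deterministic supervisor accepts only transitions allowed by the task's phase graph, and every motion command is bounded before execution. Note that the velocity-constrained Model-Predictive Controller is reduced here to a simple element-wise clamp, and the phase graph, action names and limits are invented for illustration.

```python
# Caricature of the thesis's layered architecture: a (stubbed) neural
# segmenter proposes actions, a deterministic supervisory controller
# accepts only legal phase transitions, and commands are velocity-
# clamped before execution (standing in for the velocity-constrained
# MPC). Phases, transitions, and limits are illustrative assumptions.
import numpy as np

ALLOWED = {                      # deterministic phase graph
    "idle":     {"approach"},
    "approach": {"grasp", "idle"},
    "grasp":    {"retract"},
    "retract":  {"idle"},
}
V_MAX = 0.02                     # velocity bound, m/s per axis

def supervise(phase, proposed):
    """Accept the segmenter's proposal only if it is a legal transition."""
    return proposed if proposed in ALLOWED[phase] else phase

def clamp_velocity(cmd):
    """Enforce the safety bound on every commanded axis velocity."""
    return np.clip(cmd, -V_MAX, V_MAX)

phase = "idle"
for proposed, cmd in [("approach", np.array([0.05, 0.0, -0.01])),
                      ("retract",  np.array([0.0, 0.0, 0.0])),   # illegal here
                      ("grasp",    np.array([0.01, 0.0, 0.0]))]:
    phase = supervise(phase, proposed)
    safe_cmd = clamp_velocity(cmd)
    print(phase, safe_cmd)
```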