    Exploring Glass as a novel method for hands-free data entry in flexible cystoscopy

    We present a way to annotate cystoscopy findings on Google Glass in a reproducible, hands-free manner for use by surgeons during operations in the sterile environment, inspired by the current practice of hand-drawn sketches. We developed three data entry variants based on speech and head movements. We assessed the feasibility, benefits, and drawbacks of the system in laboratory trials with eight surgeons and Foundation Doctors at a UK hospital, with up to 30 years' cystoscopy experience. We report the data entry speed and error rate of each input modality and contrast them with the participants' feedback on perceived usability, acceptance, and suitability for deployment. The results are supportive of new data entry technologies and point out directions for future improvement of eyewear computers. The findings can be generalised to other endoscopic procedures (e.g. OGD/laryngoscopy) and could be integrated within hospital IT in the future.
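
    As a rough illustration of the head-movement variant described above, the sketch below shows one way such hands-free menu navigation could work. It is not the authors' implementation; the pose source, thresholds, and timings are hypothetical.

        # Hands-free menu navigation sketch (hypothetical thresholds, not the authors' code).
        import time

        YAW_STEP_DEG = 15.0      # head turn needed to move the menu cursor (assumed)
        PITCH_SELECT_DEG = 20.0  # head nod needed to confirm a selection (assumed)
        DWELL_S = 0.8            # how long the nod must be held to confirm (assumed)

        def navigate(menu, read_head_pose):
            """Cycle through 'menu' with head turns; confirm with a sustained nod.

            read_head_pose() is assumed to return (yaw_deg, pitch_deg) relative to a
            neutral pose. A real implementation would also debounce repeated turns.
            """
            index, nod_start = 0, None
            while True:
                yaw, pitch = read_head_pose()
                if yaw > YAW_STEP_DEG:                    # turn right: next item
                    index = (index + 1) % len(menu)
                    nod_start = None
                elif yaw < -YAW_STEP_DEG:                 # turn left: previous item
                    index = (index - 1) % len(menu)
                    nod_start = None
                elif pitch > PITCH_SELECT_DEG:            # nod down: start/continue confirming
                    nod_start = nod_start or time.time()
                    if time.time() - nod_start >= DWELL_S:
                        return menu[index]                # selection confirmed
                else:
                    nod_start = None
                time.sleep(0.05)                          # ~20 Hz polling (assumed)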

    Augmented Reality Visualization for Image-Guided Surgery: A Validation Study Using a Three-Dimensional Printed Phantom

    Background: Oral and maxillofacial surgery currently relies on virtual surgery planning based on image data (CT, MRI). Three-dimensional (3D) visualizations are typically used to plan and predict the outcome of complex surgical procedures. To translate the virtual surgical plan to the operating room, it is either converted into physical 3D-printed guides or directly translated using real-time navigation systems. Purpose: This study aims to improve the translation of the virtual surgery plan to a surgical procedure, such as oncologic or trauma surgery, in terms of accuracy and speed. Here we report an augmented reality visualization technique for image-guided surgery and describe how surgeons can visualize and interact with the virtual surgery plan and navigation data while in the operating room. User-friendliness and usability were assessed in a formal user study that compared our augmented reality assisted technique with the gold-standard setup of a perioperative navigation system (Brainlab); the accuracy of typical navigation tasks, such as reaching landmarks and following trajectories, was also compared. Results: Overall completion time of navigation tasks was 1.71 times faster using augmented reality (P = .034). Accuracy improved significantly using augmented reality (P < .001); for reaching physical landmarks, a weaker effect was found (P = .087). Although the participants were relatively unfamiliar with VR/AR (rated 2.25/5) and gesture-based interaction (rated 2/5), they reported that navigation tasks became easier to perform using augmented reality (difficulty rated 3.25/5 for Brainlab vs. 2.4/5 for HoloLens). Conclusion: The proposed workflow can be used in a wide range of image-guided surgery procedures as an addition to existing verified image guidance systems. The results of this user study imply that our technique enables typical navigation tasks to be performed faster and more accurately than the current gold standard. In addition, qualitative feedback on our augmented reality assisted technique was more positive than on the standard setup.
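
    To make the landmark and trajectory accuracy discussion more concrete, the sketch below shows the kind of point-based rigid registration (Kabsch algorithm) that image-guided AR systems commonly use to align a virtual surgical plan with the physical patient or phantom. This illustrates the general technique only, not the implementation evaluated in the study, and the landmark coordinates are invented.

        # Landmark-based rigid registration sketch (generic Kabsch algorithm, made-up data).
        import numpy as np

        def rigid_register(planned, measured):
            """Return rotation R and translation t mapping planned points onto measured points."""
            P, Q = np.asarray(planned, float), np.asarray(measured, float)
            Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)            # centre both point sets
            U, _, Vt = np.linalg.svd(Pc.T @ Qc)                        # SVD of the cross-covariance
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
            R = Vt.T @ D @ U.T
            t = Q.mean(axis=0) - R @ P.mean(axis=0)
            return R, t

        # Example with three hypothetical landmarks (metres):
        planned  = [[0, 0, 0], [0.010, 0, 0], [0, 0.010, 0]]
        measured = [[0.001, 0.002, 0.003], [0.011, 0.002, 0.003], [0.001, 0.012, 0.003]]
        R, t = rigid_register(planned, measured)
        print(np.round(R, 3), np.round(t, 3))   # ~identity rotation, t ~= [0.001 0.002 0.003]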

    Proof of Concept: Wearable Augmented Reality Video See-Through Display for Neuro-Endoscopy

    In minimally invasive surgery and endoscopic procedures, the surgeon operates without direct visualization of the patient’s anatomy. For image-guided surgery, solutions based on wearable augmented reality (AR) are among the most promising. The authors describe the characteristics that an ideal head-mounted display (HMD) must have to guarantee safety and accuracy in AR-guided neurosurgical interventions and design the ideal virtual content for guiding crucial tasks in neuro-endoscopic surgery. The selected sequence of AR content for providing effective guidance during surgery is tested in a Microsoft HoloLens-based app.

    Gesture Recognition in Robotic Surgery with Multimodal Attention

    Automatically recognising surgical gestures from surgical data is an important building block of automated activity recognition and analytics, technical skill assessment, intra-operative assistance and, eventually, robotic automation. The complexity of articulated instrument trajectories and the inherent variability due to surgical style and patient anatomy make analysis and fine-grained segmentation of surgical motion patterns from robot kinematics alone very difficult. Surgical video provides crucial information from the surgical site, giving context for the kinematic data and the interaction between the instruments and tissue. Yet sensor fusion between the robot data and the surgical video stream is non-trivial because the two modalities differ in frequency, dimensionality, and discriminative capability. In this paper, we integrate multimodal attention mechanisms in a two-stream temporal convolutional network to compute relevance scores and weight kinematic and visual feature representations dynamically in time, aiming to aid multimodal network training and achieve effective sensor fusion. We report the results of our system on the JIGSAWS benchmark dataset and on a new in vivo dataset of suturing segments from robotic prostatectomy procedures. Our results are promising: the multimodal network yields prediction sequences with higher accuracy and better temporal structure than the corresponding unimodal solutions. Visualization of the attention scores also gives physically interpretable insight into how the network weighs the strengths and weaknesses of each sensor.
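
    The sketch below illustrates the general idea of per-timestep attention weights fusing a kinematic and a visual stream inside a temporal convolutional network. The single-layer encoders, layer sizes, and kernel widths are simplifying assumptions and do not reproduce the architecture used in the paper.

        # Two-stream attention-fusion sketch (hypothetical sizes, not the paper's architecture).
        import torch
        import torch.nn as nn

        class TwoStreamAttentionFusion(nn.Module):
            def __init__(self, kin_dim=76, vis_dim=128, hidden=64, n_gestures=10):
                super().__init__()
                # One temporal-convolution encoder per modality.
                self.kin_enc = nn.Conv1d(kin_dim, hidden, kernel_size=25, padding=12)
                self.vis_enc = nn.Conv1d(vis_dim, hidden, kernel_size=25, padding=12)
                # Per-timestep relevance scores for the two modalities.
                self.attn = nn.Conv1d(2 * hidden, 2, kernel_size=1)
                self.head = nn.Conv1d(hidden, n_gestures, kernel_size=1)

            def forward(self, kin, vis):
                # kin: (batch, kin_dim, T) robot kinematics; vis: (batch, vis_dim, T) video features
                k = torch.relu(self.kin_enc(kin))
                v = torch.relu(self.vis_enc(vis))
                w = torch.softmax(self.attn(torch.cat([k, v], dim=1)), dim=1)   # (batch, 2, T)
                fused = w[:, 0:1] * k + w[:, 1:2] * v    # weight each stream per timestep
                return self.head(fused)                  # per-frame gesture logits

        # Example forward pass with random tensors standing in for real features:
        model = TwoStreamAttentionFusion()
        kin = torch.randn(2, 76, 300)    # 2 clips, 76 kinematic channels, 300 frames
        vis = torch.randn(2, 128, 300)   # matching pre-extracted video features
        print(model(kin, vis).shape)     # torch.Size([2, 10, 300])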

    Haptic communication to support biopsy procedures learning in virtual environments

    In interventional radiology, physicians require high haptic sensitivity and well-developed fine motor skills because real-time visual feedback of the surgical site is limited. Transferring this type of surgical skill to novices is a challenging issue. This paper presents a study on the design of a biopsy procedure learning system. Our methodology, based on a task-centered design approach, aims to bring out new design rules for virtual learning environments. A new collaborative haptic training paradigm is introduced to support human-haptic interaction in a virtual environment. The interaction paradigm supports haptic communication between two distant users to teach a surgical skill. To evaluate this paradigm, a user experiment was conducted: sixty volunteer medical students participated in the study to assess the influence of the teaching method on their performance in a biopsy procedure task. The results show that combining haptic communication with verbal and visual communication improves the novices' performance compared with conventional teaching methods. Furthermore, the results show that, depending on the teaching method, participants developed different needle insertion profiles. We conclude that our interaction paradigm facilitates expert-novice haptic communication and improves skill transfer, and that the acquisition of new skills depends on the availability of different communication channels between experts and novices. Our findings indicate that the traditional fellowship methods in surgery should evolve to an off-patient collaborative environment that continues to support visual and verbal communication, but also haptic communication, in order to achieve better and more complete skills training.
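
    As an illustration of how haptic communication between a distant expert and a novice is often rendered, the sketch below uses a spring-damper virtual coupling that pulls the novice's needle tip toward the expert's, so the novice feels the expert's gesture. This is a generic technique with made-up gains, not the paper's implementation.

        # Spring-damper virtual coupling sketch (generic technique, hypothetical gains).
        import numpy as np

        K_SPRING = 200.0   # N/m, coupling stiffness (assumed)
        B_DAMP = 5.0       # N*s/m, coupling damping (assumed)

        def coupling_force(novice_pos, novice_vel, expert_pos, expert_vel):
            """Force (N, 3-vector) to render on the novice's haptic device."""
            return K_SPRING * (np.asarray(expert_pos) - np.asarray(novice_pos)) \
                 + B_DAMP * (np.asarray(expert_vel) - np.asarray(novice_vel))

        # Example: the novice lags 2 mm behind the expert along the insertion axis (z).
        f = coupling_force([0.0, 0.0, 0.010], [0, 0, 0], [0.0, 0.0, 0.012], [0, 0, 0])
        print(f)   # -> [0.  0.  0.4]  (0.4 N pulling the novice deeper along z)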

    Interface Design for a Virtual Reality-Enhanced Image-Guided Surgery Platform Using Surgeon-Controlled Viewing Techniques

    An initiative has been taken to develop a VR-guided cardiac interface that displays and delivers information without affecting the surgeons’ natural workflow, while yielding better accuracy and task completion times than the existing setup. This paper discusses the design process, the development of comparable user interface prototypes, and an evaluation methodology that can measure user performance and workload for each of the suggested display concepts. User-based studies and expert recommendations are used in conjunction to establish design guidelines for our VR-guided surgical platform. As a result, a better understanding of autonomous view control, depth display, and the use of virtual context is attained. In addition, three proposed interfaces have been developed to allow a surgeon to control the view of the virtual environment intra-operatively. Comparative evaluation of the three implemented interface prototypes in a simulated surgical task scenario revealed performance advantages for the stereoscopic and monoscopic biplanar display conditions, as well as differences between the three types of control modalities. One particular interface prototype demonstrated a significant improvement in task performance. Design recommendations are made for this interface as well as the others as we prepare for prospective development iterations.

    Mobile Computing for Trauma and Surgical Care Continuous Education

    In the medical domain, mobile computing has proven to be convenient, effective, and productive. With varying screen sizes, the challenge is to present the right information in the right format so that medical practitioners can access it quickly. In this thesis, we discuss how mobile computing can be used for the continuing education of medical practitioners in the field of trauma and surgical care, and we provide design guidelines on how to effectively present information on different mobile form factors. The focus is on three screen sizes (4.7, 7, and 10.1 in.) and three interaction methods (dropdown, slide, and tab menus). The results indicate that medical practitioners preferred the 7 in. device, which enabled them to have information at a glance and aided them in surgical decision making. In addition, the tab menu was the most convenient, intuitive, and attractive of the three interaction methods.