
    INTERFACE DESIGN FOR A VIRTUAL REALITY-ENHANCED IMAGE-GUIDED SURGERY PLATFORM USING SURGEON-CONTROLLED VIEWING TECHNIQUES

    An initiative has been taken to develop a VR-guided cardiac interface that displays and delivers information without disrupting the surgeon's natural workflow, while yielding better accuracy and task completion time than the existing setup. This paper discusses the design process, the development of comparable user interface prototypes, and an evaluation methodology that can measure user performance and workload for each of the suggested display concepts. User-based studies and expert recommendations are used in conjunction to establish design guidelines for our VR-guided surgical platform. As a result, a better understanding of autonomous view control, depth display, and the use of virtual context is attained. In addition, three proposed interfaces have been developed to allow a surgeon to control the view of the virtual environment intra-operatively. Comparative evaluation of the three implemented interface prototypes in a simulated surgical task scenario revealed performance advantages for stereoscopic and monoscopic biplanar display conditions, as well as differences between the three types of control modalities. One particular interface prototype demonstrated a significant improvement in task performance. Design recommendations are made for this interface as well as the others as we prepare for prospective development iterations.

    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot operating under its own control, with partial or no human involvement. There are several important advantages of automation in surgery, including increased precision of care due to sub-millimeter robot control, real-time use of biosignals for interventional care, improvements to surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also enable interventions that are too difficult for, or beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both to enhance the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and to provide intelligent control of robot-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
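A common building block of the optical reconstruction methods this review covers is stereo triangulation: recovering a 3D surface point from its projections in two calibrated views. The sketch below shows the standard linear (DLT) formulation with NumPy; the camera matrices and point are synthetic illustration values, not data from the paper.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D pixel coordinates of the same point in each image.
    Returns the 3D point in world coordinates.
    """
    # Each view contributes two linear constraints x * (P @ X) = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: one at the origin, one translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free correspondences the linear solution recovers the point exactly; in practice, dense surface reconstruction repeats this per matched pixel pair after stereo matching.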

    Virtual and Augmented Reality Techniques for Minimally Invasive Cardiac Interventions: Concept, Design, Evaluation and Pre-clinical Implementation

    While less invasive techniques have been employed for some procedures, most intracardiac interventions are still performed under cardiopulmonary bypass, on the drained, arrested heart. The progress toward off-pump intracardiac interventions has been hampered by the lack of adequate visualization inside the beating heart. This thesis describes the development, assessment, and pre-clinical implementation of a mixed reality environment that integrates pre-operative imaging and modeling with surgical tracking technologies and real-time ultrasound imaging. The intra-operative echo images are augmented with pre-operative representations of the cardiac anatomy and virtual models of the delivery instruments tracked in real time using magnetic tracking technologies. As a result, the otherwise context-less images can now be interpreted within the anatomical context provided by the anatomical models. The virtual models assist the user with tool-to-target navigation, while real-time ultrasound ensures accurate positioning of the tool on target, providing the surgeon with sufficient information to "see" and manipulate instruments in the absence of direct vision. Several pre-clinical acute evaluation studies have been conducted in vivo on swine models to assess the feasibility of the proposed environment in a clinical context. Following direct access inside the beating heart using the UCI, the proposed mixed reality environment was used to provide the necessary visualization and navigation to position a prosthetic mitral valve on the native annulus, or to place a repair patch on a created septal defect in vivo in porcine models. Following further development and seamless integration into the clinical workflow, we hope that the proposed mixed reality guidance environment may become a significant milestone toward enabling minimally invasive therapy on the beating heart.

    Augmented Reality in Minimally Invasive Surgery

    In the last 15 years, Minimally Invasive Surgery, with techniques such as laparoscopy or endoscopy, has become very important, and research in this field is growing, since these techniques provide surgeons with less invasive means of reaching the patient's internal anatomy and allow entire procedures to be performed with only minimal trauma to the patient. The advantages of this surgical method are evident for patients: trauma is reduced, postoperative recovery is generally faster, and there is less scarring. Despite the improvement in outcomes, indirect access to the operation area causes restricted vision, difficulty in hand-eye coordination, limited mobility in handling instruments, two-dimensional imagery lacking detailed information, and a limited visual field during the whole operation. The emerging Augmented Reality technology shows the way forward by bringing the advantages of direct visualization (as in open surgery) back to minimally invasive surgery, enriching the physician's view of the surroundings with information gathered from patient medical images. Augmented Reality can avoid some drawbacks of Minimally Invasive Surgery and can provide opportunities for new medical treatments. After two decades of research into medical Augmented Reality, this technology is now advanced enough to meet the basic requirements of a large number of medical applications, and it is feasible that medical AR applications will be accepted by physicians for evaluation and integration into the clinical workflow. Before these technologies can be used systematically to support minimally invasive surgery, some improvements are still necessary to fully satisfy the requirements of operating physicians.

    Preparing for the future of cardiothoracic surgery with virtual reality simulation and surgical planning:a narrative review

    Background and Objective: Virtual reality (VR) technology in cardiothoracic surgery has been an area of interest for almost three decades, but computational limitations had restricted its implementation. Recent advances in computing power have facilitated the creation of high-fidelity VR simulations and anatomy visualisation tools. We undertook a non-systematic narrative review of literature on VR simulations and preoperative planning tools in cardiothoracic surgery and present the state-of-the-art and a future outlook. Methods: A comprehensive search of the MEDLINE database was performed in November 2022 for all publications that describe the use of VR in cardiothoracic surgery for training purposes, education, simulation, and procedural planning. We excluded papers that were not in English or Dutch, and those that used two-dimensional (2D) screens, augmented reality, or simulated reality. Key Content and Findings: Results were categorised as simulators and preoperative planning tools. Current surgical simulators include the lobectomy module in the LapSim for video-assisted thoracoscopic surgery, which has been extensively validated, and the more recent robot-assisted lobectomy simulators from Robotix Mentor and Da Vinci SimNow, which are increasingly being integrated into the robotic surgery curriculum. Other perioperative simulators include the CardioPulmonary VR Resuscitation simulator for advanced life support after cardiac surgery, and the VR Extracorporeal Circulation (ECC) simulator for perfusionists to simulate the use of a heart-lung machine (HLM). For surgical planning, there are many small-scale tools available, and many case/pilot studies have been published utilising the visualisation possibilities provided by VR, including congenital cardiac, congenital thoracic, adult cardiac, and adult thoracic diseases. Conclusions: There are many promising tools becoming available to leverage the immersive power of VR in cardiothoracic surgery. The path to validate these simulators is well described, but large-scale trials producing high-level evidence for their efficacy are absent as of yet. Our view is that these tools will become increasingly integral parts of daily practice in this field in the coming decade.

    Image-guided port placement for minimally invasive cardiac surgery

    Minimally invasive surgery is becoming popular for a number of interventions. Use of robotic surgical systems in coronary artery bypass intervention offers many benefits to patients, but is, however, limited by remaining challenges in port placement. Choosing the entry ports for the robotic tools has a large impact on the outcome of the surgery, and can be assisted by pre-operative planning and intra-operative guidance techniques. In this thesis, pre-operative 3D computed tomography (CT) imaging is used to plan minimally invasive robotic coronary artery bypass (MIRCAB) surgery. From a patient database, port placement optimization routines are implemented and validated. Computed port placement configurations approximated past expert-chosen configurations with an error of 13.7 ± 5.1 mm. Following optimization, statistical classification was used to assess patient candidacy for MIRCAB. Various pattern recognition techniques were used to predict MIRCAB success, and could be used in the future to reduce conversion rates to conventional open-chest surgery. Gaussian, Parzen window, and nearest neighbour classifiers all proved able to detect 'candidate' and 'non-candidate' MIRCAB patients. Intra-operative registration and laser projection of port placements was validated on a phantom and then evaluated in four patient cases. An image-guided laser projection system was developed to map port placement plans from pre-operative 3D images. Port placement mappings on the phantom setup were accurate, with an error of 2.4 ± 0.4 mm. In the patient cases, projections remained within 1 cm of computed port positions. Misregistered port placement mappings in human trials were due mainly to the rigid-body registration assumption and can be improved by non-rigid techniques. Overall, this work presents an integrated approach for: 1) pre-operative port placement planning and classification of incoming MIRCAB patients; and 2) intra-operative guidance of port placement. Effective translation of these techniques to the clinic will enable MIRCAB as a more efficacious and accessible procedure.
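The intra-operative guidance step above depends on registering the pre-operative CT plan to the patient, under the rigid-body assumption the abstract notes as a limitation. A minimal sketch of paired-point rigid registration via the standard Kabsch/SVD method is shown below; the fiducial coordinates are synthetic stand-ins, not values from the thesis.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid-body (rotation + translation) alignment of
    corresponding 3D point sets, via the Kabsch/SVD method.

    src, dst: (N, 3) arrays of paired points.
    Returns R (3x3) and t (3,) such that dst ~= src @ R.T + t.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical fiducial points (e.g. skin markers in CT space, in mm),
# mapped into tracker space by a known rotation and translation.
rng = np.random.default_rng(0)
src = rng.random((6, 3)) * 100
theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 12.0])
dst = src @ R_true.T + t_true
R, t = rigid_register(src, dst)
# Mean fiducial registration error (residual after alignment).
fre = np.linalg.norm(src @ R.T + t - dst, axis=1).mean()
```

With noise-free fiducials the residual is numerically zero; with real measurements the same residual is the fiducial registration error, and systematic deformation of the chest is exactly what pushes errors toward the non-rigid techniques the thesis suggests.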

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as improved access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced innumerable issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We attempt to outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.

    Exploiting Temporal Image Information in Minimally Invasive Surgery

    Minimally invasive procedures rely on medical imaging instead of the surgeon's direct vision. While preoperative images can be used for surgical planning and navigation, once the surgeon arrives at the target site, real-time intraoperative imaging is needed. However, acquiring and interpreting these images can be challenging, and much of the rich temporal information present in these images is not visible. The goal of this thesis is to improve image guidance for minimally invasive surgery in two main areas: first, by showing how high-quality ultrasound video can be obtained by integrating an ultrasound transducer directly into delivery devices for beating-heart valve surgery; and secondly, by extracting hidden temporal information through video processing methods to help the surgeon localize important anatomical structures. Prototypes of delivery tools, with integrated ultrasound imaging, were developed for both transcatheter aortic valve implantation and mitral valve repair. These tools provided an on-site view that shows the tool-tissue interactions during valve repair. Additionally, augmented reality environments were used to add more anatomical context that aids in navigation and in interpreting the on-site video. Other procedures can be improved by extracting hidden temporal information from the intraoperative video. In ultrasound-guided epidural injections, dural pulsation provides a cue for finding a clear trajectory to the epidural space. By processing the video using extended Kalman filtering, subtle pulsations were automatically detected and visualized in real time. A statistical framework for analyzing periodicity was developed based on dynamic linear modelling. In addition to detecting dural pulsation in lumbar spine ultrasound, this approach was used to image tissue perfusion in natural video and generate ventilation maps from free-breathing magnetic resonance imaging. A second statistical method, based on spectral analysis of pixel intensity values, allowed blood flow to be detected directly from high-frequency B-mode ultrasound video. Finally, pulsatile cues in endoscopic video were enhanced through Eulerian video magnification to help localize critical vasculature. This approach shows particular promise in identifying the basilar artery in endoscopic third ventriculostomy and the prostatic artery in nerve-sparing prostatectomy. A real-time implementation was developed which processed full-resolution stereoscopic video on the da Vinci Surgical System.
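The spectral analysis of pixel intensity values described above rests on a simple idea: a pixel over a pulsating structure carries a periodic component at the cardiac frequency that a Fourier transform can pick out. The sketch below illustrates this with NumPy on a synthetic pixel trace; it is a minimal illustration of the principle, not the thesis's statistical framework (which uses dynamic linear modelling and formal periodicity tests).

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Find the dominant frequency (Hz) in a pixel-intensity time
    series, plus that frequency's share of the non-DC signal power."""
    x = signal - np.mean(signal)          # detrend (remove DC level)
    power = np.abs(np.fft.rfft(x)) ** 2   # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    k = np.argmax(power[1:]) + 1          # skip the DC bin
    return freqs[k], power[k] / power[1:].sum()

# Synthetic pixel trace: a ~1.2 Hz (72 bpm) pulsation buried in noise,
# sampled from 10 s of 30 fps video.
fs = 30.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)
f_peak, share = dominant_frequency(trace, fs)
```

Running this per pixel and thresholding on the power share at the cardiac band yields a crude pulsatility map; the Kalman-filter and dynamic-linear-model approaches in the thesis serve the same goal while handling noise and non-stationarity in a principled, real-time fashion.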