
    Ultrasound-Augmented Laparoscopy

    Laparoscopic surgery is perhaps the most common minimally invasive procedure for many diseases of the abdomen. Since the laparoscopic camera provides only a surface view of the internal organs, in many procedures surgeons use laparoscopic ultrasound (LUS) to visualize deep-seated surgical targets. Conventionally, the 2D LUS image is shown on a display spatially separate from the one that displays the laparoscopic video. Reasoning about the geometry of hidden targets therefore requires mentally solving the spatial alignment and resolving the modality differences, which is cognitively very challenging. Moreover, the mental representation of hidden targets acquired through such cognitive mediation may be error prone and cause incorrect actions to be performed. To remedy this, advanced visualization strategies are required in which the US information is visualized in the context of the laparoscopic video. To this end, efficient computational methods are needed to accurately align the US image coordinate system with the camera-centred coordinate system, and to render the registered image information in the context of the camera view such that surgeons perceive the geometry of hidden targets accurately. In this thesis, such a visualization pipeline is described. A novel method to register US images with a camera-centric coordinate system is detailed, with an experimental investigation into its accuracy bounds. An improved method to blend US information with the surface view is also presented, with an experimental investigation into the accuracy with which target locations in space are perceived.
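    The core geometric step alluded to above, mapping an ultrasound pixel into a camera-centric coordinate system and projecting it onto the laparoscopic video, can be sketched as follows. This is an illustrative outline only, not the thesis's method; the transform T_cam_us, the pixel spacing, and the intrinsic matrix K are hypothetical stand-ins for the results of a calibration/registration step.

```python
# Minimal sketch (not the thesis's actual pipeline): map an ultrasound (US)
# pixel into the laparoscopic camera frame and project it onto the video image.
# T_cam_us, the pixel spacing, and the intrinsics K are assumed to come from a
# prior calibration/registration step and are invented here for illustration.
import numpy as np

def us_pixel_to_camera(px, py, spacing_mm, T_cam_us):
    """Convert a US image pixel (px, py) to a 3D point in the camera frame."""
    # US image plane: x = column * spacing, y = row * spacing, z = 0 (mm)
    p_us = np.array([px * spacing_mm[0], py * spacing_mm[1], 0.0, 1.0])
    return (T_cam_us @ p_us)[:3]

def project_to_image(p_cam, K):
    """Pinhole projection of a camera-frame point onto the video image."""
    u = K @ (p_cam / p_cam[2])
    return u[:2]

# Example with made-up calibration values
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
T_cam_us = np.eye(4)
T_cam_us[:3, 3] = [10.0, -5.0, 120.0]   # hypothetical US-to-camera offset (mm)

p_cam = us_pixel_to_camera(100, 50, (0.2, 0.2), T_cam_us)
print(project_to_image(p_cam, K))       # pixel location of the US point in the video
```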

    INTERFACE DESIGN FOR A VIRTUAL REALITY-ENHANCED IMAGE-GUIDED SURGERY PLATFORM USING SURGEON-CONTROLLED VIEWING TECHNIQUES

    An initiative has been taken to develop a VR-guided cardiac interface that displays and delivers information without affecting the surgeons' natural workflow while yielding better accuracy and task completion time than the existing setup. This paper discusses the design process, the development of comparable user interface prototypes, and an evaluation methodology that can measure user performance and workload for each of the suggested display concepts. User-based studies and expert recommendations are used in conjunction to establish design guidelines for our VR-guided surgical platform. As a result, a better understanding of autonomous view control, depth display, and use of virtual context is attained. In addition, three proposed interfaces have been developed to allow a surgeon to control the view of the virtual environment intra-operatively. Comparative evaluation of the three implemented interface prototypes in a simulated surgical task scenario revealed performance advantages for stereoscopic and monoscopic biplanar display conditions, as well as differences between the three types of control modalities. One particular interface prototype demonstrated significant improvement in task performance. Design recommendations are made for this interface as well as the others as we prepare for future development iterations.

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces, and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
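    As a concrete illustration of one family of optical techniques covered by such reviews, the sketch below estimates a depth map from a rectified laparoscopic stereo pair using semi-global block matching in OpenCV. It is a generic example, not a method from the paper; the image paths, focal length, and baseline are placeholders.

```python
# Illustrative passive-stereo sketch: disparity from a rectified stereo pair,
# then depth via Z = f * B / d. All file names and calibration values below
# are assumptions, not taken from the reviewed work.
import cv2
import numpy as np

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,   # must be a multiple of 16
    blockSize=5,
)
# StereoSGBM returns fixed-point disparities scaled by 16
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

focal_px = 700.0      # hypothetical focal length in pixels
baseline_mm = 4.0     # hypothetical stereo baseline of the laparoscope

valid = disparity > 0
depth_mm = np.zeros_like(disparity)
depth_mm[valid] = focal_px * baseline_mm / disparity[valid]
```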

    AUGMENTED REALITY AND INTRAOPERATIVE C-ARM CONE-BEAM COMPUTED TOMOGRAPHY FOR IMAGE-GUIDED ROBOTIC SURGERY

    Minimally invasive robotic-assisted surgery is a rapidly growing alternative to traditional open and laparoscopic procedures; nevertheless, challenges remain. The standard of care derives surgical strategies from preoperative volumetric data (i.e., computed tomography (CT) and magnetic resonance (MR) images) that benefit from the ability of multiple modalities to delineate different anatomical boundaries. However, preoperative images may not reflect a possibly highly deformed perioperative setup or intraoperative deformation. Additionally, in current clinical practice, the correspondence of preoperative plans to the surgical scene is established as a mental exercise; thus, the accuracy of this practice is highly dependent on the surgeon's experience and therefore subject to inconsistencies. To address these fundamental limitations in minimally invasive robotic surgery, this dissertation combines a high-end robotic C-arm imaging system and a modern robotic surgical platform into an integrated intraoperative image-guided system. We performed deformable registration of preoperative plans to a perioperative cone-beam computed tomography (CBCT) scan acquired after the patient is positioned for intervention. From the registered surgical plans, we overlaid critical information onto the primary intraoperative visual source, the robotic endoscope, using augmented reality. Guidance afforded by this system not only uses augmented reality to fuse virtual medical information, but also provides tool localization and other dynamically updated intraoperative behavior in order to present enhanced depth feedback and information to the surgeon. These techniques in guided robotic surgery required a streamlined approach to creating intuitive and effective human-machine interfaces, especially in visualization. Our software design principles create an inherently information-driven modular architecture incorporating robotics and intraoperative imaging through augmented reality. The system's performance is evaluated using phantoms and preclinical in vivo experiments for multiple applications, including transoral robotic surgery, robot-assisted thoracic interventions, and cochleostomy for cochlear implantation. The resulting functionality, proposed architecture, and implemented methodologies can be further generalized to other C-arm-based image guidance systems for additional extensions in robotic surgery.
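    The overlay step described above, mapping registered plan points from CBCT coordinates into the robotic endoscope view and drawing them on the video frame, can be sketched roughly as follows. This is not the dissertation's implementation; the transform, intrinsics, distortion model, and target coordinates are invented placeholders.

```python
# Hedged sketch of an AR overlay step: project plan points (CBCT frame) into
# the endoscope image and draw markers. All numeric values are illustrative.
import cv2
import numpy as np

plan_points_cbct = np.array([[12.0, -30.5, 85.0],
                             [15.2, -28.1, 90.3]])   # mm, hypothetical targets

T_cam_cbct = np.eye(4)                               # stand-in for registration + hand-eye result
T_cam_cbct[:3, 3] = [0.0, 10.0, 150.0]

K = np.array([[1100.0, 0.0, 960.0],
              [0.0, 1100.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                                   # assume undistorted endoscope frames

# Express the CBCT->camera transform as rvec/tvec for cv2.projectPoints
rvec, _ = cv2.Rodrigues(T_cam_cbct[:3, :3])
tvec = T_cam_cbct[:3, 3]

pixels, _ = cv2.projectPoints(plan_points_cbct, rvec, tvec, K, dist)

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)    # stand-in for the endoscope image
for (u, v) in pixels.reshape(-1, 2).astype(int):
    cv2.circle(frame, (u, v), 8, (0, 255, 0), 2)     # augmented-reality marker
```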

    Kidney targeting and puncturing during percutaneous nephrolithotomy: recent advances and future perspectives

    BACKGROUND AND PURPOSE: Precise needle puncture of the kidney is a challenging and essential step for successful percutaneous nephrolithotomy (PCNL). Many devices and surgical techniques have been developed to more easily achieve suitable renal access. This article presents a critical review of the methodologies and techniques for kidney targeting and the puncture step during PCNL. Based on this study, research paths are also provided for improving the PCNL procedure. METHODS: The most relevant works concerning PCNL puncture were identified by a search of the Medline/PubMed, ISI Web of Science, and Scopus databases from 2007 to December 2012. Two authors independently reviewed the studies. RESULTS: A total of 911 abstracts and 346 full-text articles were assessed and discussed; 52 were included in this review as a summary of the main contributions to kidney targeting and puncturing. CONCLUSIONS: Multiple research paths and technological advances have been proposed in the fields of urology and minimally invasive surgery to improve PCNL puncture. The most relevant contributions, however, have been provided by the application of medical imaging guidance, new surgical tools, motion tracking systems, robotics, and image processing and computer graphics. Despite the multiple research paths for PCNL puncture guidance, no widely accepted solution has yet been reached, and it remains an active and challenging research field. Future developments should focus on real-time methods, robust and accurate algorithms, and radiation-free imaging techniques. The authors acknowledge the Foundation for Science and Technology (FCT) for fellowships SFRH/BPD/46851/2008 and SFRH/BD/74276/2010.

    Performance Factors in Neurosurgical Simulation and Augmented Reality Image Guidance

    Virtual reality surgical simulators have seen widespread adoption in an effort to provide safe, cost-effective and realistic practice of surgical skills. However, the majority of these simulators focus on training low-level technical skills, providing only prototypical surgical cases. For many complex procedures, this approach is deficient in representing anatomical variations that present clinically, failing to challenge users' higher-level cognitive skills important for navigation and targeting. Surgical simulators offer the means not only to simulate any case conceivable, but to test novel approaches and examine factors that influence performance. Unfortunately, there is a void in the literature surrounding these questions. This thesis was motivated by the need to expand the role of surgical simulators to provide users with clinically relevant scenarios and to evaluate human performance in relation to image guidance technologies, patient-specific anatomy, and cognitive abilities. To this end, various tools and methodologies were developed to examine cognitive abilities and knowledge, simulate procedures, and guide complex interventions, all within a neurosurgical context. The first chapter provides an introduction to the material. The second chapter describes the development and evaluation of a virtual anatomical training and examination tool. The results suggest that learning occurs and that spatial reasoning ability is an important performance predictor, but subordinate to anatomical knowledge. The third chapter outlines the development of automation tools to enable efficient simulation studies and data management. In the fourth chapter, subjects perform abstract targeting tasks on ellipsoid targets with and without augmented reality guidance. While the guidance tool improved accuracy, performance with the tool was strongly tied to target depth estimation, an important consideration for implementation and training with similar guidance tools. In the fifth chapter, neurosurgically experienced subjects were recruited to perform simulated ventriculostomies. Results showed that anatomical variations influence performance and could impact outcome. Augmented reality guidance showed no marked improvement in performance, but exhibited a mild learning curve, indicating that additional training may be warranted. The final chapter summarizes the work presented. Our results and novel evaluative methodologies lay the groundwork for further investigation into simulators as versatile research tools to explore performance factors in simulated surgical procedures.

    Microscope Embedded Neurosurgical Training and Intraoperative System

    In recent years, neurosurgery has been strongly influenced by new technologies. Computer Aided Surgery (CAS) offers several benefits for patients' safety, but fine techniques targeted at minimally invasive and minimally traumatic treatments are required, since intra-operative false movements can be devastating, resulting in patient deaths. The precision of the surgical gesture is related both to the accuracy of the available technological instruments and to the surgeon's experience. In this frame, medical training is particularly important. From a technological point of view, the use of Virtual Reality (VR) for surgeon training and Augmented Reality (AR) for intra-operative treatments offers the best results. In addition, traditional techniques for training in surgery include the use of animals, phantoms and cadavers. The main limitations of these approaches are that live tissue has different properties from dead tissue and that animal anatomy differs significantly from human anatomy. From the medical point of view, Low-Grade Gliomas (LGGs) are intrinsic brain tumours that typically occur in younger adults. The objective of treatment is to remove as much of the tumour as possible while minimizing damage to the healthy brain. Pathological tissue may closely resemble normal brain parenchyma when viewed through the neurosurgical microscope, and the tactile appreciation of the different consistency of the tumour compared with normal brain requires considerable experience on the part of the neurosurgeon; it is a vital point. The first part of this PhD thesis presents a system for realistic simulation (visual and haptic) of spatula palpation of an LGG. This is the first prototype of a training system using VR, haptics and a real microscope for neurosurgery. This architecture can also be adapted for intra-operative purposes. In this instance, a surgeon needs the basic setup for Image Guided Therapy (IGT) interventions: microscope, monitors and navigated surgical instruments. The same virtual environment can be AR-rendered onto the microscope optics. The objective is to enhance the surgeon's intra-operative orientation by providing a three-dimensional view and other information necessary for safe navigation inside the patient. These considerations motivated the second part of this work, which has been devoted to improving a prototype of an AR stereoscopic microscope for neurosurgical interventions developed at our institute in previous work. Completely new software has been developed in order to reuse the microscope hardware, enhancing both rendering performance and usability. Since both AR and VR share the same platform, the system can be referred to as a Mixed Reality system for neurosurgery. All the components are open source or at least based on a GPL license.
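    As a loose illustration of how haptic feedback for palpation of tissues of differing consistency might be computed, the sketch below returns a simple penalty (spring) force along the surface normal. The stiffness values, tissue labels, and function names are hypothetical and do not reflect the thesis's simulation.

```python
# Not the thesis implementation: a minimal spring-model sketch of haptic
# palpation feedback, where different tissue types are given different
# stiffness values so the virtual spatula "feels" their consistency.
import numpy as np

STIFFNESS_N_PER_MM = {"normal_brain": 0.25, "low_grade_glioma": 0.45}  # illustrative only

def palpation_force(tool_tip, surface_point, surface_normal, tissue_type):
    """Return a reaction force (N) when the virtual spatula indents the tissue."""
    n = surface_normal / np.linalg.norm(surface_normal)
    penetration = np.dot(surface_point - tool_tip, n)   # mm below the surface
    if penetration <= 0.0:
        return np.zeros(3)                              # no contact, no force
    return STIFFNESS_N_PER_MM[tissue_type] * penetration * n

# Example: tool tip 1.5 mm below a flat patch of simulated glioma
f = palpation_force(np.array([0.0, 0.0, -1.5]),
                    np.array([0.0, 0.0, 0.0]),
                    np.array([0.0, 0.0, 1.0]),
                    "low_grade_glioma")
print(f)   # approximately [0, 0, 0.675] N
```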

    Visual Perception and Cognition in Image-Guided Intervention

    Surgical image visualization and interaction systems can dramatically affect the efficacy and efficiency of surgical training, planning, and interventions. This is even more profound in minimally invasive surgery, where restricted access to the operative field in conjunction with a limited field of view necessitates a visualization medium that provides patient-specific information at any given moment. Unfortunately, little research has been devoted to studying the human factors associated with medical image displays, and the need for robust, intuitive visualization and interaction interfaces has remained largely unfulfilled to this day. Failure to engineer efficient medical solutions and to design intuitive visualization interfaces is argued to be one of the major barriers to the meaningful transfer of innovative technology to the operating room. This thesis was therefore motivated by the need to study various cognitive and perceptual aspects of human factors in surgical image visualization systems, to increase the efficiency and effectiveness of medical interfaces, and ultimately to improve patient outcomes. To this end, we chose four different minimally invasive interventions in the realm of surgical training, planning, training for planning, and navigation. The first chapter involves the use of stereoendoscopes to reduce morbidity in endoscopic third ventriculostomy. The results of this study suggest that, compared with conventional endoscopes, the detection of the basilar artery on the surface of the third ventricle can be facilitated with the use of stereoendoscopes, increasing the safety of targeting in third ventriculostomy procedures. In the second chapter, a contour enhancement technique is described to improve preoperative planning of arteriovenous malformation interventions. The proposed method, particularly when combined with stereopsis, is shown to increase the speed and accuracy of understanding the spatial relationship between vascular structures. In the third chapter, an augmented-reality system is proposed to facilitate training for planning brain tumour resection. The results of our user study indicate that the proposed system improves subjects' performance, particularly that of novices, in formulating the optimal point of entry and surgical path, independent of the sensorimotor tasks performed. In the last chapter, the role of fully immersive simulation environments in surgeons' non-technical skills when performing a vertebroplasty procedure is investigated. Our results suggest that while training may increase surgeons' technical skills, the introduction of crisis scenarios significantly disturbs performance, emphasizing the need for realistic simulation environments as part of the training curriculum.

    Mixed Reality in Modern Surgical and Interventional Practice: Narrative Review of the Literature

    BACKGROUND: Mixed reality (MR) and its potential applications have gained increasing interest within the medical community over recent years. The ability to integrate virtual objects into a real-world environment within a single video-see-through display is a topic that sparks the imagination. Given these characteristics, MR could facilitate preoperative and preinterventional planning, provide intraoperative and intrainterventional guidance, and aid in education and training, thereby improving the skills and merits of surgeons and residents alike. OBJECTIVE: In this narrative review, we provide a broad overview of the different applications of MR across the entire spectrum of surgical and interventional practice and elucidate potential future directions. METHODS: A targeted literature search of the PubMed, Embase, and Cochrane databases was performed regarding the application of MR within surgical and interventional practice. Studies were included if they met the criteria for technological readiness level 5 and, as such, had been validated in a relevant environment. RESULTS: A total of 57 studies were included and divided into studies on preoperative and preinterventional planning, intraoperative and interventional guidance, and training and education. CONCLUSIONS: The overall experience with MR is positive. The main benefits of MR appear to be related to improved efficiency. Limitations are primarily related to constraints associated with head-mounted displays. Future directions should be aimed at improving head-mounted display technology, incorporating MR within surgical microscopes and robots, and designing trials to prove superiority.