504 research outputs found

    Intraoperative Navigation Systems for Image-Guided Surgery

    Recent technological advancements in medical imaging equipment have dramatically improved image accuracy, providing clinically useful information that was previously unavailable. In the surgical context, intraoperative imaging is of crucial value for the success of the operation. Many nontrivial scientific and technical problems must be addressed to efficiently exploit the different information sources available in today's advanced operating rooms. In particular, it is necessary to provide: (i) accurate tracking of surgical instruments, (ii) real-time matching of images from different modalities, and (iii) reliable guidance toward the surgical target. All of these requirements must be satisfied to realize effective intraoperative navigation systems for image-guided surgery. Various solutions have been proposed and successfully tested in the field of image navigation systems over the last ten years; nevertheless, several problems still arise in most applications regarding the precision, usability, and capabilities of existing systems. Identifying and solving these issues represents an urgent scientific challenge. This thesis investigates the current state of the art in intraoperative navigation systems, focusing in particular on the challenges related to efficient and effective use of ultrasound imaging during surgery. The main contributions of this thesis to the state of the art are: (i) techniques for automatic motion compensation and therapy monitoring applied to a novel ultrasound-guided surgical robotic platform in the context of abdominal tumor thermoablation; and (ii) novel image-fusion-based navigation systems for ultrasound-guided neurosurgery in the context of brain tumor resection, highlighting their applicability as off-line surgical training instruments.
The proposed systems, which were designed and developed in the framework of two international research projects, have been tested in real or simulated surgical scenarios, showing promising results toward their application in clinical practice.

    PWD-3DNet: A deep learning-based fully-automated segmentation of multiple structures on temporal bone CT scans

    The temporal bone is a part of the lateral skull surface that contains organs responsible for hearing and balance. Mastering surgery of the temporal bone is challenging because of this complex and microscopic three-dimensional anatomy. Segmentation of intra-temporal anatomy based on computed tomography (CT) images is necessary for applications such as surgical training and rehearsal, amongst others. However, temporal bone segmentation is challenging due to the similar intensities and complicated anatomical relationships among critical structures, undetectable small structures on standard clinical CT, and the amount of time required for manual segmentation. This paper describes a single multi-class deep learning-based pipeline as the first fully automated algorithm for segmenting multiple temporal bone structures from CT volumes, including the sigmoid sinus, facial nerve, inner ear, malleus, incus, stapes, internal carotid artery and internal auditory canal. The proposed fully convolutional network, PWD-3DNet, is a patch-wise densely connected (PWD) three-dimensional (3D) network. The accuracy and speed of the proposed algorithm were shown to surpass current manual and semi-automated segmentation techniques. The experimental results yielded high Dice similarity scores and low Hausdorff distances for all temporal bone structures, with averages of 86% and 0.755 millimeters (mm), respectively. We illustrated that overlap of the inference sub-volumes improves segmentation performance. Moreover, we proposed augmentation layers using samples with various transformations and image artefacts to increase the robustness of PWD-3DNet to image acquisition protocols, such as smoothing caused by soft-tissue scanner settings and the larger voxel sizes used for radiation reduction.
The proposed algorithm was tested on low-resolution CTs acquired at another center with scanner parameters different from those used to develop the algorithm, and shows potential for application beyond the particular training data used in the study.
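The two evaluation metrics reported above can be reproduced in a few lines. The sketch below is illustrative (not from the paper's code): it computes the Dice similarity coefficient on binary segmentation masks and the symmetric Hausdorff distance on boundary point sets.

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks (any shape)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two point sets (N x 3, M x 3),
    in the same units as the coordinates (e.g. mm for CT surface points)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

For large volumes the brute-force pairwise distance matrix becomes expensive; a k-d tree (e.g. `scipy.spatial.cKDTree`) is the usual replacement.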


    Enhancing Registration for Image-Guided Neurosurgery

    Pharmacologically refractory temporal lobe epilepsy and malignant glioma brain tumours are examples of pathologies that are clinically managed through neurosurgical intervention. The aims of neurosurgery are, where possible, to perform a resection of the surgical target while minimising morbidity to critical structures in the vicinity of the resected brain area. Image-guidance technology aims to assist this task by displaying a model of brain anatomy to the surgical team, which may include an overlay of surgical planning information derived from preoperative scanning, such as the segmented resection target and nearby critical brain structures. Accurate neuronavigation is hindered by brain shift, the complex and non-rigid deformation of the brain that arises during surgery, which invalidates the assumed rigid geometric correspondence between the neuronavigation model and the true shifted positions of relevant brain areas. Imaging using an interventional MRI (iMRI) scanner in a next-generation operating room can serve as a reference for intraoperative updates of the neuronavigation. An established clinical image processing workflow for iMRI-based guidance involves the correction of relevant imaging artefacts and the estimation of deformation due to brain shift based on non-rigid registration. The present thesis introduces two refinements aimed at enhancing the accuracy and reliability of iMRI-based guidance. A method is presented for the correction of magnetic susceptibility artefacts, which affect diffusion and functional MRI datasets, based on simulating magnetic field variation in the head from structural iMRI scans.
Next, a method is presented for estimating brain shift using discrete non-rigid registration and a novel local similarity measure with an edge-preserving property, which is shown to improve the accuracy of the estimated deformation in the vicinity of the resected area for a number of cases of surgery performed for the management of temporal lobe epilepsy and glioma.
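For context, local similarity measures of this kind typically start from normalized cross-correlation evaluated over small patches. The sketch below shows only that plain baseline (the thesis's edge-preserving measure is a refinement and is not reproduced here):

```python
import numpy as np

def local_ncc(a, b, eps=1e-8):
    """Normalized cross-correlation between two image patches of equal
    shape. Returns a value in roughly [-1, 1]; 1 means the patches match
    up to an affine intensity change. In discrete registration, this is
    evaluated for each candidate displacement of each local patch."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps  # eps avoids 0/0
    return float((a * b).sum() / denom)
```

Because NCC is invariant to local brightness and contrast changes, it tolerates the intensity differences between preoperative and intraoperative MRI better than a plain sum-of-squared-differences cost.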

    Exploiting Temporal Image Information in Minimally Invasive Surgery

    Minimally invasive procedures rely on medical imaging instead of the surgeon's direct vision. While preoperative images can be used for surgical planning and navigation, real-time intraoperative imaging is needed once the surgeon arrives at the target site. However, acquiring and interpreting these images can be challenging, and much of the rich temporal information present in them is not visible. The goal of this thesis is to improve image guidance for minimally invasive surgery in two main areas: first, by showing how high-quality ultrasound video can be obtained by integrating an ultrasound transducer directly into delivery devices for beating-heart valve surgery; and second, by extracting hidden temporal information through video-processing methods to help the surgeon localize important anatomical structures. Prototypes of delivery tools with integrated ultrasound imaging were developed for both transcatheter aortic valve implantation and mitral valve repair. These tools provided an on-site view that shows the tool-tissue interactions during valve repair. Additionally, augmented reality environments were used to add anatomical context that aids navigation and interpretation of the on-site video. Other procedures can be improved by extracting hidden temporal information from the intraoperative video. In ultrasound-guided epidural injections, dural pulsation provides a cue for finding a clear trajectory to the epidural space. By processing the video using extended Kalman filtering, subtle pulsations were automatically detected and visualized in real time. A statistical framework for analyzing periodicity was developed based on dynamic linear modelling. In addition to detecting dural pulsation in lumbar spine ultrasound, this approach was used to image tissue perfusion in natural video and to generate ventilation maps from free-breathing magnetic resonance imaging.
A second statistical method, based on spectral analysis of pixel intensity values, allowed blood flow to be detected directly from high-frequency B-mode ultrasound video. Finally, pulsatile cues in endoscopic video were enhanced through Eulerian video magnification to help localize critical vasculature. This approach shows particular promise in identifying the basilar artery in endoscopic third ventriculostomy and the prostatic artery in nerve-sparing prostatectomy. A real-time implementation was developed that processes full-resolution stereoscopic video on the da Vinci Surgical System.
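The core of Eulerian video magnification is a temporal band-pass filter applied to each pixel's intensity trace, with the filtered band amplified and added back to the input. The minimal single-scale sketch below uses an idealized frequency-domain filter; it is not the thesis implementation, and the function name and default band (0.8-2.0 Hz, plausible for cardiac pulsation) are illustrative:

```python
import numpy as np

def magnify_pulsation(frames, fps, f_lo=0.8, f_hi=2.0, alpha=10.0):
    """Amplify subtle periodic intensity changes in a grayscale video.

    frames: array of shape (T, H, W).
    An ideal temporal band-pass filter in the frequency domain isolates
    the pulsatile band (f_lo..f_hi Hz); that component is scaled by
    alpha and added back, making small pulsations visible.
    """
    T = frames.shape[0]
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)          # per-bin frequency (Hz)
    spectrum = np.fft.rfft(frames, axis=0)           # FFT along time axis
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spectrum[~band] = 0.0                            # keep only the band
    pulsatile = np.fft.irfft(spectrum, n=T, axis=0)  # band-passed signal
    return frames + alpha * pulsatile
```

The published technique additionally applies a spatial pyramid decomposition so that different spatial scales can be amplified by different amounts; this sketch keeps only the temporal step.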

    Detection and 3D Localization of Surgical Instruments for Image-Guided Surgery

    Placement of surgical instrumentation in pelvic trauma surgery is challenged by complex anatomy and narrow bone corridors, relying on intraoperative x-ray fluoroscopy for visualization and guidance. The rapid workflow and cost constraints of orthopaedic trauma surgery have largely prohibited widespread adoption of 3D surgical navigation. This thesis reports the development and evaluation of a method to achieve 3D guidance via automatic detection and localization of surgical instruments (specifically, Kirschner wires [K-wires]) in fluoroscopic images acquired within routine workflow. The detection method uses a neural network (Mask R-CNN) for segmentation and keypoint detection of K-wires in fluoroscopy, and correspondence of keypoints among multiple images is established by 3D backprojection and a rank-ordering of ray intersections. The accuracy of 3D K-wire localization was evaluated in a laboratory cadaver study as well as patient images drawn from an IRB-approved clinical study. The detection network successfully generalized from simulated training and validation images to cadaver and clinical images, achieving 87% recall and 98% precision. The geometric accuracy of K-wire tip location and direction in 2D fluoroscopy was 1.9 ± 1.6 mm and 1.8° ± 1.3°, respectively. Simulation studies demonstrated a corresponding mean error of 1.1 mm in 3D tip location and 2.3° in 3D direction. Cadaver and clinical studies demonstrated the feasibility of the approach in real data, although accuracy was reduced, with errors of 1.7 ± 0.7 mm in 3D tip location and 6° ± 2° in 3D direction. Future studies aim to improve performance by increasing the volume and variety of images used in neural network training, particularly with respect to low-dose fluoroscopy (high noise levels) and complex fluoroscopic scenes with various types of surgical instrumentation.
Because the approach involves fast runtime and uses equipment (a mobile C-arm) and fluoroscopic images that are common in standard workflow, it may be suitable for broad utilization in orthopaedic trauma surgery.
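The correspondence step described above hinges on how nearly two backprojected rays intersect: keypoint detections that belong to the same wire yield rays with a small closest-approach distance. A minimal sketch of that geometric computation (illustrative names, not the authors' code):

```python
import numpy as np

def ray_closest_point(p1, d1, p2, d2):
    """Midpoint of the common perpendicular between two 3D rays, and
    their closest-approach distance. Each ray is p + t*d. The distance
    can serve as a rank-ordering score for candidate correspondences:
    a correct pairing backprojects to nearly intersecting rays, and the
    midpoint then estimates the 3D keypoint (e.g. a K-wire tip)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    b = d1 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = 1.0 - b * b              # zero when the rays are parallel
    t = (b * e - d) / denom          # parameter along ray 1
    s = (e - b * d) / denom          # parameter along ray 2
    q1, q2 = p1 + t * d1, p2 + s * d2
    return 0.5 * (q1 + q2), np.linalg.norm(q1 - q2)
```

In practice the ray origins and directions come from the calibrated C-arm geometry of each fluoroscopic view, and a guard for near-parallel rays (small `denom`) is needed.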

    CBCT-based navigation system for open liver surgery: accurate guidance toward mobile and deformable targets with a semi-rigid organ approximation and electromagnetic tracking of the liver

    Purpose: A surgical navigation system that provides guidance throughout the surgery can facilitate safer and more radical liver resections, but such a system should also be able to handle organ motion. This work investigates the accuracy of intraoperative surgical guidance during open liver resection, with a semi-rigid organ approximation and electromagnetic tracking of the target area. Methods: The suggested navigation technique incorporates a preoperative 3D liver model based on a diagnostic 4D MRI scan, intraoperative contrast-enhanced CBCT imaging, and electromagnetic (EM) tracking of the liver surface, as well as of surgical instruments, by means of six-degrees-of-freedom micro-EM sensors. Results: The system was evaluated during surgeries on 35 patients and provided accurate and intuitive real-time visualization of liver anatomy and tumor location, confirmed by intraoperative checks on visible anatomical landmarks. Based on accuracy measurements verified by intraoperative CBCT, the system's average accuracy was 4.0 +/- 3.0 mm, while the total surgical delay due to navigation stayed below 20 min. Conclusions: The electromagnetic navigation system for open liver surgery developed in this work allows accurate localization of liver lesions and of the critical anatomical structures surrounding the resection area, even when the liver is manipulated. However, further clinical integration of the method requires shortening the guidance-related surgical delay, which can be achieved by shifting to faster intraoperative imaging such as ultrasound. Our approach is adaptable to navigation on other mobile and deformable organs, and therefore may benefit various clinical applications.
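A semi-rigid approximation of this kind typically starts from a least-squares rigid alignment between model landmarks and their tracked intraoperative positions. The sketch below is a standard Kabsch-algorithm implementation, shown for illustration (it is not the authors' software):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    points (Kabsch algorithm). src, dst: (N, 3) arrays of corresponding
    landmarks, e.g. preoperative model points and their EM-tracked
    intraoperative positions. Returns R (3x3 rotation) and t (3,)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Force a proper rotation (det +1), guarding against reflections.
    S = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ S @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

The residual `||R @ p + t - q||` over held-out landmarks gives a target registration error, which is the kind of figure the 4.0 +/- 3.0 mm accuracy above summarizes; a deformable model then corrects what the rigid fit cannot.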

    Machine Learning in Robotic Ultrasound Imaging: Challenges and Perspectives

    This article reviews recent advances in intelligent robotic ultrasound (US) imaging systems. We commence by presenting the robotic mechanisms and control techniques commonly employed in robotic US imaging, along with their clinical applications. Subsequently, we focus on the deployment of machine learning techniques in the development of robotic sonographers, emphasizing crucial developments aimed at enhancing the intelligence of these systems. The methods for achieving autonomous action reasoning are categorized into two sets of approaches: those relying on implicit environmental data interpretation and those using explicit interpretation. Throughout this exploration, we also discuss practical challenges, including the scarcity of medical data, the need for a deeper understanding of the physical aspects involved, and effective data representation approaches. We conclude by highlighting open problems in the field and analyzing different perspectives on how the community could move forward in this research area. (Accepted by Annual Review of Control, Robotics, and Autonomous Systems.)

    Previous, current, and future stereotactic EEG techniques for localising epileptic foci

    INTRODUCTION: Drug-resistant focal epilepsy presents a significant morbidity burden globally, and epilepsy surgery has been shown to be an effective treatment modality. Accurate identification of the epileptogenic zone for surgery is therefore crucial, and in patients with unclear noninvasive data, stereo-electroencephalography (SEEG) is required. AREAS COVERED: This review covers the history and current practices in the field of intracranial EEG, particularly analyzing how stereotactic image guidance, robot-assisted navigation, and improved imaging techniques have increased the accuracy, scope, and use of SEEG globally. EXPERT OPINION: We provide a perspective on future directions in the field, reviewing improvements in predicting electrode bending, image acquisition, machine learning and artificial intelligence, and advances in surgical planning and visualization software and hardware. We also anticipate the development of EEG analysis tools based on machine learning algorithms that are likely to work synergistically with neurophysiology experts and improve the efficiency of EEG and SEEG analysis and 3D visualization. Improving computer-assisted planning to minimize manual input from the surgeon, along with seamless integration into an ergonomic and adaptive operating theater incorporating hybrid microscopes and virtual and augmented reality, is likely to be a significant area of improvement in the near future.