16 research outputs found

    Intraoperative Imaging Modalities and Compensation for Brain Shift in Tumor Resection Surgery

    Intraoperative brain shift during neurosurgical procedures is a well-known phenomenon caused by gravity, tissue manipulation, tumor size, loss of cerebrospinal fluid (CSF), and the use of medication. For image-guided systems, this phenomenon greatly affects the accuracy of the guidance. Over the last several decades, researchers have investigated how to overcome this problem. The purpose of this paper is to review publications concerning different aspects of intraoperative brain shift, especially in tumor resection surgery, such as intraoperative imaging systems, quantification, measurement, modeling, and registration techniques. Clinical experience with intraoperative imaging modalities, and details of registration and modeling methods in connection with brain shift in tumor resection surgery, are the focus of this review. In total, 126 papers on this topic are analyzed in a comprehensive summary and categorized according to fourteen criteria. The result of the categorization is presented in an interactive web tool. Conclusions drawn from the categorization and future trends are discussed at the end of this work.

    Brain-shift compensation using intraoperative ultrasound and constraint-based biomechanical simulation

    Purpose: During brain tumor surgery, planning and guidance are based on preoperative images, which do not account for brain shift. This deformation is, however, a major source of error in image-guided neurosurgery and affects the accuracy of the procedure. In this paper, we present a constraint-based biomechanical simulation method to compensate for craniotomy-induced brain shift that integrates the deformations of the blood vessels and cortical surface, using a single intraoperative ultrasound acquisition. Methods: Prior to surgery, a patient-specific biomechanical model is built from preoperative images, accounting for the vascular tree in the tumor region and the brain's soft tissues. Intraoperatively, a navigated ultrasound acquisition is performed directly in contact with the organ. Doppler and B-mode images are recorded simultaneously, enabling the extraction of the blood vessels and probe footprint, respectively. A constraint-based simulation is then executed to register the pre- and intraoperative vascular trees as well as the cortical surface with the probe footprint. Finally, preoperative images are updated to provide the surgeon with images corresponding to the current brain shape for navigation. Results: The robustness of our method is first assessed using sparse and noisy synthetic data. In addition, quantitative results for five clinical cases are provided, first using landmarks set on blood vessels, then based on anatomical structures delineated in medical images. The average distances between paired vessel landmarks ranged from 3.51 mm to 7.32 mm before compensation. With our method, on average 67% of the brain shift is corrected (range [1.26; 2.33] mm), against 57% using one of the closest existing works (range [1.71; 2.84] mm). Finally, our method is shown to be fully compatible with a surgical workflow in terms of execution times and user interactions. Conclusion: In this paper, a new constraint-based biomechanical simulation method is proposed to compensate for craniotomy-induced brain shift. While efficiently correcting this deformation, the method is fully integrable into the clinical workflow.
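The reported statistics reduce to simple arithmetic: the percentage of brain shift corrected is the relative reduction in mean paired-landmark distance. A minimal sketch (the per-case distances below are illustrative placeholders, not the paper's data):

```python
import numpy as np

def correction_percentage(dist_before_mm, dist_after_mm):
    """Fraction of the initial mean landmark misalignment removed by compensation."""
    before = np.mean(dist_before_mm)
    after = np.mean(dist_after_mm)
    return 100.0 * (before - after) / before

# Hypothetical per-case mean paired-landmark distances (mm); the middle
# values are made up, only the endpoints echo the ranges in the abstract.
before = [3.51, 5.10, 7.32]
after = [1.26, 1.80, 2.33]
print(f"{correction_percentage(before, after):.0f}% of the initial brain shift corrected")
```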

    Medical Image Analysis: Progress over two decades and the challenges ahead

    The analysis of medical images has been woven into the fabric of the pattern analysis and machine intelligence (PAMI) community since the earliest days of these Transactions. Initially, the efforts in this area were seen as applying pattern analysis and computer vision techniques to just another interesting dataset. However, over the last two to three decades, the unique nature of the problems presented within this area of study has led to the development of a new discipline in its own right. Examples of these problems include the types of image information that are acquired, the fully three-dimensional image data, the nonrigid nature of object motion and deformation, and the statistical variation of both the underlying normal and abnormal ground truth. In this paper, we look at progress in the field over the last 20 years and suggest some of the challenges that remain for the years to come.

    Enhancing Registration for Image-Guided Neurosurgery

    Pharmacologically refractory temporal lobe epilepsy and malignant glioma brain tumours are examples of pathologies that are clinically managed through neurosurgical intervention. The aims of neurosurgery are, where possible, to perform a resection of the surgical target while minimising morbidity to critical structures in the vicinity of the resected brain area. Image-guidance technology aims to assist this task by displaying a model of brain anatomy to the surgical team, which may include an overlay of surgical planning information derived from preoperative scanning, such as the segmented resection target and nearby critical brain structures. Accurate neuronavigation is hindered by brain shift, the complex and non-rigid deformation of the brain that arises during surgery, which invalidates the assumed rigid geometric correspondence between the neuronavigation model and the true, shifted positions of the relevant brain areas. Imaging with an interventional MRI (iMRI) scanner in a next-generation operating room can serve as a reference for intraoperative updates of the neuronavigation. An established clinical image-processing workflow for iMRI-based guidance involves the correction of relevant imaging artefacts and the estimation of deformation due to brain shift based on non-rigid registration. The present thesis introduces two refinements aimed at enhancing the accuracy and reliability of iMRI-based guidance. First, a method is presented for the correction of magnetic susceptibility artefacts, which affect diffusion and functional MRI datasets, based on simulating magnetic field variation in the head from structural iMRI scans.
    Next, a method is presented for estimating brain shift using discrete non-rigid registration and a novel local similarity measure with an edge-preserving property, which is shown to improve the accuracy of the estimated deformation in the vicinity of the resected area for a number of cases of surgery performed for the management of temporal lobe epilepsy and glioma.
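The thesis's edge-preserving similarity measure is novel and not reproduced here; purely as an illustration of how discrete non-rigid registration scores candidate integer displacements with a *local* similarity measure, the following sketch uses plain patchwise normalized cross-correlation (all names and parameters are hypothetical):

```python
import numpy as np

def local_ncc(fixed_patch, moving_patch, eps=1e-8):
    """Normalized cross-correlation between two small image patches."""
    f = fixed_patch - fixed_patch.mean()
    m = moving_patch - moving_patch.mean()
    return float((f * m).sum() / (np.linalg.norm(f) * np.linalg.norm(m) + eps))

def best_displacement(fixed, moving, center, radius=2, patch=3):
    """Exhaustively score integer displacements of one patch (discrete registration)."""
    y, x = center
    h = patch // 2
    ref = fixed[y - h:y + h + 1, x - h:x + h + 1]
    best, best_d = -np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = moving[y + dy - h:y + dy + h + 1, x + dx - h:x + dx + h + 1]
            s = local_ncc(ref, cand)
            if s > best:
                best, best_d = s, (dy, dx)
    return best_d

rng = np.random.default_rng(0)
fixed = rng.random((11, 11))
moving = np.roll(fixed, shift=(1, 1), axis=(0, 1))   # moving[i, j] == fixed[i-1, j-1]
print(best_displacement(fixed, moving, center=(5, 5)))  # (1, 1)
```

A full method would regularize neighbouring displacements against each other; this sketch only shows the per-patch data term.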

    The state-of-the-art in ultrasound-guided spine interventions.

    During the last two decades, intraoperative ultrasound (iUS) imaging has been employed for various surgical procedures of the spine, including spinal fusion and needle injections. Accurate and efficient registration of preoperative computed tomography or magnetic resonance images with iUS images is a key element in the success of iUS-based spine navigation. While widely investigated in research, iUS-based spine navigation has not yet been established in the clinic. This is due to several factors, including the lack of a standard methodology for assessing the accuracy, robustness, reliability, and usability of the registration method. To address these issues, we present a systematic review of state-of-the-art techniques for iUS-guided registration in spinal image-guided surgery (IGS). The review follows a new taxonomy based on the four steps involved in the surgical workflow: pre-processing, registration initialization, estimation of the required patient-to-image transformation, and visualization. We provide a detailed analysis of the measurements, in terms of accuracy, robustness, reliability, and usability, that need to be met during the evaluation of a spinal IGS framework. Although this review is focused on spinal navigation, we expect similar evaluation criteria to be relevant for other IGS applications.
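To make the "estimation of the patient-to-image transformation" step concrete, a common point-based building block (not specific to any method in the review) is the closed-form rigid fit between paired fiducials, often attributed to Kabsch/Arun. A minimal sketch:

```python
import numpy as np

def rigid_fit(src, dst):
    """Return R (3x3) and t (3,) minimizing sum ||R @ src_i + t - dst_i||^2."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: recover a known rotation (30 deg about z) and translation.
rng = np.random.default_rng(1)
src = rng.random((10, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true
R, t = rigid_fit(src, dst)
```

In practice a rigid fit like this typically only initializes the registration, which then has to cope with noise, outliers, and non-rigid spinal motion.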

    Dense Vision in Image-guided Surgery

    Image-guided surgery needs an efficient and effective camera tracking system in order to perform augmented reality, overlaying preoperative models or labelling cancerous tissue on the 2D video images of the surgical scene. Tracking in endoscopic/laparoscopic scenes, however, is an extremely difficult task, primarily due to tissue deformation, instrument invasion into the surgical scene, and the presence of specular highlights. State-of-the-art feature-based SLAM systems such as PTAM fail in tracking such scenes, since the number of good features to track is very limited, and smoke or instrument motion causes feature-based tracking to fail immediately. The work of this thesis provides a systematic approach to this problem using dense vision. We initially attempted to register a 3D preoperative model with multiple 2D endoscopic/laparoscopic images using a dense method, but this approach did not perform well. We subsequently proposed stereo reconstruction to directly obtain the 3D structure of the scene. By using the dense reconstructed model together with robust estimation, we demonstrate that dense stereo tracking can be remarkably robust even within extremely challenging endoscopic/laparoscopic scenes. Several validation experiments have been conducted in this thesis. The proposed stereo reconstruction algorithm achieves state-of-the-art results on several publicly available ground-truth datasets. Furthermore, the proposed robust dense stereo tracking algorithm has proved highly accurate in a synthetic environment (< 0.1 mm RMSE) and qualitatively extremely robust when applied to real scenes from robot-assisted laparoscopic prostatectomy (RALP) surgery. This is an important step toward achieving accurate image-guided laparoscopic surgery.
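As background to the stereo reconstruction step, the standard rectified-pinhole relation converts a pixel disparity d into depth via Z = f·B/d. A minimal sketch with illustrative numbers (not from the thesis):

```python
# For a rectified stereo pair with focal length f (in pixels) and baseline B,
# a pixel disparity d maps to depth Z = f * B / d. Larger disparity means the
# point is closer to the cameras.
def disparity_to_depth(d_px, focal_px, baseline_mm):
    if d_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_mm / d_px

# Illustrative values: f = 700 px, a 5 mm stereo-laparoscope baseline,
# and a measured disparity of 35 px.
depth = disparity_to_depth(35.0, 700.0, 5.0)   # -> 100.0 mm
```

Dense reconstruction applies this per pixel after disparity estimation, which is why disparity errors translate directly into depth errors that grow with distance.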

    Advancing fluorescent contrast agent recovery methods for surgical guidance applications

    Fluorescence-guided surgery (FGS) utilizes fluorescent contrast agents and specialized optical instruments to assist surgeons in intraoperatively identifying tissue-specific characteristics, such as perfusion, malignancy, and molecular function. In doing so, FGS represents a powerful surgical navigation tool for solving clinical challenges not easily addressed by other conventional imaging methods. With growing translational efforts, major hurdles within the FGS field include insufficient tools for understanding contrast agent uptake behaviors, the inability to image tissue beyond a couple of millimeters, and performance limitations of currently approved contrast agents in accurately and rapidly labeling disease. The developments presented within this thesis aim to address these shortcomings. Current preclinical fluorescence imaging tools often sacrifice either 3D scale or spatial resolution. To address this gap in the available high-resolution, whole-body preclinical imaging tools, the crux of this work lies in the development of a hyperspectral cryo-imaging system and image-processing techniques that accurately recapitulate high-resolution, 3D biodistributions in whole-animal experiments. Specifically, the goal is to correct each cryo-imaging dataset such that it becomes a useful reporter of whole-body biodistributions in relevant disease models. To investigate the potential benefits of seeing deeper during FGS, we investigated short-wave infrared (SWIR) imaging for recovering fluorescence beyond the conventional top few millimeters. Through phantom, preclinical, and clinical SWIR imaging, we were able to 1) validate the capability of SWIR imaging with conventional NIR-I fluorophores, 2) demonstrate the translational benefits of SWIR-ICG angiography in a large animal model, and 3) detect micro-dose levels of an EGFR-targeted NIR-I probe during a Phase 0 clinical trial. Lastly, we evaluated contrast agent performance for FGS glioma resection and breast cancer margin assessment. To evaluate the glioma-labeling performance of untargeted contrast agents, 3D agent biodistributions were compared voxel-by-voxel to gold-standard Gd-MRI and pathology slides. Finally, building on expertise in dual-probe ratiometric imaging at Dartmouth, a 10-patient clinical pilot study was carried out to assess the technique's efficacy for rapid margin assessment. In summary, this thesis serves to advance FGS by introducing novel fluorescence imaging devices, techniques, and agents which overcome challenges in understanding whole-body agent biodistributions, recovering agent distributions at greater depths, and verifying agents' performance for specific FGS applications.
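A loose illustration of the arithmetic behind dual-probe ratiometric imaging: a targeted and an untargeted probe are imaged together, and their pixelwise ratio suppresses shared delivery and perfusion effects so that specific binding stands out. The function, values, and threshold below are hypothetical simplifications, not the study's actual analysis:

```python
import numpy as np

def ratio_image(targeted, untargeted, eps=1e-6):
    """Pixelwise ratio of targeted to untargeted probe intensity."""
    return np.asarray(targeted) / (np.asarray(untargeted) + eps)

# Toy 2x2 intensity maps: where both probes are similar, uptake is
# non-specific; where the targeted signal dominates, binding is suspected.
targeted = np.array([[1.0, 4.0], [1.2, 3.6]])
untargeted = np.array([[1.0, 1.0], [1.1, 0.9]])
suspicious = ratio_image(targeted, untargeted) > 2.0   # crude illustrative flag
```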

    Non-Rigid Liver Registration for Laparoscopy using Data-Driven Biomechanical Models

    During laparoscopic liver resection, the limited access to the organ, the small field of view, and the lack of palpation can obstruct a surgeon's workflow. Automatic navigation systems could use the images from preoperative volumetric organ scans to help surgeons find their targets (tumors) and risk structures (vessels) more efficiently. This requires the preoperative data to be fused (or registered) with the intraoperative scene in order to display information at the correct intraoperative position. One key challenge in this setting is the automatic estimation of the organ's current intraoperative deformation, which is required in order to predict the position of internal structures. Parameterizing the many patient-specific unknowns (tissue properties, boundary conditions, interactions with other tissues, direction of gravity) is very difficult. Instead, this work explores how to employ deep neural networks to solve the registration problem in a data-driven manner. To this end, convolutional neural networks are trained on synthetic data to estimate an organ's intraoperative displacement field and thus its current deformation. To drive this estimation, visible surface cues from the intraoperative camera view must be supplied to the networks. Since reliable surface features are very difficult to find, the networks are adapted to also find correspondences between the pre- and intraoperative liver geometry automatically. This combines the search for correspondences with the estimation of biomechanical behavior and allows the networks to tackle the full non-rigid registration problem in a single step. The result is a model which can quickly predict the volume deformation of a liver given only sparse surface information. The model combines the advantages of a physically accurate biomechanical simulation with the speed and powerful feature-extraction capabilities of deep neural networks.
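Volumetric CNNs of the kind described above need the sparse surface information rasterized onto a grid first. A minimal occupancy-grid voxelization sketch (grid size and bounds are arbitrary choices, not the thesis's parameters):

```python
import numpy as np

def voxelize(points, grid_shape, lower, upper):
    """Mark each voxel containing at least one surface point with 1.0."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    grid = np.zeros(grid_shape, dtype=np.float32)
    idx = ((np.asarray(points) - lower) / (upper - lower)
           * np.asarray(grid_shape)).astype(int)
    idx = np.clip(idx, 0, np.asarray(grid_shape) - 1)   # clamp boundary points
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

# Two toy surface points in a unit cube, rasterized onto an 8x8x8 grid.
points = np.array([[0.1, 0.5, 0.9], [0.5, 0.5, 0.5]])
grid = voxelize(points, (8, 8, 8), lower=(0, 0, 0), upper=(1, 1, 1))
```

A real pipeline would voxelize the full preoperative volume and the partial intraoperative surface into separate channels; this only shows the rasterization step.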
    To test the method intraoperatively, a registration pipeline is developed which constructs a map of the liver and its surroundings from the laparoscopic video and then uses the neural networks to fuse the preoperative volume data into this map. The deformed organ volume can then be rendered as an overlay directly onto the laparoscopic video stream. The focus of this pipeline is applicability to real surgery, where everything should be quick and non-intrusive. To meet these requirements, a SLAM system is used to localize the laparoscopic camera (avoiding the setup of an external tracking system), various neural networks are used to quickly interpret the scene, and semi-automatic tools let the surgeons guide the system. Beyond the concrete advantages of the data-driven approach for intraoperative registration, this work also demonstrates general benefits of training a registration system preoperatively on synthetic data. The method lets the engineer decide which values need to be known explicitly and which should be estimated implicitly by the networks, which opens the door to many new possibilities.

    Outline:
    1 Introduction: 1.1 Motivation (1.1.1 Navigated Liver Surgery, 1.1.2 Laparoscopic Liver Registration); 1.2 Challenges in Laparoscopic Liver Registration (1.2.1 Preoperative Model, 1.2.2 Intraoperative Data, 1.2.3 Fusion/Registration, 1.2.4 Data); 1.3 Scope and Goals of this Work (1.3.1 Data-Driven Biomechanical Model, 1.3.2 Data-Driven Non-Rigid Registration, 1.3.3 Building a Working Prototype)
    2 State of the Art: 2.1 Rigid Registration; 2.2 Non-Rigid Liver Registration; 2.3 Neural Networks for Simulation and Registration
    3 Theoretical Background: 3.1 Liver; 3.2 Laparoscopic Liver Resection (3.2.1 Staging Procedure); 3.3 Biomechanical Simulation (3.3.1 Physical Balance Principles, 3.3.2 Material Models, 3.3.3 Numerical Solver: The Finite Element Method (FEM), 3.3.4 The Lagrangian Specification); 3.4 Variables and Data in Liver Registration (3.4.1 Observable, 3.4.2 Unknowns)
    4 Generating Simulations of Deforming Organs: 4.1 Organ Volume; 4.2 Forces and Boundary Conditions (4.2.1 Surface Forces, 4.2.2 Zero-Displacement Boundary Conditions, 4.2.3 Surrounding Tissues and Ligaments, 4.2.4 Gravity, 4.2.5 Pressure); 4.3 Simulation (4.3.1 Static Simulation, 4.3.2 Dynamic Simulation); 4.4 Surface Extraction (4.4.1 Partial Surface Extraction, 4.4.2 Surface Noise, 4.4.3 Partial Surface Displacement); 4.5 Voxelization (4.5.1 Voxelizing the Liver Geometry, 4.5.2 Voxelizing the Displacement Field, 4.5.3 Voxelizing Boundary Conditions); 4.6 Pruning the Dataset: Removing Unwanted Results; 4.7 Data Augmentation
    5 Deep Neural Networks for Biomechanical Simulation: 5.1 Training Data; 5.2 Network Architecture; 5.3 Loss Functions and Training
    6 Deep Neural Networks for Non-Rigid Registration: 6.1 Training Data; 6.2 Architecture; 6.3 Loss; 6.4 Training; 6.5 Mesh Deformation; 6.6 Example Application
    7 Intraoperative Prototype: 7.1 Image Acquisition; 7.2 Stereo Calibration; 7.3 Image Rectification, Disparity and Depth Estimation; 7.4 Liver Segmentation (7.4.1 Synthetic Image Generation, 7.4.2 Automatic Segmentation, 7.4.3 Manual Segmentation Modifier); 7.5 SLAM; 7.6 Dense Reconstruction; 7.7 Rigid Registration; 7.8 Non-Rigid Registration; 7.9 Rendering; 7.10 Robotic Operating System
    8 Evaluation: 8.1 Evaluation Datasets (8.1.1 In-Silico, 8.1.2 Phantom Torso and Liver, 8.1.3 In-Vivo Human Breathing Motion, 8.1.4 In-Vivo Human Laparoscopy); 8.2 Metrics (8.2.1 Mean Displacement Error, 8.2.2 Target Registration Error (TRE), 8.2.3 Chamfer Distance, 8.2.4 Volumetric Change); 8.3 Evaluation of the Synthetic Training Data; 8.4 Data-Driven Biomechanical Model (DDBM) (8.4.1 Amount of Intraoperative Surface, 8.4.2 Dynamic Simulation); 8.5 Volume-to-Surface Registration Network (V2S-Net) (8.5.1 Amount of Intraoperative Surface, 8.5.2 Dependency on Initial Rigid Alignment, 8.5.3 Registration Accuracy in Comparison to Surface Noise, 8.5.4 Registration Accuracy in Comparison to Material Stiffness, 8.5.5 Chamfer Distance vs. Mean Displacement Error, 8.5.6 In-Vivo Human Breathing Motion); 8.6 Full Intraoperative Pipeline (8.6.1 Intraoperative Reconstruction: SLAM and Intraoperative Map, 8.6.2 Full Pipeline on Laparoscopic Human Data); 8.7 Timing
    9 Discussion: 9.1 Intraoperative Model; 9.2 Physical Accuracy; 9.3 Limitations in Training Data; 9.4 Limitations Caused by Differences in Pre- and Intraoperative Modalities; 9.5 Ambiguity; 9.6 Intraoperative Prototype
    10 Conclusion
    11 List of Publications
    List of Figures
    Bibliography
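Among the surface metrics used in such evaluations, the Chamfer distance between two point sets has a compact standard definition; a sketch (not code from the thesis):

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance: mean nearest-neighbour distance, both ways."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise dists
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Two 2D point sets offset by 0.5: each point is 0.5 from its nearest
# neighbour in the other set, so the symmetric sum is 0.5 + 0.5 = 1.0.
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.5], [1.0, 0.5]])
print(chamfer(a, b))   # -> 1.0
```

Unlike the target registration error, the Chamfer distance needs no paired landmarks, which is why it is popular for comparing reconstructed and registered surfaces.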