
    Non-rigid registration of liver CT images for CT-guided ablation of liver tumors

    CT-guided percutaneous ablation for liver cancer treatment is a relevant technique for patients who are not eligible for surgery and whose tumors are inconspicuous on US imaging. The lack of real-time imaging and the use of a limited amount of CT contrast agent make targeting the tumor with the needle challenging. In this study, we evaluate a registration framework that integrates diagnostic pre-operative contrast-enhanced CT images with intra-operative non-contrast-enhanced CT images to improve image guidance during the intervention. The liver and tumor are segmented in the pre-operative contrast-enhanced CT images. Next, the contrast-enhanced image is registered to the intra-operative CT images in a two-stage approach. First, the contrast-enhanced diagnostic image is non-rigidly registered to a non-contrast-enhanced image that is conventionally acquired at the start of the intervention. If this initial registration is not sufficiently accurate, a refinement step is applied using a non-rigid registration method with a local rigidity term. In the second stage, the intra-operative CT images used to check the needle position, which often consist of only a few slices, are registered rigidly to the intra-operative image acquired at the start of the intervention. Subsequently, the diagnostic image is registered to the current intra-operative image by composing both transformations; this allows the tumor region extracted from pre-operative data to be visualized in the intra-operative CT images containing the needle. The method is evaluated on imaging data of 19 patients at the Erasmus MC. Quantitative evaluation is performed using the Dice metric, the mean surface distance of the liver border, and corresponding landmarks in the diagnostic and intra-operative images. The registration of the diagnostic CT image to the initial intra-operative CT image did not require a refinement step in 13 cases.
For those cases, the resulting registration had a Dice coefficient for the livers of 91.4%, a mean surface distance of 4.4 mm and a mean distance between corresponding landmarks of 4.7 mm. For the three cases with a refinement step, the registration result improved significantly (p<0.05) compared to the result of the initial non-rigid registration method (Dice of 90.3% vs 71.3%, mean surface distance of 5.1 mm vs 11.3 mm, and mean distance
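The two overlap measures reported here, the Dice coefficient and the mean surface distance, can be sketched in a few lines. This is an illustrative Python sketch, not the authors' implementation; it assumes binary masks and surface point clouds given as NumPy arrays.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|), in [0, 1]."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_surface_distance(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Symmetric mean distance between two surface point clouds (N,3), (M,3):
    average of the mean nearest-neighbour distances in both directions."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# toy check: identical masks overlap perfectly
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
print(dice(mask, mask))  # → 1.0
```

For large surfaces the brute-force distance matrix above would be replaced by a k-d tree lookup, but the metric itself is unchanged.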

    Image fusion using CT, MRI and PET for treatment planning, navigation and follow up in percutaneous RFA

    Aim: To evaluate the feasibility of fusing morphologic and functional imaging modalities to facilitate treatment planning, probe placement, probe repositioning, and early detection of residual disease following radiofrequency ablation (RFA) of cancer. Methods: Multi-modality datasets were separately acquired that included functional (FDG-PET and DCE-MRI) and standard morphologic studies (CT and MRI). Different combinations of imaging modalities were registered and fused prior to, during, and following percutaneous image-guided tumor ablation with radiofrequency. Different algorithms and visualization tools were evaluated for both intra-modality and inter-modality image registration using the software MIPAV (Medical Image Processing, Analysis and Visualization). Semi-automated and automated registration algorithms were used on a standard PC workstation: 1) landmark-based least-squares rigid registration, 2) landmark-based thin-plate spline elastic registration, and 3) automatic voxel-similarity affine registration. Results: Intra- and inter-modality image fusion were successfully performed prior to, during and after RFA procedures. Fusion of morphologic and functional images provided a useful view of the spatial relationship of lesion structure and functional significance. Fused axial images and segmented three-dimensional surface models were used for treatment planning and post-RFA evaluation, to assess the potential for optimizing needle placement during procedures. Conclusion: Fusion of morphologic and functional images is feasible before, during and after radiofrequency ablation of tumors in abdominal organs. For routine use, the semi-automated registration algorithms may be most practical. Image fusion may facilitate interventional procedures like RFA and should be further evaluated
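The first of the three listed algorithms, landmark-based least-squares rigid registration, has a classical closed-form solution via SVD of the landmark cross-covariance (the Kabsch/Procrustes solution). The sketch below illustrates that general technique; it is not MIPAV's code.

```python
import numpy as np

def rigid_fit(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t mapping paired landmarks
    src -> dst (both (N,3)), via SVD of the cross-covariance (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force a proper rotation, det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

A registered landmark p is then mapped as `R @ p + t`; the residuals after this fit give the fiducial registration error.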

    A Computational Image-Based Guidance System for Precision Laparoscopy

    This dissertation presents our progress toward the goal of building a computational image-based guidance system for precision laparoscopy; in particular, laparoscopic liver resection. As we aim to keep our working goal as simple as possible, we have focused on the most important questions of laparoscopy - predicting the new location of tumors and the resection plane after a liver maneuver during surgery. Our approach was to build a mechanical model of the organ based on pre-operative images and register it to intra-operative data. We proposed several practical and cost-effective methods to obtain the intra-operative data in the real procedure. We integrated all of them into a framework on which we could develop new techniques without redoing everything. To test the system, we performed an experiment with a porcine liver in a controlled setup: a wooden lever was used to elevate part of the liver to access its posterior side. We were able to confirm that our model has decent accuracy for the tumor location (approximately 2 mm error) and the resection plane (1% difference in remaining liver volume after resection). However, the overall shape of the liver and the fiducial markers still left a lot to be desired. For further corrections to the model, we also developed an algorithm to reconstruct the 3D surface of the liver utilizing Smart Trocars, a new surgical instrument recognition system. The algorithm was verified by an experiment on a plastic model using the laparoscopic camera as a means to obtain surface images. This method had millimetric accuracy provided the angle between two endoscope views was not too small. In an effort to transition our research from porcine livers to human livers, experiments were conducted on cadavers. From those studies, we found a new method that used a high-frequency ventilator to eliminate respiratory motion. The framework showed the potential to work on real organs in clinical settings. Hence, the studies on cadavers need to be continued to improve those techniques and complete the guidance system.

    3D registration of MR and X-ray spine images using an articulated model

    Overview: This article was published in the journal Computerized Medical Imaging and Graphics (CMIG). Its goal is to register vertebrae extracted from MR images with vertebrae extracted from X-ray images of scoliotic patients, taking into account the non-rigid deformations caused by the change of posture between the two modalities. To this end, a registration method based on an articulated model is proposed. The method was compared with rigid registration by computing the error on landmark points, as well as the difference between the Cobb angle before and after registration. An additional validation of the registration method presented here can be found in Appendix A. This work serves as a first step toward the fusion of MR, X-ray and TP images of the complete trunk; it thus verifies Hypothesis 1 described in Section 3.2.1. Abstract: This paper presents a magnetic resonance image (MRI)/X-ray spine registration method that compensates for the change in the curvature of the spine between standing and prone positions for scoliotic patients. MRIs in the prone position and X-rays in the standing position are acquired for 14 patients with scoliosis. The 3D reconstructions of the spine are then aligned using an articulated model which calculates intervertebral transformations. Results show a significant decrease in registration error when the proposed articulated model is compared with rigid registration. The method can be used as a basis for full-body MRI/X-ray registration incorporating soft tissues for surgical simulation.
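The articulated model represents the spine as a chain of local intervertebral transformations, so the global pose of each vertebra is the running composition of the transforms below it. A toy 2-D homogeneous-coordinate sketch of that composition follows (illustrative only; the paper works with 3-D vertebra reconstructions):

```python
import numpy as np

def transform(angle_rad: float, translation) -> np.ndarray:
    """Homogeneous 3x3 matrix: in-plane rotation followed by translation."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, translation[0]],
                     [s,  c, translation[1]],
                     [0.0, 0.0, 1.0]])

def vertebra_poses(intervertebral):
    """Accumulate local intervertebral transforms into global vertebra poses."""
    poses, T = [], np.eye(3)
    for T_local in intervertebral:
        T = T @ T_local          # compose along the articulated chain
        poses.append(T.copy())
    return poses

# three vertebrae, each offset 30 mm along y with a slight 5-degree curvature
chain = [transform(np.deg2rad(5.0), (0.0, 30.0)) for _ in range(3)]
top = vertebra_poses(chain)[-1]
```

Registering two postures then amounts to estimating the per-level local transforms rather than a single global rigid transform, which is what lets the model absorb the standing-to-prone curvature change.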

    Non-Rigid Liver Registration for Laparoscopy using Data-Driven Biomechanical Models

    During laparoscopic liver resection, the limited access to the organ, the small field of view and lack of palpation can obstruct a surgeon’s workflow. Automatic navigation systems could use the images from preoperative volumetric organ scans to help the surgeons find their target (tumors) and risk structures (vessels) more efficiently. This requires the preoperative data to be fused (or registered) with the intraoperative scene in order to display information at the correct intraoperative position. One key challenge in this setting is the automatic estimation of the organ’s current intra-operative deformation, which is required in order to predict the position of internal structures. Parameterizing the many patient-specific unknowns (tissue properties, boundary conditions, interactions with other tissues, direction of gravity) is very difficult. Instead, this work explores how to employ deep neural networks to solve the registration problem in a data-driven manner. To this end, convolutional neural networks are trained on synthetic data to estimate an organ’s intraoperative displacement field and thus its current deformation. To drive this estimation, visible surface cues from the intraoperative camera view must be supplied to the networks. Since reliable surface features are very difficult to find, the networks are adapted to also find correspondences between the pre- and intraoperative liver geometry automatically. This combines the search for correspondences with the biomechanical behavior estimation and allows the networks to tackle the full non-rigid registration problem in a single step. The result is a model which can quickly predict the volume deformation of a liver, given only sparse surface information. The model combines the advantages of a physically accurate biomechanical simulation with the speed and powerful feature extraction capabilities of deep neural networks.
To test the method intraoperatively, a registration pipeline is developed which constructs a map of the liver and its surroundings from the laparoscopic video and then uses the neural networks to fuse the preoperative volume data into this map. The deformed organ volume can then be rendered as an overlay directly onto the laparoscopic video stream. The focus of this pipeline is to be applicable to real surgery, where everything should be quick and non-intrusive. To meet these requirements, a SLAM system is used to localize the laparoscopic camera (avoiding the setup of an external tracking system), various neural networks are used to quickly interpret the scene, and semi-automatic tools let the surgeons guide the system. Beyond the concrete advantages of the data-driven approach for intraoperative registration, this work also demonstrates general benefits of training a registration system preoperatively on synthetic data. The method lets the engineer decide which values need to be known explicitly and which should be estimated implicitly by the networks, which opens the door to many new possibilities.

Contents:
    1 Introduction
        1.1 Motivation
            1.1.1 Navigated Liver Surgery
            1.1.2 Laparoscopic Liver Registration
        1.2 Challenges in Laparoscopic Liver Registration
            1.2.1 Preoperative Model
            1.2.2 Intraoperative Data
            1.2.3 Fusion/Registration
            1.2.4 Data
        1.3 Scope and Goals of this Work
            1.3.1 Data-Driven, Biomechanical Model
            1.3.2 Data-Driven Non-Rigid Registration
            1.3.3 Building a Working Prototype
    2 State of the Art
        2.1 Rigid Registration
        2.2 Non-Rigid Liver Registration
        2.3 Neural Networks for Simulation and Registration
    3 Theoretical Background
        3.1 Liver
        3.2 Laparoscopic Liver Resection
            3.2.1 Staging Procedure
        3.3 Biomechanical Simulation
            3.3.1 Physical Balance Principles
            3.3.2 Material Models
            3.3.3 Numerical Solver: The Finite Element Method (FEM)
            3.3.4 The Lagrangian Specification
        3.4 Variables and Data in Liver Registration
            3.4.1 Observable
            3.4.2 Unknowns
    4 Generating Simulations of Deforming Organs
        4.1 Organ Volume
        4.2 Forces and Boundary Conditions
            4.2.1 Surface Forces
            4.2.2 Zero-Displacement Boundary Conditions
            4.2.3 Surrounding Tissues and Ligaments
            4.2.4 Gravity
            4.2.5 Pressure
        4.3 Simulation
            4.3.1 Static Simulation
            4.3.2 Dynamic Simulation
        4.4 Surface Extraction
            4.4.1 Partial Surface Extraction
            4.4.2 Surface Noise
            4.4.3 Partial Surface Displacement
        4.5 Voxelization
            4.5.1 Voxelizing the Liver Geometry
            4.5.2 Voxelizing the Displacement Field
            4.5.3 Voxelizing Boundary Conditions
        4.6 Pruning the Dataset: Removing Unwanted Results
        4.7 Data Augmentation
    5 Deep Neural Networks for Biomechanical Simulation
        5.1 Training Data
        5.2 Network Architecture
        5.3 Loss Functions and Training
    6 Deep Neural Networks for Non-Rigid Registration
        6.1 Training Data
        6.2 Architecture
        6.3 Loss
        6.4 Training
        6.5 Mesh Deformation
        6.6 Example Application
    7 Intraoperative Prototype
        7.1 Image Acquisition
        7.2 Stereo Calibration
        7.3 Image Rectification, Disparity and Depth Estimation
        7.4 Liver Segmentation
            7.4.1 Synthetic Image Generation
            7.4.2 Automatic Segmentation
            7.4.3 Manual Segmentation Modifier
        7.5 SLAM
        7.6 Dense Reconstruction
        7.7 Rigid Registration
        7.8 Non-Rigid Registration
        7.9 Rendering
        7.10 Robot Operating System
    8 Evaluation
        8.1 Evaluation Datasets
            8.1.1 In-Silico
            8.1.2 Phantom Torso and Liver
            8.1.3 In-Vivo, Human, Breathing Motion
            8.1.4 In-Vivo, Human, Laparoscopy
        8.2 Metrics
            8.2.1 Mean Displacement Error
            8.2.2 Target Registration Error (TRE)
            8.2.3 Chamfer Distance
            8.2.4 Volumetric Change
        8.3 Evaluation of the Synthetic Training Data
        8.4 Data-Driven Biomechanical Model (DDBM)
            8.4.1 Amount of Intraoperative Surface
            8.4.2 Dynamic Simulation
        8.5 Volume-to-Surface Registration Network (V2S-Net)
            8.5.1 Amount of Intraoperative Surface
            8.5.2 Dependency on Initial Rigid Alignment
            8.5.3 Registration Accuracy in Comparison to Surface Noise
            8.5.4 Registration Accuracy in Comparison to Material Stiffness
            8.5.5 Chamfer Distance vs. Mean Displacement Error
            8.5.6 In-Vivo, Human Breathing Motion
        8.6 Full Intraoperative Pipeline
            8.6.1 Intraoperative Reconstruction: SLAM and Intraoperative Map
            8.6.2 Full Pipeline on Laparoscopic Human Data
        8.7 Timing
    9 Discussion
        9.1 Intraoperative Model
        9.2 Physical Accuracy
        9.3 Limitations in Training Data
        9.4 Limitations Caused by Differences in Pre- and Intraoperative Modalities
        9.5 Ambiguity
        9.6 Intraoperative Prototype
    10 Conclusion
    11 List of Publications
    List of Figures
    Bibliography
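Several inputs in this pipeline are voxelized before being fed to the convolutional networks (the voxelization steps listed in the thesis outline above). A minimal occupancy-grid voxelization of a surface point cloud could look like the sketch below; the axis-aligned grid and isotropic spacing are illustrative assumptions, not the thesis code.

```python
import numpy as np

def voxelize_points(points: np.ndarray, grid_shape=(64, 64, 64),
                    origin=(0.0, 0.0, 0.0), spacing=1.0) -> np.ndarray:
    """Mark every voxel of an axis-aligned grid that contains at least one
    point of the (N,3) cloud; returns a float32 occupancy volume."""
    grid = np.zeros(grid_shape, dtype=np.float32)
    idx = np.floor((points - np.asarray(origin)) / spacing).astype(int)
    # keep only points that fall inside the grid bounds
    inside = np.all((idx >= 0) & (idx < np.asarray(grid_shape)), axis=1)
    ix, iy, iz = idx[inside].T
    grid[ix, iy, iz] = 1.0
    return grid

pts = np.array([[1.2, 3.7, 0.4], [10.0, 10.0, 10.0], [-5.0, 0.0, 0.0]])
occ = voxelize_points(pts, grid_shape=(16, 16, 16))
```

A displacement field can be voxelized the same way, writing a 3-vector per occupied voxel instead of a scalar 1.0.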

    Applications of a Biomechanical Patient Model for Adaptive Radiation Therapy

    Biomechanical patient modeling incorporates physical knowledge of the human anatomy into the image processing that is required for tracking anatomical deformations during adaptive radiation therapy, especially particle therapy. In contrast to standard image registration, this enforces bio-fidelic image transformation. In this thesis, the potential of a kinematic skeleton model and soft tissue motion propagation are investigated for crucial image analysis steps in adaptive radiation therapy. The first application is the integration of the kinematic model in a deformable image registration process (KinematicDIR). For monomodal CT scan pairs, the median target registration error based on skeleton landmarks is smaller than (1.6 ± 0.2) mm. In addition, the successful transferability of this concept to otherwise challenging multimodal registration between CT and CBCT as well as CT and MRI scan pairs is shown to result in a median target registration error on the order of 2 mm. This meets the accuracy requirement for adaptive radiation therapy and is especially interesting for MR-guided approaches. Another aspect, emerging in radiotherapy, is the utilization of deep-learning-based organ segmentation. As radiotherapy-specific labeled data is scarce, the training of such methods relies heavily on augmentation techniques. In this work, the generation of synthetically but realistically deformed scans used as Bionic Augmentation in the training phase improved the predicted segmentations by up to 15% in the Dice similarity coefficient, depending on the training strategy. Finally, it is shown that the biomechanical model can be built up from automatic segmentations without deterioration of the KinematicDIR application. This is essential for use in a clinical workflow.

    Computational ultrasound tissue characterisation for brain tumour resection

    In brain tumour resection, it is vital to know where critical neurovascular structures and tumours are located to minimise surgical injuries and cancer recurrence. The aim of this thesis was to improve intraoperative guidance during brain tumour resection by integrating both ultrasound standard imaging and elastography in the surgical workflow. Brain tumour resection requires surgeons to identify the tumour boundaries to preserve healthy brain tissue and prevent cancer recurrence. This thesis proposes to use ultrasound elastography in combination with conventional ultrasound B-mode imaging to better characterise tumour tissue during surgery. Ultrasound elastography comprises a set of techniques that measure tissue stiffness, which is a known biomarker of brain tumours. The objectives of the research reported in this thesis are to implement novel learning-based methods for ultrasound elastography and to integrate them in an image-guided intervention framework. Accurate and real-time intraoperative estimation of tissue elasticity can guide towards better delineation of brain tumours and improve the outcome of neurosurgery. We first investigated current challenges in quasi-static elastography, which evaluates tissue deformation (strain) by estimating the displacement between successive ultrasound frames, acquired before and after applying manual compression. Recent approaches in ultrasound elastography have demonstrated that convolutional neural networks can capture ultrasound high-frequency content and produce accurate strain estimates. We proposed a new unsupervised deep learning method for strain prediction, where the training of the network is driven by a regularised cost function, composed of a similarity metric and a regularisation term that preserves displacement continuity by directly optimising the strain smoothness. We further improved the accuracy of our method by proposing a recurrent network architecture with convolutional long short-term memory decoder blocks to improve displacement estimation and spatio-temporal continuity between time-series ultrasound frames. We then demonstrate initial results towards extending our ultrasound displacement estimation method to shear-wave elastography, which provides a quantitative estimation of tissue stiffness. Furthermore, this thesis describes the development of an open-source image-guided intervention platform, specifically designed to combine intra-operative ultrasound imaging with a neuronavigation system and perform real-time ultrasound tissue characterisation. The integration was conducted using commercial hardware and validated on an anatomical phantom. Finally, preliminary results on the feasibility and safety of the use of a novel intraoperative ultrasound probe designed for pituitary surgery are presented. Prior to the clinical assessment of our image-guided platform, the ability of the ultrasound probe to be used alongside standard surgical equipment was demonstrated in 5 pituitary cases
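In quasi-static elastography as described above, strain is the spatial derivative of the estimated inter-frame displacement. Given a predicted axial displacement field, a basic finite-difference strain image can be computed as in the sketch below; this is an illustration only, since the thesis uses learned, regularised estimators rather than a direct gradient.

```python
import numpy as np

def axial_strain(displacement: np.ndarray, spacing: float = 1.0) -> np.ndarray:
    """Axial strain = d(axial displacement)/d(depth), via central differences.
    `displacement` is a 2-D field (depth, lateral) of axial shifts."""
    return np.gradient(displacement, spacing, axis=0)

# toy phantom: a soft upper half compresses twice as much as a stiff lower half
depth = np.arange(100, dtype=float)
disp = np.where(depth < 50, 0.02 * depth, 1.0 + 0.01 * (depth - 50))
strain = axial_strain(np.tile(disp[:, None], (1, 8)))
```

The resulting strain image is roughly 0.02 in the soft region and 0.01 in the stiff region, i.e. low strain marks stiff tissue, which is why strain contrast can help delineate tumours.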

    Preoperative Magnetic Resonance and Intraoperative Ultrasound Fusion Imaging for Real-Time Neuronavigation in Brain Tumor Surgery = Präoperative MRI- und intraoperative Ultraschallfusion für die Echtzeit-Neuronavigation in der Neurochirurgie von Hirntumoren

    Purpose: Brain shift and tissue deformation during surgery for intracranial lesions are the main current limitations of neuro-navigation (NN), which still relies mainly on preoperative imaging. Ultrasound (US), being a real-time imaging modality, is becoming progressively more widespread during neurosurgical procedures, but most neurosurgeons, trained on axial computed tomography (CT) and magnetic resonance imaging (MRI) slices, lack specific US training and have difficulties recognizing anatomic structures with the same confidence as in preoperative imaging. Therefore, real-time intraoperative fusion imaging (FI) between preoperative imaging and intraoperative ultrasound (ioUS) for virtual navigation (VN) is highly desirable. We describe our procedure for real-time navigation during surgery for different cerebral lesions. Materials and Methods: We performed fusion imaging with virtual navigation for patients undergoing surgery for brain lesion removal, using an ultrasound-based real-time neuro-navigation system that fuses intraoperative cerebral ultrasound with preoperative MRI and simultaneously displays an MRI slice coplanar to the ioUS image. Results: 58 patients underwent surgery at our institution for intracranial lesion removal with image guidance using a US system equipped with fusion imaging for neuro-navigation. In all cases the initial (external) registration error obtained by the corresponding anatomical landmark procedure was below 2 mm and the craniotomy was correctly placed. The transdural window gave satisfactory US image quality and the lesion was always detectable and measurable on both axes. Brain shift/deformation correction was successfully employed in 42 cases to restore the co-registration during surgery. The accuracy of the ioUS/MRI fusion was confirmed intraoperatively under direct visualization of anatomic landmarks, and the error was below 3 mm in all cases (100%).
Conclusion: Neuro-navigation using intraoperative US integrated with preoperative MRI is reliable, accurate and user-friendly. Moreover, the adjustments are very helpful in correcting brain shift and tissue distortion. This integrated system allows true real-time feedback during surgery and is less expensive and time-consuming than other intraoperative imaging techniques, offering high precision and orientation.

    Automated liver tissues delineation based on machine learning techniques: A survey, current trends and future orientations

    There is no denying how much machine learning and computer vision have grown in recent years. Their greatest advantages lie in their automation, suitability, and ability to generate astounding results in a matter of seconds in a reproducible manner, aided by the ubiquitous advancements in the computing capabilities of current graphical processing units and the highly efficient implementation of such techniques. Hence, in this paper, we survey the key studies published between 2014 and 2020, showcasing the different machine learning algorithms researchers have used to segment the liver, hepatic tumors, and hepatic vasculature. We divide the surveyed studies based on the tissue of interest (hepatic parenchyma, hepatic tumors, or hepatic vessels), highlighting the studies that tackle more than one task simultaneously. Additionally, the machine learning algorithms are classified as either supervised or unsupervised, and further partitioned if the number of works that fall under a certain scheme is significant. Moreover, different datasets and challenges found in the literature and on websites, containing masks of the aforementioned tissues, are thoroughly discussed, highlighting the organizers' original contributions and those of other researchers. The metrics used most extensively in the literature are also reviewed, stressing their relevance to the task at hand. Finally, critical challenges and future directions are emphasized for innovative researchers to tackle, exposing gaps that need addressing, such as the scarcity of studies on the vessel segmentation challenge and why their absence needs to be dealt with in an accelerated manner.