20 research outputs found
Preoperative liver registration for augmented monocular laparoscopy using backward–forward biomechanical simulation
PURPOSE: Augmented reality for monocular laparoscopy from a preoperative volume such as CT is achieved in two steps. The first step is to segment the organ in the preoperative volume and reconstruct its 3D model. The second step is to register the preoperative 3D model to an initial intraoperative laparoscopy image. To date, no automatic initial registration method exists to solve the second step for the liver in the de facto operating room conditions of monocular laparoscopy. Existing methods attempt to solve for both deformation and pose simultaneously, leading to nonconvex problems with no optimal solution algorithms. METHODS: We propose in contrast to break the problem down into two parts, solving for (i) deformation and (ii) pose. Part (i) simulates biomechanical deformations from the preoperative to the intraoperative state to predict the liver's unknown intraoperative shape by modeling gravity, the abdominopelvic cavity's pressure and boundary conditions. Part (ii) rigidly registers the simulated shape to the laparoscopy image using contour cues. RESULTS: Our formulation leads to a well-posed problem, contrary to existing methods, because it exploits strong environment priors to complement the weak laparoscopic visual cues. CONCLUSION: Quantitative results with in silico and phantom experiments and qualitative results with laparosurgery images for two patients show that our method outperforms the state of the art in accuracy and registration time.
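The rigid part (ii) of this decomposition can be illustrated with a small sketch. The following is a minimal 2D least-squares (Kabsch/Procrustes-style) fit, assuming paired contour points are already available; the paper itself registers a 3D shape to 2D contour cues without known correspondences, so this is only a simplified stand-in for the idea of fitting pose after deformation has been fixed. All function names here are illustrative, not from the paper.

```python
import math

def centroid(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def rigid_fit_2d(src, dst):
    """Least-squares rotation + translation mapping src onto dst
    (a 2D Kabsch/Procrustes fit on paired contour points)."""
    cs, cd = centroid(src), centroid(dst)
    # Accumulate cross-covariance terms of the centred point sets.
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x, y, u, v = x - cs[0], y - cs[1], u - cd[0], v - cd[1]
        sxx += x * u + y * v      # "dot" part
        sxy += x * v - y * u      # "cross" part
    theta = math.atan2(sxy, sxx)  # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cd[0] - (c * cs[0] - s * cs[1])
    ty = cd[1] - (s * cs[0] + c * cs[1])
    return theta, (tx, ty)

def apply_rigid(pose, pts):
    """Apply the fitted rotation/translation to a list of 2D points."""
    theta, (tx, ty) = pose
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in pts]
```

Because the deformation was resolved beforehand in part (i), only this low-dimensional rigid pose remains to be estimated, which is what keeps the overall problem well-posed.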
Intraoperative Liver Surface Completion with Graph Convolutional VAE
In this work we propose a method based on geometric deep learning to predict
the complete surface of the liver, given a partial point cloud of the organ
obtained during the surgical laparoscopic procedure. We introduce a new data
augmentation technique that randomly perturbs shapes in their frequency domain
to compensate for the limited size of our dataset. The core of our method is a
variational autoencoder (VAE) that is trained to learn a latent space for
complete shapes of the liver. At inference time, the generative part of the
model is embedded in an optimisation procedure where the latent representation
is iteratively updated to generate a model that matches the intraoperative
partial point cloud. The effect of this optimisation is a progressive non-rigid
deformation of the initially generated shape. Our method is qualitatively
evaluated on real data and quantitatively evaluated on synthetic data. We
compared our method with a state-of-the-art rigid registration algorithm,
which it outperformed in visible areas.
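The inference-time optimisation described above can be sketched in a few lines. This is a toy illustration, not the paper's model: the trained VAE decoder is replaced by a hypothetical two-parameter `decode` function, and the gradient through the decoder is approximated by finite differences rather than backpropagation. Only the overall idea — iteratively updating a latent code so the generated shape matches a partial point cloud under a one-sided Chamfer loss — mirrors the method.

```python
import math

def decode(z, n=16):
    """Toy stand-in for a VAE decoder: maps a 2-D latent code to a ring
    of 2-D points whose radius and x-offset the code controls."""
    r, dx = 1.0 + z[0], z[1]
    return [(dx + r * math.cos(2 * math.pi * k / n),
             r * math.sin(2 * math.pi * k / n)) for k in range(n)]

def chamfer_to_partial(full, partial):
    """One-sided Chamfer distance: each observed (partial) point to its
    nearest point on the generated shape."""
    return sum(min((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 for q in full)
               for p in partial) / len(partial)

def fit_latent(partial, z=(0.0, 0.0), steps=200, lr=0.5, eps=1e-4):
    """Iteratively update the latent code to match the partial cloud."""
    z = list(z)
    for _ in range(steps):
        # Finite-difference gradient of the Chamfer loss w.r.t. z
        # (a real implementation would backpropagate through the decoder).
        grad = []
        for i in range(len(z)):
            zp, zm = z[:], z[:]
            zp[i] += eps
            zm[i] -= eps
            grad.append((chamfer_to_partial(decode(zp), partial)
                         - chamfer_to_partial(decode(zm), partial)) / (2 * eps))
        z = [zi - lr * gi for zi, gi in zip(z, grad)]
    return z
```

As in the paper, the shape produced at each step is always a decoder output, so the optimisation deforms the initial shape non-rigidly while staying on the learned manifold of plausible complete shapes.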
The Challenge of Augmented Reality in Surgery
Imaging has revolutionized surgery over the last 50 years. Diagnostic imaging is a key tool for deciding to perform surgery during disease management; intraoperative imaging is one of the primary drivers for minimally invasive surgery (MIS), and postoperative imaging enables effective follow-up and patient monitoring. However, notably, there is still relatively little interchange of information or imaging modality fusion between these different clinical pathway stages. This book chapter provides a critique of existing augmented reality (AR) methods or application studies described in the literature using relevant examples. The aim is not to provide a comprehensive review, but rather to give an indication of the clinical areas in which AR has been proposed, to begin to explain the lack of clinical systems and to provide some clear guidelines to those intending to pursue research in this area.
Non-Rigid Liver Registration for Laparoscopy using Data-Driven Biomechanical Models
During laparoscopic liver resection, the limited access to the organ, the small field of view and lack of palpation can obstruct a surgeon's workflow. Automatic navigation systems could use the images from preoperative volumetric organ scans to help the surgeons find their target (tumors) and risk-structures (vessels) more efficiently. This requires the preoperative data to be fused (or registered) with the intraoperative scene in order to display information at the correct intraoperative position.
One key challenge in this setting is the automatic estimation of the organ's current intra-operative deformation, which is required in order to predict the position of internal structures. Parameterizing the many patient-specific unknowns (tissue properties, boundary conditions, interactions with other tissues, direction of gravity) is very difficult. Instead, this work explores how to employ deep neural networks to solve the registration problem in a data-driven manner. To this end, convolutional neural networks are trained on synthetic data to estimate an organ's intraoperative displacement field and thus its current deformation. To drive this estimation, visible surface cues from the intraoperative camera view must be supplied to the networks. Since reliable surface features are very difficult to find, the networks are adapted to also find correspondences between the pre- and intraoperative liver geometry automatically. This combines the search for correspondences with the biomechanical behavior estimation and allows the networks to tackle the full non-rigid registration problem in one single step. The result is a model which can quickly predict the volume deformation of a liver, given only sparse surface information. The model combines the advantages of a physically accurate biomechanical simulation with the speed and powerful feature extraction capabilities of deep neural networks.
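The core task the networks learn — extrapolating a volumetric displacement field from sparse surface displacements — has a naive, non-learned baseline that helps clarify the problem: simple inverse-distance-weighted interpolation. The sketch below is such a baseline under my own assumptions; unlike the thesis's networks, it encodes no biomechanics and no learned correspondences, which is precisely what it illustrates by contrast.

```python
def idw_displacement(queries, known_pts, known_disp, p=2, eps=1e-9):
    """Inverse-distance-weighted interpolation of sparse surface
    displacements onto arbitrary (e.g. volumetric) query points.
    A naive stand-in for the task the networks learn; no biomechanics."""
    out = []
    for q in queries:
        wsum = 0.0
        acc = [0.0, 0.0, 0.0]
        for s, d in zip(known_pts, known_disp):
            dist2 = sum((a - b) ** 2 for a, b in zip(q, s))
            if dist2 < eps:            # query coincides with a sample
                acc, wsum = list(d), 1.0
                break
            w = 1.0 / dist2 ** (p / 2)  # weight falls off with distance^p
            wsum += w
            acc = [a + w * di for a, di in zip(acc, d)]
        out.append(tuple(a / wsum for a in acc))
    return out
```

A physically plausible deformation must additionally respect tissue stiffness and boundary conditions, which is why the thesis replaces interpolation of this kind with networks trained on biomechanical simulations.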
To test the method intraoperatively, a registration pipeline is developed which constructs a map of the liver and its surroundings from the laparoscopic video and then uses the neural networks to fuse the preoperative volume data into this map. The deformed organ volume can then be rendered as an overlay directly onto the laparoscopic video stream. The focus of this pipeline is to be applicable to real surgery, where everything should be quick and non-intrusive. To meet these requirements, a SLAM system is used to localize the laparoscopic camera (avoiding setup of an external tracking system), various neural networks are used to quickly interpret the scene and semi-automatic tools let the surgeons guide the system.
Beyond the concrete advantages of the data-driven approach for intraoperative registration, this work also demonstrates general benefits of training a registration system preoperatively on synthetic data. The method lets the engineer decide which values need to be known explicitly and which should be estimated implicitly by the networks, which opens the door to many new possibilities.
1 Introduction
1.1 Motivation
1.1.1 Navigated Liver Surgery
1.1.2 Laparoscopic Liver Registration
1.2 Challenges in Laparoscopic Liver Registration
1.2.1 Preoperative Model
1.2.2 Intraoperative Data
1.2.3 Fusion/Registration
1.2.4 Data
1.3 Scope and Goals of this Work
1.3.1 Data-Driven, Biomechanical Model
1.3.2 Data-Driven Non-Rigid Registration
1.3.3 Building a Working Prototype
2 State of the Art
2.1 Rigid Registration
2.2 Non-Rigid Liver Registration
2.3 Neural Networks for Simulation and Registration
3 Theoretical Background
3.1 Liver
3.2 Laparoscopic Liver Resection
3.2.1 Staging Procedure
3.3 Biomechanical Simulation
3.3.1 Physical Balance Principles
3.3.2 Material Models
3.3.3 Numerical Solver: The Finite Element Method (FEM)
3.3.4 The Lagrangian Specification
3.4 Variables and Data in Liver Registration
3.4.1 Observable
3.4.2 Unknowns
4 Generating Simulations of Deforming Organs
4.1 Organ Volume
4.2 Forces and Boundary Conditions
4.2.1 Surface Forces
4.2.2 Zero-Displacement Boundary Conditions
4.2.3 Surrounding Tissues and Ligaments
4.2.4 Gravity
4.2.5 Pressure
4.3 Simulation
4.3.1 Static Simulation
4.3.2 Dynamic Simulation
4.4 Surface Extraction
4.4.1 Partial Surface Extraction
4.4.2 Surface Noise
4.4.3 Partial Surface Displacement
4.5 Voxelization
4.5.1 Voxelizing the Liver Geometry
4.5.2 Voxelizing the Displacement Field
4.5.3 Voxelizing Boundary Conditions
4.6 Pruning Dataset - Removing Unwanted Results
4.7 Data Augmentation
5 Deep Neural Networks for Biomechanical Simulation
5.1 Training Data
5.2 Network Architecture
5.3 Loss Functions and Training
6 Deep Neural Networks for Non-Rigid Registration
6.1 Training Data
6.2 Architecture
6.3 Loss
6.4 Training
6.5 Mesh Deformation
6.6 Example Application
7 Intraoperative Prototype
7.1 Image Acquisition
7.2 Stereo Calibration
7.3 Image Rectification, Disparity and Depth Estimation
7.4 Liver Segmentation
7.4.1 Synthetic Image Generation
7.4.2 Automatic Segmentation
7.4.3 Manual Segmentation Modifier
7.5 SLAM
7.6 Dense Reconstruction
7.7 Rigid Registration
7.8 Non-Rigid Registration
7.9 Rendering
7.10 Robotic Operating System
8 Evaluation
8.1 Evaluation Datasets
8.1.1 In-Silico
8.1.2 Phantom Torso and Liver
8.1.3 In-Vivo, Human, Breathing Motion
8.1.4 In-Vivo, Human, Laparoscopy
8.2 Metrics
8.2.1 Mean Displacement Error
8.2.2 Target Registration Error (TRE)
8.2.3 Chamfer Distance
8.2.4 Volumetric Change
8.3 Evaluation of the Synthetic Training Data
8.4 Data-Driven Biomechanical Model (DDBM)
8.4.1 Amount of Intraoperative Surface
8.4.2 Dynamic Simulation
8.5 Volume to Surface Registration Network (V2S-Net)
8.5.1 Amount of Intraoperative Surface
8.5.2 Dependency on Initial Rigid Alignment
8.5.3 Registration Accuracy in Comparison to Surface Noise
8.5.4 Registration Accuracy in Comparison to Material Stiffness
8.5.5 Chamfer Distance vs. Mean Displacement Error
8.5.6 In-Vivo, Human Breathing Motion
8.6 Full Intraoperative Pipeline
8.6.1 Intraoperative Reconstruction: SLAM and Intraoperative Map
8.6.2 Full Pipeline on Laparoscopic Human Data
8.7 Timing
9 Discussion
9.1 Intraoperative Model
9.2 Physical Accuracy
9.3 Limitations in Training Data
9.4 Limitations Caused by Difference in Pre- and Intraoperative Modalities
9.5 Ambiguity
9.6 Intraoperative Prototype
10 Conclusion
11 List of Publications
List of Figures
Bibliography
A deep learning framework for real-time 3D model registration in robot-assisted laparoscopic surgery
Introduction
The current study presents a deep learning framework to determine, in real time, the position and rotation of a target organ from an endoscopic video. These inferred data are used to overlay the 3D model of the patient's organ over its real counterpart. The resulting augmented video flow is streamed back to the surgeon as support during laparoscopic robot-assisted procedures.
Methods
The framework first applies semantic segmentation; thereafter, two techniques, one based on Convolutional Neural Networks and one on motion analysis, are used to infer the rotation.
Results
The segmentation shows high accuracy, with a mean IoU score greater than 80% in all tests. Different performance levels are obtained for rotation, depending on the surgical procedure.
Discussion
Even though the presented methodology achieves varying degrees of precision depending on the testing scenario, this work is a first step toward adopting deep learning and augmented reality to generalise the automatic registration process.
Automatic, global registration in laparoscopic liver surgery
PURPOSE: The initial registration of a 3D pre-operative CT model to a 2D laparoscopic video image in augmented reality systems for liver surgery needs to be fast, intuitive to perform and with minimal interruptions to the surgical intervention. Several recent methods have focussed on using easily recognisable landmarks across modalities. However, these methods still need manual annotation or manual alignment. We propose a novel, fully automatic pipeline for 3D-2D global registration in laparoscopic liver interventions. METHODS: Firstly, we train a fully convolutional network for the semantic detection of liver contours in laparoscopic images. Secondly, we propose a novel contour-based global registration algorithm to estimate the camera pose without any manual input during surgery. The contours used are the anterior ridge and the silhouette of the liver. RESULTS: We show excellent generalisation of the semantic contour detection on test data from 8 clinical cases. In quantitative experiments, the proposed contour-based registration can successfully estimate a global alignment with as little as 30% of the liver surface, a visibility ratio which is characteristic of laparoscopic interventions. Moreover, the proposed pipeline showed very promising results in clinical data from 5 laparoscopic interventions. CONCLUSIONS: Our proposed automatic global registration could make augmented reality systems more intuitive and usable for surgeons and easier to translate to operating rooms. Yet, as the liver is deformed significantly during surgery, it will be very beneficial to incorporate deformation into our method for more accurate registration.
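The principle behind contour-based global registration — scoring candidate poses by how well the projected model contour overlaps the contour detected in the image — can be shown with a deliberately simplified sketch. This toy version searches a small 2D rotation/translation grid and ranks poses by a one-sided Chamfer distance between contours; the actual method estimates a full camera pose from the anterior ridge and silhouette, so everything below (function names, the grid search itself) is an illustrative assumption, not the paper's algorithm.

```python
import math

def contour_chamfer(a, b):
    """Mean nearest-neighbour distance from contour a to contour b."""
    return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)

def grid_search_pose(model_contour, image_contour, angles, shifts):
    """Exhaustive search over 2D rotations/translations for the pose
    whose transformed model contour best matches the detected contour.
    A toy stand-in for contour-based global registration."""
    best_score, best_pose = float("inf"), None
    for th in angles:
        c, s = math.cos(th), math.sin(th)
        for tx, ty in shifts:
            moved = [(c * x - s * y + tx, s * x + c * y + ty)
                     for x, y in model_contour]
            score = contour_chamfer(moved, image_contour)
            if score < best_score:
                best_score, best_pose = score, (th, tx, ty)
    return best_pose
```

Because only contours are compared, no manual landmark annotation is needed — which is the property the abstract highlights; robustness to seeing only ~30% of the surface comes from the distinctiveness of the ridge and silhouette cues.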
Image-Guided Simulation for Augmented Reality in Hepatic Surgery
The main objective of this thesis is to provide surgeons with pre- and intra-operative decision-support tools during minimally invasive hepatic surgery. These interventions usually rely on laparoscopic techniques or, more recently, flexible endoscopy. During such operations, the surgeon seeks to remove an often large number of liver tumors while preserving the functional role of the liver. This involves defining an optimal hepatectomy, i.e. one that guarantees a post-operative liver volume of at least 55% of the original liver and preserves the hepatic vasculature as much as possible. Although intervention planning can now be considered on the basis of patient-specific preoperative data, the significant movements of the liver and its deformations during the intervention make this planning very difficult to use in practice.
The work proposed in this thesis aims to provide augmented reality tools usable in intra-operative conditions, allowing the position of tumors and hepatic vascular networks to be visualized at any time.
Automatic registration of 3D models to laparoscopic video images for guidance during liver surgery
Laparoscopic liver interventions offer significant advantages over open surgery, such as less pain and trauma, and shorter recovery time for the patient. However, they also bring challenges for the surgeons such as the lack of tactile feedback, limited field of view and occluded anatomy. Augmented reality (AR) can potentially help during laparoscopic liver interventions by displaying sub-surface structures (such as tumours or vasculature). The initial registration between the 3D model extracted from the CT scan and the laparoscopic video feed is essential for an AR system, which should be efficient, robust, intuitive to use and with minimal disruption to the surgical procedure. Several challenges of registration methods in laparoscopic interventions include the deformation of the liver due to gas insufflation in the abdomen, partial visibility of the organ and lack of prominent geometrical or texture-wise landmarks. These challenges are discussed in detail and an overview of the state of the art is provided. This research project aims to provide the tools to move towards a completely automatic registration. Firstly, the importance of pre-operative planning is discussed along with the characteristics of the liver that can be used in order to constrain a registration method. Secondly, maximising the amount of information obtained before the surgery, a semi-automatic surface-based method is proposed to recover the initial rigid registration irrespective of the position of the shapes. Finally, a fully automatic 3D-2D rigid global registration is proposed which estimates a global alignment of the pre-operative 3D model using a single intra-operative image. Moving towards incorporating the different liver contours can help constrain the registration, especially for partial surfaces. Having a robust, efficient AR system which requires no manual interaction from the surgeon will aid in the translation of such approaches to the clinic.
Artificial Intelligence Based Deep Bayesian Neural Network (DBNN) Toward Personalized Treatment of Leukemia with Stem Cells
The dynamic development of computer and software technology in recent years was accompanied by the expansion and widespread implementation of artificial intelligence (AI) based methods in many aspects of human life. A prominent field where rapid progress has been observed is high-throughput methods in biology, which generate large amounts of data that need to be processed and analyzed. Therefore, AI methods are increasingly applied in the biomedical field, among others for RNA-protein binding site prediction, DNA sequence function prediction, protein-protein interaction prediction, or biomedical image classification. Stem cells are widely used in biomedical research, e.g., in leukemia or other disease studies. Our proposed Deep Bayesian Neural Network (DBNN) approach for the personalized treatment of leukemia has shown significant tested accuracy: the DBNN used in this study was able to classify images with accuracy exceeding 98.73%. This study shows that the DBNN can classify cell cultures based only on unstained light microscope images, which allows their further use. Therefore, a Bayesian-based model could be of great help during commercial cell culturing, and possibly a first step in the process of creating an automated/semi-automated neural network-based model for classification of good- and bad-quality cultures when images of such become available.