
    Patient-specific simulation environment for surgical planning and preoperative rehearsal

    Surgical simulation is common practice in the fields of surgical education and training. Numerous surgical simulators are available from commercial and academic organisations for the generic modelling of surgical tasks. However, a simulation platform has yet to be found that fulfils the key requirements expected of patient-specific surgical simulation of soft tissue, with an effective translation into clinical practice. Patient-specific modelling is possible, but to date has been time-consuming, and consequently costly, because data preparation can be technically demanding. This motivated the research developed herein, which addresses the main challenges of biomechanical modelling for patient-specific surgical simulation. A novel implementation of soft tissue deformation and estimation of the patient-specific intraoperative environment is achieved using a position-based dynamics approach. This modelling approach overcomes the limitations of traditional physically-based approaches by providing a simulation of patient-specific models with visual and physical accuracy, stability and real-time interaction. Since the method is geometrically based, the simulation parameters are calibrated and the simulation framework is successfully validated through experimental studies. The capabilities of the simulation platform are demonstrated by the integration of different surgical planning applications that are relevant in the context of kidney cancer surgery. The simulation of pneumoperitoneum facilitates trocar placement planning and intraoperative surgical navigation. The implementation of deformable ultrasound simulation can assist surgeons in improving their scanning technique and defining an optimal procedural strategy. Furthermore, the simulation framework has the potential to support the development and assessment of hypotheses that cannot be tested in vivo. Specifically, the evaluation of feedback modalities, as a response to user-model interaction, demonstrates improved performance and justifies the need to integrate a feedback framework in the robot-assisted surgical setting.
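
    As a rough, hedged illustration of the position-based dynamics idea mentioned above (not the thesis implementation; the particle, edge, stiffness and time-step values are assumed for the example), the sketch below projects simple distance constraints within one PBD time step:

        # Minimal position-based dynamics (PBD) sketch with distance constraints.
        # Illustrative only; all data structures and parameters are assumptions.
        import numpy as np

        def pbd_step(x, v, edges, rest_len, inv_mass, dt=0.016, iters=10, stiffness=0.9):
            """Advance particle positions x (N,3) and velocities v (N,3) by one PBD step."""
            gravity = np.array([0.0, -9.81, 0.0])
            v = v + dt * gravity * (inv_mass > 0)[:, None]   # apply external forces
            p = x + dt * v                                   # predicted positions

            for _ in range(iters):                           # Gauss-Seidel constraint projection
                for (i, j), l0 in zip(edges, rest_len):
                    d = p[i] - p[j]
                    dist = np.linalg.norm(d)
                    w = inv_mass[i] + inv_mass[j]
                    if dist < 1e-9 or w == 0.0:
                        continue
                    # distance-constraint correction, scaled by a stiffness factor
                    corr = stiffness * (dist - l0) / (dist * w) * d
                    p[i] -= inv_mass[i] * corr
                    p[j] += inv_mass[j] * corr

            v = (p - x) / dt                                 # velocities from position change
            return p, v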

    Automatic registration of 3D models to laparoscopic video images for guidance during liver surgery

    Laparoscopic liver interventions offer significant advantages over open surgery, such as less pain and trauma and a shorter recovery time for the patient. However, they also bring challenges for the surgeons, such as the lack of tactile feedback, a limited field of view and occluded anatomy. Augmented reality (AR) can potentially help during laparoscopic liver interventions by displaying sub-surface structures (such as tumours or vasculature). The initial registration between the 3D model extracted from the CT scan and the laparoscopic video feed is essential for an AR system, which should be efficient, robust, intuitive to use and cause minimal disruption to the surgical procedure. Challenges for registration methods in laparoscopic interventions include the deformation of the liver due to gas insufflation of the abdomen, partial visibility of the organ and the lack of prominent geometrical or texture-wise landmarks. These challenges are discussed in detail and an overview of the state of the art is provided. This research project aims to provide the tools to move towards a completely automatic registration. Firstly, the importance of pre-operative planning is discussed along with the characteristics of the liver that can be used to constrain a registration method. Secondly, to maximise the amount of information obtained before surgery, a semi-automatic surface-based method is proposed to recover the initial rigid registration irrespective of the position of the shapes. Finally, a fully automatic 3D-2D rigid global registration is proposed which estimates a global alignment of the pre-operative 3D model using a single intra-operative image. Incorporating the different liver contours can further constrain the registration, especially for partial surfaces. A robust, efficient AR system that requires no manual interaction from the surgeon will aid the translation of such approaches to the clinic.
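
    The automatic method of the thesis is not reproduced here; purely as a hedged illustration of the underlying 3D-2D rigid alignment problem, the sketch below recovers a camera pose with OpenCV's standard PnP solver, using synthetic 3D model points and already-matched 2D projections fabricated for the example:

        # Illustrative 3D-2D rigid pose estimation with OpenCV's PnP solver.
        # NOT the automatic registration of the thesis: correspondences are assumed known.
        import cv2
        import numpy as np

        # Hypothetical pinhole intrinsics for the laparoscope (illustrative values).
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])

        # Synthetic stand-in for pre-operative 3D model points (metres).
        rng = np.random.default_rng(0)
        model_pts = rng.uniform(-0.05, 0.05, size=(30, 3)) + np.array([0.0, 0.0, 0.3])

        # Ground-truth pose used only to fabricate 2D observations for this sketch.
        rvec_true = np.array([0.1, -0.2, 0.05])
        tvec_true = np.array([0.01, 0.02, 0.05])
        image_pts, _ = cv2.projectPoints(model_pts, rvec_true, tvec_true, K, None)

        # Rigid 3D-2D registration from the (assumed known) correspondences.
        ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None)
        print(ok, rvec.ravel(), tvec.ravel())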

    Database-Based Estimation of Liver Deformation under Pneumoperitoneum for Surgical Image-Guidance and Simulation

    The insufflation of the abdomen in laparoscopic liver surgery leads to significant deformation of the liver. Estimating the shape and position of the liver after insufflation has many important applications, such as providing an initial guess for surface-based registration algorithms used in image guidance and enabling realistic patient-specific surgical simulation. Our proposed algorithm computes a deformation estimate for a patient subject as a weighted average over a database of known insufflation deformations. The database is built from pre-operative and intra-operative 3D image segmentations. The estimation pipeline also comprises a biomechanical simulation to incorporate patient-specific boundary conditions (BCs) and to eliminate any non-physical deformation arising from the computation of the deformation as a weighted average. We have evaluated the accuracy of our intra-subject registration, used for the computation of the displacements stored in the database, and of our liver deformation predictions, based on segmented, in-vivo porcine CT image data from 5 animals and manually selected vascular landmarks. We found root mean squared (RMS) target registration errors (TREs) of 2.96-11.31 mm after intra-subject registration. For our estimated deformation, we found an RMS TRE of 5.82-11.47 mm for four of the subjects; on one outlier subject the method failed.
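
    A minimal sketch of the two steps summarised above, under assumed data layouts (a database of displacement fields resampled onto a common mesh and a set of similarity weights), might look as follows; it is illustrative only and omits the biomechanical correction stage:

        # Weighted-average deformation estimate and RMS target registration error.
        # Field names and weighting scheme are illustrative assumptions, not the paper's.
        import numpy as np

        def estimate_displacement(db_displacements, weights):
            """db_displacements: (K, N, 3) per-subject displacement fields on a common mesh;
            weights: (K,) similarity weights for the new subject."""
            w = np.asarray(weights, dtype=float)
            w = w / w.sum()
            return np.tensordot(w, db_displacements, axes=1)   # (N, 3) weighted average

        def rms_tre(predicted_landmarks, true_landmarks):
            """Root-mean-square target registration error over paired landmarks (M, 3)."""
            err = np.linalg.norm(predicted_landmarks - true_landmarks, axis=1)
            return float(np.sqrt(np.mean(err ** 2)))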

    Technologies for Biomechanically-Informed Image Guidance of Laparoscopic Liver Surgery

    Laparoscopic surgery for liver resection has a number of medical advantages over open surgery, but also comes with inherent technical challenges. The surgeon only has a very limited field of view through the imaging modalities routinely employed intra-operatively, laparoscopic video and ultrasound, and the pneumoperitoneum required to create the operating space and gain access to the organ can significantly deform and displace the liver from its pre-operative configuration. This can make relating what is visible intra-operatively to the pre-operative plan, and inferring the location of sub-surface anatomy, a very challenging task. Image guidance systems can help overcome these challenges by updating the pre-operative plan to the situation in theatre and visualising it in relation to the position of surgical instruments. In this thesis, I present a series of contributions to a biomechanically-informed image-guidance system made during my PhD. The most recent one is work on a pipeline for the estimation of the post-insufflation configuration of the liver by means of an algorithm that uses a database of segmented training images of patient abdomens where the post-insufflation configuration of the liver is known. The pipeline comprises an algorithm for inter- and intra-subject registration of liver meshes by means of non-rigid spectral point-correspondence finding. My other contributions are more fundamental and less application-specific, and are all contained and made publicly available in the NiftySim open-source finite element modelling package. Two of my contributions to NiftySim are of particular interest with regard to image guidance of laparoscopic liver surgery: 1) a novel general-purpose contact modelling algorithm that can be used to simulate contact interactions between, e.g., the liver and surrounding anatomy; 2) membrane and shell elements that can be used to, e.g., simulate the Glisson capsule, which has been shown to significantly influence the organ’s measured stiffness.
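
    As a hedged, simplified illustration of spectral point-correspondence finding between two surface meshes (not the NiftySim or thesis code; mesh connectivity is assumed given, and eigenvector sign and ordering ambiguities are ignored for brevity):

        # Toy spectral embedding of a mesh graph plus nearest-neighbour matching.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.csgraph import laplacian
        from scipy.spatial import cKDTree

        def spectral_embedding(n_verts, edges, k=5):
            """Return the k smallest non-trivial Laplacian eigenvectors of a mesh graph."""
            i, j = np.asarray(edges).T
            w = np.ones(len(i))
            A = sp.coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])),
                              shape=(n_verts, n_verts))
            L = laplacian(A.tocsr()).toarray()        # dense is fine for a small sketch
            vals, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
            return vecs[:, 1:k + 1]                   # drop the constant eigenvector

        def match_vertices(emb_src, emb_tgt):
            """Nearest-neighbour matching in spectral space: source index -> target index."""
            return cKDTree(emb_tgt).query(emb_src)[1]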

    New numerical methods for the real-time simulation of soft-tissue deformations for intraoperative assistance

    This thesis addresses the problem of soft-tissue simulation for augmented reality applications in liver surgery assistance and, more specifically, the implementation of a non-rigid registration pipeline that the medical staff can use to generate interactive deformations of a patient-specific three-dimensional virtual representation of the liver. A formal physics-based framework is first defined and used as the basis for the construction of a biomechanical model capable of producing realistic deformations. Four basic requirements guided the development of the model: accuracy, speed, stability and simplicity of implementation. Meshless and immersed-boundary methods are both considered as alternatives to the traditional finite element method. A complete non-rigid registration algorithm is finally documented and tested on real-life scenarios. A comparison with emerging machine learning and neural network solutions is also provided.

    Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications

    With the rapid development of science and technology, health and environmental problems have increasingly become among the most significant challenges facing humanity. At the interdisciplinary frontier of information science, computer technology, electronic engineering and biomedical engineering, modern engineering methods are studied to explore means for the early diagnosis, treatment and rehabilitation of diseases such as cancer. This paper reviews computer-assisted minimally invasive surgical navigation, multimodal medical big data, methodology and clinical applications: starting from the concept of minimally invasive surgical navigation, it introduces pre-operative and intra-operative multimodal medical imaging methods for medical big data; describes the core workflow of advanced minimally invasive surgical navigation, including computational anatomical models, intraoperative real-time navigation schemes, three-dimensional visualisation methods and interactive software techniques; and summarises the clinical applications of the various minimally invasive surgical approaches. It also discusses the strengths and weaknesses of surgical navigation technologies in clinical use worldwide, analyses the latest technical methods in the field, and on this basis identifies a trend of minimally invasive surgery towards digitalisation, personalisation, precision, integrated diagnosis and treatment, robotisation and high levels of intelligence.
    Abstract: Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation. X.L. acknowledges funding from the Fundamental Research Funds for the Central Universities. T.M.P. acknowledges funding from the Canadian Foundation for Innovation, the Canadian Institutes for Health Research, the Natural Sciences and Engineering Research Council of Canada, and a grant from Intuitive Surgical Inc.

    Registration of ultrasound and computed tomography for guidance of laparoscopic liver surgery

    Laparoscopic Ultrasound (LUS) imaging is a standard tool used for image guidance during laparoscopic liver resection, as it provides real-time information on the internal structure of the liver. However, LUS probes are difficult to handle and their resulting images hard to interpret. Additionally, some anatomical targets such as tumours are not always visible, making the LUS guidance less effective. To solve this problem, registration between the LUS images and a pre-operative Computed Tomography (CT) scan using information from blood vessels has been previously proposed. By merging these two modalities, the relative position between the LUS images and the anatomy of CT is obtained and both can be used to guide the surgeon. The problem of LUS to CT registration is especially challenging, as besides being a multi-modal registration, the field of view of LUS is significantly smaller than that of CT. Therefore, this problem becomes poorly constrained and typically an accurate initialisation is needed. Also, the liver is highly deformed during laparoscopy, complicating the problem further. So far, the methods presented in the literature are not clinically feasible as they depend on manually set correspondences between both images. In this thesis, a solution for this registration problem that may be more transferable to the clinic is proposed. Firstly, traditional registration approaches comprised of manual initialisation and optimisation of a cost function are studied. Secondly, it is demonstrated that a globally optimal registration without a manual initialisation is possible. Finally, a new globally optimal solution that does not require commonly used tracking technologies is proposed and validated. The resulting approach provides clinical value as it does not require manual interaction in the operating room or tracking devices. Furthermore, the proposed method could potentially be applied to other image-guidance problems that require registration between ultrasound and a pre-operative scan.
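
    The registration methods proposed in the thesis avoid manual correspondences; the following sketch only illustrates the basic rigid alignment step on paired vessel-centreline points using the standard SVD (Kabsch/Procrustes) solution, with correspondences assumed known for the example:

        # Least-squares rigid alignment of paired 3D point sets (e.g. vessel centrelines).
        # Illustrative only; the thesis methods above do not rely on given correspondences.
        import numpy as np

        def rigid_fit(src, dst):
            """Rotation R and translation t such that R @ src_i + t ≈ dst_i, src/dst (N,3)."""
            mu_s, mu_d = src.mean(0), dst.mean(0)
            H = (src - mu_s).T @ (dst - mu_d)                 # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ D @ U.T
            t = mu_d - R @ mu_s
            return R, t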

    Non-Rigid Liver Registration for Laparoscopy using Data-Driven Biomechanical Models

    During laparoscopic liver resection, the limited access to the organ, the small field of view and lack of palpation can obstruct a surgeon’s workflow. Automatic navigation systems could use the images from preoperative volumetric organ scans to help the surgeons find their target (tumors) and risk-structures (vessels) more efficiently. This requires the preoperative data to be fused (or registered) with the intraoperative scene in order to display information at the correct intraoperative position. One key challenge in this setting is the automatic estimation of the organ’s current intra-operative deformation, which is required in order to predict the position of internal structures. Parameterizing the many patient-specific unknowns (tissue properties, boundary conditions, interactions with other tissues, direction of gravity) is very difficult. Instead, this work explores how to employ deep neural networks to solve the registration problem in a data-driven manner. To this end, convolutional neural networks are trained on synthetic data to estimate an organ’s intraoperative displacement field and thus its current deformation. To drive this estimation, visible surface cues from the intraoperative camera view must be supplied to the networks. Since reliable surface features are very difficult to find, the networks are adapted to also find correspondences between the pre- and intraoperative liver geometry automatically. This combines the search for correspondences with the biomechanical behavior estimation and allows the networks to tackle the full non-rigid registration problem in one single step. The result is a model which can quickly predict the volume deformation of a liver, given only sparse surface information. The model combines the advantages of a physically accurate biomechanical simulation with the speed and powerful feature extraction capabilities of deep neural networks. To test the method intraoperatively, a registration pipeline is developed which constructs a map of the liver and its surroundings from the laparoscopic video and then uses the neural networks to fuse the preoperative volume data into this map. The deformed organ volume can then be rendered as an overlay directly onto the laparoscopic video stream. The focus of this pipeline is to be applicable to real surgery, where everything should be quick and non-intrusive. To meet these requirements, a SLAM system is used to localize the laparoscopic camera (avoiding setup of an external tracking system), various neural networks are used to quickly interpret the scene and semi-automatic tools let the surgeons guide the system. Beyond the concrete advantages of the data-driven approach for intraoperative registration, this work also demonstrates general benefits of training a registration system preoperatively on synthetic data. 
The method lets the engineer decide which values need to be known explicitly and which should be estimated implicitly by the networks, which opens the door to many new possibilities. (A small code sketch of the voxel-to-displacement idea is given after the table of contents below.)
    Table of contents:
    1 Introduction: 1.1 Motivation (1.1.1 Navigated Liver Surgery, 1.1.2 Laparoscopic Liver Registration); 1.2 Challenges in Laparoscopic Liver Registration (1.2.1 Preoperative Model, 1.2.2 Intraoperative Data, 1.2.3 Fusion/Registration, 1.2.4 Data); 1.3 Scope and Goals of this Work (1.3.1 Data-Driven Biomechanical Model, 1.3.2 Data-Driven Non-Rigid Registration, 1.3.3 Building a Working Prototype)
    2 State of the Art: 2.1 Rigid Registration; 2.2 Non-Rigid Liver Registration; 2.3 Neural Networks for Simulation and Registration
    3 Theoretical Background: 3.1 Liver; 3.2 Laparoscopic Liver Resection (3.2.1 Staging Procedure); 3.3 Biomechanical Simulation (3.3.1 Physical Balance Principles, 3.3.2 Material Models, 3.3.3 Numerical Solver: The Finite Element Method (FEM), 3.3.4 The Lagrangian Specification); 3.4 Variables and Data in Liver Registration (3.4.1 Observables, 3.4.2 Unknowns)
    4 Generating Simulations of Deforming Organs: 4.1 Organ Volume; 4.2 Forces and Boundary Conditions (4.2.1 Surface Forces, 4.2.2 Zero-Displacement Boundary Conditions, 4.2.3 Surrounding Tissues and Ligaments, 4.2.4 Gravity, 4.2.5 Pressure); 4.3 Simulation (4.3.1 Static Simulation, 4.3.2 Dynamic Simulation); 4.4 Surface Extraction (4.4.1 Partial Surface Extraction, 4.4.2 Surface Noise, 4.4.3 Partial Surface Displacement); 4.5 Voxelization (4.5.1 Voxelizing the Liver Geometry, 4.5.2 Voxelizing the Displacement Field, 4.5.3 Voxelizing Boundary Conditions); 4.6 Pruning Dataset - Removing Unwanted Results; 4.7 Data Augmentation
    5 Deep Neural Networks for Biomechanical Simulation: 5.1 Training Data; 5.2 Network Architecture; 5.3 Loss Functions and Training
    6 Deep Neural Networks for Non-Rigid Registration: 6.1 Training Data; 6.2 Architecture; 6.3 Loss; 6.4 Training; 6.5 Mesh Deformation; 6.6 Example Application
    7 Intraoperative Prototype: 7.1 Image Acquisition; 7.2 Stereo Calibration; 7.3 Image Rectification, Disparity and Depth Estimation; 7.4 Liver Segmentation (7.4.1 Synthetic Image Generation, 7.4.2 Automatic Segmentation, 7.4.3 Manual Segmentation Modifier); 7.5 SLAM; 7.6 Dense Reconstruction; 7.7 Rigid Registration; 7.8 Non-Rigid Registration; 7.9 Rendering; 7.10 Robot Operating System
    8 Evaluation: 8.1 Evaluation Datasets (8.1.1 In-Silico, 8.1.2 Phantom Torso and Liver, 8.1.3 In-Vivo, Human, Breathing Motion, 8.1.4 In-Vivo, Human, Laparoscopy); 8.2 Metrics (8.2.1 Mean Displacement Error, 8.2.2 Target Registration Error (TRE), 8.2.3 Chamfer Distance, 8.2.4 Volumetric Change); 8.3 Evaluation of the Synthetic Training Data; 8.4 Data-Driven Biomechanical Model (DDBM) (8.4.1 Amount of Intraoperative Surface, 8.4.2 Dynamic Simulation); 8.5 Volume-to-Surface Registration Network (V2S-Net) (8.5.1 Amount of Intraoperative Surface, 8.5.2 Dependency on Initial Rigid Alignment, 8.5.3 Registration Accuracy in Comparison to Surface Noise, 8.5.4 Registration Accuracy in Comparison to Material Stiffness, 8.5.5 Chamfer Distance vs. Mean Displacement Error, 8.5.6 In-Vivo, Human Breathing Motion); 8.6 Full Intraoperative Pipeline (8.6.1 Intraoperative Reconstruction: SLAM and Intraoperative Map, 8.6.2 Full Pipeline on Laparoscopic Human Data); 8.7 Timing
    9 Discussion: 9.1 Intraoperative Model; 9.2 Physical Accuracy; 9.3 Limitations in Training Data; 9.4 Limitations Caused by Differences in Pre- and Intraoperative Modalities; 9.5 Ambiguity; 9.6 Intraoperative Prototype
    10 Conclusion
    11 List of Publications
    List of Figures
    Bibliography
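
    A deliberately tiny sketch of the voxel-to-displacement idea referenced in the abstract above: a 3D convolutional network maps voxelised inputs (for instance the pre-operative organ volume plus the visible intra-operative surface) to a dense displacement field. Architecture, channel counts and grid size are illustrative assumptions, not the networks developed in the thesis:

        # Toy 3D CNN that predicts a per-voxel displacement field from a 2-channel volume.
        import torch
        import torch.nn as nn

        class DisplacementNet(nn.Module):
            def __init__(self, in_ch=2, feat=16):
                super().__init__()
                self.enc = nn.Sequential(
                    nn.Conv3d(in_ch, feat, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(),  # downsample
                )
                self.dec = nn.Sequential(
                    nn.ConvTranspose3d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(),  # upsample
                    nn.Conv3d(feat, 3, 3, padding=1),  # 3 output channels: (dx, dy, dz) per voxel
                )

            def forward(self, x):               # x: (B, 2, D, H, W)
                return self.dec(self.enc(x))    # (B, 3, D, H, W) displacement field

        # Example forward pass on a random 64^3 grid (batch of 1).
        net = DisplacementNet()
        vox = torch.randn(1, 2, 64, 64, 64)
        disp = net(vox)
        print(disp.shape)   # torch.Size([1, 3, 64, 64, 64])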

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, for enhancing the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces, and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
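
    For the stereo-based family of methods surveyed above, per-pixel depth in a rectified pair follows from disparity as Z = f * B / d. The sketch below back-projects a disparity map to 3D points under assumed intrinsics and baseline; the values and function name are illustrative and not tied to any specific system in the review:

        # Back-project a rectified disparity map to a 3D point map (pinhole model).
        import numpy as np

        def disparity_to_points(disparity, f, baseline, cx, cy):
            """disparity: (H, W) in pixels; returns (H, W, 3) points, zeros where invalid."""
            H, W = disparity.shape
            u, v = np.meshgrid(np.arange(W), np.arange(H))      # pixel coordinates
            valid = disparity > 0
            Z = np.zeros_like(disparity, dtype=float)
            Z[valid] = f * baseline / disparity[valid]          # depth from disparity
            X = (u - cx) * Z / f                                # pinhole back-projection
            Y = (v - cy) * Z / f
            return np.stack([X, Y, Z], axis=-1)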

    Registration of prostate surfaces for image-guided robotic surgery via the da Vinci System

    Organ-confined prostate cancer represents a commonly diagnosed cancer among men, making early diagnosis and screening a necessity. Laparoscopic prostate surgery using the da Vinci system is a minimally invasive, computer-assisted and image-guided surgical application that provides surgeons with (i) navigational assistance by displaying target lesions of the intraoperative prostate anatomy onto aligned preoperative high-field magnetic resonance imaging (MRI) scans of the pelvis; and (ii) effective clinical management of intra-abdominal cancers in real time. Such an image guidance system can improve both functional and oncological outcomes, improve the learning curve of the procedure and simultaneously increase the eligibility of patients for surgical resection. By segmenting MRI scans into 3D models of intraprostatic anatomy preoperatively, and overlaying them onto 3D stereoendoscopic images acquired intraoperatively using the da Vinci surgical system, a graphical representation of intraoperative anatomy can be provided for surgical navigation. The preoperative MRI surfaces are full 3D models, whereas the stereoendoscopic images represent partial 3D views of the prostate due to occlusion. Hence, achieving an accurate non-rigid registration of full prostate surfaces onto occluded ones in real time becomes of critical importance, especially for intraoperative use with the stereoendoscopic and MRI imaging modalities. This work investigates the registration accuracy that can be achieved with selected state-of-the-art non-rigid registration algorithms and, in doing so, identifies the most accurate technique(s) for registration of full prostate surfaces onto occluded ones; a series of rigorous computational registration experiments is performed on synthetic target prostate data, which are aligned manually onto the MRI prostate models before registration is initiated. This effort extends to using real target prostate data, leading to visually acceptable non-rigid registration results. A great deal of emphasis is placed on examining the capacity of the selected non-rigid algorithms to recover the deformation of the intraoperative prostate surfaces; the deformation of the prostate can become pronounced during the surgical intervention due to surgically induced anatomical deformities and pathological or other factors. The warping accuracy of the non-rigid registration algorithms is measured within the space of common overlap (established between the full MRI model and the target scene) and beyond. From the results of the registrations to occluded and deformed prostate surfaces (in the space beyond common overlap), it is concluded that the modified versions of the Kernel Correlation/Thin-plate Spline (KC/TPS) and Gaussian Mixture Model/Thin-plate Spline (GMM/TPS) methodologies can provide the clinical accuracy required for image-guided prostate surgery procedures (performed with the da Vinci system) as long as the size of the target scene is greater than ca. 30% of the full MRI surface. For the modified KC/TPS and GMM/TPS non-rigid registration techniques to be clinically acceptable when measurement noise is also included in the simulations: (i) the size of the target model must be greater than ca. 38% of the full MRI surface; (ii) the standard deviation σ of the contributing Gaussian noise must be less than 0.345 for μ=0; and (iii) the observed deformation must not be characterized by excessively increased complexity.
    Otherwise, the contribution of Gaussian noise must be explicitly parameterized in the objective cost functions of these non-rigid algorithms. The Expectation Maximization/Thin-plate Spline (EM/TPS) non-rigid registration algorithm cannot recover the prostate surface deformation accurately in full-model-to-occluded-model registrations due to the way the correspondences are estimated. However, EM/TPS is more accurate than KC/TPS and GMM/TPS in recovering the deformation of the prostate surface in full-model-to-full-model registrations.
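
    As a minimal illustration of the thin-plate spline deformation model underlying the KC/TPS and GMM/TPS registrations discussed above (control-point correspondences are assumed given here, whereas the registration algorithms themselves estimate them; the kernel U(r) = r is the usual 3D choice):

        # Fit and apply a 3D thin-plate spline warp from paired control points.
        import numpy as np

        def tps_fit(ctrl_src, ctrl_dst):
            """Fit a TPS mapping ctrl_src -> ctrl_dst, both (N, 3). Returns (W, A)."""
            n = len(ctrl_src)
            K = np.linalg.norm(ctrl_src[:, None] - ctrl_src[None, :], axis=-1)  # U(r) = r
            P = np.hstack([np.ones((n, 1)), ctrl_src])                          # affine part
            L = np.zeros((n + 4, n + 4))
            L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
            Y = np.vstack([ctrl_dst, np.zeros((4, 3))])
            sol = np.linalg.solve(L, Y)
            return sol[:n], sol[n:]                              # W: (n, 3), A: (4, 3)

        def tps_warp(pts, ctrl_src, W, A):
            """Apply the fitted TPS to arbitrary points (M, 3)."""
            U = np.linalg.norm(pts[:, None] - ctrl_src[None, :], axis=-1)       # (M, n)
            return np.hstack([np.ones((len(pts), 1)), pts]) @ A + U @ W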