
    Electrical Impedance Tomography Guided by Digital Twins and Deep Learning for Lung Monitoring

    In recent years, there has been increasing interest in applying electrical impedance tomography (EIT) to lung monitoring because it is noninvasive, nonionizing, real-time, and functional, with no harmful side effects. However, EIT images reconstructed by traditional algorithms suffer from low spatial resolution. This article proposes a novel EIT-based lung monitoring scheme that uses a 3-D digital twin lung model and a deep learning-based image reconstruction algorithm. Unlike the static numerical or experimental simulations used in other data-driven EIT imaging methods, the digital twin lung model incorporates the biomechanical and electrical properties of the lung to generate a more realistic and dynamic dataset. An image reconstruction network (IR-Net) then learns the prior information in the dataset and accurately reconstructs the conductivity variation within the lungs during respiration. The results indicate that EIT guided by a digital twin with deep learning-based image reconstruction achieves better accuracy and noise robustness than traditional EIT. The proposed framework offers an efficient way to create labeled EIT data and has the potential to be used in various data-driven methods for electrical biomedical imaging.
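    The one-step linearized, Tikhonov-regularized reconstruction that learning-based EIT methods are typically benchmarked against can be sketched in a few lines. Everything below (the sensitivity matrix, pixel grid, measurement count, and regularization weight) is an illustrative stand-in, not the article's actual model:

```python
import numpy as np

# Sketch of one-step linearized difference-EIT reconstruction: a known
# sensitivity (Jacobian) matrix J maps the conductivity change dsigma to
# the boundary-voltage change dv, i.e. dv ≈ J @ dsigma. All sizes below
# are illustrative (e.g. a 16-electrode protocol, coarse pixel grid).
rng = np.random.default_rng(0)

n_meas, n_pix = 208, 64
J = rng.standard_normal((n_meas, n_pix))   # stand-in sensitivity matrix

dsigma_true = np.zeros(n_pix)
dsigma_true[10:20] = 1.0                   # synthetic conductivity change
dv = J @ dsigma_true + 0.01 * rng.standard_normal(n_meas)  # noisy data

# Tikhonov-regularized least squares: (J^T J + lam I) dsigma = J^T dv.
lam = 1e-2
dsigma_hat = np.linalg.solve(J.T @ J + lam * np.eye(n_pix), J.T @ dv)
```

The digital-twin dataset in the article would supply many (dv, dsigma) pairs like these, with IR-Net replacing the regularized solve.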

    A biomechanical approach for real-time tracking of lung tumors during External Beam Radiation Therapy (EBRT)

    Lung cancer is the most common cause of cancer-related death in both men and women. Radiation therapy is widely used for lung cancer treatment; however, it can be challenging because of respiratory motion. Motion modeling is a popular method for respiratory motion compensation, and biomechanics-based motion models are believed to be more robust and accurate because they are grounded in the physics of motion. In this study, we aim to develop a biomechanics-based lung tumor tracking algorithm for use during External Beam Radiation Therapy (EBRT). An accelerated lung biomechanical model can be used during EBRT only if its boundary conditions (BCs) are defined such that they can be updated in real time. We therefore developed a lung finite element (FE) model in conjunction with a neural network (NN)-based method for predicting the BCs of the lung model from chest surface motion data. To develop the lung FE model for tumor motion prediction, thoracic 4D CT images of lung cancer patients were processed to capture the lung and diaphragm geometry, trans-pulmonary pressure, and diaphragm motion. Next, the chest surface motion was obtained by tracking the motion of the ribcage in the 4D CT images; this simulates the surface motion data that can be acquired with optical tracking systems. Finally, two feedforward NNs were developed, one for estimating the trans-pulmonary pressure and another for estimating the diaphragm motion from chest surface motion data. The algorithm development consists of four steps: 1) automatic segmentation of the lungs and diaphragm; 2) diaphragm motion modeling using Principal Component Analysis (PCA); 3) development of the lung FE model; and 4) use of the two NNs to estimate the trans-pulmonary pressure values and diaphragm motion from chest surface motion data. The results indicate that the Dice similarity coefficient between actual and simulated tumor volumes ranges from 0.76±0.04 to 0.91±0.01, which is favorable. As such, real-time lung tumor tracking during EBRT using the proposed algorithm is feasible, and further clinical studies involving lung cancer patients to assess the algorithm's performance are justified.
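    Step 2 of the pipeline, PCA-based diaphragm motion modeling, can be sketched as follows. The data here are synthetic stand-ins (each row a flattened displacement field at one respiratory phase); the study derives the real fields from 4D CT, and all dimensions are illustrative:

```python
import numpy as np

# PCA of respiratory displacement fields via SVD: rows of X are flattened
# diaphragm displacement fields, one per respiratory phase. The synthetic
# motion is built from two smooth modes, mimicking a low-dimensional
# breathing pattern.
rng = np.random.default_rng(1)
n_phases, n_points = 10, 300
phase = np.linspace(0, 2 * np.pi, n_phases)
basis = rng.standard_normal((2, n_points))
X = np.outer(np.sin(phase), basis[0]) + 0.3 * np.outer(np.cos(phase), basis[1])

mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 2                               # retain the first two motion modes
coeffs = U[:, :k] * S[:k]           # low-dimensional motion representation

# Reconstruct the full displacement fields from the k retained modes.
X_rec = mean + coeffs @ Vt[:k]
```

In the full algorithm, an NN would predict such low-dimensional coefficients from chest surface motion, and the PCA basis would map them back to diaphragm motion.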

    An unfitted radial basis function generated finite difference method applied to thoracic diaphragm simulations

    The thoracic diaphragm is the muscle that drives the respiratory cycle of a human being. Using a system of partial differential equations (PDEs) that models linear elasticity, we compute displacements and stresses in a two-dimensional cross section of the diaphragm in its contracted state. The boundary data consist of a mix of displacement and traction conditions. If these are imposed as they are and the conditions are incompatible, the smoothness of the solution is reduced. Therefore, the boundary data are first smoothed using the least-squares radial basis function generated finite difference (RBF-FD) framework, and the boundary conditions are then reformulated as a Robin boundary condition with smooth coefficients. The same framework is also used to approximate the boundary curve of the diaphragm cross section from data obtained from a slice of a computed tomography (CT) scan. To solve the PDE we employ the unfitted least-squares RBF-FD method, which makes it easier to handle the geometry of the diaphragm, which is thin and non-convex. We show numerically that our solution converges with high order toward a finite element solution evaluated on a fine grid. Through this simplified numerical model we also gain insight into the challenges associated with the diaphragm geometry and the boundary conditions before approaching a more complex three-dimensional model.

    Deformation analysis of surface and bronchial structures in intraoperative pneumothorax using deformable mesh registration

    The positions of nodules can change because of intraoperative lung deflation, and modeling pneumothorax-associated deformation remains a challenging issue for intraoperative tumor localization. In this study, we introduce spatial and geometric analysis methods for inflated/deflated lungs and discuss heterogeneity in pneumothorax-associated lung deformation. Contrast-enhanced CT images simulating intraoperative conditions were acquired from live beagle dogs. The images contain the overall shape of the lungs, including all lobes and internal bronchial structures, and were analyzed to provide a statistical deformation model that can serve as prior knowledge for predicting pneumothorax deformation. To address the difficulties of mapping pneumothorax CT images with topological changes and CT intensity shifts, we designed deformable mesh registration techniques for mixed data structures comprising the lobe surfaces and the bronchial centerlines. Three global-to-local registration steps were performed under the constraint that the deformation be spatially continuous and smooth while matching visible bronchial tree structures as closely as possible. The developed framework achieved stable registration with a Hausdorff distance of less than 1 mm and a target registration error of less than 5 mm, and it visualized deformation fields that demonstrate per-lobe contractions and rotations with high variability between subjects. The deformation analysis shows that the strain of the lung parenchyma was 35% higher than that of the bronchi and that deformation in the deflated lung is heterogeneous.
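    The two accuracy metrics reported above can be computed as follows; the point sets here are synthetic stand-ins for the lobe surfaces and bronchial landmarks used in the study:

```python
import numpy as np

def hausdorff(P, Q):
    """Symmetric Hausdorff distance between point sets P (n,3) and Q (m,3):
    the largest distance from any point in one set to the other set."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def tre(moved, target):
    """Mean target registration error over paired corresponding landmarks."""
    return np.linalg.norm(moved - target, axis=1).mean()

# Synthetic "surface" and a near-perfectly registered copy of it.
rng = np.random.default_rng(2)
surf = rng.uniform(size=(200, 3))
registered = surf + 0.0005 * rng.standard_normal((200, 3))

hd = hausdorff(surf, registered)
```

Note the distinction: the Hausdorff distance needs no correspondences (surface-to-surface), while TRE requires paired landmarks such as bronchial branch points.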

    A Composite Material-based Computational Model for Diaphragm Muscle Biomechanical Simulation

    Lung cancer is the most common cause of cancer-related death among both men and women, and radiation therapy is the most widely used treatment for this disease. Compensating for tumor motion is often clinically important, and biomechanics-based motion models may provide the most robust approach because they are grounded in the physics of motion. In this study, we aim to develop a patient-specific biomechanical model that predicts the deformation field of the diaphragm muscle during respiration. The first part of the project involved developing an accurate and adaptable micro-to-macro mechanical approach to skeletal muscle tissue modeling for use in a finite element (FE) solver. The next objective was to develop the FE-based mechanical model of the diaphragm muscle from patient-specific 4D-CT data. The model adapts to pathologies and has the potential to be incorporated into respiratory models to aid in the treatment and diagnosis of disease.

    Statistical deformation reconstruction using multi-organ shape features for pancreatic cancer localization

    Respiratory motion and the associated deformations of abdominal organs and tumors are essential information in clinical applications. However, inter- and intra-patient multi-organ deformations are complex and have not been statistically formulated, whereas single-organ deformations have been widely studied. In this paper, we introduce a multi-organ deformation library and its application to deformation reconstruction based on the shape features of multiple abdominal organs. Statistical multi-organ motion/deformation models of the stomach, liver, left and right kidneys, and duodenum were generated by shape matching of their region labels defined on four-dimensional computed tomography images; a total of 250 volumes from 25 pancreatic cancer patients were used. This paper also proposes per-region deformation learning using a non-linear kernel model to predict the displacement of pancreatic cancer for adaptive radiotherapy. The experimental results show that the proposed approach estimates deformations better than general per-patient learning models and achieves a clinically acceptable estimation error, with a mean distance of 1.2 ± 0.7 mm and a Hausdorff distance of 4.2 ± 2.3 mm throughout the respiratory motion.
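    The per-region non-linear kernel idea can be illustrated with a minimal Gaussian-kernel (Nadaraya-Watson) regressor that maps a scalar shape feature to a displacement. The study's actual features, kernel, and training data are richer; everything below, including the feature-displacement relationship, is a synthetic illustration:

```python
import numpy as np

def kernel_predict(x_query, x_train, y_train, bandwidth=0.05):
    """Nadaraya-Watson regression: a Gaussian-kernel weighted average of
    the training displacements, weighted by feature-space proximity."""
    w = np.exp(-((x_query - x_train) ** 2) / (2 * bandwidth**2))
    return (w * y_train).sum() / w.sum()

# Synthetic training pairs: shape-feature value -> displacement (mm).
x_train = np.linspace(0.0, 1.0, 20)
y_train = 5.0 * np.sin(2 * np.pi * x_train)   # smooth displacement curve

# Predict near the peak of the curve (true value ~5 mm).
pred = kernel_predict(0.25, x_train, y_train)
```

Because the prediction is a convex combination of observed displacements, it stays within the range of the training data, which is one reason kernel models behave stably on limited patient cohorts.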

    A Heterogeneous Patient-Specific Biomechanical Model of the Lung for Tumor Motion Compensation and Effective Lung Radiation Therapy Planning

    Radiation therapy is a main component of treatment for many lung cancer patients. However, respiratory motion can cause inaccuracies in radiation delivery that lead to treatment complications, and radiation-induced damage to healthy tissue limits the effectiveness of treatment. Motion management methods have been developed to increase the accuracy of radiation delivery, and functional avoidance treatment planning has emerged to help reduce the chance of radiation-induced toxicity. In this work, we developed biomechanical model-based techniques for tumor motion estimation as well as lung functional imaging. The proposed biomechanical model accurately estimates lung and tumor motion/deformation by mimicking the physiology of respiration while accounting for heterogeneous changes in lung mechanics caused by chronic obstructive pulmonary disease (COPD), a common lung cancer comorbidity. A biomechanics-based image registration algorithm is developed and combined with an air segmentation algorithm to yield a 4DCT-based ventilation imaging technique with potential applications in functional avoidance therapies.
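    As a hedged stand-in for the final per-voxel step of 4DCT ventilation imaging: once inhale and exhale images are registered, a widely used Hounsfield-unit (HU) estimate (in the style of Guerrero et al.) gives the regional fractional change in air volume. Whether this thesis uses exactly this formula, rather than its biomechanics-specific variant, is an assumption:

```python
import numpy as np

# HU-based ventilation estimate for registered inhale/exhale voxel pairs:
#   dVa / Va_ex = 1000 * (HU_in - HU_ex) / (HU_ex * (1000 + HU_in)),
# where air is -1000 HU and tissue ~0 HU, so the air fraction of a voxel
# is -HU/1000. Inputs are corresponding (registered) voxel HU values.
def ventilation_map(hu_in, hu_ex):
    """Fractional air-volume change per voxel from registered HU values."""
    return 1000.0 * (hu_in - hu_ex) / (hu_ex * (1000.0 + hu_in))

# Toy registered voxel pairs: lung tissue darkens (more air) at inhale.
hu_ex = np.array([-800.0, -700.0, -850.0])
hu_in = np.array([-900.0, -780.0, -880.0])
vent = ventilation_map(hu_in, hu_ex)
```

Well-ventilated regions produce large positive values; functional avoidance planning would then steer dose away from such regions.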

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, the surgical instruments, and the patient. Suboptimal interaction with patient data and the challenge of mastering 3D anatomy from ill-posed 2D interventional images are major concerns in image-guided therapies. State-of-the-art approaches often support the surgeon with external navigation systems or ill-conditioned image-based registration methods, both of which have drawbacks. Augmented reality (AR) has been introduced into operating rooms in the last decade; however, in image-guided interventions it has often been considered only a visualization device that improves traditional workflows. Consequently, the technology has yet to gain the maturity it needs to redefine procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems and tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization; the benefits of these approaches were also demonstrated in robot-assisted interventions. We show how exemplary workflows can be redefined via AR by taking full advantage of head-mounted displays that are fully co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved viewing frustums to move spatial information between different bodies. The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches to improving the interface of surgery and enhancing the usability of interventional imaging will not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out an effective intervention with reduced complications.