
    Respiratory organ motion in interventional MRI: tracking, guiding and modeling

    Respiratory organ motion is one of the major challenges in interventional MRI, particularly in interventions with therapeutic ultrasound in the abdominal region. High-intensity focused ultrasound has found application in interventional MRI for the noninvasive treatment of various abnormalities. To guide surgical and treatment interventions, organ motion imaging and modeling are commonly required before treatment starts. Accurate tracking of organ motion during interventional MRI procedures is a prerequisite for a successful outcome and safe therapy. In this thesis, an attempt has been made to develop focused-ultrasound approaches that could in future be used clinically for the treatment of abdominal organs such as the liver and the kidney. Two distinct methods are presented together with their ex vivo and in vivo treatment results. In the first method, an MR-based pencil-beam navigator tracks organ motion and provides the motion information for acoustic focal-point steering; in the second, hybrid imaging combining ultrasound and magnetic resonance imaging provides advanced guiding capabilities. Organ motion modeling and four-dimensional imaging of organ motion are increasingly required before surgical interventions. However, due to current safety limitations and hardware restrictions, MR acquisition of a time-resolved sequence of volumetric images is not possible with both high temporal and high spatial resolution. A novel multislice acquisition scheme, based on a two-dimensional navigator instead of the commonly used pencil-beam navigator, was devised to acquire the data slices and the corresponding navigator simultaneously using the CAIPIRINHA parallel imaging method. The acquisition duration for four-dimensional dataset sampling is reduced compared to existing approaches, while image contrast and quality are improved as well.
Tracking respiratory organ motion is required in interventional procedures and during MR imaging of moving organs. MR-based navigators are commonly used; however, they are usually associated with image artifacts, such as signal voids. Spectrally selective navigators are useful when the imaged organ is surrounded by adipose tissue, because they can provide an indirect measure of organ motion. A novel spectrally selective navigator based on a crossed-pair navigator has been developed. Experiments show the advantages of this navigator for volumetric imaging of the liver in vivo, where it was used to gate a gradient-recalled echo sequence.

    A Composite Material-based Computational Model for Diaphragm Muscle Biomechanical Simulation

    Lung cancer is the most common cause of cancer-related death among both men and women. Radiation therapy is the most widely used treatment for this disease. Motion compensation for tumor movement is often clinically important, and biomechanics-based motion models may provide the most robust method as they are based on the physics of motion. In this study, we aim to develop a patient-specific biomechanical model that predicts the deformation field of the diaphragm muscle during respiration. The first part of the project involved developing an accurate and adaptable micro-to-macro mechanical approach to skeletal muscle tissue modelling for use in a finite element (FE) solver. The next objective was to develop the FE-based mechanical model of the diaphragm muscle from patient-specific 4D-CT data. The model shows adaptability to pathologies and may have the potential to be incorporated into respiratory models to aid in the treatment and diagnosis of disease.

    A hybrid patient-specific biomechanical model based image registration method for the motion estimation of lungs

    This paper presents a new hybrid biomechanical model-based non-rigid image registration method for lung motion estimation. In the proposed method, a patient-specific biomechanical modelling process captures the major physically realistic deformations, with explicit physical modelling of sliding motion, whilst a subsequent non-rigid image registration process compensates for small residuals. The proposed algorithm was evaluated on 10 4D CT datasets of lung cancer patients. The target registration error (TRE), defined as the Euclidean distance between landmark pairs, was significantly lower with the proposed method (TRE = 1.37 mm) than with biomechanical modelling alone (TRE = 3.81 mm) or intensity-based image registration without specific consideration of sliding motion (TRE = 4.57 mm). The proposed method achieved accuracy comparable to several recently developed intensity-based registration algorithms with sliding handling on the same datasets. A detailed comparison of the TRE distributions with three non-rigid intensity-based algorithms showed that the proposed method performed especially well in estimating the displacement field of lung surface regions (mean TRE = 1.33 mm, maximum TRE = 5.3 mm). The effects of biomechanical model parameters (such as Poisson’s ratio, friction and tissue heterogeneity) on displacement estimation were investigated. The potential of the algorithm for optimising biomechanical models of lungs by analysing the pattern of displacement compensation from the image registration process has also been demonstrated.
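The TRE figures above are simply Euclidean distances between corresponding landmark pairs, averaged or maximised over the set. A minimal sketch, using hypothetical landmark coordinates rather than the paper's data:

```python
import numpy as np

def target_registration_error(landmarks_ref, landmarks_warped):
    """Mean and maximum Euclidean distance (mm) between corresponding
    landmark pairs in the reference and registered images."""
    d = np.linalg.norm(
        np.asarray(landmarks_ref, dtype=float)
        - np.asarray(landmarks_warped, dtype=float),
        axis=1,
    )
    return d.mean(), d.max()

# Two hypothetical landmark pairs (coordinates in mm)
ref = [[10.0, 20.0, 30.0], [15.0, 25.0, 35.0]]
warped = [[10.5, 20.0, 30.0], [15.0, 26.0, 35.0]]
mean_tre, max_tre = target_registration_error(ref, warped)  # → 0.75, 1.0
```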

    A biomechanical approach for real-time tracking of lung tumors during External Beam Radiation Therapy (EBRT)

    Lung cancer is the most common cause of cancer-related death in both men and women. Radiation therapy is widely used for lung cancer treatment. However, this method can be challenging due to respiratory motion. Motion modeling is a popular method for respiratory motion compensation, and biomechanics-based motion models are believed to be more robust and accurate as they are based on the physics of motion. In this study, we aim to develop a biomechanics-based lung tumor tracking algorithm that can be used during External Beam Radiation Therapy (EBRT). An accelerated lung biomechanical model can be used during EBRT only if its boundary conditions (BCs) are defined such that they can be updated in real time. As such, we have developed a lung finite element (FE) model in conjunction with a Neural Network (NN) based method for predicting the BCs of the lung model from chest surface motion data. To develop the lung FE model for tumor motion prediction, thoracic 4D CT images of lung cancer patients were processed to capture the lung and diaphragm geometry, trans-pulmonary pressure, and diaphragm motion. Next, the chest surface motion was obtained by tracking the motion of the ribcage in the 4D CT images; this simulates the surface motion data that can be acquired using optical tracking systems. Finally, two feedforward NNs were developed, one for estimating the trans-pulmonary pressure and another for estimating the diaphragm motion from chest surface motion data. The algorithm development consists of four steps: 1) automatic segmentation of the lungs and diaphragm; 2) diaphragm motion modelling using Principal Component Analysis (PCA); 3) developing the lung FE model; and 4) using the two NNs to estimate the trans-pulmonary pressure values and diaphragm motion from chest surface motion data. The results indicate that the Dice similarity coefficient between actual and simulated tumor volumes ranges from 0.76±0.04 to 0.91±0.01, which is favorable. As such, real-time lung tumor tracking during EBRT using the proposed algorithm is feasible, and further clinical studies involving lung cancer patients to assess the algorithm's performance are justified.
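The Dice similarity coefficient used to compare actual and simulated tumor volumes is 2|A∩B| / (|A| + |B|) on binary masks. A minimal sketch with synthetic masks (the array sizes and offsets are illustrative, not patient data):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary volumes."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Synthetic 4x4x4 "tumor" masks: 8 voxels each, overlapping in 4
actual = np.zeros((4, 4, 4), dtype=bool)
actual[1:3, 1:3, 1:3] = True
simulated = np.zeros((4, 4, 4), dtype=bool)
simulated[1:3, 1:3, 0:2] = True
dsc = dice_coefficient(actual, simulated)  # → 0.5
```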

    Inverse-Consistent Determination of Young's Modulus of Human Lung

    Human lung undergoes respiration-induced deformation due to sequential inhalation and exhalation. Accurate determination of lung deformation is crucial for tumor localization and targeted radiotherapy in patients with lung cancer. Numerical modeling of human lung dynamics based on the underlying physics and physiology enables simulation and virtual visualization of lung deformation. Dynamical modeling is numerically complicated by the lack of information on lung elastic behavior, structural heterogeneity and boundary constraints. This study integrates physics-based modeling and image-based data acquisition to develop a patient-specific biomechanical model and consequently establish the first consistent Young's modulus (YM) of human lung. This dissertation has four major components: (i) develop a biomechanical model for computation of the flow and deformation characteristics that can utilize subject-specific, spatially-dependent lung material properties; (ii) develop a fusion algorithm, based on the theory of Tikhonov regularization, to integrate deformation results from deformable image registration (DIR) and physics-based modeling; (iii) utilize the fusion algorithm to establish a unique and consistent patient-specific Young's modulus; and (iv) validate the biomechanical model, using the established patient-specific elastic property, against imaging data. The simulation is performed on three-dimensional lung geometry reconstructed from four-dimensional computed tomography (4DCT) datasets of human subjects. The heterogeneous Young's modulus is estimated from a linear elastic deformation model with the same lung geometry and 4D lung DIR. The biomechanical model adequately predicts the spatio-temporal lung deformation, consistent with the data obtained from imaging. The accuracy of the numerical solution is enhanced through fusion with the imaging data, beyond the classical comparison of the two sets of data. Finally, the fused displacement results are used to establish a unique and consistent patient-specific elastic property of the lung.
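The fusion idea can be illustrated with a deliberately simplified, pointwise Tikhonov-style trade-off between the DIR displacement field and the model-predicted one; the dissertation's actual formulation is richer, and `lam` here is a hypothetical regularization weight:

```python
import numpy as np

def fuse_displacements(u_dir, u_model, lam=1.0):
    """Pointwise Tikhonov-style fusion of two displacement estimates:
    argmin_u ||u - u_dir||^2 + lam * ||u - u_model||^2,
    whose closed form is (u_dir + lam * u_model) / (1 + lam)."""
    u_dir = np.asarray(u_dir, dtype=float)
    u_model = np.asarray(u_model, dtype=float)
    return (u_dir + lam * u_model) / (1.0 + lam)

# Toy 1-D displacement fields (mm); lam weights trust in the physics model
fused = fuse_displacements([2.0, 4.0], [4.0, 2.0], lam=1.0)  # → [3.0, 3.0]
```

With lam → 0 the imaging data dominate; with large lam the physics-based model dominates.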

    Non-rigid medical image registration with extended free form deformations: modelling general tissue transitions

    Image registration seeks pointwise correspondences between the same or analogous objects in different images. Conventional registration methods generally impose continuity and smoothness throughout the image. However, there are cases in which the deformations involve discontinuities. In general, the discontinuities can be of different types, depending on the physical properties of the tissue transitions involved and the boundary conditions. For instance, during respiratory motion the lungs slide along the thoracic cage in the tangential direction of their interface. In the normal direction, however, the lungs and the thoracic cage are constrained to remain in contact, but their different material properties produce different compression or expansion rates. In the literature there is no generic method that handles different types of discontinuities and considers their directional dependence. The aim of this thesis is to develop a general registration framework able to correctly model different types of tissue transitions with a general formalism. This has led to the development of the eXtended Free Form Deformation (XFFD) registration method. XFFD borrows the interpolation concept of the eXtended Finite Element Method (XFEM) to incorporate discontinuities by enriching the B-spline basis functions, coupled with extra degrees of freedom. XFFD can handle different types of discontinuities and encodes their directional dependence without any additional constraints. XFFD has been evaluated on digital phantoms and publicly available 3D liver and lung CT images. The experiments show that XFFD improves on previous methods and that it is important to employ the model that corresponds to the discontinuity type involved at the tissue transition. The effect of using incorrect models is most evident in the strain, which measures the mechanical properties of the tissues.
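Classical FFD, which XFFD extends, displaces each point by a cubic B-spline combination of the displacements of nearby control points. A minimal 1-D sketch (uniform control-point spacing, no XFEM enrichment, illustrative values only):

```python
import numpy as np

def bspline_basis(u):
    """The four cubic B-spline basis weights for local coordinate u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u**3 - 6 * u**2 + 4) / 6.0,
        (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6.0,
        u**3 / 6.0,
    ])

def ffd_1d(x, control_points, spacing):
    """Evaluate a 1-D free-form deformation at position x: a weighted sum of
    the four control-point values supporting the cell containing x."""
    i = int(np.floor(x / spacing)) - 1          # first of the four supports
    u = x / spacing - np.floor(x / spacing)     # local coordinate in the cell
    w = bspline_basis(u)
    return sum(w[k] * control_points[i + k] for k in range(4))

# With all control points equal, partition of unity reproduces that value
d = ffd_1d(2.3, np.full(8, 2.5), spacing=1.0)  # → 2.5
```

The enrichment step in XFFD would add extra, discontinuous terms to this interpolation; the sketch shows only the smooth baseline.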

    Real-time intrafraction motion monitoring in external beam radiotherapy

    © 2019 Institute of Physics and Engineering in Medicine. Radiotherapy (RT) aims to deliver a spatially conformal dose of radiation to tumours while maximizing the dose sparing to healthy tissues. However, the internal patient anatomy is constantly moving due to respiratory, cardiac, gastrointestinal and urinary activity. The long term goal of the RT community to 'see what we treat, as we treat' and to act on this information instantaneously has resulted in rapid technological innovation. Specialized treatment machines, such as robotic or gimbal-steered linear accelerators (linacs) with in-room imaging suites, have been developed specifically for real-time treatment adaptation. Additional equipment, such as stereoscopic kilovoltage (kV) imaging, ultrasound transducers and electromagnetic transponders, has been developed for intrafraction motion monitoring on conventional linacs. Magnetic resonance imaging (MRI) has been integrated with cobalt treatment units and more recently with linacs. In addition to hardware innovation, software development has played a substantial role in the development of motion monitoring methods based on respiratory motion surrogates and planar kV or megavoltage (MV) imaging that is available on standard-equipped linacs. In this paper, we review and compare the different intrafraction motion monitoring methods proposed in the literature and demonstrated in real time on clinical data, as well as their possible future developments. We then discuss general considerations on validation and quality assurance for clinical implementation. Besides photon RT, particle therapy is increasingly used to treat moving targets. However, transferring motion monitoring technologies from linacs to particle beam lines presents substantial challenges. Lessons learned from the implementation of real-time intrafraction monitoring for photon RT will be used as a basis to discuss the implementation of these methods for particle RT.

    Mechatronics design and control of radiotherapy phantom


    Characterisation and correction of respiratory-motion artefacts in cardiac PET-CT

    Respiratory motion during cardiac Positron Emission Tomography (PET) Computed Tomography (CT) imaging results in blurring of the PET data and can induce mismatches between the PET and CT datasets, leading to attenuation-correction artefacts. The aim of this project was to develop a motion-correction method to overcome both of these problems. The approach implemented was to transform a single CT to match the frames of a gated PET study, facilitating respiratory-matched attenuation-correction without the need for a gated CT. This is beneficial in lowering the radiation dose to the patient and in reducing PET-CT mismatches, which can arise even in gated studies. Phantom studies identified the heart and diaphragm as the structures responsible for generating attenuation-correction artefacts in the heart, so their motions needed to be considered in transforming the CT. Estimating heart motion was straightforward, owing to its high contrast in PET; however, the poor diaphragm contrast meant that additional information was required to track its position. A diaphragm shape model was therefore constructed using segmented diaphragm surfaces, enabling complete diaphragm surfaces to be produced from incomplete and noisy initial estimates. These complete surfaces, in combination with the estimated heart motions, were used to transform the CT. The PET frames were then attenuation-corrected with the transformed CT, reconstructed, aligned and summed to produce motion-free images. Motion-blurring was reduced through alignment, although the benefits were marginal in the presence of small respiratory motions. Quantitative accuracy was improved by using the transformed CT for attenuation-correction (compared with no CT transformation), which was attributed to both the heart and diaphragm transformations. In comparison to a gated CT, a substantial dose saving and a reduced dependence on gating techniques were achieved, indicating the potential value of the technique in routine clinical procedures.
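The "aligned and summed" step can be sketched as undoing each gated frame's estimated shift before accumulation. This is a toy integer-voxel version on synthetic 2-D frames; the project's actual transformations are far richer:

```python
import numpy as np

def align_and_sum(frames, shifts):
    """Shift each gated frame back by its estimated displacement (integer
    voxels here, for simplicity) and sum, reducing respiratory blurring."""
    out = np.zeros_like(frames[0], dtype=float)
    for frame, s in zip(frames, shifts):
        out += np.roll(frame, shift=tuple(-np.asarray(s)), axis=(0, 1))
    return out

# Two synthetic frames of a point source that moved by one row between gates
f0 = np.zeros((4, 4)); f0[1, 1] = 1.0
f1 = np.zeros((4, 4)); f1[2, 1] = 1.0
summed = align_and_sum([f0, f1], [(0, 0), (1, 0)])  # activity refocused at (1, 1)
```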

    Modeling, Simulation, and Visualization of 3D Lung Dynamics

    Medical simulation has facilitated the understanding of complex biological phenomena through its inherent explanatory power. It is a critical component for planning clinical interventions and analyzing their effect on a human subject. The success of medical simulation is evidenced by the fact that over one third of all medical schools in the United States augment their teaching curricula using patient simulators. Medical simulators present combat medics and emergency providers with video-based descriptions of patient symptoms along with step-by-step instructions on clinical procedures that alleviate the patient's condition. Recent advances in clinical imaging technology have enabled effective medical visualization by coupling medical simulations with patient-specific anatomical models and their physically and physiologically realistic organ deformation. 3D physically-based deformable lung models obtained from a human subject are tools for regional lung structure and function analysis. Static imaging techniques such as Magnetic Resonance Imaging (MRI), chest X-rays, and Computed Tomography (CT) are conventionally used to estimate the extent of pulmonary disease and to establish available courses of clinical intervention. The predictive accuracy and evaluative strength of static imaging techniques may be augmented by improved computer technologies and graphical rendering techniques that can transform these static images into dynamic representations of subject-specific organ deformations. By creating physically based 3D simulation and visualization, 3D deformable models obtained from subject-specific lung images will better represent lung structure and function. Variations in overall lung deformation may indicate tissue pathologies, so 3D visualization of functioning lungs may also provide a visual complement to current diagnostic methods.
The feasibility of medical visualization using static 3D lungs as an effective tool for endotracheal intubation was previously shown using Augmented Reality (AR) based techniques in one of several research efforts at the Optical Diagnostics and Applications Laboratory (ODALAB). That effort also shed light on the potential of coupling such medical visualization with dynamic 3D lungs. The purpose of this dissertation is to develop 3D deformable lung models built from subject-specific high-resolution CT data that can be visualized in the AR-based environment. A review of the literature shows that techniques for modeling real-time 3D lung dynamics can be roughly grouped into two categories: geometrically based and physically based. Additional classifications include treating the 3D lung model as either a volumetric or a surface model, modeling the lungs as a single compartment or multiple compartments, modeling either the air-blood interaction or the air-blood-tissue interaction, and considering either normal or pathophysical lung behavior. Validating simulated lung dynamics is a complex problem and has previously been approached by tracking a set of landmarks on the CT images. An area that needs to be explored is the relationship between the choice of deformation method for the 3D lung dynamics and its visualization framework. Constraints on the choice of deformation method and the 3D model resolution arise from the visualization framework; those of interest here are the real-time requirement and the level of interaction required with the 3D lung models. The work presented here discusses a framework that facilitates physics-based and physiology-based deformation of a single-compartment surface lung model while maintaining the frame-rate requirements of the visualization system.
The framework presented here is part of several research efforts at ODALAB toward an AR-based medical visualization framework. It consists of three components: (i) modeling the Pressure-Volume (PV) relation, (ii) modeling the lung deformation using a Green's function based deformation operator, and (iii) optimizing the deformation using state-of-the-art Graphics Processing Units (GPUs). The validation of the results obtained in the first two modeling steps is also discussed for normal human subjects. Disease states such as pneumothorax and lung tumors are modeled using the proposed deformation method. Additionally, a method to synchronize instantiations of the deformation across a network is discussed.
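Pressure-volume modeling of the kind in component (i) is often done with an exponential empirical curve such as the classic Salazar-Knowles form V(P) = A − B·e^(−KP); whether this dissertation uses that exact form is not stated here, and the constants below are purely illustrative:

```python
import numpy as np

def lung_volume(p, A=6.0, B=3.0, K=0.1):
    """Exponential pressure-volume curve V(P) = A - B*exp(-K*P).
    A: asymptotic volume (L); B, K: shape constants (illustrative values,
    not fitted to any subject in this work)."""
    return A - B * np.exp(-K * np.asarray(p, dtype=float))

v0 = lung_volume(0.0)  # → 3.0 L at zero transpulmonary pressure
```

The curve rises monotonically toward the asymptote A as transpulmonary pressure increases, mimicking decreasing compliance at high inflation.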