18 research outputs found

    A Novel Loss Function Incorporating Imaging Acquisition Physics for PET Attenuation Map Generation using Deep Learning

    In PET/CT imaging, CT is used for PET attenuation correction (AC). Mismatch between CT and PET due to patient body motion results in AC artifacts. In addition, artifacts caused by metal, beam hardening, and count starvation in the CT itself also introduce inaccurate AC for PET. Maximum likelihood reconstruction of activity and attenuation (MLAA) was proposed to solve those issues by simultaneously reconstructing the tracer activity (λ-MLAA) and the attenuation map (μ-MLAA) from the PET raw data only. However, μ-MLAA suffers from high noise and λ-MLAA suffers from large bias compared to reconstruction using the CT-based attenuation map (μ-CT). Recently, a convolutional neural network (CNN) was applied to predict the CT attenuation map (μ-CNN) from λ-MLAA and μ-MLAA, in which an image-domain loss (IM-loss) function between the μ-CNN and the ground-truth μ-CT was used. However, the IM-loss does not directly measure AC errors according to PET attenuation physics, where the line-integral projection of the attenuation map (μ) along the path of the two annihilation photons, rather than μ itself, is used for AC. Therefore, a network trained with the IM-loss may yield suboptimal performance in μ generation. Here, we propose a novel line-integral projection loss (LIP-loss) function that incorporates the PET attenuation physics for μ generation. Eighty training and twenty testing datasets of whole-body 18F-FDG PET and paired ground-truth μ-CT were used. Quantitative evaluations showed that the model trained with the additional LIP-loss significantly outperformed the model trained solely on the IM-loss function. Comment: Accepted at MICCAI 201
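
    The core idea of the LIP-loss, supervising the network on forward-projected attenuation values rather than on the μ-map alone, can be sketched as follows. This is a minimal 2-D PyTorch illustration assuming a toy parallel-beam projector; the projector, the number of angles, and the `lip_weight` term are illustrative assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn.functional as F

def _rotate(img, angle_rad):
    """Rotate a batch of 2-D maps (N, 1, H, W) with a differentiable affine grid."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    theta = img.new_tensor([[c, -s, 0.0], [s, c, 0.0]]).unsqueeze(0).repeat(img.shape[0], 1, 1)
    grid = F.affine_grid(theta, list(img.shape), align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)

def line_integral_projection(mu, n_angles=60):
    """Toy parallel-beam forward projection: integrate the attenuation map along rotated rays.
    Returns a sinogram-like tensor of shape (N, n_angles, W)."""
    views = [_rotate(mu, math.pi * k / n_angles).sum(dim=2) for k in range(n_angles)]  # each (N, 1, W)
    return torch.cat(views, dim=1)

def combined_loss(mu_pred, mu_ct, lip_weight=1.0):
    """Image-domain L1 loss plus a line-integral projection (LIP) term on the projections."""
    im_loss = F.l1_loss(mu_pred, mu_ct)
    lip_loss = F.l1_loss(line_integral_projection(mu_pred), line_integral_projection(mu_ct))
    return im_loss + lip_weight * lip_loss
```

    Because the projector is differentiable, errors measured in the projection domain propagate back to the μ-map prediction, which is what lets the loss penalize AC-relevant errors directly.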

    Learning intervention-induced deformations for non-rigid MR-CT registration and electrode localization in epilepsy patients

    This paper describes a framework for learning a statistical model of non-rigid deformations induced by interventional procedures. We make use of this learned model to perform constrained non-rigid registration of pre-procedural and post-procedural imaging. We demonstrate results applying this framework to non-rigidly register post-surgical computed tomography (CT) brain images to pre-surgical magnetic resonance images (MRIs) of epilepsy patients who had intra-cranial electroencephalography electrodes surgically implanted. Deformations caused by this surgical procedure, imaging artifacts caused by the electrodes, and the use of multi-modal imaging data make non-rigid registration challenging. Our results show that using our proposed framework to constrain the non-rigid registration process yields significantly improved and more robust registration performance compared to standard rigid and non-rigid registration methods.
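
    As a rough illustration of learning a statistical deformation model and using it as a constraint, the sketch below fits a PCA subspace to previously observed deformation fields and projects a candidate deformation onto that subspace. The use of scikit-learn PCA, the function names, and the number of modes are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_deformation_model(training_fields, n_modes=10):
    """training_fields: (K, 3 * n_voxels) array, one flattened displacement field
    per previously observed intervention. Returns a low-dimensional PCA model."""
    pca = PCA(n_components=n_modes)
    pca.fit(training_fields)
    return pca

def constrain_deformation(pca, deformation_field):
    """Project a candidate displacement field onto the learned subspace, so the
    registration optimizer can only move within statistically plausible deformations."""
    coeffs = pca.transform(deformation_field.reshape(1, -1))
    return pca.inverse_transform(coeffs).reshape(deformation_field.shape)
```

    In such a scheme, the registration would alternate between proposing a deformation update and projecting it back onto the learned modes, keeping the estimate within the space of surgically plausible deformations.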

    TVnet: Automated Time-Resolved Tracking of the Tricuspid Valve Plane in MRI Long-Axis Cine Images with a Dual-Stage Deep Learning Pipeline

    Tracking the tricuspid valve (TV) in magnetic resonance imaging (MRI) long-axis cine images has the potential to aid in the evaluation of right ventricular dysfunction, which is common in congenital heart disease and pulmonary hypertension. However, this annotation task remains difficult and time-consuming, as the TV moves rapidly and is barely distinguishable from the myocardium. This study presents TVnet, a novel dual-stage deep learning pipeline based on ResNet-50 and automated linear image transformation, able to automatically derive tricuspid annular plane systolic excursion. Stage 1 uses a trained network for coarse detection of the TV points; stage 2 uses these points to reorient the cine to a standardized size, crop, resolution, and heart orientation, and then accurately locates the TV points with a second trained network. The model was trained and evaluated on 4170 images from 140 patients with diverse cardiovascular pathologies. A baseline model without standardization achieved a Euclidean distance error of 4.0 ± 3.1 mm and a clinical-metric agreement of ICC = 0.87, whereas the standardized model reduced the error to 2.4 ± 1.7 mm and improved the agreement to ICC = 0.94, on par with the evaluated inter-observer variability of 2.9 ± 2.9 mm and ICC = 0.92. This novel dual-stage deep learning pipeline substantially improved the annotation accuracy compared to a baseline model, paving the way towards reliable right ventricular dysfunction assessment with MRI.
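
    A schematic of such a dual-stage pipeline, coarse point regression followed by a standardizing transform and a refinement network, could look like the sketch below. The ResNet-50 regression head, the two-point output, and the `standardize` helper are illustrative assumptions, not the published TVnet code.

```python
import torch
import torchvision

class PointRegressor(torch.nn.Module):
    """ResNet-50 backbone regressing (x, y) coordinates of n_points valve landmarks."""
    def __init__(self, n_points=2):
        super().__init__()
        self.n_points = n_points
        self.backbone = torchvision.models.resnet50(weights=None)
        self.backbone.fc = torch.nn.Linear(self.backbone.fc.in_features, 2 * n_points)

    def forward(self, x):                               # x: (N, 3, H, W)
        return self.backbone(x).view(-1, self.n_points, 2)

def track_valve(frame, coarse_net, fine_net, standardize):
    """Stage 1: coarse TV point detection on the raw cine frame.
    Stage 2: reorient/crop the frame around the coarse points (a linear transform)
    and refine the points with a second network, mapping them back afterwards."""
    coarse_pts = coarse_net(frame)
    frame_std, to_original = standardize(frame, coarse_pts)   # hypothetical helper
    fine_pts = fine_net(frame_std)
    return to_original(fine_pts)
```

    The benefit of the second stage is that the refinement network always sees the valve at a canonical scale and orientation, which is consistent with the reported drop in distance error after standardization.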

    MRI-TRUS Image Synthesis with Application to Image-Guided Prostate Intervention

    Accurate and robust fusion of pre-procedure magnetic resonance imaging (MRI) to intra-procedure trans-rectal ultrasound (TRUS) imaging is necessary for image-guided prostate cancer biopsy procedures. The current clinical standard for image fusion relies on non-rigid surface-based registration between semi-automatically segmented prostate surfaces in both the MRI and TRUS. This surface-based registration method does not take advantage of internal anatomical prostate structures, which have the potential to provide useful information for image registration. However, non-rigid, multi-modal intensity-based MRI-TRUS registration is challenging due to the highly non-linear intensity relationships between MRI and TRUS. In this paper, we present preliminary work using image synthesis to cast this problem into a mono-modal registration task, using a large database of over 100 clinical MRI-TRUS image pairs to learn a joint model of MRI-TRUS appearance. Thus, given an MRI, we use this learned joint appearance model to synthesize the patient's corresponding TRUS image appearance, with which we could potentially perform mono-modal intensity-based registration. We present preliminary results of this approach.
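
    One simple way to realize cross-modality synthesis of this kind, shown only as a hedged sketch, is a patch-based nearest-neighbour lookup in a paired MRI-TRUS dictionary; the patch representation, the single-neighbour lookup, and the function names are illustrative assumptions rather than the joint appearance model described in the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_dictionary(mri_patches, trus_centres):
    """mri_patches: (K, P) flattened MRI patches from co-registered training pairs.
    trus_centres: (K,) TRUS intensity at the centre voxel of each corresponding patch."""
    index = NearestNeighbors(n_neighbors=1).fit(mri_patches)
    return index, np.asarray(trus_centres)

def synthesize_trus(query_patches, index, trus_centres):
    """For each MRI patch of a new patient, copy the TRUS intensity of the most
    similar training patch, building a pseudo-TRUS image voxel by voxel."""
    _, idx = index.kneighbors(query_patches)
    return trus_centres[idx[:, 0]]
```

    Once a pseudo-TRUS image is available, the registration to the real intra-procedure TRUS can use a standard mono-modal similarity metric such as normalized cross-correlation instead of a multi-modal one.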

    Dual-Branch Squeeze-Fusion-Excitation Module for Cross-Modality Registration of Cardiac SPECT and CT

    Single-photon emission computed tomography (SPECT) is a widely applied imaging approach for diagnosis of coronary artery diseases. Attenuation maps (μ-maps) derived from computed tomography (CT) are utilized for attenuation correction (AC) to improve the diagnostic accuracy of cardiac SPECT. However, SPECT and CT are obtained sequentially in clinical practice, which potentially induces misregistration between the two scans. Convolutional neural networks (CNNs) are powerful tools for medical image registration. Previous CNN-based methods for cross-modality registration either directly concatenated the two input modalities as an early feature fusion or extracted image features using two separate CNN modules for a late fusion. These methods do not fully extract or fuse the cross-modality information. In addition, deep-learning-based rigid registration of cardiac SPECT and CT-derived μ-maps had not been investigated before. In this paper, we propose a Dual-Branch Squeeze-Fusion-Excitation (DuSFE) module for the registration of cardiac SPECT and CT-derived μ-maps. DuSFE fuses the knowledge from the two modalities to recalibrate both channel-wise and spatial features for each modality. DuSFE can be embedded at multiple convolutional layers to enable feature fusion at different spatial dimensions. Our studies using clinical data demonstrated that a network embedded with DuSFE generated substantially lower registration errors, and therefore more accurate AC SPECT images, than previous methods. Comment: 10 pages, 4 figures, accepted at MICCAI 202
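
    As a rough sketch of the kind of squeeze-and-excitation-style fusion such a module performs, the block below pools each modality's feature map, fuses the two channel descriptors, and recalibrates each branch channel-wise. The layer sizes, reduction ratio, and gating layout are illustrative assumptions and not the published DuSFE architecture.

```python
import torch
import torch.nn as nn

class DualBranchSEFusion(nn.Module):
    """SE-style dual-branch fusion: squeeze both modality feature maps,
    fuse their descriptors, and gate each branch's channels."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fuse = nn.Sequential(
            nn.Linear(2 * channels, 2 * channels // reduction),
            nn.ReLU(inplace=True),
        )
        self.gate_a = nn.Sequential(nn.Linear(2 * channels // reduction, channels), nn.Sigmoid())
        self.gate_b = nn.Sequential(nn.Linear(2 * channels // reduction, channels), nn.Sigmoid())

    def forward(self, feat_a, feat_b):                  # feat_a, feat_b: (N, C, H, W)
        n, c = feat_a.shape[:2]
        squeezed = torch.cat([self.pool(feat_a).view(n, c),
                              self.pool(feat_b).view(n, c)], dim=1)
        shared = self.fuse(squeezed)                    # fused cross-modality descriptor
        a = feat_a * self.gate_a(shared).view(n, c, 1, 1)
        b = feat_b * self.gate_b(shared).view(n, c, 1, 1)
        return a, b
```

    Because the block keeps the two branches separate while sharing the fused descriptor, it can be inserted at several encoder depths so that fusion happens at multiple spatial resolutions, as the abstract describes.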