
    A Novel Loss Function Incorporating Imaging Acquisition Physics for PET Attenuation Map Generation using Deep Learning

    In PET/CT imaging, CT is used for PET attenuation correction (AC). Mismatch between CT and PET due to patient body motion results in AC artifacts. In addition, artifacts caused by metal, beam hardening, and count starvation in the CT itself also introduce inaccurate AC for PET. Maximum-likelihood reconstruction of activity and attenuation (MLAA) was proposed to solve these issues by simultaneously reconstructing the tracer activity (λ-MLAA) and the attenuation map (μ-MLAA) from the PET raw data alone. However, μ-MLAA suffers from high noise, and λ-MLAA suffers from large bias, compared to reconstruction using the CT-based attenuation map (μ-CT). Recently, a convolutional neural network (CNN) was applied to predict the CT attenuation map (μ-CNN) from λ-MLAA and μ-MLAA, trained with an image-domain loss (IM-loss) between μ-CNN and the ground-truth μ-CT. However, the IM-loss does not directly measure AC errors according to PET attenuation physics, where the line-integral projection of the attenuation map (μ) along the path of the two annihilation photons, rather than μ itself, is used for AC. A network trained with the IM-loss may therefore yield suboptimal performance in μ generation. Here, we propose a novel line-integral projection loss (LIP-loss) function that incorporates the PET attenuation physics into μ generation. Eighty training and twenty testing datasets of whole-body 18F-FDG PET with paired ground-truth μ-CT were used. Quantitative evaluations showed that the model trained with the additional LIP-loss significantly outperformed the model trained solely with the IM-loss.
    Comment: Accepted at MICCAI 2019
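The core idea of the LIP-loss is easy to express in code: compare the predicted and ground-truth attenuation maps in the projection domain, where AC actually operates, rather than only in the image domain. Below is a minimal sketch using skimage's parallel-beam Radon transform as a stand-in forward projector; the function names, the L1 norms, and the weighting factor `lam` are assumptions, not the paper's exact recipe.

```python
import numpy as np
from skimage.transform import radon

def lip_loss(mu_pred, mu_ct, angles=None):
    """Line-integral projection (LIP) loss: compare forward projections
    (sinograms) of predicted and ground-truth attenuation maps, since PET AC
    uses the line integral of mu along each response line, not mu itself."""
    if angles is None:
        angles = np.linspace(0.0, 180.0, 60, endpoint=False)
    proj_pred = radon(mu_pred, theta=angles)  # sinogram of predicted mu
    proj_ct = radon(mu_ct, theta=angles)      # sinogram of ground-truth mu-CT
    return np.mean(np.abs(proj_pred - proj_ct))

def total_loss(mu_pred, mu_ct, lam=1.0):
    """Image-domain loss plus weighted LIP-loss, mirroring training with the
    'additional' LIP term described in the abstract."""
    im_loss = np.mean(np.abs(mu_pred - mu_ct))
    return im_loss + lam * lip_loss(mu_pred, mu_ct)
```

In an actual training loop the projector would need to be differentiable (implemented in the deep learning framework) so that gradients can flow back into the CNN; the numpy version above only illustrates the loss itself.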

    Deep MR to CT Synthesis for PET/MR Attenuation Correction

    Positron Emission Tomography - Magnetic Resonance (PET/MR) imaging combines the functional information from PET with the flexibility of MR imaging. It is essential, however, to correct for photon attenuation when reconstructing PET images, which is challenging for PET/MR as neither modality directly images tissue attenuation properties. Classical MR-based computed tomography (CT) synthesis methods, such as multi-atlas propagation, have been the method of choice for PET attenuation correction (AC); however, these methods are slow and handle anatomical abnormalities poorly. To overcome this limitation, this thesis explores the rising field of artificial intelligence in order to develop novel methods for PET/MR AC. Deep learning-based synthesis methods such as the standard U-Net architecture are not very stable, accurate, or robust to small variations in image appearance. Thus, the first proposed MR-to-CT synthesis method deploys a boosting strategy, where multiple weak predictors build a strong predictor (see the sketch below), providing a significant improvement in CT and PET reconstruction accuracy. Standard deep learning-based methods, as well as more advanced methods like the first proposed one, show issues in the presence of very complex imaging environments and large images such as whole-body images. The second proposed method learns the image context between whole-body MR and CT images across multiple resolutions while simultaneously modelling uncertainty. Lastly, as the purpose of synthesizing a CT is to better reconstruct PET data, the use of CT-based loss functions is questioned within this thesis. Such losses fail to recognize the main objective of MR-based AC, which is to generate a synthetic CT that, when used for PET AC, makes the reconstructed PET as close as possible to the gold-standard PET. The third proposed method introduces a novel PET-based loss that minimizes CT residuals with respect to the PET reconstruction.
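The boosting idea in the first proposed method can be sketched compactly: each weak predictor is fit to the residual that the ensemble so far leaves behind, and the strong predictor is their sum. The sketch below uses a toy convolutional network and plain L1 regression as stand-ins; the thesis's actual weak learners, losses, and training schedule are not specified here.

```python
import torch
import torch.nn as nn

class WeakPredictor(nn.Module):
    """Toy stand-in for a weak MR-to-CT predictor (e.g., a small U-Net)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        return self.net(x)

def fit_boosted(mr, ct, n_weak=3, epochs=50):
    """Greedy boosting: each weak predictor fits the current residual,
    so the strong predictor is the sum of the weak ones."""
    weak, pred = [], torch.zeros_like(ct)
    for _ in range(n_weak):
        f = WeakPredictor()
        opt = torch.optim.Adam(f.parameters(), lr=1e-3)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.l1_loss(pred + f(mr), ct)
            loss.backward()
            opt.step()
        weak.append(f)
        with torch.no_grad():
            pred = pred + f(mr)  # update the ensemble prediction
    return weak
```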

    MedGAN: Medical Image Translation using GANs

    Image-to-image translation is considered a new frontier in the field of medical image analysis, with numerous potential applications. However, a large portion of recent approaches offers individualized solutions based on specialized, task-specific architectures or requires refinement through non-end-to-end training. In this paper, we propose a new framework, named MedGAN, for medical image-to-image translation that operates on the image level in an end-to-end manner. MedGAN builds upon recent advances in generative adversarial networks (GANs) by merging the adversarial framework with a new combination of non-adversarial losses. We utilize the discriminator network as a trainable feature extractor that penalizes the discrepancy between the translated medical images and the desired modalities. Moreover, style-transfer losses are utilized to match the textures and fine structures of the desired target images to the translated images. Additionally, we present a new generator architecture, titled CasNet, which enhances the sharpness of the translated medical outputs through progressive refinement via encoder-decoder pairs. Without any application-specific modifications, we apply MedGAN to three different tasks: PET-CT translation, correction of MR motion artefacts, and PET image denoising. Perceptual analysis by radiologists and quantitative evaluations illustrate that MedGAN outperforms other existing translation approaches.
    Comment: 16 pages, 8 figures
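Two of MedGAN's non-adversarial ingredients can be illustrated briefly: treating the discriminator as a feature extractor (penalizing feature discrepancies between translated and target images) and style-transfer losses that match textures via Gram matrices. The sketch below assumes the discriminator's intermediate feature maps are available as lists of tensors; the layer choices, norms, and weights are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def gram(feat):
    """Gram matrix of a feature map; style-transfer losses compare these
    to match textures and fine structures."""
    n, c, h, w = feat.shape
    f = feat.view(n, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def non_adversarial_loss(feats_fake, feats_real, w_feat=1.0, w_style=1.0):
    """Perceptual (feature-matching) term plus style (Gram) term, computed
    over corresponding discriminator feature maps."""
    loss = 0.0
    for ff, fr in zip(feats_fake, feats_real):
        loss = loss + w_feat * F.l1_loss(ff, fr.detach())          # content
        loss = loss + w_style * F.l1_loss(gram(ff), gram(fr).detach())  # texture
    return loss
```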

    Quantitative Image Reconstruction Methods for Low Signal-To-Noise Ratio Emission Tomography

    Novel internal radionuclide therapies such as radioembolization (RE) with Y-90 loaded microspheres and targeted therapies labeled with Lu-177 offer a unique promise for personalized treatment of cancer, because imaging-based pre-treatment dosimetry assessment can be used to determine administered activities that deliver tumoricidal absorbed doses to lesions while sparing critical organs. At present, however, such therapies are administered with fixed or empiric activities and little or no dosimetry planning. The main reasons for the lack of dosimetry-guided personalized treatment in radionuclide therapies are the challenges and impracticality of quantitative emission tomography imaging and the lack of well-established dose-effect relationships, potentially due to inaccuracies in quantitative imaging. While radionuclides for therapy have been chosen for their attractive characteristics for cancer treatment, their suitability for emission tomography imaging is less than ideal. For example, imaging of the almost pure beta emitter Y-90 involves SPECT via bremsstrahlung photons, which have a low and tissue-dependent yield, or PET via a very low-abundance positron emission (32 out of 1 million decays) that leads to a very low true-coincidence rate in the presence of high singles rates from bremsstrahlung photons. Lu-177 emits gamma-rays suitable for SPECT, but they are low in intensity (113 keV: 6%; 208 keV: 10%), and only the higher-energy emission is generally used because of the large downscatter component associated with the lower-energy gamma-ray. The main aim of the research in this thesis is to improve the accuracy of quantitative PET and SPECT imaging of therapy radionuclides for dosimetry applications. Although PET is generally considered superior to SPECT for quantitative imaging, PET imaging of 'non-pure' positron emitters can be complex. We focus on quantitative SPECT and PET imaging of two widely used therapy radionuclides, Lu-177 and Y-90, both of which pose challenges associated with low count rates. The long-term goal of our work is to apply the methods we develop to patient imaging for dosimetry-based planning, to optimize the treatment either before therapy or after each cycle of therapy. For Y-90 PET/CT, we developed an image reconstruction formulation that relaxes the conventional image-domain nonnegativity constraint by instead imposing a positivity constraint on the predicted measurement mean, which demonstrated improved quantification in simulated patient studies. For Y-90 SPECT/CT, we propose a new SPECT/CT reconstruction formulation that includes tissue-dependent probabilities for bremsstrahlung generation in the system matrix. In addition to the above-mentioned quantitative image reconstruction methods developed specifically for each modality in Y-90 imaging, we propose a general image reconstruction method using a trained regularizer for low-count PET and SPECT, which we test on Y-90 and Lu-177 imaging. Our approach starts with the raw projection data and utilizes trained networks in the iterative image formation process. Specifically, we take a mathematics-based approach in which convolutional neural networks are included within an iterative reconstruction process arising from an optimization problem (a simplified sketch of this idea follows below). We further extend the trained regularization method by using anatomical side information: the trained regularizer incorporates the anatomical information using a segmentation mask generated by a trained segmentation network whose input is the co-registered CT image.
    Overall, the emission tomography methods proposed in this work are expected to enhance low-count PET and SPECT imaging of therapy radionuclides in patient studies, which will have value in establishing dose-response relationships and in developing imaging-based, dosimetry-guided treatment planning strategies in the future.
    PhD thesis, Electrical and Computer Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155171/1/hongki_1.pd
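One way to picture "trained networks inside the iterative image formation process" is a regularized MLEM loop in which each data-consistency update is followed by a pull toward a trained denoiser's output. This plug-and-play-style sketch is a simplification of the thesis's optimization-derived scheme; the dense system matrix A, the blending weight beta, and the denoiser interface are assumptions.

```python
import numpy as np

def mlem_trained_regularizer(y, A, denoiser, n_iter=20, beta=0.1):
    """Toy regularized MLEM for emission tomography.
    y: measured projection data (n_bins,)
    A: system matrix (n_bins x n_voxels)
    denoiser: trained network wrapped as a callable, image -> image."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])       # sensitivity image
    for _ in range(n_iter):
        ybar = A @ x + 1e-12               # predicted measurement mean
        x = x * (A.T @ (y / ybar)) / sens  # multiplicative MLEM update
        x = (1.0 - beta) * x + beta * denoiser(x)  # step toward trained prior
        x = np.maximum(x, 0.0)             # keep the activity nonnegative
    return x
```

Note that the Y-90 PET formulation described above deliberately relaxes the image-domain nonnegativity constraint (constraining the predicted mean, ybar, to be positive instead); the sketch keeps the conventional constraint for simplicity.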

    What scans we will read: imaging instrumentation trends in clinical oncology

    Oncological diseases account for a significant portion of the burden on public healthcare systems, with associated costs driven primarily by complex and long-lasting therapies. Through the visualization of patient-specific morphology and functional-molecular pathways, cancerous tissue can be detected and characterized non-invasively, so as to provide referring oncologists with essential information to support therapy management decisions. Following the onset of stand-alone anatomical and functional imaging, we witness a push towards integrating molecular image information through various methods, including anato-metabolic imaging (e.g., PET/CT), advanced MRI, and optical or ultrasound imaging. This perspective paper highlights a number of key technological and methodological advances in imaging instrumentation related to anatomical, functional, and molecular medicine and to hybrid imaging, understood here as the hardware-based combination of complementary anatomical and molecular imaging. These include novel detector technologies for the ionizing radiation used in CT and nuclear medicine imaging, and novel system developments in MRI as well as in optical and opto-acoustic imaging. We also highlight new data processing methods for improved non-invasive tissue characterization. Following a general introduction to the role of imaging in oncology patient management, we introduce imaging methods with well-defined clinical applications and potential for clinical translation. For each modality, we report first on the status quo and then point to perceived technological and methodological advances in a subsequent "status go" section. Considering the breadth and dynamics of these developments, this perspective ends with a critical reflection on where the authors, the majority of them imaging experts with a background in physics and engineering, believe imaging methods will be a few years from now. Overall, methodological and technological medical imaging advances are geared towards increased image contrast, the derivation of reproducible quantitative parameters, an increase in volume sensitivity, and a reduction in overall examination time. To ensure full translation to the clinic, this progress in technologies and instrumentation must be complemented by progress in relevant acquisition and image-processing protocols and improved data analysis. To this end, we should accept diagnostic images as "data" and, through the wider adoption of advanced analysis, including machine learning approaches and a "big data" concept, move to the next stage of non-invasive tumor phenotyping. The scans we will be reading 10 years from now will likely be composed of highly diverse multi-dimensional data from multiple sources, which mandates the use of advanced and interactive visualization and analysis platforms powered by Artificial Intelligence (AI) for real-time data handling by cross-specialty clinical experts with domain knowledge that will need to go beyond that of plain imaging.

    Full-dose PET Synthesis from Low-dose PET Using High-efficiency Diffusion Denoising Probabilistic Model

    To reduce the risks associated with ionizing radiation, a reduction of radiation exposure in PET imaging is needed. However, this leads to a detrimental effect on image contrast and quantification. High-quality PET images synthesized from low-dose data offer a solution for reducing radiation exposure. We introduce a diffusion-model-based approach for estimating full-dose PET images from low-dose ones: the PET Consistency Model (PET-CM), which yields synthesis quality comparable to state-of-the-art diffusion-based models but with greater efficiency. It consists of two processes: a forward process that adds Gaussian noise to a full-dose PET image at multiple timesteps, and a reverse process that employs a PET Shifted-window Vision Transformer (PET-VIT) network to learn the denoising procedure conditioned on the corresponding low-dose PET. In PET-CM, the reverse process learns a consistency function that denoises Gaussian noise directly into a clean full-dose PET image (a simplified sketch follows below). We evaluated PET-CM on generating full-dose images using only 1/8 and 1/4 of the standard PET dose. Comparing 1/8-dose to full-dose images, PET-CM demonstrated impressive performance, with a normalized mean absolute error (NMAE) of 1.233+/-0.131%, a peak signal-to-noise ratio (PSNR) of 33.915+/-0.933 dB, a structural similarity index (SSIM) of 0.964+/-0.009, and a normalized cross-correlation (NCC) of 0.968+/-0.011, with an average generation time of 62 seconds per patient. This is a significant improvement over the state-of-the-art diffusion-based model, with PET-CM reaching this result 12x faster. In the 1/4-dose to full-dose experiments, PET-CM is also competitive, achieving an NMAE of 1.058+/-0.092%, a PSNR of 35.548+/-0.805 dB, an SSIM of 0.978+/-0.005, and an NCC of 0.981+/-0.007. The results indicate promising low-dose PET image quality improvements for clinical applications.
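The two processes described above can be sketched in a few lines: a forward process that noises a full-dose image according to a schedule, and a consistency-style network that maps the noised image, conditioned on the low-dose PET, straight back to a clean image. The cosine schedule, conditioning by channel concatenation, and direct clean-image regression below are simplifying assumptions; a full consistency model additionally enforces agreement between adjacent timesteps, which is omitted here.

```python
import torch
import torch.nn.functional as F

def add_noise(x0, t, T=1000):
    """Forward process: blend the clean full-dose PET with Gaussian noise at
    timestep t (assumed cosine schedule)."""
    alpha_bar = torch.cos(0.5 * torch.pi * t / T) ** 2
    return alpha_bar.sqrt() * x0 + (1.0 - alpha_bar).sqrt() * torch.randn_like(x0)

def train_step(model, x_full, x_low, opt, T=1000):
    """One training step: the network sees a noised full-dose image plus the
    low-dose PET (channel concatenation) and regresses the clean image."""
    b = x_full.shape[0]
    t = torch.randint(1, T, (b,), device=x_full.device).float()
    x_t = add_noise(x_full, t.view(-1, 1, 1, 1), T)
    pred = model(torch.cat([x_t, x_low], dim=1), t)  # condition on low dose
    loss = F.l1_loss(pred, x_full)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```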