A Novel Loss Function Incorporating Imaging Acquisition Physics for PET Attenuation Map Generation using Deep Learning
In PET/CT imaging, CT is used for PET attenuation correction (AC). Mismatch
between CT and PET due to patient body motion results in AC artifacts. In
addition, artifacts caused by metal, beam hardening, and count starving in CT
itself also introduce inaccurate AC for PET. Maximum likelihood reconstruction
of activity and attenuation (MLAA) was proposed to solve those issues by
simultaneously reconstructing the tracer activity (λ-MLAA) and the attenuation
map (μ-MLAA) based on the PET raw data only. However, λ-MLAA suffers
from high noise and μ-MLAA suffers from large bias as compared to the
reconstruction using the CT-based attenuation map (μ-CT). Recently, a
convolutional neural network (CNN) was applied to predict the CT attenuation
map (μ-CNN) from λ-MLAA and μ-MLAA, in which an image-domain
loss (IM-loss) function between the μ-CNN and the ground-truth μ-CT was
used. However, IM-loss does not directly measure the AC errors according to the
PET attenuation physics, where the line-integral projection of the attenuation
map (μ) along the path of the two annihilation events, instead of μ
itself, is used for AC. Therefore, a network trained with the IM-loss may yield
suboptimal performance in μ generation. Here, we propose a novel
line-integral projection loss (LIP-loss) function that incorporates the PET
attenuation physics for μ generation. Eighty training and twenty testing
datasets of whole-body 18F-FDG PET and paired ground-truth μ-CT were used.
Quantitative evaluations showed that the model trained with the additional
LIP-loss significantly outperformed the model trained solely on the IM-loss
function.
Comment: Accepted at MICCAI 201
Deep MR to CT Synthesis for PET/MR Attenuation Correction
Positron Emission Tomography - Magnetic Resonance (PET/MR) imaging combines the functional information from PET with the flexibility of MR imaging. It is essential, however, to correct for photon attenuation when reconstructing PET images, which is challenging for PET/MR as neither modality directly images tissue attenuation properties. Classical MR-based computed tomography (CT) synthesis methods, such as multi-atlas propagation, have been the method of choice for PET attenuation correction (AC); however, these methods are slow and handle anatomical abnormalities poorly. To overcome this limitation, this thesis explores the rising field of artificial intelligence in order to develop novel methods for PET/MR AC. Deep learning-based synthesis methods such as the standard U-Net architecture are not very stable, accurate, or robust to small variations in image appearance. Thus, the first proposed MR to CT synthesis method deploys a boosting strategy, where multiple weak predictors build a strong predictor, providing a significant improvement in CT and PET reconstruction accuracy. Standard deep learning-based methods, as well as more advanced methods like the first proposed method, show issues in the presence of very complex imaging environments and large images such as whole-body images. The second proposed method learns the image context between whole-body MRs and CTs through multiple resolutions while simultaneously modelling uncertainty. Lastly, as the purpose of synthesizing a CT is to better reconstruct PET data, the use of CT-based loss functions is questioned within this thesis. Such losses fail to recognize the main objective of MR-based AC, which is to generate a synthetic CT that, when used for PET AC, makes the reconstructed PET as close as possible to the gold standard PET. The third proposed method introduces a novel PET-based loss that minimizes CT residuals with respect to the PET reconstruction.
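The boosting strategy of the first proposed method, in which multiple weak predictors combine into a strong one, can be sketched by residual boosting: each weak learner fits what the current ensemble still gets wrong. The linear least-squares learners below are simplified stand-ins for the thesis's CNN predictors, and all names are illustrative.

```python
import numpy as np

class ResidualBoostedSynthesis:
    """Illustrative boosting: each weak predictor fits the residual left by
    the current ensemble, so their shrunken sum forms a stronger MR-to-CT
    mapping. Weak learners are linear maps standing in for CNNs."""

    def __init__(self, n_rounds=3, lr=0.5):
        self.n_rounds, self.lr, self.weights = n_rounds, lr, []

    def _fit_weak(self, X, r):
        # One weak predictor: least-squares map from MR features to the residual.
        W, *_ = np.linalg.lstsq(X, r, rcond=None)
        return W

    def fit(self, X, y):
        pred = np.zeros_like(y)
        for _ in range(self.n_rounds):
            W = self._fit_weak(X, y - pred)   # fit the current residual
            self.weights.append(W)
            pred += self.lr * X @ W           # shrink and accumulate
        return self

    def predict(self, X):
        return sum(self.lr * X @ W for W in self.weights)
```

The learning-rate shrinkage is the usual boosting safeguard: each weak predictor only partially corrects the residual, so no single learner dominates the ensemble.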
MedGAN: Medical Image Translation using GANs
Image-to-image translation is considered a new frontier in the field of
medical image analysis, with numerous potential applications. However, a large
portion of recent approaches offers individualized solutions based on
specialized task-specific architectures or require refinement through
non-end-to-end training. In this paper, we propose a new framework, named
MedGAN, for medical image-to-image translation which operates on the image
level in an end-to-end manner. MedGAN builds upon recent advances in the field
of generative adversarial networks (GANs) by merging the adversarial framework
with a new combination of non-adversarial losses. We utilize a discriminator
network as a trainable feature extractor which penalizes the discrepancy
between the translated medical images and the desired modalities. Moreover,
style-transfer losses are utilized to match the textures and fine-structures of
the desired target images to the translated images. Additionally, we present a
new generator architecture, titled CasNet, which enhances the sharpness of the
translated medical outputs through progressive refinement via encoder-decoder
pairs. Without any application-specific modifications, we apply MedGAN on three
different tasks: PET-CT translation, correction of MR motion artefacts and PET
image denoising. Perceptual analysis by radiologists and quantitative
evaluations illustrate that MedGAN outperforms other existing translation
approaches.
Comment: 16 pages, 8 figure
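Two of the non-adversarial ingredients described above can be sketched in NumPy: a feature-matching (perceptual) term computed on discriminator activations, and a style-transfer term that matches Gram matrices of feature maps. This is a minimal illustration with assumed names; the actual MedGAN objective combines such terms with further losses and weights not shown here.

```python
import numpy as np

def gram_matrix(feats):
    """Gram matrix of a (C, H, W) feature map: channel-channel correlations
    that capture texture ('style') independently of spatial layout."""
    C = feats.shape[0]
    F = feats.reshape(C, -1)
    return F @ F.T / F.shape[1]

def style_loss(feats_translated, feats_target):
    """Match textures and fine structures by matching Gram matrices."""
    g_t = gram_matrix(feats_translated)
    g_r = gram_matrix(feats_target)
    return np.mean((g_t - g_r) ** 2)

def perceptual_loss(feats_translated, feats_target):
    """Feature matching: penalize the discrepancy between feature activations
    of the translated image and the desired target modality."""
    return np.mean(np.abs(feats_translated - feats_target))
```

In MedGAN the features come from the discriminator itself, which acts as a trainable feature extractor rather than a fixed pretrained network.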
Deep learning assisted MRI guided attenuation correction in PET
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
Positron emission tomography (PET) is a unique imaging modality that provides physiological
and functional details of the tissue at the molecular level. However, the acquired PET images
suffer from limitations such as photon attenuation. PET attenuation correction is an essential
step to obtain the full potential of PET quantification. With the wide use of hybrid PET/MR
scanners, magnetic resonance (MR) images are used to address the problem of PET attenuation
correction. Segmentation of MR images is a simple and robust approach to creating pseudo
computed tomography (CT) images, which are used to generate attenuation coefficient maps to
correct the PET attenuation. Recently, deep learning has been proposed as a promising
technique for efficient segmentation of MR and other medical images.
In this research work, deep learning guided segmentation approaches have been proposed
to enhance the bone class segmentation of MR brain images in order to generate accurate
pseudo-CT images. The first approach introduces the combination of handcrafted features
with deep learning features to enrich the feature set. Multiresolution analysis techniques
that generate multiscale and multidirectional coefficients of an image, such as the contourlet
and shearlet transforms, are applied and combined with deep convolutional neural network
(CNN) features. Different experiments have been conducted to investigate the number of
selected coefficients and the insertion location of the handcrafted features.
The second approach aims at reducing the segmentation algorithm's complexity while
maintaining the segmentation performance. An attention-based convolutional encoder-decoder
network has been proposed to adaptively recalibrate the deep network features. This
attention-based network consists of two different squeeze-and-excitation blocks that excite
the features spatially and channel-wise. The two blocks are combined sequentially to decrease
the number of network parameters and reduce the model complexity. The third approach
focuses on the application of transfer learning across different MR sequences, such as
T1-weighted (T1-w) and T2-weighted (T2-w) images. A pretrained model with T1-w MR
sequences is fine-tuned to perform the segmentation of T2-w images. Multiple fine-tuning
approaches and experiments have been conducted to identify the best fine-tuning mechanism
that can build an efficient segmentation model for both T1-w and T2-w segmentation.
Clinical datasets of fifty patients with different conditions and diagnoses have been used to
carry out an objective evaluation of the segmentation performance of the three proposed
methods. The first and second approaches have been validated against other studies in the
literature that applied deep network based segmentation techniques to perform MR-based
attenuation correction for PET images. The proposed methods have shown an enhancement
in bone segmentation, with the Dice similarity coefficient (DSC) increasing from 0.6179 to
0.6567 using an ensemble of CNNs, an improvement of 6.3%. The proposed excitation-based
CNN has decreased the model complexity by reducing the number of trainable parameters by
more than 46%, so that fewer computing resources are required to train the model. The
proposed hybrid transfer learning method has shown its superiority in building a
multi-sequence (T1-w and T2-w) segmentation approach compared to other applied transfer
learning methods, especially for the bone class, where the DSC increased from 0.3841 to
0.5393. Moreover, the hybrid transfer learning approach requires less computing time than
transfer learning using open or conservative fine-tuning.
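The Dice similarity coefficient (DSC) quoted above measures overlap between predicted and reference bone masks; a minimal sketch of the metric (the `eps` guard against empty masks is an implementation detail, not from the thesis):

```python
import numpy as np

def dice_similarity(pred_mask, true_mask, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    inter = np.logical_and(pred, true).sum()
    return 2.0 * inter / (pred.sum() + true.sum() + eps)
```

On this scale, the reported bone-class improvement from 0.3841 to 0.5393 corresponds to substantially more of the predicted bone voxels coinciding with the reference segmentation.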
Quantitative Image Reconstruction Methods for Low Signal-To-Noise Ratio Emission Tomography
Novel internal radionuclide therapies such as radioembolization (RE) with Y-90 loaded microspheres and targeted therapies labeled with Lu-177 offer a unique promise for personalized treatment of cancer, because imaging-based pre-treatment dosimetry assessment can be used to determine administered activities that deliver tumoricidal absorbed doses to lesions while sparing critical organs. At present, however, such therapies are administered with fixed or empiric activities with little or no dosimetry planning. The main reasons for the lack of dosimetry-guided personalized treatment in radionuclide therapies are the challenges and impracticality of quantitative emission tomography imaging and the lack of well-established dose-effect relationships, potentially due to inaccuracies in quantitative imaging. While radionuclides for therapy have been chosen for their attractive characteristics for cancer treatment, their suitability for emission tomography imaging is less than ideal. For example, imaging of the almost pure beta emitter Y-90 involves SPECT via bremsstrahlung photons that have a low and tissue-dependent yield, or PET via a very low-abundance positron emission (32 out of 1 million decays) that leads to a very low true coincidence rate in the presence of high singles events from bremsstrahlung photons. Lu-177 emits gamma-rays suitable for SPECT, but they are low in intensity (113 keV: 6%, 208 keV: 10%), and only the higher-energy emission is generally used because of the large downscatter component associated with the lower-energy gamma-ray.
The main aim of the research in this thesis is to improve the accuracy of quantitative PET and SPECT imaging of therapy radionuclides for dosimetry applications. Although PET is generally considered superior to SPECT for quantitative imaging, PET imaging of `non-pure' positron emitters can be complex. We focus on quantitative SPECT and PET imaging of two widely used therapy radionuclides, Lu-177 and Y-90, that have challenges associated with low count-rates. The long-term goal of our work is to apply the methods we develop to patient imaging for dosimetry-based planning to optimize the treatment either before therapy or after each cycle of therapy. For Y-90 PET/CT, we developed an image reconstruction formulation that relaxes the conventional image-domain nonnegativity constraint by instead imposing a positivity constraint on the predicted measurement mean, which demonstrated improved quantification in simulated patient studies. For Y-90 SPECT/CT, we propose a new SPECT/CT reconstruction formulation including tissue-dependent probabilities for bremsstrahlung generation in the system matrix.
In addition to the above-mentioned quantitative image reconstruction methods specifically developed for each modality in Y-90 imaging, we propose a general image reconstruction method using a trained regularizer for low-count PET and SPECT that we test on Y-90 and Lu-177 imaging. Our approach starts with the raw projection data and utilizes trained networks in the iterative image formation process. Specifically, we take a mathematics-based approach where we include convolutional neural networks within the iterative reconstruction process arising from an optimization problem. We further extend the trained regularization method by using anatomical side information. The trained regularizer incorporates the anatomical information using the segmentation mask generated by a trained segmentation network whose input is the co-registered CT image. Overall, the emission tomography methods we have proposed in this work are expected to enhance low-count PET and SPECT imaging of therapy radionuclides in patient studies, which will have value in establishing dose-response relationships and developing imaging-based dosimetry-guided treatment planning strategies in the future.
PhD, Electrical and Computer Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
https://deepblue.lib.umich.edu/bitstream/2027.42/155171/1/hongki_1.pd
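For context, the baseline these reconstruction formulations build on is the MLEM algorithm, whose multiplicative update enforces the conventional image-domain nonnegativity constraint that the proposed Y-90 PET method relaxes. A minimal sketch with an illustrative toy system matrix (not the thesis's actual implementation):

```python
import numpy as np

def mlem(A, y, n_iters=50):
    """Basic MLEM for emission tomography. A maps activity x to expected
    projections; y holds measured counts. The multiplicative update keeps x
    nonnegative by construction, which is the image-domain constraint that the
    thesis replaces with a positivity constraint on the predicted mean A @ x."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # sensitivity image, A^T 1
    for _ in range(n_iters):
        ybar = A @ x                         # predicted measurement mean
        ratio = y / np.maximum(ybar, 1e-12)  # compare data to prediction
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

At very low counts, the hard nonnegativity of this update can bias cold regions upward, which is one motivation for relaxing the constraint in the measurement-mean domain instead.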
What scans we will read: imaging instrumentation trends in clinical oncology
Oncological diseases account for a significant portion of the burden on public healthcare systems with associated
costs driven primarily by complex and long-lasting therapies. Through the visualization of patient-specific
morphology and functional-molecular pathways, cancerous tissue can be detected and characterized
non-invasively, so as to provide referring oncologists with essential information to support therapy management
decisions. Following the onset of stand-alone anatomical and functional imaging, we witness a push towards
integrating molecular image information through various methods, including anato-metabolic imaging
(e.g., PET/CT), advanced MRI, and optical or ultrasound imaging.
This perspective paper highlights a number of key technological and methodological advances in imaging
instrumentation related to anatomical, functional, molecular medicine and hybrid imaging, that is understood as
the hardware-based combination of complementary anatomical and molecular imaging. These include novel
detector technologies for ionizing radiation used in CT and nuclear medicine imaging, and novel system
developments in MRI and optical as well as opto-acoustic imaging. We will also highlight new data processing
methods for improved non-invasive tissue characterization. Following a general introduction to the role of imaging
in oncology patient management we introduce imaging methods with well-defined clinical applications and
potential for clinical translation. For each modality, we report first on the status quo and point to perceived
technological and methodological advances in a subsequent "status go" section. Considering the breadth and
dynamics of these developments, this perspective ends with a critical reflection on where the authors, with the
majority of them being imaging experts with a background in physics and engineering, believe imaging methods
will be in a few years from now.
Overall, methodological and technological medical imaging advances are geared towards increased image contrast,
the derivation of reproducible quantitative parameters, an increase in volume sensitivity and a reduction in overall
examination time. To ensure full translation to the clinic, this progress in technologies and instrumentation is
complemented by progress in relevant acquisition and image-processing protocols and improved data analysis. To
this end, we should accept diagnostic images as “data”, and – through the wider adoption of advanced analysis,
including machine learning approaches and a “big data” concept – move to the next stage of non-invasive tumor
phenotyping. The scans we will be reading in 10 years from now will likely be composed of highly diverse
multi-dimensional data from multiple sources, which mandate the use of advanced and interactive visualization and
analysis platforms powered by Artificial Intelligence (AI) for real-time data handling by cross-specialty clinical experts
with a domain knowledge that will need to go beyond that of plain imaging.
Full-dose PET Synthesis from Low-dose PET Using High-efficiency Diffusion Denoising Probabilistic Model
To reduce the risks associated with ionizing radiation, a reduction of
radiation exposure in PET imaging is needed. However, this leads to a
detrimental effect on image contrast and quantification. High-quality PET
images synthesized from low-dose data offer a solution to reduce radiation
exposure. We introduce a diffusion-model-based approach for estimating
full-dose PET images from low-dose ones: the PET Consistency Model (PET-CM),
which yields synthesis quality comparable to state-of-the-art diffusion-based
models, but with greater efficiency. There are two steps: a forward
process that adds Gaussian noise to a full-dose PET image at multiple
timesteps, and a reverse diffusion process that employs a PET Shifted-window
Vision Transformer (PET-VIT) network to learn the denoising procedure
conditioned on the corresponding low-dose PET. In PET-CM, the reverse process
learns a consistency function for direct denoising of Gaussian noise into a
clean full-dose PET image. We evaluated PET-CM in generating full-dose images
using only 1/8 and 1/4 of the standard PET dose. Comparing 1/8 dose to
full-dose images, PET-CM demonstrated impressive performance, with a normalized
mean absolute error (NMAE) of 1.233+/-0.131%, peak signal-to-noise ratio (PSNR)
of 33.915+/-0.933 dB, structural similarity index (SSIM) of 0.964+/-0.009, and
normalized cross-correlation (NCC) of 0.968+/-0.011, with an average generation
time of 62 seconds per patient. This is a significant improvement over the
state-of-the-art diffusion-based model, with PET-CM reaching this result 12x
faster. In the 1/4 dose to full-dose experiments, PET-CM is also competitive,
achieving an NMAE of 1.058+/-0.092%, PSNR of 35.548+/-0.805 dB, SSIM of
0.978+/-0.005, and NCC of 0.981+/-0.007. The results indicate promising
low-dose PET image quality improvements for clinical applications.
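The NMAE and PSNR figures quoted above can be computed as sketched below; note that normalization conventions (reference range versus reference maximum) vary across papers, so this is an assumed convention rather than necessarily the one used in the study.

```python
import numpy as np

def nmae(pred, ref):
    """Normalized mean absolute error in percent, normalized here by the
    reference image's dynamic range (an assumed convention)."""
    return 100.0 * np.mean(np.abs(pred - ref)) / (ref.max() - ref.min())

def psnr(pred, ref):
    """Peak signal-to-noise ratio in dB, with the peak taken as the
    reference dynamic range (an assumed convention)."""
    mse = np.mean((pred - ref) ** 2)
    peak = ref.max() - ref.min()
    return 10.0 * np.log10(peak ** 2 / mse)
```

Because both metrics depend on the chosen normalization, reported values are only comparable across methods evaluated with the same convention on the same data.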