
    LoMAE: Low-level Vision Masked Autoencoders for Low-dose CT Denoising

    Low-dose computed tomography (LDCT) offers reduced X-ray radiation exposure but at the cost of compromised image quality, characterized by increased noise and artifacts. Recently, transformer models have emerged as a promising avenue to enhance LDCT image quality. However, the success of such models relies on a large amount of paired noisy and clean images, which are often scarce in clinical settings. In the fields of computer vision and natural language processing, masked autoencoders (MAE) have been recognized as an effective label-free self-pretraining method for transformers, due to their exceptional feature representation ability. However, the original pretraining and fine-tuning design fails to work in low-level vision tasks like denoising. In response to this challenge, we redesign the classical encoder-decoder learning model and develop a simple yet effective low-level vision MAE, referred to as LoMAE, tailored to the LDCT denoising problem. Moreover, we introduce an MAE-GradCAM method to shed light on the latent learning mechanisms of the MAE/LoMAE. Additionally, we explore LoMAE's robustness and generalizability across a variety of noise levels. Experimental results show that the proposed LoMAE can enhance the transformer's denoising performance and greatly relieve the dependence on ground-truth clean data. It also demonstrates remarkable robustness and generalizability over a spectrum of noise levels.
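    To make the label-free self-pretraining idea above concrete, the sketch below shows generic MAE-style pretraining of a small image transformer on unlabeled slices: random patches are masked, only the visible patches are encoded, and the reconstruction loss is computed on the masked patches. This illustrates the standard MAE recipe referenced in the abstract, not the authors' LoMAE redesign; all module names, sizes, and the masking ratio are placeholder assumptions.

# Minimal MAE-style self-pretraining sketch (generic recipe, not LoMAE itself).
import torch
import torch.nn as nn


class TinyMAE(nn.Module):
    def __init__(self, img_size=64, patch=8, dim=128, depth=4, heads=4):
        super().__init__()
        self.patch = patch
        self.num_patches = (img_size // patch) ** 2
        self.embed = nn.Linear(patch * patch, dim)            # patch pixels -> token
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        enc_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        dec_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, 1)
        self.head = nn.Linear(dim, patch * patch)              # token -> patch pixels

    def patchify(self, x):                                     # (B,1,H,W) -> (B,N,P*P)
        p = self.patch
        B, _, H, W = x.shape
        x = x.unfold(2, p, p).unfold(3, p, p)                  # (B,1,H/p,W/p,p,p)
        return x.reshape(B, -1, p * p)

    def forward(self, img, mask_ratio=0.5):
        patches = self.patchify(img)
        tokens = self.embed(patches) + self.pos
        B, N, D = tokens.shape
        keep = int(N * (1 - mask_ratio))
        idx = torch.rand(B, N, device=img.device).argsort(dim=1)
        keep_idx, drop_idx = idx[:, :keep], idx[:, keep:]
        # Encode only the visible (unmasked) patches.
        visible = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
        latent = self.encoder(visible)
        # Re-insert mask tokens at the dropped positions and restore patch order.
        full = torch.cat([latent, self.mask_token.expand(B, N - keep, D)], dim=1)
        unshuffle = idx.argsort(dim=1)
        full = torch.gather(full, 1, unshuffle.unsqueeze(-1).expand(-1, -1, D))
        pred = self.head(self.decoder(full + self.pos))
        # Reconstruction loss on the masked patches only (standard MAE objective).
        target = torch.gather(patches, 1, drop_idx.unsqueeze(-1).expand(-1, -1, patches.size(-1)))
        pred_masked = torch.gather(pred, 1, drop_idx.unsqueeze(-1).expand(-1, -1, pred.size(-1)))
        return nn.functional.mse_loss(pred_masked, target)


if __name__ == "__main__":
    model = TinyMAE()
    loss = model(torch.randn(2, 1, 64, 64))   # unlabeled noisy slices, no clean targets
    loss.backward()
    print(float(loss))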

    Trainable Joint Bilateral Filters for Enhanced Prediction Stability in Low-dose CT

    Low-dose computed tomography (CT) denoising algorithms aim to enable reduced patient dose in routine CT acquisitions while maintaining high image quality. Recently, deep learning (DL)-based methods were introduced, outperforming conventional denoising algorithms on this task due to their high model capacity. However, for the transition of DL-based denoising to clinical practice, these data-driven approaches must generalize robustly beyond the seen training data. We therefore propose a hybrid denoising approach consisting of a set of trainable joint bilateral filters (JBFs) combined with a convolutional DL-based denoising network that predicts the guidance image. Our proposed denoising pipeline combines the high model capacity enabled by DL-based feature extraction with the reliability of the conventional JBF. The pipeline's ability to generalize is demonstrated by training on abdomen CT scans without metal implants and testing on abdomen scans with metal implants as well as on head CT data. When embedding two well-established DL-based denoisers (RED-CNN/QAE) in our pipeline, the denoising performance is improved by 10%/82% (RMSE) and 3%/81% (PSNR) in regions containing metal and by 6%/78% (RMSE) and 2%/4% (PSNR) on head CT data, compared to the respective vanilla model. In conclusion, the proposed trainable JBFs limit the error bound of deep neural networks to facilitate the applicability of DL-based denoisers in low-dose CT pipelines.
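    A minimal sketch of the hybrid pipeline described above is given below: a small convolutional network predicts a guidance image, and a differentiable joint bilateral filter with trainable kernel widths produces the denoised output from the noisy input. This is an illustrative 2D re-implementation of the general idea, not the authors' released code; the placeholder CNN, window size, and initial sigma values are assumptions.

# Sketch: CNN-predicted guidance image + differentiable joint bilateral filter (JBF).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TrainableJBF(nn.Module):
    def __init__(self, window=7, sigma_spatial=2.0, sigma_range=0.1):
        super().__init__()
        self.window = window
        # The only filter parameters are the (trainable) kernel widths,
        # stored as logs to keep them positive during optimization.
        self.log_sigma_spatial = nn.Parameter(torch.log(torch.tensor(sigma_spatial)))
        self.log_sigma_range = nn.Parameter(torch.log(torch.tensor(sigma_range)))

    def forward(self, x, guide):
        w, pad = self.window, self.window // 2
        B, C, H, W = x.shape
        # Gather w*w neighborhoods of the noisy input and of the guidance image.
        x_nb = F.unfold(F.pad(x, [pad] * 4, mode="reflect"), w).reshape(B, C, w * w, H, W)
        g_nb = F.unfold(F.pad(guide, [pad] * 4, mode="reflect"), w).reshape(B, C, w * w, H, W)
        # Spatial Gaussian weights from pixel offsets inside the window.
        dy, dx = torch.meshgrid(torch.arange(w) - pad, torch.arange(w) - pad, indexing="ij")
        dist2 = (dy ** 2 + dx ** 2).float().to(x.device).reshape(1, 1, w * w, 1, 1)
        w_spatial = torch.exp(-dist2 / (2 * torch.exp(self.log_sigma_spatial) ** 2))
        # Range weights from intensity differences in the *guidance* image.
        diff2 = (g_nb - guide.unsqueeze(2)) ** 2
        w_range = torch.exp(-diff2 / (2 * torch.exp(self.log_sigma_range) ** 2))
        weights = w_spatial * w_range
        # Output is a weighted average of input pixels, keeping it close to the data.
        return (weights * x_nb).sum(2) / weights.sum(2).clamp_min(1e-8)


# Placeholder guidance-image predictor standing in for a denoiser such as RED-CNN/QAE.
guidance_net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
jbf = TrainableJBF()

noisy = torch.rand(1, 1, 64, 64)             # stand-in for a low-dose CT slice
denoised = jbf(noisy, guidance_net(noisy))   # CNN output only guides the filter
print(denoised.shape)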

    Quantitative Image Reconstruction Methods for Low Signal-To-Noise Ratio Emission Tomography

    Novel internal radionuclide therapies such as radioembolization (RE) with Y-90 loaded microspheres and targeted therapies labeled with Lu-177 offer a unique promise for personalized treatment of cancer, because imaging-based pre-treatment dosimetry assessment can be used to determine administered activities that deliver tumoricidal absorbed doses to lesions while sparing critical organs. At present, however, such therapies are administered with fixed or empiric activities with little or no dosimetry planning. The main reasons for the lack of dosimetry-guided personalized treatment in radionuclide therapies are the challenges and impracticality of quantitative emission tomography imaging and the lack of well-established dose-effect relationships, potentially due to inaccuracies in quantitative imaging. While radionuclides for therapy have been chosen for their attractive characteristics for cancer treatment, their suitability for emission tomography imaging is less than ideal. For example, imaging of the almost pure beta emitter Y-90 involves SPECT via bremsstrahlung photons that have a low and tissue-dependent yield, or PET via a very low abundance positron emission (32 out of 1 million decays) that leads to a very low true coincidence rate in the presence of high singles events from bremsstrahlung photons. Lu-177 emits gamma-rays suitable for SPECT, but they are low in intensity (113 keV: 6%, 208 keV: 10%), and only the higher-energy emission is generally used because of the large downscatter component associated with the lower-energy gamma-ray.
    The main aim of the research in this thesis is to improve the accuracy of quantitative PET and SPECT imaging of therapy radionuclides for dosimetry applications. Although PET is generally considered superior to SPECT for quantitative imaging, PET imaging of 'non-pure' positron emitters can be complex. We focus on quantitative SPECT and PET imaging of two widely used therapy radionuclides, Lu-177 and Y-90, that have challenges associated with low count rates. The long-term goal of our work is to apply the methods we develop to patient imaging for dosimetry-based planning to optimize the treatment either before therapy or after each cycle of therapy.
    For Y-90 PET/CT, we developed an image reconstruction formulation that relaxes the conventional image-domain nonnegativity constraint by instead imposing a positivity constraint on the predicted measurement mean, which demonstrated improved quantification in simulated patient studies. For Y-90 SPECT/CT, we propose a new SPECT/CT reconstruction formulation that includes tissue-dependent probabilities for bremsstrahlung generation in the system matrix. In addition to the above-mentioned quantitative image reconstruction methods developed specifically for each modality in Y-90 imaging, we propose a general image reconstruction method using a trained regularizer for low-count PET and SPECT, which we test on Y-90 and Lu-177 imaging. Our approach starts with the raw projection data and utilizes trained networks in the iterative image formation process. Specifically, we take a mathematics-based approach in which convolutional neural networks are included within the iterative reconstruction process arising from an optimization problem. We further extend the trained regularization method by using anatomical side information: the trained regularizer incorporates the anatomical information via a segmentation mask generated by a trained segmentation network whose input is the co-registered CT image.
    Overall, the emission tomography methods proposed in this work are expected to enhance low-count PET and SPECT imaging of therapy radionuclides in patient studies, which will have value in establishing dose-response relationships and developing imaging-based, dosimetry-guided treatment planning strategies in the future.
    PhD thesis, Electrical and Computer Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155171/1/hongki_1.pd
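    As a rough illustration of the constraint relaxation described for Y-90 PET/CT (written in generic emission-tomography notation, not quoted from the thesis): with system matrix A, measured counts y, expected background \bar{r}, and regularizer R, a conventional penalized-likelihood reconstruction solves

\hat{x} = \arg\max_{x \ge 0} \; \sum_i \left( y_i \log \bar{y}_i(x) - \bar{y}_i(x) \right) - \beta R(x), \qquad \bar{y}(x) = A x + \bar{r},

    whereas the relaxed formulation drops the image-domain constraint x \ge 0 and instead requires only that the predicted measurement mean stay positive,

\hat{x} = \arg\max_{x} \; \sum_i \left( y_i \log \bar{y}_i(x) - \bar{y}_i(x) \right) - \beta R(x) \quad \text{subject to} \quad \bar{y}_i(x) \ge \varepsilon > 0 \;\; \forall i,

    which permits negative voxel values as long as the forward-projected mean remains consistent with the Poisson measurement model.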

    Artificial intelligence for reducing the radiation burden of medical imaging for the diagnosis of coronavirus disease.

    Medical imaging has been intensively employed in screening, diagnosis and monitoring during the COVID-19 pandemic. With the improvement of RT-PCR and rapid inspection technologies, the diagnostic references have shifted, and current recommendations tend to limit the application of medical imaging in the acute setting. Nevertheless, the efficiency and complementary value of medical imaging were recognized at the beginning of the pandemic, when clinicians faced an unknown infectious disease and lacked sufficient diagnostic tools. Optimizing medical imaging for pandemics may still have encouraging implications for future public health, especially for theranostics of long-lasting post-COVID-19 syndrome. A critical concern for the application of medical imaging is the increased radiation burden, particularly when medical imaging is used for screening and rapid containment purposes. Emerging artificial intelligence (AI) technology provides the opportunity to reduce the radiation burden while maintaining diagnostic quality. This review summarizes current AI research on dose reduction for medical imaging; retrospectively identifying its potential during COVID-19 may still have positive implications for future public health.

    Ultralow‐parameter denoising: trainable bilateral filter layers in computed tomography

    Background: Computed tomography (CT) is widely used as an imaging tool to visualize three-dimensional structures with expressive bone-soft tissue contrast. However, CT resolution can be severely degraded through low-dose acquisitions, highlighting the importance of effective denoising algorithms. Purpose: Most data-driven denoising techniques are based on deep neural networks and therefore contain hundreds of thousands of trainable parameters, making them incomprehensible and prone to prediction failures. Developing understandable and robust denoising algorithms that achieve state-of-the-art performance helps to minimize radiation dose while maintaining data integrity. Methods: This work presents an open-source CT denoising framework based on the idea of bilateral filtering. We propose a bilateral filter that can be incorporated into any deep learning pipeline and optimized in a purely data-driven way by calculating the gradient flow toward its hyperparameters and its input. Denoising in pure image-to-image pipelines and across different domains, such as raw detector data and reconstructed volume, using a differentiable backprojection layer, is demonstrated. In contrast to other models, our bilateral filter layer consists of only four trainable parameters and constrains the applied operation to follow the traditional bilateral filter algorithm by design. Results: Although only using three spatial parameters and one intensity range parameter per filter layer, the proposed denoising pipelines can compete with deep state-of-the-art denoising architectures with several hundred thousand parameters. Competitive denoising performance is achieved on x-ray microscope bone data and the 2016 Low Dose CT Grand Challenge data set. We report structural similarity index measures of 0.7094 and 0.9674 and peak signal-to-noise ratio values of 33.17 and 43.07 on the respective data sets. Conclusions: Due to the extremely low number of trainable parameters with well-defined effects, prediction reliability and data integrity are guaranteed at any time in the proposed pipelines, in contrast to most other deep learning-based denoising architectures.
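    The sketch below illustrates the four-parameter idea described above: a 3D bilateral filter layer whose only trainable parameters are three spatial kernel widths and one intensity range width, optimized by automatic differentiation. It is an illustrative dense re-implementation, not the authors' released framework; the window size, initial values, and the reliance on plain autograd are assumptions made here.

# Sketch: bilateral filter layer with exactly four trainable parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BilateralFilter3d(nn.Module):
    """Trainable parameters: sigma_x, sigma_y, sigma_z (spatial) and sigma_range."""

    def __init__(self, window=5):
        super().__init__()
        self.window = window
        self.sigmas_spatial = nn.Parameter(torch.tensor([1.5, 1.5, 1.5]))  # x, y, z widths
        self.sigma_range = nn.Parameter(torch.tensor(0.05))                # intensity width

    def forward(self, vol):                       # vol: (B, 1, D, H, W)
        w, pad = self.window, self.window // 2
        padded = F.pad(vol, [pad] * 6, mode="replicate")
        # (B, C, D, H, W, w, w, w) neighborhoods via three successive unfolds.
        nb = padded.unfold(2, w, 1).unfold(3, w, 1).unfold(4, w, 1)
        offsets = (torch.arange(w, device=vol.device) - pad).float()
        oz, oy, ox = torch.meshgrid(offsets, offsets, offsets, indexing="ij")
        sx, sy, sz = self.sigmas_spatial
        # Spatial Gaussian weights from voxel offsets inside the window.
        w_spatial = torch.exp(-(ox ** 2 / (2 * sx ** 2)
                                + oy ** 2 / (2 * sy ** 2)
                                + oz ** 2 / (2 * sz ** 2)))
        # Range weights from intensity differences to the center voxel.
        w_range = torch.exp(-((nb - vol[..., None, None, None]) ** 2)
                            / (2 * self.sigma_range ** 2))
        weights = w_spatial * w_range
        return (weights * nb).sum(dim=(-3, -2, -1)) / weights.sum(dim=(-3, -2, -1)).clamp_min(1e-8)


layer = BilateralFilter3d()
print(sum(p.numel() for p in layer.parameters()))   # -> 4 trainable parameters
out = layer(torch.rand(1, 1, 16, 32, 32))            # stand-in CT volume
print(out.shape)

    Because the layer is differentiable with respect to its four widths and its input, it can be appended to, or trained jointly with, any existing denoising pipeline using a standard optimizer.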