
    Denoising method for dynamic contrast-enhanced CT perfusion studies using three-dimensional deep image prior as a simultaneous spatial and temporal regularizer

    This study aimed to propose a denoising method for dynamic contrast-enhanced computed tomography (DCE-CT) perfusion studies using a three-dimensional deep image prior (DIP), and to investigate its usefulness in comparison with total variation (TV)-based methods with different regularization parameter (alpha) values through simulation studies. In the proposed DIP method, the DIP was incorporated into the constrained optimization problem for image denoising as a simultaneous spatial and temporal regularizer, and the problem was solved using the alternating direction method of multipliers (ADMM). In the simulation studies, DCE-CT images were generated using a digital brain phantom, and their noise level was varied using the X-ray exposure noise model with different exposures (15, 30, 50, 75, and 100 mAs). Cerebral blood flow (CBF) images were generated from the original contrast enhancement (CE) images and from those obtained by the DIP and TV methods, using block-circulant singular value decomposition. The quality of the CE images was evaluated using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). To compare the CBF images obtained by the different methods with those generated from the ground truth images, linear regression analysis was performed. When using the DIP method, the PSNR and SSIM were not significantly dependent on the exposure, and the SSIM was the highest for all exposures. When using the TV methods, the PSNR and SSIM were significantly dependent on the exposure and alpha values. The results of the linear regression analysis suggested that the linearity of the CBF images obtained by the DIP method was superior to that of the CBF images generated from the original CE images and those obtained by the TV methods. Our preliminary results suggest that the DIP method is useful for denoising DCE-CT images at ultra-low to low exposures and for improving the accuracy of the CBF images generated from them.
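The splitting described in the abstract, denoising via ADMM with the regularizer step handed to a prior, can be sketched in a few lines. This is a generic plug-in ADMM skeleton, not the paper's method: the `box_blur` stand-in replaces the trained 3D DIP network, and the 2D toy phantom, `rho`, and iteration count are illustrative assumptions.

```python
import numpy as np

def box_blur(v):
    # 3x3 mean filter: a cheap stand-in for the learned DIP prior
    out = np.zeros_like(v)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out += np.roll(np.roll(v, dx, axis=0), dy, axis=1)
    return out / 9.0

def admm_denoise(y, denoiser, rho=1.0, n_iter=30):
    """Plug-in ADMM for min_x 0.5||x - y||^2 + R(x), where the
    proximal step for R is replaced by a denoiser (a 3D DIP
    network in the paper; a box filter in this sketch)."""
    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(n_iter):
        x = (y + rho * (z - u)) / (1.0 + rho)  # data-fidelity prox (closed form)
        z = denoiser(x + u)                    # regularizer step via the plug-in prior
        u = u + x - z                          # scaled dual update
    return x

# Toy example: noisy square phantom, single 2D frame
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = admm_denoise(noisy, box_blur)
```

With a linear smoothing denoiser the iteration settles near the denoised image itself; the value of the scheme is that any denoiser, including a network, can be dropped into the `z`-update without changing the data-fidelity step.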

    Robust Depth Linear Error Decomposition with Double Total Variation and Nuclear Norm for Dynamic MRI Reconstruction

    Compressed Sensing (CS) significantly speeds up Magnetic Resonance Imaging (MRI) and achieves accurate MRI reconstruction from under-sampled k-space data. According to current research, several problems remain in CS-based dynamic MRI k-space reconstruction. 1) There are differences between the Fourier domain and the image domain, and MRI processing in the different domains must be treated accordingly. 2) As three-dimensional data, dynamic MRI has spatial-temporal characteristics, which require computing the difference and consistency of surface textures while preserving structural integrity and uniqueness. 3) Dynamic MRI reconstruction is time-consuming and computationally resource-dependent. In this paper, we propose a novel robust low-rank dynamic MRI reconstruction optimization model from highly under-sampled data and the Discrete Fourier Transform (DFT), called the Robust Depth Linear Error Decomposition Model (RDLEDM). Our method mainly includes linear decomposition, double Total Variation (TV), and double Nuclear Norm (NN) regularizations. By adding linear image-domain error analysis, the noise remaining after under-sampling and DFT processing is reduced, and the anti-interference ability of the algorithm is enhanced. Double TV and NN regularizations can utilize spatial-temporal characteristics and explore the complementary relationship between different dimensions in dynamic MRI sequences. In addition, due to the non-smoothness and non-convexity of the TV and NN terms, it is difficult to optimize the unified objective model. To address this issue, we utilize a fast algorithm that solves a primal-dual form of the original problem. Compared with five state-of-the-art methods, extensive experiments on dynamic MRI data demonstrate the superior performance of the proposed method in terms of both reconstruction accuracy and time complexity.
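The two regularizer families named in the abstract have simple building blocks: the proximal operator of the nuclear norm is singular value thresholding (SVT), and anisotropic TV is a sum of absolute finite differences. The sketch below shows only these two generic components, not RDLEDM itself; the Casorati-matrix toy data and threshold `tau` are illustrative assumptions.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear-norm term tau * ||X||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def tv(frame):
    """Anisotropic total variation of one 2D frame: sum of absolute
    finite differences along both axes."""
    return np.abs(np.diff(frame, axis=0)).sum() + np.abs(np.diff(frame, axis=1)).sum()

# A dynamic sequence unfolded as a Casorati matrix (pixels x frames)
# is approximately low-rank; SVT suppresses noise-level singular values.
rng = np.random.default_rng(0)
L = rng.standard_normal((64, 2)) @ rng.standard_normal((2, 16))  # rank-2 signal
noisy = L + 0.05 * rng.standard_normal(L.shape)
recovered = svt(noisy, tau=1.0)
```

In a full reconstruction these operators would alternate with a data-consistency step on the sampled k-space locations; here they only demonstrate how the NN term enforces low rank across frames while TV penalizes spatial roughness within a frame.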

    Compound Attention and Neighbor Matching Network for Multi-contrast MRI Super-resolution

    Multi-contrast magnetic resonance imaging (MRI) reflects information about human tissue from different perspectives and has many clinical applications. By utilizing the complementary information among different modalities, multi-contrast super-resolution (SR) of MRI can achieve better results than single-image super-resolution. However, existing methods of multi-contrast MRI SR have the following shortcomings that may limit their performance: First, existing methods either simply concatenate the reference and degraded features or exploit global feature-matching between them, which are unsuitable for multi-contrast MRI SR. Second, although many recent methods employ transformers to capture long-range dependencies in the spatial dimension, they neglect that self-attention in the channel dimension is also important for low-level vision tasks. To address these shortcomings, we propose a novel network architecture with compound-attention and neighbor matching (CANM-Net) for multi-contrast MRI SR: the compound self-attention mechanism effectively captures dependencies in both the spatial and channel dimensions; the neighborhood-based feature-matching modules match degraded features with adjacent reference features and then fuse them to obtain high-quality images. We conduct SR experiments on the IXI, fastMRI, and real-world scanning datasets. The CANM-Net outperforms state-of-the-art approaches in both retrospective and prospective experiments. Moreover, the robustness study in our work shows that the CANM-Net still achieves good performance when the reference and degraded images are imperfectly registered, proving good potential in clinical applications.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
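The channel-dimension self-attention the abstract highlights can be illustrated without learned weights: treating each channel as a token yields a C x C attention map instead of the (HW) x (HW) map of spatial attention. This is a simplified single-head sketch of that general idea, not CANM-Net's compound-attention module; the projection-free cosine-similarity formulation and tensor sizes are assumptions for illustration.

```python
import numpy as np

def channel_attention(x):
    """Self-attention over the channel dimension: tokens are the C
    channels, so the attention map is (C, C) rather than the
    (HW, HW) map of spatial self-attention. Single head, no
    learned projections, for illustration only."""
    C, H, W = x.shape
    q = k = v = x.reshape(C, H * W)
    # L2-normalize queries/keys so each dot product is a cosine similarity
    qn = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    kn = k / (np.linalg.norm(k, axis=1, keepdims=True) + 1e-8)
    attn = qn @ kn.T                                   # (C, C) channel similarity
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn = attn / attn.sum(axis=1, keepdims=True)      # row-wise softmax
    return (attn @ v).reshape(C, H, W)                 # reweighted channels

x = np.random.default_rng(1).standard_normal((8, 16, 16))
y = channel_attention(x)
```

Because the attention map scales with C rather than HW, the channel branch stays cheap at high resolution, which is one reason restoration transformers pair it with spatial attention.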