293 research outputs found

    Multi-frame-based Cross-domain Image Denoising for Low-dose Computed Tomography

    Computed tomography (CT) has been used worldwide for decades as one of the most important non-invasive tests in assisting diagnosis. However, the ionizing nature of X-ray exposure raises concerns about potential health risks such as cancer. The desire for a lower radiation dose has driven researchers to improve reconstruction quality, especially by removing noise and artifacts. Although previous studies on low-dose computed tomography (LDCT) denoising have demonstrated the effectiveness of learning-based methods, most of them were developed on simulated data generated using the Radon transform. The real-world scenario differs significantly from the simulation domain, however, and joint optimization of denoising with the modern CT image reconstruction pipeline is still missing. In this paper, targeting commercially available third-generation multi-slice spiral CT scanners, we propose a two-stage method that better exploits the complete reconstruction pipeline for LDCT denoising across different domains. Our method makes good use of the high redundancy of both the multi-slice projections and the volumetric reconstructions while avoiding the collapse of information in conventional cascaded frameworks. The dedicated design also provides a clearer interpretation of the workflow. Through extensive evaluations, we demonstrate its superior performance against state-of-the-art methods.
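    The two-stage, cross-domain idea can be sketched in a few lines. This is a toy illustration, not the authors' method: both "denoisers" are normalized moving-average filters standing in for trained networks, and the `reconstruct` callback (here the identity) marks where the projection-to-image hand-off would occur.

```python
import numpy as np

def smooth_rows(x, k=5):
    """Stand-in 'denoiser': a normalized moving average along each row."""
    kernel = np.ones(k)
    def f(r):
        return np.convolve(r, kernel, mode="same") / np.convolve(
            np.ones_like(r), kernel, mode="same")
    return np.apply_along_axis(f, 1, x)

def two_stage_denoise(sinogram, reconstruct):
    """Stage 1 denoises in the projection domain; stage 2 refines the
    reconstructed image, so each stage sees the noise it handles best."""
    proj_denoised = smooth_rows(sinogram)        # projection-domain stage
    image = reconstruct(proj_denoised)           # domain hand-off
    return smooth_rows(smooth_rows(image).T).T   # image-domain stage (2-D)

# Toy demo: 'reconstruction' is the identity, the object is a flat phantom.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)
noisy = clean + rng.normal(0, 10, clean.shape)
out = two_stage_denoise(noisy, reconstruct=lambda s: s)
```

    With real scanners, `reconstruct` would be the vendor reconstruction step, and the two smoothing filters would be replaced by the paper's projection-domain and image-domain networks.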

    Multi-stage Deep Learning Artifact Reduction for Computed Tomography

    In Computed Tomography (CT), an image of the interior structure of an object is computed from a set of acquired projection images. The quality of these reconstructed images is essential for accurate analysis, but it can be degraded by a variety of imaging artifacts. To improve reconstruction quality, the acquired projection images are often processed by a pipeline consisting of multiple artifact-removal steps applied in various image domains (e.g., outlier removal on projection images and denoising of reconstructed images). These artifact-removal methods exploit the fact that certain artifacts are easier to remove in one domain than in another. Recently, deep learning methods have shown promising results for artifact removal in CT images. However, most existing deep learning methods for CT are applied as a post-processing step after reconstruction, so artifacts that are relatively difficult to remove in the reconstruction domain may not be effectively removed. As an alternative, we propose a multi-stage deep learning method for artifact removal, in which neural networks are applied in several domains, similar to a classical CT processing pipeline. We show that the neural networks can be effectively trained in succession, resulting in easy-to-use and computationally efficient training. Experiments on both simulated and real-world experimental datasets show that our method is effective in reducing artifacts and superior to deep learning-based post-processing.
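    The idea of training the stages in succession can be sketched with stand-in "networks" (a minimal sketch under strong assumptions, not the paper's architecture): each stage is a one-parameter blend between its input and a box-filtered version, fitted by least squares, and stage 2 is fitted only after stage 1 is frozen.

```python
import numpy as np

def box3(x):
    """3x3 box filter with edge padding."""
    p = np.pad(x, 1, mode="edge")
    return sum(p[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def fit_stage(noisy, clean):
    """Least-squares fit of a in  out = a*noisy + (1-a)*box3(noisy)."""
    s = box3(noisy)
    d, t = (noisy - s).ravel(), (clean - s).ravel()
    return d @ t / (d @ d)

def run_stage(x, a):
    return a * x + (1 - a) * box3(x)

rng = np.random.default_rng(1)
clean = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
noisy = clean + rng.normal(0, 0.1, clean.shape)

a1 = fit_stage(noisy, clean)            # train stage 1 first ...
stage1_out = run_stage(noisy, a1)
a2 = fit_stage(stage1_out, clean)       # ... then stage 2 on its frozen output
stage2_out = run_stage(stage1_out, a2)
```

    Because each stage is fitted on the frozen output of the previous one, each fit is a small independent problem, which mirrors the easy, computationally efficient successive training described above.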

    Trainable Joint Bilateral Filters for Enhanced Prediction Stability in Low-dose CT

    Low-dose computed tomography (CT) denoising algorithms aim to enable reduced patient dose in routine CT acquisitions while maintaining high image quality. Recently, deep learning (DL)-based methods were introduced, outperforming conventional denoising algorithms on this task due to their high model capacity. However, for the transition of DL-based denoising to clinical practice, these data-driven approaches must generalize robustly beyond the seen training data. We therefore propose a hybrid denoising approach consisting of a set of trainable joint bilateral filters (JBFs) combined with a convolutional DL-based denoising network to predict the guidance image. Our proposed denoising pipeline combines the high model capacity enabled by DL-based feature extraction with the reliability of the conventional JBF. The pipeline's ability to generalize is demonstrated by training on abdomen CT scans without metal implants and testing on abdomen scans with metal implants as well as on head CT data. When embedding two well-established DL-based denoisers (RED-CNN/QAE) in our pipeline, the denoising performance is improved by 10%/82% (RMSE) and 3%/81% (PSNR) in regions containing metal, and by 6%/78% (RMSE) and 2%/4% (PSNR) on head CT data, compared to the respective vanilla model. In conclusion, the proposed trainable JBFs limit the error bound of deep neural networks to facilitate the applicability of DL-based denoisers in low-dose CT pipelines.
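    A plain (non-trainable) joint bilateral filter, the building block the paper makes trainable, can be sketched as follows: the guidance image supplies the range weights, so a good guidance prediction preserves edges while noise is averaged out. This sketch is illustrative only and uses the clean image as an ideal guide.

```python
import numpy as np

def joint_bilateral(noisy, guide, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Joint bilateral filter: spatial weights from pixel distance,
    range weights from intensity differences in the *guidance* image."""
    H, W = noisy.shape
    pad_n = np.pad(noisy, radius, mode="edge")
    pad_g = np.pad(guide, radius, mode="edge")
    out = np.zeros_like(noisy)
    wsum = np.zeros_like(noisy)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            n_sh = pad_n[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
            g_sh = pad_g[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
            w = (np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))      # spatial
                 * np.exp(-(g_sh - guide) ** 2 / (2 * sigma_r ** 2)))   # range
            out += w * n_sh
            wsum += w
    return out / wsum

rng = np.random.default_rng(2)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0    # a sharp edge
noisy = clean + rng.normal(0, 0.2, clean.shape)
denoised = joint_bilateral(noisy, guide=clean)     # ideal guidance preserves the edge
```

    In the paper's pipeline, the guide would come from the DL network and the filter widths would be learned; here they are fixed constants.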

    CT image denoising based on locally adaptive thresholding

    The noise in reconstructed X-ray Computed Tomography (CT) slices is complex, non-stationary, and indefinitely distributed. Subsequent image processing is needed in order to achieve a good-quality medical diagnosis, which requires a sufficiently high ratio between detail contrast and the noise amplitude. This paper presents an adaptive method for noise reduction in CT images based on local statistical evaluation of the noise component in the domain of the Repagular Wavelet Transformation (RWT). To account for the spatial dependence of the noise strength, the threshold constant for processing the high-frequency coefficients in the proposed shrinkage method is a function of the local standard deviation of the noise at each pixel of the image. Experimental studies have been conducted on different images to evaluate the effectiveness of the proposed algorithm.
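    The locally adaptive idea, a threshold that follows the per-pixel noise level, can be illustrated with a one-level Haar transform standing in for the wavelet transform used in the paper. The threshold on the detail coefficients is proportional to the standard deviation of the diagonal band in a sliding window, so it adapts to spatially varying noise.

```python
import numpy as np

def haar2(x):
    """One-level orthonormal 2-D Haar transform (approx + 3 detail bands)."""
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 2
    h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 2
    v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 2
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 2
    return a, h, v, d

def ihaar2(a, h, v, d):
    x = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    x[0::2, 0::2] = (a + h + v + d) / 2
    x[0::2, 1::2] = (a - h + v - d) / 2
    x[1::2, 0::2] = (a + h - v - d) / 2
    x[1::2, 1::2] = (a - h - v + d) / 2
    return x

def soft(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def adaptive_denoise(x, k=1.5, r=3):
    a, h, v, d = haar2(x)
    # Local noise estimate: std of the diagonal band in a sliding window,
    # so the threshold follows the spatially varying noise strength.
    p = np.pad(d, r, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (2 * r + 1, 2 * r + 1))
    t = k * win.std(axis=(2, 3))
    return ihaar2(a, soft(h, t), soft(v, t), soft(d, t))

rng = np.random.default_rng(6)
clean = np.outer(np.linspace(0, 1, 64), np.ones(64))
sigma_map = np.linspace(0.05, 0.3, 64)[None, :]     # noise grows left to right
noisy = clean + rng.normal(0, 1, clean.shape) * sigma_map
denoised = adaptive_denoise(noisy)
```

    A fixed global threshold would either under-smooth the noisy right side or over-smooth the clean left side; the per-pixel threshold handles both regions.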

    Denoising Low-Dose CT Images using Multi-frame techniques

    This study examines potential methods of achieving a reduction in the X-ray radiation dose of Computed Tomography (CT) using multi-frame low-dose CT images. Even though a single-frame low-dose CT image is not very diagnostically useful due to excessive noise, we have found that by using multi-frame low-dose CT images we can denoise these images quite significantly at a lower radiation dose. We propose two approaches leveraging multi-frame low-dose CT denoising techniques. In our first method, we propose a blind source separation (BSS)-based CT image denoising method using a multi-frame low-dose image sequence. Using the BSS technique, we estimate the independent image component and noise components from the image sequences. The extracted image component is then further denoised using a nonlocal groupwise denoiser named BM3D that uses the mean standard deviation of the noise components. We also propose an extension of this method using a window-splitting technique. In our second method, we leverage the power of deep learning to introduce a collaborative technique that trains multiple Noise2Noise generators simultaneously and learns the image representation from LDCT images. We present three models using this Collaborative Network (CN) principle, employing two generators (CN2G), three generators (CN3G), and a hybrid of three generators (HCN3G) consisting of the BSS denoiser with one of the CN generators. The CN3G model shows better performance than the CN2G model in terms of denoised image quality, at the expense of an additional LDCT image. The HCN3G model takes advantage of both these models by managing to train three collaborative generators using only two LDCT images, leveraging our first proposed method based on blind source separation (BSS) and the block-matching 3-D (BM3D) filter. By using these multi-frame techniques, we can reduce the radiation dose quite significantly without losing significant image detail, especially in low-contrast areas. Among all our methods, the HCN3G model performs best in terms of PSNR, SSIM, and material noise characteristics, while CN2G and CN3G perform better in terms of contrast difference. In the HCN3G model, we combine two of our methods in a single technique. In addition, we introduce the Collaborative Network (CN) and collaborative loss terms in the L2 loss calculation in our second method, which is a significant contribution of this research study.
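    The Noise2Noise principle underlying the collaborative generators can be demonstrated without any neural network: a denoiser fitted against a second noisy frame behaves, in expectation, like one fitted against the clean image, because the target's noise is independent and zero-mean. A minimal sketch with a one-parameter "generator":

```python
import numpy as np

def box3(x):
    """3x3 box filter with edge padding."""
    p = np.pad(x, 1, mode="edge")
    return sum(p[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def noise2noise_fit(x1, x2):
    """Fit out = a*x1 + (1-a)*box3(x1) by minimizing ||out - x2||^2.
    x2 is only another noisy frame, yet its independent zero-mean noise
    makes the fit agree (in expectation) with a fit against the clean image."""
    s = box3(x1)
    d = (x1 - s).ravel()
    a = d @ (x2 - s).ravel() / (d @ d)
    return a * x1 + (1 - a) * s

rng = np.random.default_rng(3)
clean = np.outer(np.hanning(64), np.hanning(64))
x1 = clean + rng.normal(0, 0.2, clean.shape)   # first low-dose frame
x2 = clean + rng.normal(0, 0.2, clean.shape)   # second, independently noisy frame
out = noise2noise_fit(x1, x2)
```

    The CN models replace this single parameter with full generator networks and couple several such fits through collaborative loss terms, but the supervision signal is the same: other noisy frames, never a clean image.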

    Nest-DGIL: Nesterov-optimized Deep Geometric Incremental Learning for CS Image Reconstruction

    Proximal gradient-based optimization is one of the most common strategies for solving image inverse problems and is easy to implement. However, these techniques often generate heavy artifacts in image reconstruction. One of the most popular refinement methods is to fine-tune the regularization parameter to alleviate such artifacts, but this may not always be sufficient or applicable due to increased computational costs. In this work, we propose a deep geometric incremental learning framework based on the second Nesterov proximal gradient optimization. The proposed end-to-end network not only has a powerful learning ability for high/low-frequency image features, but can also theoretically guarantee that geometric texture details will be reconstructed from the preliminary linear reconstruction. Furthermore, it can avoid the risk of intermediate reconstruction results falling outside the geometric decomposition domains and achieves fast convergence. Our reconstruction framework is decomposed into four modules: general linear reconstruction, cascade geometric incremental restoration, Nesterov acceleration, and post-processing. In the image restoration step, a cascade geometric incremental learning module is designed to compensate for the missing texture information from different geometric spectral decomposition domains. Inspired by the overlap-tile strategy, we also develop a post-processing module to remove the block effect in patch-wise natural image reconstruction. All parameters in the proposed model are learnable, and an adaptive initialization technique for the physical parameters is employed to make the model flexible and ensure smooth convergence. We compare the reconstruction performance of the proposed method with existing state-of-the-art methods to demonstrate its superiority. Our source codes are available at https://github.com/fanxiaohong/Nest-DGIL.
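    The Nesterov-accelerated proximal gradient scheme at the core of such frameworks can be sketched as plain FISTA on a small compressed-sensing problem, with soft-thresholding as the proximal operator; the learned geometric modules of Nest-DGIL are not reproduced here.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, y, lam=0.05, n_iter=200):
    """FISTA: proximal gradient with Nesterov momentum for
    min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        x_new = soft(z - A.T @ (A @ z - y) / L, lam / L)   # proximal gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)          # Nesterov extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(4)
n, m, k = 128, 64, 5                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))  # compressed-sensing measurement matrix
y = A @ x_true
x_hat = fista(A, y)
```

    Unrolling a fixed number of such iterations and replacing the soft-thresholding step with learned restoration modules yields the kind of end-to-end network the abstract describes.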

    Deep Convolutional Neural Network for Inverse Problems in Imaging

    In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of this paper is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise nonlinearity) when the normal operator (H*H, where H* is the adjoint of the forward imaging operator H) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel-beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512 × 512 image on the GPU.
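    The "direct inversion first, then learn the residual" recipe can be illustrated for a normal-convolutional problem: a regularized inverse filter plays the role that filtered back-projection plays in the paper, and the residual CNN (omitted here) would be trained to remove the remaining artifacts. All names in this sketch are illustrative.

```python
import numpy as np

def gaussian_psf(n, sigma=2.0):
    ax = np.arange(n) - n // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return np.fft.ifftshift(g / g.sum())       # centred at (0,0) for FFT use

def forward(x, psf):
    """Convolutional forward model H applied via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(psf)))

def direct_inversion(y, psf, eps=1e-2):
    """Regularized (Wiener-style) inverse of a convolutional forward model.
    In the paper this role is played by filtered back-projection; a residual
    CNN would then remove the artifacts this step leaves behind."""
    H = np.fft.fft2(psf)
    W = np.conj(H) / (np.abs(H) ** 2 + eps)    # Tikhonov-regularized inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(y) * W))

rng = np.random.default_rng(5)
x = np.zeros((64, 64)); x[20:44, 20:44] = 1.0  # toy phantom
psf = gaussian_psf(64)
y = forward(x, psf) + rng.normal(0, 1e-3, x.shape)
x0 = direct_inversion(y, psf)                  # artifact-laden first estimate
# A residual network would be trained to map x0 -> x; here we only verify
# that the direct inversion already recovers most of the structure.
```

    The design point carried over from the paper is that the physics lives entirely in the direct-inversion step, so the learned part only has to model the (convolutional) artifact structure.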