5,149 research outputs found

    Deep Learning with Domain Adaptation for Accelerated Projection-Reconstruction MR

    Purpose: The radial k-space trajectory is a well-established sampling trajectory used in conjunction with magnetic resonance imaging. However, the radial k-space trajectory requires a large number of radial lines for high-resolution reconstruction. Increasing the number of radial lines lengthens the acquisition time, making routine clinical use more difficult. On the other hand, if we reduce the number of radial lines, streaking artifact patterns are unavoidable. To solve this problem, we propose a novel deep learning approach with domain adaptation to restore high-resolution MR images from under-sampled k-space data. Methods: The proposed deep network removes the streaking artifacts from the artifact-corrupted images. To address the limited availability of data, we propose a domain adaptation scheme that employs a network pre-trained on a large number of x-ray computed tomography (CT) or synthesized radial MR datasets, which is then fine-tuned with only a few radial MR datasets. Results: The proposed method outperforms existing compressed sensing algorithms, such as the total variation and PR-FOCUSS methods. In addition, the calculation time is several orders of magnitude faster than that of the total variation and PR-FOCUSS methods. Moreover, we found that pre-training with CT or MR data from a similar organ is more important than pre-training with data from the same modality for a different organ. Conclusion: We demonstrate the feasibility of domain adaptation when only a limited amount of MR data is available. The proposed method surpasses the existing compressed sensing algorithms in terms of image quality and computation time. Comment: This paper has been accepted and will soon appear in Magnetic Resonance in Medicine.
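
    The pre-train/fine-tune (domain adaptation) recipe above can be summarized with a minimal residual-learning sketch: the network predicts the streak artifact, which is then subtracted from the corrupted input. The architecture, layer sizes, learning rates, and the random tensors standing in for image pairs below are all illustrative assumptions, not the network used in the paper.

        import torch
        import torch.nn as nn

        class ArtifactNet(nn.Module):
            """Small CNN that maps an artifact-corrupted image to the streak artifact."""
            def __init__(self, ch=32):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(ch, 1, 3, padding=1),
                )

            def forward(self, x):
                return self.body(x)

        def train(net, corrupted, clean, lr, steps):
            opt = torch.optim.Adam(net.parameters(), lr=lr)
            loss_fn = nn.MSELoss()
            for _ in range(steps):
                opt.zero_grad()
                loss = loss_fn(net(corrupted), corrupted - clean)  # target = artifact
                loss.backward()
                opt.step()

        net = ArtifactNet()
        # 1) Pre-train on plentiful CT (or synthesized radial MR) image pairs.
        ct_corrupted, ct_clean = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
        train(net, ct_corrupted, ct_clean, lr=1e-3, steps=10)
        # 2) Fine-tune on only a few radial MR pairs, typically with a smaller rate.
        mr_corrupted, mr_clean = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
        train(net, mr_corrupted, mr_clean, lr=1e-4, steps=10)
        # 3) Inference: subtract the predicted streak artifact from the corrupted input.
        restored = mr_corrupted - net(mr_corrupted)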

    Sparse-View X-Ray CT Reconstruction Using ℓ1 Prior with Learned Transform

    A major challenge in X-ray computed tomography (CT) is reducing radiation dose while maintaining high quality of reconstructed images. To reduce the radiation dose, one can reduce the number of projection views (sparse-view CT); however, it becomes difficult to achieve high-quality image reconstruction as the number of projection views decreases. Researchers have applied the concept of learning sparse representations from (high-quality) CT image datasets to sparse-view CT reconstruction. We propose a new statistical CT reconstruction model that combines penalized weighted-least squares (PWLS) and an ℓ1 prior with a learned sparsifying transform (PWLS-ST-ℓ1), and a corresponding efficient algorithm based on the Alternating Direction Method of Multipliers (ADMM). To moderate the difficulty of tuning ADMM parameters, we propose a new ADMM parameter selection scheme based on approximated condition numbers. We interpret the proposed model by analyzing the minimum mean square error of its (ℓ2-norm relaxed) image update estimator. Our results with the extended cardiac-torso (XCAT) phantom data and clinical chest data show that, for sparse-view 2D fan-beam CT and 3D axial cone-beam CT, PWLS-ST-ℓ1 improves the quality of reconstructed images compared to CT reconstruction methods using an edge-preserving regularizer and an ℓ2 prior with a learned ST. These results also show that, for sparse-view 2D fan-beam CT, PWLS-ST-ℓ1 achieves comparable or better image quality and requires much shorter runtime than PWLS-DL using a learned overcomplete dictionary. Our results with clinical chest data show that methods using the unsupervised learned prior generalize better than a state-of-the-art deep "denoising" neural network that does not use a physical imaging model. Comment: The first two authors contributed equally to this work.
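
    A rough sketch of the kind of objective and ADMM splitting described above is given below, with small dense matrices so the image update is a direct solve. The toy system matrix, statistical weights, and the finite-difference transform standing in for the learned sparsifying transform are illustrative assumptions, not the operators or parameter choices of the paper.

        # Sketch of a PWLS + l1 learned-transform objective solved with a generic ADMM split:
        #   minimize_x  0.5 * ||y - A x||_W^2  +  beta * ||Om x||_1
        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 64, 96
        A = rng.standard_normal((m, n))          # toy system matrix (stand-in for a CT projector)
        W = np.diag(rng.uniform(0.5, 1.5, m))    # statistical weights
        Om = np.eye(n) - np.eye(n, k=1)          # finite differences as a stand-in transform
        x_true = np.cumsum(rng.standard_normal(n) * (rng.random(n) < 0.1))
        y = A @ x_true + 0.01 * rng.standard_normal(m)

        beta, mu = 0.1, 1.0                      # regularization and ADMM penalty parameters
        soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
        H = A.T @ W @ A + mu * Om.T @ Om         # normal-equation matrix for the x-update
        for _ in range(200):
            x = np.linalg.solve(H, A.T @ W @ y + mu * Om.T @ (z - u))  # quadratic x-update
            z = soft(Om @ x + u, beta / mu)                            # l1 proximal step
            u = u + Om @ x - z                                         # dual ascent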

    Convolutional Sparse Coding for Compressed Sensing CT Reconstruction

    Over the past few years, dictionary learning (DL)-based methods have been successfully used in various image reconstruction problems. However, traditional DL-based computed tomography (CT) reconstruction methods are patch-based and ignore the consistency of pixels in overlapping patches. In addition, the features learned by these methods always contain shifted versions of the same features. In recent years, convolutional sparse coding (CSC) has been developed to address these problems. In this paper, inspired by several successful applications of CSC in the field of signal processing, we explore the potential of CSC in sparse-view CT reconstruction. By working directly on the whole image, without the need to divide the image into overlapping patches as in DL-based methods, the proposed methods can preserve more details and avoid artifacts caused by patch aggregation. With predetermined filters, an alternating scheme is developed to optimize the objective function. Extensive experiments with simulated and real CT data were performed to validate the effectiveness of the proposed methods. Qualitative and quantitative results demonstrate that the proposed methods achieve better performance than several existing state-of-the-art methods. Comment: Accepted by IEEE TMI.
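
    The whole-image convolutional sparse coding model described above can be sketched with plain ISTA over per-filter coefficient maps. The random filters, step-size bound, and penalty below are illustrative stand-ins for the predetermined filters and alternating scheme of the paper, not its actual algorithm.

        # Convolutional sparse coding on a whole image with fixed filters:
        #   x  ≈  sum_k  d_k * z_k,   with each coefficient map z_k sparse
        import numpy as np
        from scipy.signal import fftconvolve

        rng = np.random.default_rng(0)
        x = rng.standard_normal((64, 64))                   # stand-in for a CT image
        filters = [rng.standard_normal((5, 5)) for _ in range(4)]
        filters = [d / np.linalg.norm(d) for d in filters]  # normalize predetermined filters

        lam = 0.5
        L = sum(np.abs(d).sum() ** 2 for d in filters)      # crude Lipschitz bound for the step
        step = 1.0 / L
        z = [np.zeros_like(x) for _ in filters]             # one coefficient map per filter
        soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        for _ in range(200):
            recon = sum(fftconvolve(zk, dk, mode="same") for zk, dk in zip(z, filters))
            resid = recon - x
            for k, dk in enumerate(filters):
                # gradient of 0.5*||recon - x||^2 w.r.t. z_k = correlation of resid with d_k
                grad = fftconvolve(resid, dk[::-1, ::-1], mode="same")
                z[k] = soft(z[k] - step * grad, step * lam)

        csc_recon = sum(fftconvolve(zk, dk, mode="same") for zk, dk in zip(z, filters))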

    AIR: fused Analytical and Iterative Reconstruction method for computed tomography

    Purpose: CT image reconstruction techniques have two major categories: analytical reconstruction (AR) methods and iterative reconstruction (IR) methods. AR reconstructs images through analytical formulas, such as filtered backprojection (FBP) in 2D and the Feldkamp-Davis-Kress (FDK) method in 3D, which can be either mathematically exact or approximate. On the other hand, IR is often based on the discrete forward model of the X-ray transform and formulated as a minimization problem with some appropriate image regularization method, so that the reconstructed image corresponds to the minimizer of the optimization problem. This work investigates the fused analytical and iterative reconstruction (AIR) method. Methods: Based on IR with L1-type image regularization, AIR is formulated with an AR-specific preconditioner in the data fidelity term, which results in a minimal change to the solution algorithm: the adjoint X-ray transform is replaced by the filtered X-ray transform. As a proof-of-concept 2D example of AIR, FBP is incorporated into tensor framelet (TF) regularization based IR, and the formulated AIR minimization problem is then solved through the split Bregman method with a GPU-accelerated X-ray transform and filtered adjoint X-ray transform. Conclusion: AIR, the fused Analytical and Iterative Reconstruction method, is proposed with a proof-of-concept 2D example to synergize FBP and TF-regularized IR, with improved image resolution and contrast for experimental data. The potential impact of AIR is that it offers a general framework to develop various AR-enhanced IR methods, when neither AR nor IR alone is sufficient.
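
    A conceptual, matrix-sized sketch of the AIR idea follows: the plain adjoint in the data-fidelity gradient is replaced by a "filtered" adjoint that plays the role of FBP, and a soft-threshold step stands in for the tensor-framelet regularization. Every operator, phantom, and parameter here is a toy assumption rather than the GPU-accelerated projectors or split Bregman solver of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 64, 80
        A = rng.standard_normal((m, n)) / np.sqrt(m)     # toy X-ray transform
        x_true = np.zeros(n); x_true[20:40] = 1.0        # piecewise-constant phantom
        y = A @ x_true + 0.01 * rng.standard_normal(m)

        eps = 1e-2
        # Matrix analogue of a filtered adjoint (FBP-like): A.T @ inv(A A.T + eps I).
        fbp_like = A.T @ np.linalg.inv(A @ A.T + eps * np.eye(m))

        soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
        x, step, lam = np.zeros(n), 0.5, 1e-3
        for _ in range(100):
            x = x - step * fbp_like @ (A @ x - y)        # AR-preconditioned fidelity step
            x = soft(x, step * lam)                      # crude sparsity step (stand-in for TF)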

    Framing U-Net via Deep Convolutional Framelets: Application to Sparse-view CT

    X-ray computed tomography (CT) using sparse projection views is a recent approach to reduce the radiation dose. However, due to the insufficient projection views, an analytic reconstruction approach using the filtered back projection (FBP) produces severe streaking artifacts. Recently, deep learning approaches using large receptive field neural networks such as U-Net have demonstrated impressive performance for sparse-view CT reconstruction. However, theoretical justification is still lacking. Inspired by the recent theory of deep convolutional framelets, the main goal of this paper is, therefore, to reveal the limitation of U-Net and propose new multi-resolution deep learning schemes. In particular, we show that alternative U-Net variants such as the dual-frame and tight-frame U-Nets satisfy the so-called frame condition, which makes them more effective at recovering high-frequency edges in sparse-view CT. Using extensive experiments with a real patient data set, we demonstrate that the new network architectures provide better reconstruction performance. Comment: This will appear in IEEE Transactions on Medical Imaging, in a special issue on Machine Learning for Image Reconstruction.
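
    The frame-condition argument can be illustrated in one dimension: plain average pooling followed by unpooling cannot restore high-frequency content, while pooling augmented with a high-pass (Haar) branch reconstructs the signal exactly. This toy check only illustrates that point under the stated assumptions; it is not the dual-frame or tight-frame U-Net construction itself.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.standard_normal(16)

        # Standard pooling path: average-pool then unpool (nearest upsampling).
        low = 0.5 * (x[0::2] + x[1::2])
        x_pool_only = np.repeat(low, 2)                 # high-frequency detail is lost
        print(np.allclose(x, x_pool_only))              # False

        # "Tight frame" path: keep a high-pass (Haar) branch alongside the low-pass one.
        lo = (x[0::2] + x[1::2]) / np.sqrt(2)
        hi = (x[0::2] - x[1::2]) / np.sqrt(2)
        x_rec = np.empty_like(x)
        x_rec[0::2] = (lo + hi) / np.sqrt(2)
        x_rec[1::2] = (lo - hi) / np.sqrt(2)
        print(np.allclose(x, x_rec))                    # True: perfect reconstruction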

    Medical image reconstruction: a brief overview of past milestones and future directions

    This paper briefly reviews past milestones in the field of medical image reconstruction and describes some future directions. It is part of an overview paper on "open problems in signal processing" that will appear in IEEE Signal Processing Magazine, but it is presented here with citations and equations. Comment: Part of a submission to IEEE Signal Processing Magazine.

    Deep artifact learning for compressed sensing and parallel MRI

    Purpose: Compressed sensing MRI (CS-MRI) from single and parallel coils is a powerful approach to reducing the scan time of MR imaging with a performance guarantee. However, the computational costs are usually expensive. This paper aims to propose a computationally fast and accurate deep learning algorithm for the reconstruction of MR images from highly down-sampled k-space data. Theory: Based on a topological analysis, we show that the data manifold of the aliasing artifact is easier to learn from a uniform subsampling pattern with additional low-frequency k-space data. Thus, we develop deep aliasing artifact learning networks for the magnitude and phase images to estimate and remove the aliasing artifacts from highly accelerated MR acquisitions. Methods: The aliasing artifacts are directly estimated from the distorted magnitude and phase images reconstructed from subsampled k-space data, so that we can obtain aliasing-free images by subtracting the estimated aliasing artifact from the corrupted inputs. Moreover, to deal with the globally distributed aliasing artifact, we develop a multi-scale deep neural network with a large receptive field. Results: The experimental results confirm that the proposed deep artifact learning network effectively estimates and removes the aliasing artifacts. Compared to existing CS methods for single- and multi-coil data, the proposed network shows minimal errors by removing the coherent aliasing artifacts. Furthermore, the computational time is an order of magnitude faster. Conclusion: As the proposed deep artifact learning network immediately generates accurate reconstructions, it has great potential for clinical applications.
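
    The sampling and artifact-subtraction setup described above can be sketched as follows: a uniform k-space mask plus extra low-frequency lines produces a coherently aliased image, and the reconstruction subtracts an estimated artifact from the corrupted input. The image, matrix size, acceleration factor, mask layout, and the trivial placeholder standing in for the trained multi-scale network are all assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        img = rng.standard_normal((128, 128))            # stand-in for an MR image
        kspace = np.fft.fftshift(np.fft.fft2(img))

        mask = np.zeros((128, 128), dtype=bool)
        mask[::4, :] = True                              # uniform subsampling (R = 4)
        mask[60:68, :] = True                            # additional low-frequency k-space lines

        aliased = np.real(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))

        def artifact_net(x):
            # Placeholder for the trained multi-scale CNN that estimates the aliasing artifact.
            return np.zeros_like(x)

        recon = aliased - artifact_net(aliased)          # subtract estimated aliasing artifact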

    Meaning of Interior Tomography

    The classic imaging geometry for computed tomography is the collection of untruncated projections and reconstruction of a global image, with the Fourier transform as the theoretical foundation, which is intrinsically non-local. Recently, interior tomography research has led to theoretically exact relationships between localities in the projection and image spaces and to practically promising reconstruction algorithms. Initially, interior tomography was developed for x-ray computed tomography. Then, it was elevated to a general imaging principle. Finally, a novel framework known as omni-tomography is being developed for the grand fusion of multiple imaging modalities, allowing tomographic synchrony of diversified features. Comment: 47 pages, 14 figures, to appear in Physics in Medicine and Biology.

    Tomographic Reconstruction using Global Statistical Prior

    Recent research in tomographic reconstruction is motivated by the need to efficiently recover detailed anatomy from limited measurements. One of the ways to compensate for increasingly sparse sets of measurements is to exploit the information from templates, i.e., prior data available in the form of already reconstructed, structurally similar images. To this end, previous work has exploited a set of global and patch-based dictionary priors. In this paper, we propose a global prior to improve both the speed and quality of tomographic reconstruction within a Compressive Sensing framework. We choose a set of potentially representative 2D images, referred to as templates, to build an eigenspace; this is subsequently used to guide the iterative reconstruction of a similar slice from sparse acquisition data. Our experiments across a diverse range of datasets show that reconstruction using an appropriate global prior, apart from being faster, gives a much lower reconstruction error when compared to the state of the art. Comment: Published in The International Conference on Digital Image Computing: Techniques and Applications (DICTA), Sydney, Australia, 2017. The conference proceedings are not out yet, but the results can be seen here: http://dicta2017.dictaconference.org/fullprogram.htm
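
    One way to read the proposed global prior is sketched below: an eigenspace is built from template slices by PCA/SVD, and each iteration of a simple gradient scheme pulls the estimate toward its projection onto that eigenspace. The acquisition operator, number of eigen-images, step size, and blending weight are illustrative assumptions, not the paper's actual formulation.

        import numpy as np

        rng = np.random.default_rng(0)
        n_pix, n_templates, n_meas = 256, 20, 64

        templates = rng.standard_normal((n_templates, n_pix))      # already reconstructed similar slices
        mean_t = templates.mean(axis=0)
        U, _, _ = np.linalg.svd((templates - mean_t).T, full_matrices=False)
        basis = U[:, :10]                                           # leading eigen-images

        A = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)  # toy sparse acquisition operator
        x_true = mean_t + basis @ rng.standard_normal(10)
        y = A @ x_true

        x, step, w = np.zeros(n_pix), 0.1, 0.3
        for _ in range(300):
            x = x - step * A.T @ (A @ x - y)                        # data-consistency gradient step
            x_proj = mean_t + basis @ (basis.T @ (x - mean_t))      # projection onto the eigenspace
            x = (1 - w) * x + w * x_proj                            # pull toward the global prior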

    Deep Convolutional Framelets: A General Deep Learning Framework for Inverse Problems

    Recently, deep learning approaches with various network architectures have achieved significant performance improvements over existing iterative reconstruction methods in various imaging problems. However, it is still unclear why these deep learning architectures work for specific inverse problems. To address these issues, here we show that the long-sought missing link is the convolution framelet for representing a signal by convolving local and non-local bases. The convolution framelet was originally developed to generalize the theory of low-rank Hankel matrix approaches for inverse problems, and this paper further extends the idea so that we can obtain a deep neural network using multilayer convolution framelets with perfect reconstruction (PR) under rectified linear unit (ReLU) nonlinearity. Our analysis also shows that popular deep network components such as the residual block, redundant filter channels, and concatenated ReLU (CReLU) do indeed help to achieve PR, while the pooling and unpooling layers should be augmented with high-pass branches to meet the PR condition. Moreover, by changing the number of filter channels and biases, we can control the shrinkage behavior of the neural network. This discovery leads us to propose a novel theory for deep convolutional framelet neural networks. Using numerical experiments with various inverse problems, we demonstrate that our deep convolutional framelet network shows consistent improvement over existing deep architectures. This discovery suggests that the success of deep learning comes not from the magical power of a black box, but rather from the power of a novel signal representation that combines a non-local basis with a data-driven local basis, which is indeed a natural extension of classical signal processing theory. Comment: This will appear in SIAM Journal on Imaging Sciences.
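
    One ingredient of the perfect-reconstruction claim, namely that redundant channels with concatenated ReLU (CReLU) lose no information, can be checked directly: ReLU(x) and ReLU(-x) together determine x, while a single ReLU branch does not. The snippet below illustrates only this identity, not the full convolutional framelet construction.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.standard_normal(1000)

        relu = lambda v: np.maximum(v, 0.0)

        single = relu(x)                       # negative values are discarded
        crelu = np.stack([relu(x), relu(-x)])  # redundant channels (CReLU)
        x_rec = crelu[0] - crelu[1]            # exact inverse of the CReLU nonlinearity

        print(np.allclose(x, x_rec))           # True: no information lost
        print(np.allclose(x, single))          # False: plain ReLU is not invertible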