    Joint Reconstruction of Multi-channel, Spectral CT Data via Constrained Total Nuclear Variation Minimization

    We explore the use of the recently proposed "total nuclear variation" (TNV) as a regularizer for reconstructing multi-channel, spectral CT images. This convex penalty is a natural extension of the total variation (TV) to vector-valued images and has the advantage of encouraging common edge locations and a shared gradient direction among image channels. We show how it can be incorporated into a general, data-constrained reconstruction framework and derive update equations based on the first-order, primal-dual algorithm of Chambolle and Pock. Early simulation studies based on the numerical XCAT phantom indicate that the inter-channel coupling introduced by the TNV leads to better preservation of image features at high levels of regularization, compared to independent, channel-by-channel TV reconstructions. Comment: Submitted to Physics in Medicine and Biology.
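    To make the regularizer concrete, here is a minimal NumPy sketch (not the authors' implementation; the array shapes and boundary handling are assumptions) of evaluating the TNV of a multi-channel 2D image: the channel gradients at each pixel are stacked into a C x 2 Jacobian, and TNV sums the nuclear norm of that matrix over all pixels, which is what couples edge locations across channels.

```python
# Minimal sketch of total nuclear variation (TNV) for a multi-channel image.
import numpy as np

def total_nuclear_variation(u):
    """u: array of shape (C, H, W) holding C image channels."""
    dx = np.zeros_like(u)
    dy = np.zeros_like(u)
    dx[:, :, :-1] = u[:, :, 1:] - u[:, :, :-1]   # forward difference in x
    dy[:, :-1, :] = u[:, 1:, :] - u[:, :-1, :]   # forward difference in y
    # Per-pixel Jacobian of shape (H, W, C, 2): channel gradients stacked.
    J = np.stack([dx, dy], axis=-1).transpose(1, 2, 0, 3)
    # Nuclear norm = sum of singular values of each small C x 2 matrix;
    # this is what rewards shared edge locations/directions across channels.
    s = np.linalg.svd(J, compute_uv=False)
    return s.sum()

# Channel-by-channel TV would instead sum per-channel gradient magnitudes,
# ignoring whether the channels' edges coincide.
u = np.random.rand(3, 64, 64)
print(total_nuclear_variation(u))
```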

    Estimating the Spectrum in Computed Tomography Via Kullback–Leibler Divergence Constrained Optimization

    Purpose: We study the problem of spectrum estimation from transmission data of a known phantom. The goal is to reconstruct an x-ray spectrum that accurately models the x-ray transmission curves and reflects a realistic shape for the typical energy spectra of the CT system.

    Methods: Spectrum estimation is posed as an optimization problem with the x-ray spectrum as the unknown variable, and a Kullback–Leibler (KL) divergence constraint is employed to incorporate prior knowledge of the spectrum and enhance the numerical stability of the estimation process. The formulated constrained optimization problem is convex and can be solved efficiently by the exponentiated-gradient (EG) algorithm. We demonstrate the effectiveness of the proposed approach on simulated and experimental data, and discuss a comparison with the expectation–maximization (EM) method.

    Results: In simulations, the proposed algorithm yields x-ray spectra that closely match the ground truth and represent the attenuation process of x-ray photons in materials, both included and not included in the estimation process. In experiments, the calculated transmission curve is in good agreement with the measured transmission curve, and the estimated spectra exhibit physically realistic shapes. The results further show comparable performance between the proposed optimization-based approach and EM.

    Conclusions: Our constrained-optimization formulation provides an interpretable and flexible framework for spectrum estimation. Moreover, a KL-divergence constraint can incorporate a prior spectrum and captures important features of the x-ray spectrum, allowing accurate and robust spectrum estimation in CT imaging.
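    As a concrete illustration of the EG update, here is a minimal NumPy sketch of spectrum estimation on the probability simplex. It is a toy least-squares fit on an assumed forward model, not the paper's KL-constrained formulation; note, though, that the multiplicative form of the update is naturally tied to KL geometry and keeps the spectrum nonnegative and normalized.

```python
# Toy exponentiated-gradient (EG) spectrum estimation sketch (assumed model).
import numpy as np

def estimate_spectrum(A, y, s0, step=1.0, iters=500):
    """Minimize ||A s - y||^2 over the simplex via EG updates."""
    s = s0 / s0.sum()
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ s - y)     # least-squares gradient
        s = s * np.exp(-step * grad)       # multiplicative EG update
        s /= s.sum()                       # re-normalize onto the simplex
    return s

# Assumed toy forward model: transmission through a known phantom at several
# thicknesses t, with a crude attenuation curve mu(E) on an energy grid.
E = np.linspace(20, 120, 50)                    # keV grid
mu = 0.5 * (E / 60.0) ** -3 + 0.02              # illustrative attenuation
t = np.linspace(0.5, 10, 20)                    # phantom thicknesses (cm)
A = np.exp(-np.outer(t, mu))                    # transmission matrix
s_true = np.exp(-0.5 * ((E - 60) / 15) ** 2)
s_true /= s_true.sum()
y = A @ s_true                                  # simulated transmission curve
s_est = estimate_spectrum(A, y, np.ones_like(E))
print(np.abs(s_est - s_true).max())
```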

    First order algorithms in variational image processing

    Variational methods in imaging are nowadays developing into a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \rightarrow \min_u$, where the functional $\mathcal{D}$ is a data fidelity term, depending on some input data $f$ and measuring the deviation of $Ku$ from it, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of data on an underlying image, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth, convex functionals like the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice, and consequently this field has revived interest in techniques like operator splittings or augmented Lagrangians. Here we provide an overview of currently developed methods and recent results, as well as some computational studies comparing different methods and illustrating their success in applications. Comment: 60 pages, 33 figures.
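    As one representative example of such a splitting method, here is a minimal NumPy sketch (assumed step sizes and discretization, not taken from the paper) of the Chambolle-Pock primal-dual iteration applied to the simplest instance of the model above: TV denoising, where the forward operator is the identity, $\mathcal{D}(u) = \frac{1}{2}\|u - f\|^2$, and $\mathcal{R}$ is the isotropic total variation, handled through its dual.

```python
# Chambolle-Pock primal-dual sketch for TV denoising (assumed parameters).
import numpy as np

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    # Discrete divergence, the negative adjoint of grad above.
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def tv_denoise(f, alpha, iters=200):
    tau = sigma = 1.0 / np.sqrt(8.0)       # tau * sigma * ||grad||^2 <= 1
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(iters):
        # Dual step: ascend, then project onto {|p| <= alpha} pointwise.
        gx, gy = grad(u_bar)
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2) / alpha)
        px /= norm; py /= norm
        # Primal step: proximal map of the quadratic data term.
        u_old = u
        u = (u + tau * div(px, py) + tau * f) / (1.0 + tau)
        u_bar = 2 * u - u_old              # over-relaxation
    return u

# Illustrative usage: denoise a noisy ramp image.
f = np.tile(np.linspace(0, 1, 64), (64, 1)) + 0.1 * np.random.randn(64, 64)
u = tv_denoise(f, alpha=0.2)
```

    The same template extends to more general forward operators $K$ by replacing the denoising proximal step with the data term's proximal map or a gradient step, which is precisely the modularity that makes splitting methods a canonical choice here.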

    An algorithm for constrained one-step inversion of spectral CT data

    We develop a primal-dual algorithm that allows for one-step inversion of spectral CT photon-count transmission data to a basis-map decomposition. The algorithm allows image constraints to be enforced on the basis maps during the inversion. The derivation makes use of a local upper-bounding quadratic approximation to generate descent steps for the non-convex spectral CT data discrepancy terms, combined with a new convex-concave optimization algorithm. Convergence of the algorithm is demonstrated on simulated spectral CT data. Simulations with noise and anthropomorphic phantoms show examples of how to employ the constrained one-step algorithm for spectral CT data. Comment: Submitted to Physics in Medicine and Biology.
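    The following minimal sketch illustrates the role of the local upper-bounding quadratic in generic form (a toy majorize-minimize loop with assumed details, not the paper's algorithm): each step minimizes a quadratic that provably majorizes the non-convex objective at the current iterate, which yields monotone descent even without convexity.

```python
# Generic majorize-minimize descent via a backtracked quadratic upper bound.
import numpy as np

def mm_descent(f, grad_f, x0, L0=1.0, iters=50):
    """Descend a smooth, possibly non-convex f using the majorizer
    f(x) <= f(xk) + g.(x - xk) + (L/2)||x - xk||^2, tightened by backtracking."""
    x, L = x0, L0
    for _ in range(iters):
        g = grad_f(x)
        while True:
            x_new = x - g / L              # minimizer of the quadratic model
            # Accept only if the quadratic truly upper-bounds f at x_new.
            if f(x_new) <= f(x) + g @ (x_new - x) + 0.5 * L * np.sum((x_new - x) ** 2):
                break
            L *= 2.0                       # curvature too small: tighten
        x, L = x_new, max(L / 2.0, 1e-8)   # relax L for the next step
    return x

# Toy non-convex stand-in for a spectral CT data discrepancy term.
f = lambda x: np.cos(x).sum() + 0.05 * x @ x
grad_f = lambda x: -np.sin(x) + 0.1 * x
print(mm_descent(f, grad_f, np.array([2.0, -1.0])))
```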

    Spectral2Spectral: Image-spectral Similarity Assisted Spectral CT Deep Reconstruction without Reference

    Photon-counting detector (PCD) based spectral computed tomography has attracted much attention because it can provide more accurate identification and quantitative analysis of biomedical materials. However, the limited number of photons within each narrow energy bin leads to data with a low signal-to-noise ratio, and existing supervised deep reconstruction networks for CT struggle to address this challenge. In this paper, we propose an iterative deep reconstruction network, named Spectral2Spectral, that synergizes model and data priors in a unified framework. Spectral2Spectral employs an unsupervised deep training strategy to obtain high-quality images from noisy data in an end-to-end fashion. The structural similarity prior within the image-spectral domain is formulated as a regularization term to further constrain the network training, and the network weights are automatically updated to capture image features and structures during the iterative process. Experiments on three large-scale preclinical datasets demonstrate that Spectral2Spectral reconstructs images of better quality than other state-of-the-art methods.
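    A minimal PyTorch-style sketch of the kind of unsupervised objective described here follows. The abstract does not specify the exact similarity measure or network, so the gradient-consistency term below is an assumed stand-in for the image-spectral similarity prior, not the authors' loss.

```python
# Sketch of an unsupervised loss coupling energy bins (assumed form).
import torch
import torch.nn.functional as F

def spectral_similarity_loss(pred, noisy, lam=0.1):
    """pred, noisy: tensors of shape (B, E, H, W), E = energy bins.
    Fidelity to the noisy per-bin images plus a penalty encouraging
    neighboring energy bins to share structure (edge maps)."""
    fidelity = F.mse_loss(pred, noisy)
    # Spatial gradients per energy bin.
    gx = pred[..., :, 1:] - pred[..., :, :-1]
    gy = pred[..., 1:, :] - pred[..., :-1, :]
    # Adjacent bins image the same anatomy, so their edges should align.
    sim = F.l1_loss(gx[:, 1:], gx[:, :-1]) + F.l1_loss(gy[:, 1:], gy[:, :-1])
    return fidelity + lam * sim
```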

    One-step iterative reconstruction approach based on eigentissue decomposition for spectral photon-counting computed tomography

    Purpose: We propose a one-step tissue characterization method for spectral photon-counting computed tomography (SPCCT) using eigentissue decomposition (ETD), tailored for highly accurate human tissue characterization in radiotherapy.

    Methods: The approach combines a Poisson likelihood, a spatial prior, and a quantitative prior constraining eigentissue fractions based on expected values for tabulated tissues. There are two regularization parameters: α for the quantitative prior and β for the spatial prior. The approach is validated in a realistic simulation environment for SPCCT. The impact of α and β is evaluated on a virtual phantom. The framework is tested on a virtual patient and compared with two sinogram-based two-step methods [using filtered backprojection (FBP) and an iterative method, respectively, for the second step] and with a post-reconstruction approach using the same quantitative prior. All methods use ETD.

    Results: Optimal performance with respect to bias or RMSE is achieved with different combinations of α and β on the cylindrical phantom. Evaluated in tissues of the virtual patient, the one-step framework outperforms the two-step and post-reconstruction approaches in quantifying the proton stopping-power ratio (SPR). The mean absolute bias on the SPR is 0.6% (two-step FBP), 0.6% (two-step iterative), 0.6% (post-reconstruction), and 0.2% (one-step optimized for low bias). In the same order, the RMSE on the SPR is 13.3%, 2.5%, 3.2%, and 1.5%.

    Conclusions: Accurate and precise characterization with ETD can be achieved from noisy SPCCT data without relying on post-reconstruction methods. The one-step framework is more accurate and precise than two-step methods for human tissue characterization.
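    The pieces of the one-step objective can be sketched as follows. This is assumed notation: `forward`, the eigentissue parameterization, and the penalty forms are illustrative stand-ins, not the paper's implementation.

```python
# Sketch of a one-step objective: Poisson likelihood + two weighted priors.
import numpy as np

def one_step_objective(x, forward, counts, x_ref, alpha, beta):
    """x: eigentissue-fraction maps, shape (K, H, W).
    forward(x): expected photon counts per detector bin given the maps.
    counts: measured counts; x_ref: tabulated expected eigentissue fractions."""
    ybar = forward(x)
    # Poisson negative log-likelihood (up to a constant in the counts).
    nll = np.sum(ybar - counts * np.log(ybar + 1e-12))
    # Quantitative prior (weight alpha): keep fractions near tabulated values.
    quant = alpha * np.sum((x - x_ref) ** 2)
    # Spatial prior (weight beta): penalize neighboring-pixel differences.
    spatial = beta * (np.sum(np.abs(np.diff(x, axis=1)))
                      + np.sum(np.abs(np.diff(x, axis=2))))
    return nll + quant + spatial
```

    The trade-off reported in the abstract corresponds to tuning α and β: a stronger quantitative prior pulls fractions toward tabulated tissues (lower bias on known tissues), while a stronger spatial prior suppresses noise (lower RMSE).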