    A Spectral CT Method to Directly Estimate Basis Material Maps From Experimental Photon-Counting Data

    The proposed spectral CT method solves the constrained one-step spectral CT image reconstruction (cOSSCIR) optimization problem to estimate basis material maps while modeling the nonlinear X-ray detection process and enforcing convex constraints on the basis map images. To apply this optimization-based reconstruction approach to experimental data, the method empirically estimates the effective energy-window spectra using a calibration procedure. The amplitudes of the estimated spectra were further optimized as part of the reconstruction process to reduce ring artifacts, and a validation approach was developed to select the constraint parameters. The proposed method was evaluated through simulations and experiments with a photon-counting detector. Basis material map images were successfully reconstructed using the presented empirical spectral modeling and cOSSCIR optimization approach, and in simulations the cOSSCIR approach accurately reconstructed the basis map images.
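
    For orientation, the nonlinear detection model at the heart of such one-step methods gives the expected counts in energy window $w$ as $\lambda_w = \sum_E S_w(E)\exp(-\sum_m \mu_m(E)\,(A x_m))$, where $S_w$ are the effective window spectra, $\mu_m$ the basis attenuation curves, and $A$ the projection operator. A minimal NumPy sketch of this forward model follows; the array names and shapes are illustrative assumptions, not the authors' code.

        import numpy as np

        def expected_counts(spectra, mu, basis_sinograms):
            """Nonlinear X-ray detection model used in one-step spectral CT.

            spectra         : (W, E) effective energy-window spectra S_w(E),
                              estimated by calibration in cOSSCIR
            mu              : (M, E) energy-dependent attenuation curves of
                              the M basis materials
            basis_sinograms : (M, P) forward-projected basis maps, A @ x_m

            Returns (W, P) expected photon counts per window and ray.
            """
            atten = mu.T @ basis_sinograms       # (E, P): sum_m mu_m(E) (A x)_m
            return spectra @ np.exp(-atten)      # Beer-Lambert, spectrum-weighted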

    First order algorithms in variational image processing

    Variational methods in imaging are nowadays developing into a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \rightarrow \min_u$, where the functional $\mathcal{D}$ is a data fidelity term depending on some input data $f$ and measuring the deviation of $Ku$ from it, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of the data on an underlying image, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth, convex functionals such as the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice; consequently, this field has revived interest in techniques like operator splittings and augmented Lagrangians. Here we provide an overview of currently developed methods and recent results, as well as some computational studies comparing different methods and illustrating their success in applications.
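
    To make the splitting structure concrete, consider the model case $\mathcal{D}(Ku) = \tfrac{1}{2}\|Ku - f\|_2^2$ with $\mathcal{R}$ the $\ell_1$-norm. The sketch below shows forward-backward splitting (ISTA) for this case; it illustrates the general gradient-step/proximal-step pattern rather than any specific implementation surveyed in the paper.

        import numpy as np

        def ista(K, f, alpha, step, iters=200):
            """Forward-backward splitting for min_u 0.5*||K u - f||^2 + alpha*||u||_1.
            The smooth data term is handled by an explicit gradient step, the
            nonsmooth regularizer by its proximal map (soft-thresholding).
            Convergence requires step <= 1 / ||K||^2."""
            u = np.zeros(K.shape[1])
            for _ in range(iters):
                grad = K.T @ (K @ u - f)     # gradient of the smooth term D
                v = u - step * grad          # forward (gradient) step on D
                # backward (proximal) step on R: soft-thresholding
                u = np.sign(v) * np.maximum(np.abs(v) - step * alpha, 0.0)
            return u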

    Joint Reconstruction of Multi-channel, Spectral CT Data via Constrained Total Nuclear Variation Minimization

    We explore the use of the recently proposed "total nuclear variation" (TNV) as a regularizer for reconstructing multi-channel, spectral CT images. This convex penalty is a natural extension of the total variation (TV) to vector-valued images and has the advantage of encouraging common edge locations and a shared gradient direction among image channels. We show how it can be incorporated into a general, data-constrained reconstruction framework and derive update equations based on the first-order, primal-dual algorithm of Chambolle and Pock. Early simulation studies based on the numerical XCAT phantom indicate that the inter-channel coupling introduced by the TNV leads to better preservation of image features at high levels of regularization, compared to independent, channel-by-channel TV reconstructions.
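
    For intuition, the TNV penalty sums, over pixels, the nuclear norm (sum of singular values) of the per-pixel Jacobian whose rows are the channel gradients; this is what rewards common edge locations and aligned gradient directions across channels. A small NumPy sketch of evaluating the penalty on a multi-channel image follows; the forward-difference discretization and boundary handling are assumptions.

        import numpy as np

        def total_nuclear_variation(u):
            """TNV(u) = sum over pixels of the nuclear norm of the per-pixel
            Jacobian, for a multi-channel image u of shape (C, H, W).
            Forward differences, replicated at the image boundary."""
            gx = np.diff(u, axis=2, append=u[:, :, -1:])   # horizontal gradients
            gy = np.diff(u, axis=1, append=u[:, -1:, :])   # vertical gradients
            # Per-pixel C-by-2 Jacobians: shape (H, W, C, 2)
            J = np.stack([gx, gy], axis=-1).transpose(1, 2, 0, 3)
            # Nuclear norm = sum of singular values of each small Jacobian
            sv = np.linalg.svd(J, compute_uv=False)        # (H, W, min(C, 2))
            return sv.sum()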

    The Denoised, Deconvolved, and Decomposed Fermi γ-ray Sky - An Application of the D³PO Algorithm

    We analyze 6.5 yr of all-sky data from the Fermi LAT, restricted to gamma-ray photons with energies between 0.6 and 307.2 GeV. Raw count maps show a superposition of diffuse and point-like emission structures and are subject to shot noise and instrumental artifacts. Using the D³PO inference algorithm, we model the observed photon counts as the sum of a diffuse and a point-like photon flux, convolved with the instrumental beam and subject to Poissonian shot noise. D³PO performs Bayesian inference in this setting without the use of spatial or spectral templates; i.e., it removes the shot noise, deconvolves the instrumental response, and yields estimates for the two flux components separately. The non-parametric reconstruction uncovers the morphology of the diffuse photon flux up to several hundred GeV, and we present an all-sky spectral index map for the diffuse component. We show that the diffuse gamma-ray flux can be described phenomenologically by only two distinct components: a soft component, presumably dominated by hadronic processes, tracing the dense, cold interstellar medium, and a hard component, presumably dominated by leptonic interactions, following the hot, dilute medium and outflows such as the Fermi bubbles. A comparison of the soft component with the Galactic dust emission indicates that the dust-to-soft-gamma ratio in the interstellar medium decreases with latitude. The spectrally hard component exists in a thick Galactic disk and tends to flow out of the Galaxy at some locations. Furthermore, we find the angular power spectrum of the diffuse flux to roughly follow a power law with an index of 2.47 on large scales, independent of energy. Our first catalog of source candidates includes 3106 candidates, of which we associate 1381 (1897) with known sources from the second (third) Fermi catalog. We also observe gamma-ray emission in the direction of a few galaxy clusters hosting radio halos.
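
    The generative model being inverted here is simple to state: the observed counts are a Poisson realization of the beam-convolved sum of the diffuse and point-like flux fields, scaled by the instrument exposure. A schematic NumPy/SciPy sketch of this forward model follows; the function and argument names are illustrative, not taken from the D³PO code.

        import numpy as np
        from scipy.signal import fftconvolve

        def simulate_counts(diffuse_flux, point_flux, beam, exposure, rng=None):
            """Photon counts as Poisson shot noise on the beam-convolved
            sum of a diffuse and a point-like flux component."""
            if rng is None:
                rng = np.random.default_rng()
            sky = diffuse_flux + point_flux
            # Convolve the sky flux with the instrumental beam (PSF)
            expected = exposure * fftconvolve(sky, beam, mode="same")
            return rng.poisson(np.clip(expected, 0.0, None))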

    Experimental Comparison of Empirical Material Decomposition Methods for Spectral CT

    Material composition can be estimated from spectral information acquired using photon-counting x-ray detectors with pulse-height analysis. Non-ideal effects in photon-counting x-ray detectors such as charge sharing, K-escape, and pulse pileup distort the detected spectrum, which can cause material decomposition errors. This work compared the performance of two empirical decomposition methods: a neural network estimator and a linearized maximum likelihood estimator with correction (the A-table method). The two investigated methods differ in how they model the nonlinear relationship between the spectral measurements and the material decomposition estimates. The bias and standard deviation of the material decomposition estimates were compared for the two methods, using both simulations and experiments with a photon-counting x-ray detector. The neural network and A-table methods demonstrated similar performance on the simulated data, while the neural network had a lower standard deviation for nearly all thicknesses of the test materials in the collimated (low-scatter) and uncollimated (higher-scatter) experimental data. In the experimental study of Teflon thicknesses, non-ideal detector effects introduced a potential bias of 11–28%, which was reduced to 0.1–11% by the empirical methods. Overall, the results demonstrate preliminary experimental feasibility of empirical material decomposition for spectral CT using photon-counting detectors.
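
    As a toy illustration of the neural-network estimator idea, the sketch below trains a small regressor to map log bin counts to basis-material thicknesses on synthetic calibration data; the attenuation values, noise model, and network size are assumptions for illustration, not the configuration studied here.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Toy stand-in for a calibration scan: two basis materials, four energy
        # bins, idealized exponential attenuation plus Poisson counting noise.
        mu = np.array([[0.5, 0.3, 0.2, 0.1],      # material 1 attenuation per bin
                       [0.9, 0.6, 0.4, 0.3]])     # material 2 attenuation per bin
        thickness = rng.uniform(0.0, 5.0, size=(2000, 2))    # known thickness pairs
        counts = rng.poisson(1e4 * np.exp(-thickness @ mu))  # detected bin counts

        # Neural-network estimator: log bin counts in, thickness estimates out
        net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
        net.fit(np.log(counts + 1.0), thickness)

        test = rng.poisson(1e4 * np.exp(-np.array([[2.0, 1.0]]) @ mu))
        print(net.predict(np.log(test + 1.0)))    # should be close to [2.0, 1.0]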