
    Estimating the Spectrum in Computed Tomography Via Kullback–Leibler Divergence Constrained Optimization

    Purpose: We study the problem of spectrum estimation from transmission data of a known phantom. The goal is to reconstruct an x-ray spectrum that accurately models the measured x-ray transmission curves and reflects the realistic shape of the typical energy spectra of the CT system. Methods: Spectrum estimation is posed as an optimization problem with the x-ray spectrum as the unknown variable, and a Kullback–Leibler (KL) divergence constraint is employed to incorporate prior knowledge of the spectrum and to enhance the numerical stability of the estimation. The resulting constrained optimization problem is convex and can be solved efficiently with the exponentiated-gradient (EG) algorithm. We demonstrate the effectiveness of the proposed approach on simulated and experimental data and compare it with the expectation–maximization (EM) method. Results: In simulations, the proposed algorithm yields x-ray spectra that closely match the ground truth and reproduce the attenuation of x-ray photons in materials both included in and excluded from the estimation process. In experiments, the calculated transmission curve is in good agreement with the measured one, and the estimated spectra exhibit physically realistic shapes. The results further show comparable performance between the proposed optimization-based approach and EM. Conclusions: The constrained-optimization formulation provides an interpretable and flexible framework for spectrum estimation. Moreover, the KL-divergence constraint can incorporate a prior spectrum and appears to capture important features of the x-ray spectrum, allowing accurate and robust spectrum estimation in CT imaging.
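
    As a rough illustration of this formulation (not the authors' code), the sketch below fits a discretized spectrum to transmission measurements with exponentiated-gradient updates, replacing the explicit KL constraint by a KL penalty toward a prior spectrum; the toy forward model T = A·s and all names are assumptions made for the example.

```python
import numpy as np

# Minimal sketch: estimate a spectrum s (one value per energy bin, positive,
# summing to one) from transmissions T_meas measured through known thicknesses
# L of a material with known attenuation coefficients mu.  The KL constraint is
# replaced here by a KL penalty toward the prior spectrum s0; the multiplicative
# exponentiated-gradient step keeps s positive and renormalizes it.

def estimate_spectrum(T_meas, mu, L, s0, lam=1e-2, eta=0.1, n_iter=5000):
    """T_meas: measured transmissions, shape (M,)
       mu:     attenuation coefficients per energy bin, shape (K,)
       L:      traversed thicknesses, shape (M,)
       s0:     prior spectrum (positive, sums to one), shape (K,)"""
    A = np.exp(-np.outer(L, mu))        # forward model: T = A @ s
    s = s0.copy()
    for _ in range(n_iter):
        r = A @ s - T_meas                                # data residual
        grad = A.T @ r + lam * (np.log(s / s0) + 1.0)     # misfit + KL-penalty gradient
        s = s * np.exp(-eta * grad)                       # EG step, s stays > 0
        s /= s.sum()                                      # renormalize onto the simplex
    return s
```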

    Measurability of kinetic temperature from metal absorption-line spectra formed in chaotic media

    We present a new method for recovering the kinetic temperature of intervening diffuse gas to an accuracy of 10%. The method is based on the comparison of unsaturated absorption-line profiles of two species with different atomic weights, which are assumed to share the same temperature and bulk motion within the absorbing region. The computational technique involves a Fourier transform of the absorption profiles followed by an entropy-regularized chi^2 minimization (ERM) to estimate the model parameters. The procedure is tested on synthetic spectra of C II, Si II and Fe II ions. A comparison with the standard Voigt-fitting analysis shows that Voigt deconvolution of complex absorption-line profiles may yield estimated temperatures that are not physical. We also successfully analyze Keck telescope spectra of the C II 1334 and Si II 1260 lines observed at redshift z = 3.572 toward the quasar Q1937-1009 by Tytler et al.
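
    The paper's Fourier-transform and entropy-regularized minimization pipeline is not reproduced here; the sketch below only illustrates the physical relation the method exploits, namely that two co-spatial species of different atomic weight share one kinetic temperature and one turbulent broadening, so their Doppler b-parameters determine both. The function name and the numerical values are illustrative.

```python
import numpy as np

K_B = 1.380649e-23       # Boltzmann constant [J/K]
AMU = 1.66053906660e-27  # atomic mass unit [kg]

def kinetic_temperature(b_light_kms, m_light_amu, b_heavy_kms, m_heavy_amu):
    """Kinetic temperature and turbulent broadening from the Doppler
    b-parameters (km/s) of a light and a heavy species that share T and the
    same Gaussian bulk motion: b_i^2 = 2 k T / m_i + b_turb^2."""
    b1, b2 = b_light_kms * 1e3, b_heavy_kms * 1e3     # m/s
    m1, m2 = m_light_amu * AMU, m_heavy_amu * AMU     # kg
    T = (b1**2 - b2**2) * m1 * m2 / (2.0 * K_B * (m2 - m1))
    b_turb = np.sqrt(b2**2 - 2.0 * K_B * T / m2)
    return T, b_turb / 1e3                            # [K], [km/s]

# Illustrative numbers only: C II (12 amu) is broader than Si II (28 amu).
print(kinetic_temperature(10.0, 12.0, 9.0, 28.0))
```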

    Component separation methods for the Planck mission

    The Planck satellite will map the full sky at nine frequencies from 30 to 857 GHz. The CMB intensity and polarization that are its prime targets are contaminated by foreground emission. The goal of this paper is to compare proposed methods for separating the CMB from foregrounds, based on their different spectral and spatial characteristics, and for separating the foregrounds into components of different physical origin. A component separation challenge has been organized, based on a set of realistically complex simulations of sky emission. Several methods have been tested, including internal template subtraction, the maximum entropy method, parametric methods, spatial and harmonic cross-correlation methods, and independent component analysis. Different methods proved effective at cleaning the CMB maps of foreground contamination, reconstructing maps of diffuse Galactic emission, and detecting point sources and thermal Sunyaev-Zeldovich signals. The power spectrum of the residuals is, on the largest scales, four orders of magnitude below the input Galactic power spectrum at the foreground minimum. The CMB power spectrum was accurately recovered up to the sixth acoustic peak. The point-source detection limit reaches 100 mJy, and about 2300 clusters are detected via the thermal SZ effect over two thirds of the sky. We find that no single method performs best for all scientific objectives. We foresee that the final component separation pipeline for Planck will involve a combination of methods and iterations between processing steps targeted at different objectives such as diffuse component separation, spectral estimation and compact source extraction.
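
    As a minimal illustration of this family of techniques (not one of the pipelines evaluated in the challenge), the sketch below forms an internal linear combination of the frequency maps: weights constrained to sum to one preserve the frequency-independent CMB while minimizing the variance of the combined map. The array layout is an assumption of the example.

```python
import numpy as np

def ilc_cmb(maps):
    """Internal-linear-combination estimate of the CMB from multi-frequency
    maps (shape (n_freq, n_pix), thermodynamic temperature units): choose
    weights summing to one, which preserves the frequency-independent CMB,
    while minimizing the variance contributed by foregrounds and noise."""
    C = np.cov(maps)                          # (n_freq, n_freq) empirical covariance
    cinv_one = np.linalg.solve(C, np.ones(maps.shape[0]))
    w = cinv_one / cinv_one.sum()             # w = C^-1 1 / (1^T C^-1 1)
    return w @ maps                           # estimated CMB map, shape (n_pix,)
```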

    Likelihood Analysis of Power Spectra and Generalized Moment Problems

    We develop an approach to spectral estimation that has been advocated by Ferrante, Masiero and Pavon and, in the context of the scalar-valued covariance extension problem, by Enqvist and Karlsson. The aim is to determine the power spectrum that is consistent with given moments and minimizes the relative entropy between the probability law of the underlying Gaussian stochastic process and that of a prior. The approach is analogous to the framework of earlier work by Byrnes, Georgiou and Lindquist, and it can also be viewed as a generalization of the classical work by Burg and Jaynes on the maximum entropy method. In the present paper we give a new fast algorithm for the general case (i.e., for general Gaussian priors) and show that for priors with a specific structure the solution can be given in closed form.
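
    The paper's fast algorithm for general Gaussian priors is not reproduced here; the sketch below illustrates only the classical Burg–Jaynes special case it generalizes, in which the maximum-entropy spectrum consistent with a finite set of covariance moments is autoregressive and follows from the Yule–Walker equations.

```python
import numpy as np

def max_entropy_spectrum(c, n_grid=512):
    """Classical maximum-entropy spectrum from covariance lags
    c = [c_0, ..., c_n] of a real stationary process: solve the Yule-Walker
    equations for the AR(n) coefficients and evaluate the AR spectrum."""
    n = len(c) - 1
    R = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])  # Toeplitz block
    r = np.asarray(c[1:])
    a = np.linalg.solve(R, r)               # AR model: x_t = sum_k a_k x_{t-k} + e_t
    sigma2 = c[0] - a @ r                   # innovation variance
    theta = np.linspace(-np.pi, np.pi, n_grid)
    A = 1.0 - np.exp(-1j * np.outer(theta, np.arange(1, n + 1))) @ a
    return theta, sigma2 / np.abs(A) ** 2   # Phi(theta) = sigma^2 / |A(e^{i*theta})|^2
```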

    Spectral Norm Regularization for Improving the Generalizability of Deep Learning

    We investigate the generalizability of deep learning models through their sensitivity to input perturbations. We hypothesize that models that are highly sensitive to perturbations of the input data generalize poorly to unseen data. To reduce this sensitivity, we propose a simple and effective regularization method, referred to as spectral norm regularization, which penalizes the spectral norm (largest singular value) of the weight matrices in a neural network. We provide supporting evidence for this hypothesis by experimentally confirming that models trained with spectral norm regularization generalize better than models trained with baseline regularization methods.
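
    A minimal sketch of such a penalty, assuming a PyTorch model: the largest singular value of each fully connected weight matrix is approximated by a few power-iteration steps and its square is added to the training loss. The helper names and hyperparameter values are illustrative, and convolutional kernels (which would first need reshaping) are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def spectral_norm(W, n_power_iter=3):
    """Approximate the largest singular value of a 2-D weight matrix W
    with a few steps of power iteration."""
    u = torch.randn(W.shape[0], device=W.device)
    for _ in range(n_power_iter):
        v = F.normalize(W.t() @ u, dim=0)
        u = F.normalize(W @ v, dim=0)
    return u @ (W @ v)

def spectral_norm_penalty(model, lambda_reg=0.01):
    """Sum of squared spectral norms of the 2-D weight matrices, scaled by
    lambda_reg / 2; add this to the task loss before backpropagation."""
    penalty = 0.0
    for p in model.parameters():
        if p.dim() == 2:
            penalty = penalty + spectral_norm(p) ** 2
    return 0.5 * lambda_reg * penalty

# Usage inside a training step:
#   loss = criterion(model(x), y) + spectral_norm_penalty(model)
#   loss.backward()
```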

    Foreground separation using a flexible maximum-entropy algorithm: an application to COBE data

    A flexible maximum-entropy component separation algorithm is presented that accommodates anisotropic noise, incomplete sky coverage and uncertainties in the spectral parameters of the foregrounds. The capabilities of the method are determined by first applying it to simulated spherical microwave data sets emulating the COBE-DMR, COBE-DIRBE and Haslam surveys. Using these simulations we find that it is very difficult to determine the spectral parameters of the Galactic components unambiguously for this data set, owing to its high noise level. Nevertheless, we show that it is possible to obtain a robust CMB reconstruction, especially at high Galactic latitudes. The method is then applied to the real data sets to obtain reconstructions of the CMB component and the Galactic foreground emission over the whole sky. The best reconstructions are found for the spectral parameter values T_d = 19 K, alpha_d = 2, beta_ff = -0.19 and beta_syn = -0.8. The CMB map is recovered with an estimated statistical error of about 22 μK on an angular scale of 7 degrees outside the Galactic cut, whereas the low-Galactic-latitude region remains contaminated by foreground emission.
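
    The maximum-entropy machinery itself is not reproduced here; the sketch below only illustrates the underlying linear data model, in which each observed frequency map is a mixture of a flat CMB component and foregrounds whose frequency scalings are set by the spectral parameters quoted above, and solves it per pixel by noise-weighted least squares as a simplified stand-in for the entropic prior.

```python
import numpy as np

def separate_pixel(d, F, noise_var):
    """Noise-weighted least-squares component amplitudes at one pixel.
       d:         observed values at the n_freq frequencies
       F:         (n_freq, n_comp) mixing matrix built from the assumed spectral
                  scalings (flat CMB, grey-body dust set by T_d and alpha_d,
                  power-law free-free and synchrotron with beta_ff and beta_syn)
       noise_var: per-frequency noise variances"""
    Ninv = np.diag(1.0 / np.asarray(noise_var))
    normal = F.T @ Ninv @ F
    return np.linalg.solve(normal, F.T @ Ninv @ np.asarray(d))
```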

    Minimum Relative Entropy for Quantum Estimation: Feasibility and General Solution

    We propose a general framework for solving quantum state estimation problems using the minimum relative entropy criterion. A convex optimization approach allows us to decide the feasibility of the problem given the data and, whenever necessary, to relax the constraints so as to allow a physically admissible solution. Building on these results, the variational analysis can be completed, ensuring existence and uniqueness of the optimum. The latter can then be computed by standard, efficient algorithms for convex optimization, without resorting to approximate methods or restrictive assumptions on its rank.
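
    As a pointer to the objective being minimized, the sketch below evaluates the quantum relative entropy between a candidate density matrix and a prior state; the feasibility analysis and the constrained convex optimization described in the paper are not reproduced, and the example states are illustrative.

```python
import numpy as np
from scipy.linalg import logm

def quantum_relative_entropy(rho, sigma):
    """S(rho || sigma) = Tr[rho (log rho - log sigma)] (natural logarithm),
    for density matrices rho, sigma (Hermitian, positive definite, trace one)."""
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

# Illustrative qubit example: candidate state vs. maximally mixed prior.
rho = np.array([[0.7, 0.1], [0.1, 0.3]], dtype=complex)
sigma = np.eye(2, dtype=complex) / 2.0
print(quantum_relative_entropy(rho, sigma))
```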