
    The Denoised, Deconvolved, and Decomposed Fermi γ-ray sky - An application of the D³PO algorithm

    We analyze the 6.5 yr all-sky data from the Fermi LAT restricted to gamma-ray photons with energies between 0.6 and 307.2 GeV. Raw count maps show a superposition of diffuse and point-like emission structures and are subject to shot noise and instrumental artifacts. Using the D3PO inference algorithm, we model the observed photon counts as the sum of a diffuse and a point-like photon flux, convolved with the instrumental beam and subject to Poissonian shot noise. D3PO performs a Bayesian inference in this setting without the use of spatial or spectral templates; i.e., it removes the shot noise, deconvolves the instrumental response, and yields estimates for the two flux components separately. The non-parametric reconstruction uncovers the morphology of the diffuse photon flux up to several hundred GeV. We present an all-sky spectral index map for the diffuse component. We show that the diffuse gamma-ray flux can be described phenomenologically by only two distinct components: a soft component, presumably dominated by hadronic processes, tracing the dense, cold interstellar medium, and a hard component, presumably dominated by leptonic interactions, following the hot and dilute medium and outflows such as the Fermi bubbles. A comparison of the soft component with the Galactic dust emission indicates that the dust-to-soft-gamma ratio in the interstellar medium decreases with latitude. The spectrally hard component exists in a thick Galactic disk and tends to flow out of the Galaxy at some locations. Furthermore, we find the angular power spectrum of the diffuse flux to roughly follow a power law with an index of 2.47 on large scales, independent of energy. Our first catalog of source candidates includes 3106 candidates, of which we associate 1381 (1897) with known sources from the 2nd (3rd) Fermi catalog. We observe gamma-ray emission in the direction of a few galaxy clusters hosting radio halos. Comment: re-submission after referee report (A&A); 17 pages, many colorful figures, 4 tables; bug fixed, flux scale now consistent with Fermi, even lower residual level, pDF -> 1DF source catalog, tentative detection of a few clusters of galaxies, online material at http://www.mpa-garching.mpg.de/ift/fermi
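    The generative model just described can be sketched compactly in code. The following Python snippet simulates toy data according to that forward model only (a diffuse plus a point-like flux, convolution with the instrumental beam, Poissonian shot noise); it is not the D3PO inference itself, and the pixelization, beam width, and flux amplitudes are illustrative assumptions.

        # Toy forward model: counts = Poisson( beam * (diffuse flux + point-like flux) )
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(42)
        npix = 256                                   # 1D "sky" for illustration

        # Diffuse component: log-normal field with smooth spatial correlations
        diffuse_flux = np.exp(gaussian_filter(rng.normal(size=npix), sigma=10))

        # Point-like component: a few bright sources on isolated pixels
        point_flux = np.zeros(npix)
        point_flux[rng.choice(npix, size=5, replace=False)] = rng.exponential(50.0, size=5)

        # Instrument response: convolution with the beam (here a Gaussian PSF)
        expected_counts = gaussian_filter(diffuse_flux + point_flux, sigma=3.0)

        # Poissonian shot noise produces the raw count map
        counts = rng.poisson(expected_counts)
        print("total observed counts:", counts.sum())

    D3PO inverts this kind of forward model, inferring the diffuse and point-like flux components jointly from the observed counts.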

    Simulation of stochastic network dynamics via entropic matching

    The simulation of complex stochastic network dynamics arising, for instance, from models of coupled biomolecular processes remains computationally challenging. Often, the necessity to scan a model's dynamics over a large parameter space renders full-fledged stochastic simulations impractical, motivating approximation schemes. Here we propose an approximation scheme which improves upon the standard linear noise approximation while retaining similar computational complexity. The underlying idea is to minimize, at each time step, the Kullback-Leibler divergence between the true time-evolved probability distribution and a Gaussian approximation (entropic matching). This condition leads to ordinary differential equations for the mean and the covariance matrix of the Gaussian. For cases of weak nonlinearity, the method is more accurate than the linear noise approximation when both are compared to stochastic simulations. Comment: 23 pages, 6 figures; significantly revised version
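    The following Python sketch shows the kind of moment equations such a scheme yields, using a Gaussian moment closure for a toy one-species process (constant production at rate c with step +1, nonlinear degradation with propensity k*x^2 and step -1). It only illustrates how a mean and a variance can be propagated by ordinary differential equations instead of running full stochastic simulations; it is not the paper's entropic-matching derivation, and the rates are made up.

        # Gaussian moment-closure ODEs for a toy birth/nonlinear-death process
        import numpy as np
        from scipy.integrate import solve_ivp

        c, k = 10.0, 0.01   # illustrative reaction rates

        def moment_odes(t, y):
            m, v = y
            # Gaussian expectations: <x^2> = m^2 + v, <x^3> = m^3 + 3*m*v
            dm = c - k * (m**2 + v)                    # d<x>/dt
            dv = c + k * (m**2 + v) - 4.0 * k * m * v  # d Var(x)/dt under the closure
            return [dm, dv]

        sol = solve_ivp(moment_odes, t_span=(0.0, 5.0), y0=[0.0, 0.0])
        print("final mean ~", sol.y[0, -1], " variance ~", sol.y[1, -1])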

    Fast and precise way to calculate the posterior for the local non-Gaussianity parameter $f_\text{nl}$ from cosmic microwave background observations

    We present an approximate calculation of the full Bayesian posterior probability distribution for the local non-Gaussianity parameter $f_\text{nl}$ from observations of cosmic microwave background anisotropies within the framework of information field theory. The approximation that we introduce allows us to dispense with numerically expensive sampling techniques. We use a posterior validation method that is novel in cosmology (the DIP test) to test the precision of our method. It transfers inaccuracies of the calculated posterior into deviations from a uniform distribution for a specially constructed test quantity. For this procedure we study toy cases that use one- and two-dimensional flat skies, as well as the full spherical sky. We find that we are able to calculate the posterior precisely under a flat-sky approximation, albeit not in the spherical case. We argue that this is most likely due to insufficient precision of the numerical implementation of the spherical harmonic transform that we use, which might affect other non-Gaussianity estimators as well. Furthermore, we show how a nonlinear reconstruction of the primordial gravitational potential on the full spherical sky can be obtained in principle. Using the flat-sky approximation, we find deviations of the posterior of $f_\text{nl}$ from a Gaussian shape that become more significant for larger values of the underlying true $f_\text{nl}$. We also perform a comparison to the well-known estimator of Komatsu et al. [Astrophys. J. 634, 14 (2005)] and finally derive the posterior for the local non-Gaussianity parameter $g_\text{nl}$ as an example of how to extend the introduced formalism to higher orders of non-Gaussianity.
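    The validation idea can be illustrated with a generic posterior check: if a calculated posterior is exact, the posterior cumulative distribution evaluated at the true parameter value, with the truth drawn from the prior and the data simulated from it, is uniformly distributed. The one-parameter Gaussian toy model in the Python sketch below is an assumption chosen for simplicity and merely stands in for the actual $f_\text{nl}$ posterior; deviations from uniformity would flag an inaccurate posterior, in the same spirit as the test quantity the DIP test constructs.

        # Posterior-validation sketch: posterior CDF at the truth should be Uniform(0, 1)
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        sigma_prior, sigma_noise, n_trials = 1.0, 0.5, 2000
        u = np.empty(n_trials)

        for i in range(n_trials):
            f_true = rng.normal(0.0, sigma_prior)        # draw "true" parameter from the prior
            d = f_true + rng.normal(0.0, sigma_noise)    # simulate one data point
            # Conjugate Gaussian posterior of the toy model
            post_var = 1.0 / (1.0 / sigma_prior**2 + 1.0 / sigma_noise**2)
            post_mean = post_var * d / sigma_noise**2
            u[i] = stats.norm.cdf(f_true, loc=post_mean, scale=np.sqrt(post_var))

        # Test the collected values for uniformity
        print(stats.kstest(u, "uniform"))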

    Signal inference with unknown response: Calibration-uncertainty renormalized estimator

    The calibration of a measurement device is crucial for every scientific experiment in which a signal has to be inferred from data. We present CURE, the calibration-uncertainty renormalized estimator, which reconstructs a signal and simultaneously the instrument's calibration from the same data, without knowing the exact calibration but only its covariance structure. The idea of CURE, developed in the framework of information field theory, is to start with an assumed calibration and successively include more and more portions of the calibration uncertainty in the signal inference equations, absorbing the resulting corrections into renormalized signal (and calibration) solutions. The combined signal inference and calibration problem thereby turns into solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify CURE by applying it to a simplistic toy example and compare it against existing self-calibration schemes, Wiener filter solutions, and Markov Chain Monte Carlo sampling. We conclude that the method matches the accuracy of the best self-calibration methods and serves as a non-iterative alternative to them.
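    A minimal sketch of the problem setting follows, assuming a scalar gain with known mean and variance as the uncertain calibration. It contrasts the reconstruction obtained with the assumed calibration (the starting point described above) with a brute-force Monte Carlo marginalization over the calibration uncertainty; CURE replaces such marginalization by its renormalized, perturbative solution, which is not reproduced here. All model choices and numbers are illustrative assumptions.

        # Signal reconstruction with an uncertain calibration: d = g*s + n
        import numpy as np

        rng = np.random.default_rng(7)
        n = 128
        signal_true = np.cumsum(rng.normal(0.0, 0.1, n))   # smooth toy signal
        g_mean, g_std = 1.0, 0.2                            # known calibration statistics
        g_true = rng.normal(g_mean, g_std)                  # actual (unknown) calibration
        noise_var = 0.05**2
        data = g_true * signal_true + rng.normal(0.0, np.sqrt(noise_var), n)

        prior_var = 1.0                                     # toy iid Gaussian signal prior

        def wiener(d, g):
            # Posterior-mean signal for d = g*s + n with the iid Gaussian prior
            return g * prior_var / (g**2 * prior_var + noise_var) * d

        s_assumed = wiener(data, g_mean)                    # uses the assumed calibration only
        # Brute-force marginalization over the calibration uncertainty
        s_marginal = np.mean([wiener(data, g) for g in rng.normal(g_mean, g_std, 500)], axis=0)

        print("rms error, assumed calibration:", np.sqrt(np.mean((s_assumed - signal_true)**2)))
        print("rms error, marginalized       :", np.sqrt(np.mean((s_marginal - signal_true)**2)))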

    Improving self-calibration

    Response calibration is the process of inferring how much the measured data depend on the signal one is interested in. It is essential for any quantitative signal estimation on the basis of the data. Here, we investigate self-calibration methods for linear signal measurements and a linear dependence of the response on the calibration parameters. The common practice is to augment an external calibration solution, obtained with a known reference signal, by an internal calibration on the unknown measurement signal itself. Contemporary self-calibration schemes try to find a self-consistent solution for signal and calibration by exploiting redundancies in the measurements. This can be understood as maximizing the joint probability of signal and calibration. However, the full uncertainty structure of this joint probability around its maximum is not taken into account by these schemes. Better schemes -- in the sense of minimal squared error -- can therefore be designed by accounting for asymmetries in the uncertainty of signal and calibration. We argue that at least a systematic correction of the common self-calibration scheme should be applied in many measurement situations in order to properly treat the uncertainty of the signal on which one calibrates. Otherwise the calibration solutions suffer from a systematic bias, which consequently distorts the signal reconstruction. Furthermore, we argue that non-parametric, signal-to-noise filtered calibration should provide more accurate reconstructions than the common bin averages, and we provide a new, improved self-calibration scheme. We illustrate our findings with a simplistic numerical example. Comment: 17 pages, 3 figures, revised version, title change
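    For concreteness, the sketch below implements the common self-calibration loop the abstract refers to: a Wiener-filter signal step given the current calibration, alternating with a least-squares calibration step given the current signal estimate. The toy setup (a calibrated reference observation plus an uncalibrated observation of the same signal, with a single scalar gain) and all numbers are illustrative assumptions; the paper's point is precisely that such maximum-joint-probability schemes ignore the uncertainty of the signal on which one calibrates and therefore acquire a bias, which its improved scheme corrects.

        # Common self-calibration loop: alternate signal and gain estimates.
        # Toy data: d1 = s + n1 (reference, unit gain) and d2 = gain * s + n2 (uncalibrated).
        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        t = np.linspace(0.0, 1.0, n)
        signal_true = np.sin(2 * np.pi * 3 * t)        # unknown signal
        gain_true = 1.3                                # unknown calibration
        noise_var, prior_var = 0.1**2, 1.0             # noise and iid signal prior variances

        d1 = signal_true + rng.normal(0.0, np.sqrt(noise_var), n)
        d2 = gain_true * signal_true + rng.normal(0.0, np.sqrt(noise_var), n)

        gain = 1.0                                     # start from an assumed calibration
        for _ in range(20):
            # Signal step: Wiener filter combining both observations at the current gain
            post_var = 1.0 / (1.0 / prior_var + (1.0 + gain**2) / noise_var)
            signal_est = post_var * (d1 + gain * d2) / noise_var
            # Calibration step: least-squares gain given the current signal estimate
            gain = (signal_est @ d2) / (signal_est @ signal_est)

        print("estimated gain:", gain, "true gain:", gain_true)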