D3PO - Denoising, Deconvolving, and Decomposing Photon Observations
The analysis of astronomical images is a non-trivial task. The D3PO algorithm
addresses the inference problem of denoising, deconvolving, and decomposing
photon observations. Its primary goal is the simultaneous but individual
reconstruction of the diffuse and point-like photon flux given a single photon
count image, where the fluxes are superimposed. In order to discriminate
between these morphologically different signal components, a probabilistic
algorithm is derived in the language of information field theory based on a
hierarchical Bayesian parameter model. The signal inference exploits prior
information on the spatial correlation structure of the diffuse component and
the brightness distribution of the spatially uncorrelated point-like sources. A
maximum a posteriori solution and a solution minimizing the Gibbs free energy
of the inference problem using variational Bayesian methods are discussed.
Since the derivation of the solution does not depend on the underlying
position space, the implementation of the D3PO algorithm uses the NIFTY package
to ensure applicability to various spatial grids at any resolution. The
fidelity of the algorithm is validated by the analysis of simulated data,
including a realistic high energy photon count image showing a 32 x 32 arcmin^2
observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO
algorithm successfully denoised, deconvolved, and decomposed the data into a
diffuse and a point-like signal estimate for the respective photon flux
components.
Comment: 22 pages, 8 figures, 2 tables, accepted by Astronomy & Astrophysics; refereed version, 1 figure added, results unchanged, software available at http://www.mpa-garching.mpg.de/ift/d3po
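The underlying inference problem can be illustrated with a small toy model. The following sketch (assuming a 1-D grid, a Gaussian beam, and simplistic toy priors; it is not the actual D3PO implementation linked above) reconstructs diffuse and point-like log-fluxes from Poisson counts by maximum a posteriori:

```python
# Toy 1-D analogue of the D3PO setting (illustrative only): photon counts
# d ~ Poisson(R(e^s + e^u)), with a smoothness prior on the diffuse log-flux s
# and a crude sparsity-promoting penalty on the point-like log-flux u.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import minimize

rng = np.random.default_rng(1)
npix = 64
x = np.linspace(0.0, 1.0, npix)

s_true = 1.5 * np.sin(4.0 * np.pi * x)                  # smooth diffuse log-flux
u_true = np.full(npix, -8.0)                            # point-like log-flux, mostly "off"
u_true[[10, 32, 50]] = 3.0                              # three point sources
beam = lambda f: gaussian_filter1d(f, sigma=1.5)        # toy instrument response R
d = rng.poisson(beam(np.exp(s_true) + np.exp(u_true)))  # observed photon counts

def neg_log_posterior(params, alpha=5.0, beta=0.1):
    s, u = params[:npix], params[npix:]
    lam = beam(np.exp(s) + np.exp(u)) + 1e-12           # expected counts R(e^s + e^u)
    nll = np.sum(lam - d * np.log(lam))                 # Poisson likelihood (up to a constant)
    smooth = alpha * np.sum(np.diff(s)**2)              # correlated prior on the diffuse field
    sparse = beta * np.sum(np.exp(u))                   # toy penalty on total point-like flux
    return nll + smooth + sparse

res = minimize(neg_log_posterior, np.zeros(2 * npix),
               method="L-BFGS-B", bounds=[(-10.0, 10.0)] * (2 * npix))
s_map, u_map = res.x[:npix], res.x[npix:]
print("diffuse-flux RMS error:", np.sqrt(np.mean((np.exp(s_map) - np.exp(s_true))**2)))
```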
GeV excess and phenomenological astrophysics modeling
Predefined spatial templates describing the background of γ-ray
emission from astrophysical processes, such as cosmic-ray interactions, have been
used in previous searches for the γ-ray signatures of annihilating Galactic
dark matter. In this proceeding, we investigate the GeV excess in the inner
Galaxy using an alternative approach, in which the astrophysical components are
identified solely by their spectral and morphological properties. We confirm
the reported GeV excess and derive the parameters of a dark matter
interpretation, which are consistent with previous results. We investigate the
morphology of this spectral excess as preferred by the data alone. This emission
component exhibits a central Galaxy cusp as expected for a dark matter
annihilation signal. However, Galactic disk regions with a morphology
resembling that of the hot interstellar medium also host such a spectral
component. This points to a possible astrophysical origin of the excess and
calls for a more detailed understanding of astrophysical γ-ray emitting
processes in the Galactic center region before definite claims about a dark
matter annihilation signal can be made.
Comment: 5 pages, 4 figures. Prepared for the proceedings of the TAUP Conference 2015, Turin
The Denoised, Deconvolved, and Decomposed Fermi γ-ray sky - An application of the D3PO algorithm
We analyze the 6.5 yr all-sky data from the Fermi LAT, restricted to gamma-ray
photons with energies between 0.6 and 307.2 GeV. Raw count maps show a superposition
of diffuse and point-like emission structures and are subject to shot noise and
instrumental artifacts. Using the D3PO inference algorithm, we model the
observed photon counts as the sum of a diffuse and a point-like photon flux,
convolved with the instrumental beam and subject to Poissonian shot noise. D3PO
performs a Bayesian inference in this setting without the use of spatial or
spectral templates; i.e., it removes the shot noise, deconvolves the
instrumental response, and yields estimates for the two flux components
separately. The non-parametric reconstruction uncovers the morphology of the
diffuse photon flux up to several hundred GeV. We present an all-sky spectral
index map for the diffuse component. We show that the diffuse gamma-ray flux
can be described phenomenologically by only two distinct components: a soft
component, presumably dominated by hadronic processes, tracing the dense, cold
interstellar medium and a hard component, presumably dominated by leptonic
interactions, following the hot and dilute medium and outflows such as the
Fermi bubbles. A comparison of the soft component with the Galactic dust
emission indicates that the dust-to-soft-gamma ratio in the interstellar medium
decreases with latitude. The spectrally hard component exists in a thick
Galactic disk and tends to flow out of the Galaxy at some locations.
Furthermore, we find the angular power spectrum of the diffuse flux to roughly
follow a power law with an index of 2.47 on large scales, independent of
energy. Our first catalog of source candidates includes 3106 candidates of
which we associate 1381 (1897) with known sources from the 2nd (3rd) Fermi
catalog. We observe gamma-ray emission in the direction of a few galaxy
clusters hosting radio halos.
Comment: re-submission after referee report (A&A); 17 pages, many colorful figures, 4 tables; bug fixed, flux scale now consistent with Fermi, even lower residual level, pDF -> 1DF source catalog, tentative detection of a few clusters of galaxies, online material at http://www.mpa-garching.mpg.de/ift/fermi
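The two-component phenomenological description suggests a simple per-pixel illustration: fit each pixel's spectrum as a non-negative superposition of a soft and a hard power law. In the sketch below the spectral indices and fluxes are placeholder assumptions, not the values derived in the paper:

```python
# Per-pixel two-component spectral decomposition via non-negative least squares.
import numpy as np
from scipy.optimize import nnls

energies = np.geomspace(0.6, 307.2, 9)       # GeV, the analyzed energy band
p_soft, p_hard = 2.6, 2.1                    # assumed illustrative spectral indices
A = np.column_stack([energies**-p_soft, energies**-p_hard])

def decompose_pixel(flux):
    """Non-negative amplitudes of the soft and hard components for one pixel."""
    amplitudes, _ = nnls(A, flux)
    return amplitudes

# Synthetic pixel dominated by the soft (hadronic-like) component:
flux = 3.0 * energies**-p_soft + 0.4 * energies**-p_hard
print(decompose_pixel(flux))                 # ~ [3.0, 0.4]
```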
Simulation of stochastic network dynamics via entropic matching
The simulation of complex stochastic network dynamics arising, for instance,
from models of coupled biomolecular processes remains computationally
challenging. Often, the necessity to scan a model's dynamics over a large
parameter space renders full-fledged stochastic simulations impractical,
motivating approximation schemes. Here we propose an approximation scheme which
improves upon the standard linear noise approximation while retaining similar
computational complexity. The underlying idea is to minimize, at each time
step, the Kullback-Leibler divergence between the true time evolved probability
distribution and a Gaussian approximation (entropic matching). This condition
leads to ordinary differential equations for the mean and the covariance matrix
of the Gaussian. For cases of weak nonlinearity, the method is more accurate
than the linear noise approximation when both are compared to stochastic simulations.
Comment: 23 pages, 6 figures; significantly revised version
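The closure idea can be sketched for a one-dimensional diffusion with drift f and noise intensity q (toy assumptions): imposing a Gaussian ansatz N(m, V) yields coupled ODEs for the mean and variance. This is a generic Gaussian moment closure in the same spirit as the entropic matching described above, not the paper's exact equations for reaction networks:

```python
# Gaussian moment closure for dx = f(x) dt + sqrt(q) dW (illustrative sketch).
import numpy as np
from scipy.integrate import solve_ivp

f = lambda x: x - x**3                      # weakly nonlinear drift (toy choice)
q = 0.1                                     # noise intensity (toy choice)
nodes, weights = np.polynomial.hermite_e.hermegauss(20)

def gauss_mean(g, m, V):
    """Expectation of g(x) under N(m, V) via Gauss-Hermite quadrature."""
    return weights @ g(m + np.sqrt(V) * nodes) / weights.sum()

def moment_odes(t, y):
    m, V = y[0], max(y[1], 1e-12)           # keep the variance non-negative
    dm = gauss_mean(f, m, V)                                     # dm/dt = E[f(x)]
    dV = 2.0 * gauss_mean(lambda x: (x - m) * f(x), m, V) + q    # variance flow
    return [dm, dV]

sol = solve_ivp(moment_odes, (0.0, 5.0), [1.5, 0.01])
print("mean and variance at t=5:", sol.y[:, -1])
```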
Fast and precise way to calculate the posterior for the local non-Gaussianity parameter f_NL from cosmic microwave background observations
We present an approximate calculation of the full Bayesian posterior
probability distribution for the local non-Gaussianity parameter f_NL
from observations of cosmic microwave background anisotropies
within the framework of information field theory. The approximation that we
introduce allows us to dispense with numerically expensive sampling techniques.
We use a novel posterior validation method (DIP test) in cosmology to test the
precision of our method. It transfers inaccuracies of the calculated posterior
into deviations from a uniform distribution for a specially constructed test
quantity. For this procedure we study toy cases that use one- and
two-dimensional flat skies, as well as the full spherical sky. We find that we
are able to calculate the posterior precisely under a flat-sky approximation,
albeit not in the spherical case. We argue that this is most likely due to an
insufficient precision of the used numerical implementation of the spherical
harmonic transform, which might affect other non-Gaussianity estimators as
well. Furthermore, we show how a nonlinear reconstruction of the primordial
gravitational potential on the full spherical sky can be obtained in principle.
Using the flat-sky approximation, we find deviations of the posterior of
f_NL from a Gaussian shape that become more significant for larger
values of the underlying true f_NL. We also perform a comparison to
the well-known estimator of Komatsu et al. [Astrophys. J. 634, 14 (2005)] and
finally derive the posterior for the local non-Gaussianity parameter
g_NL as an example of how to extend the introduced formalism to
higher orders of non-Gaussianity.
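The generic principle behind such posterior validation can be sketched in a conjugate Gaussian toy model (an assumption for illustration; the DIP test's actual construction is more specific): if the calculated posterior is exact, its CDF evaluated at the true parameter value is uniformly distributed over repeated simulations:

```python
# Posterior-calibration check: uniform CDF values indicate an accurate posterior.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma_p, sigma_n, n_runs = 1.0, 0.5, 2000    # prior/noise widths (toy choices)
cdf_values = []
for _ in range(n_runs):
    p_true = rng.normal(0.0, sigma_p)        # draw the "truth" from the prior
    d = p_true + rng.normal(0.0, sigma_n)    # simulate a datum d = p + n
    var_post = 1.0 / (1.0 / sigma_p**2 + 1.0 / sigma_n**2)   # conjugate posterior
    mean_post = var_post * d / sigma_n**2
    cdf_values.append(stats.norm.cdf(p_true, mean_post, np.sqrt(var_post)))

# A correct posterior gives uniform CDF values; deviations flag inaccuracies.
print(stats.kstest(cdf_values, "uniform"))
```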
Signal inference with unknown response: Calibration-uncertainty renormalized estimator
The calibration of a measurement device is crucial for every scientific
experiment, where a signal has to be inferred from data. We present CURE, the
calibration uncertainty renormalized estimator, to reconstruct a signal and
simultaneously the instrument's calibration from the same data, without knowing
the exact calibration but only its covariance structure. The idea of CURE,
developed in the framework of information field theory, is to start from an
assumed calibration and successively include more and more of the
calibration uncertainty in the signal inference equations, absorbing the
resulting corrections into renormalized signal (and calibration) solutions.
Thereby, the signal inference and calibration problem turns into solving a
single system of ordinary differential equations and can be identified with
common resummation techniques used in field theories. We verify CURE by
applying it to a simplistic toy example and compare it against existent
self-calibration schemes, Wiener filter solutions, and Markov Chain Monte Carlo
sampling. We conclude that the method matches the accuracy of the best
self-calibration methods and serves as a non-iterative alternative to them.
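The effect CURE accounts for can be seen in a scalar toy model (illustration only; this is direct numerical marginalization, not CURE's renormalization flow): calibration uncertainty shifts the optimal signal estimate away from the naive fixed-calibration Wiener filter:

```python
# Scalar toy model: datum d = (1 + g) s + n with signal prior s ~ N(0, S),
# noise n ~ N(0, N), and unknown calibration g ~ N(0, sg2).
import numpy as np

S, N, sg2, d = 1.0, 0.1, 0.2, 1.2
s = np.linspace(-5.0, 5.0, 2001)
ds = s[1] - s[0]

var_d = N + sg2 * s**2                       # Var(d | s) after marginalizing g
post = np.exp(-0.5 * (d - s)**2 / var_d) / np.sqrt(var_d) * np.exp(-0.5 * s**2 / S)
post /= post.sum() * ds                      # normalize on the grid

m_marg = (s * post).sum() * ds               # calibration-marginalized posterior mean
m_wiener = S / (S + N) * d                   # naive Wiener filter, calibration fixed
print(f"marginalized mean: {m_marg:.3f}, naive Wiener mean: {m_wiener:.3f}")
```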
Improving self-calibration
Response calibration is the process of inferring how much the measured data
depend on the signal one is interested in. It is essential for any quantitative
signal estimation on the basis of the data. Here, we investigate
self-calibration methods for linear signal measurements and linear dependence
of the response on the calibration parameters. The common practice is to
augment an external calibration solution using a known reference signal with an
internal calibration on the unknown measurement signal itself. Contemporary
self-calibration schemes try to find a self-consistent solution for signal and
calibration by exploiting redundancies in the measurements. This can be
understood in terms of maximizing the joint probability of signal and
calibration. However, the full uncertainty structure of this joint probability
around its maximum is thereby not taken into account by these schemes.
Therefore, better schemes -- in the sense of minimal squared error -- can be designed
by accounting for asymmetries in the uncertainty of signal and calibration. We
argue that at least a systematic correction of the common self-calibration
scheme should be applied in many measurement situations in order to properly
treat uncertainties of the signal on which one calibrates. Otherwise the
calibration solutions suffer from a systematic bias, which consequently
distorts the signal reconstruction. Furthermore, we argue that non-parametric,
signal-to-noise filtered calibration should provide more accurate
reconstructions than the common bin averages, and we provide a new, improved
self-calibration scheme. We illustrate our findings with a simplistic numerical
example.
Comment: 17 pages, 3 figures, revised version, title changed
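The contemporary self-calibration scheme described above can be sketched with a single unknown gain (all model choices here are illustrative assumptions): alternate a Wiener filter for the signal given the gain with a MAP update of the gain given the signal, which converges to the joint maximum:

```python
# Alternating (joint-MAP style) self-calibration with a scalar gain.
import numpy as np

rng = np.random.default_rng(2)
npix, S, N, g0, g_prior_var = 256, 1.0, 0.5, 1.0, 0.01
s_true = rng.normal(0.0, np.sqrt(S), npix)
g_true = 1.3
d = g_true * s_true + rng.normal(0.0, np.sqrt(N), npix)

g = g0                                        # start from the external calibration
for _ in range(50):
    s = (g * S / (g**2 * S + N)) * d          # Wiener filter for s given the gain g
    # MAP gain given s, with Gaussian prior g ~ N(g0, g_prior_var):
    g = (s @ d / N + g0 / g_prior_var) / (s @ s / N + 1.0 / g_prior_var)

# The jointly maximized gain overshoots the truth: the systematic bias
# of plain self-calibration that the abstract warns about.
print(f"recovered gain: {g:.3f}  (true gain: {g_true})")
```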
Improving stochastic estimates with inference methods: calculating matrix diagonals
Estimating the diagonal entries of a matrix that is not directly accessible
but only available as a linear operator in the form of a computer routine, is a
common necessity in many computational applications, especially in image
reconstruction and statistical inference. Here, methods of statistical
inference are used to improve the accuracy or the computational costs of matrix
probing methods to estimate matrix diagonals. In particular, the generalized
Wiener filter methodology, as developed within information field theory, is
shown to significantly improve estimates based on only a few sampling probes,
in cases in which some form of continuity of the solution can be assumed. The
strength, length scale, and precise functional form of the exploited
autocorrelation function of the matrix diagonal are determined from the probes
themselves. The developed algorithm is successfully applied to mock and
real-world problems. These performance tests show that, in situations where a matrix
diagonal has to be calculated from only a small number of computationally
expensive probes, a speedup by a factor of 2 to 10 is possible with the
proposed method.
Comment: 9 pages, 6 figures, accepted by Phys. Rev. E; introduction revised, results unchanged; page proofs implemented
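The baseline that is improved upon is standard stochastic matrix probing: for Rademacher probes z, the entrywise product z * (A z) is an unbiased estimate of diag(A). A minimal sketch follows (the paper's Wiener-filter smoothing of such probes is not reproduced here):

```python
# Stochastic diagonal estimation from matrix-vector products only.
import numpy as np

rng = np.random.default_rng(3)

def probe_diagonal(apply_A, n, n_probes=10):
    """Estimate diag(A) using only the linear operator apply_A(v)."""
    acc = np.zeros(n)
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
        acc += z * apply_A(z)                  # E[z * (A z)] = diag(A)
    return acc / n_probes

# Example with an operator whose diagonal varies smoothly:
n = 200
A = np.diag(np.sin(np.linspace(0, np.pi, n)) + 1.5) + 0.01 * rng.normal(size=(n, n))
est = probe_diagonal(lambda v: A @ v, n)
print("rms error:", np.sqrt(np.mean((est - np.diag(A))**2)))
```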