Maximum Entropy and Bayesian Data Analysis: Entropic Priors
The problem of assigning probability distributions that objectively reflect
the prior information available about experiments is one of the major stumbling
blocks in the use of Bayesian methods of data analysis. In this paper the
method of Maximum (relative) Entropy (ME) is used to translate the information
contained in the known form of the likelihood into a prior distribution for
Bayesian inference. The argument is inspired and guided by intuition gained
from the successful use of ME methods in statistical mechanics. For experiments
that cannot be repeated the resulting "entropic prior" is formally identical
with the Einstein fluctuation formula. For repeatable experiments, however, the
expected value of the entropy of the likelihood turns out to be relevant
information that must be included in the analysis. The important case of a
Gaussian likelihood is treated in detail.
Comment: 23 pages, 2 figures
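The abstract leaves the functional form implicit; purely as an illustration, the short Python sketch below evaluates one form of entropic prior that appears in this literature, pi(theta) proportional to exp(alpha * S(theta)) * sqrt(det g(theta)), for the Gaussian-likelihood case the paper highlights. The trade-off constant alpha, the grid, and all numerical values are assumptions for demonstration, not quantities taken from the paper.

```python
import numpy as np

# Minimal sketch of an entropic prior for a Gaussian likelihood N(mu, sigma^2).
# Assumed form (an illustration, not a transcription of the paper):
#   pi(theta) ~ exp(alpha * S(theta)) * sqrt(det g(theta)),
# where S is the entropy of the likelihood and g the Fisher information metric.

def gaussian_entropy(sigma):
    """Differential entropy of N(mu, sigma^2): 0.5 * ln(2*pi*e*sigma^2)."""
    return 0.5 * np.log(2.0 * np.pi * np.e * sigma**2)

def fisher_volume(sigma):
    """sqrt(det g) for theta = (mu, sigma): g = diag(1/sigma^2, 2/sigma^2),
    hence sqrt(det g) = sqrt(2) / sigma^2 (the joint Jeffreys measure)."""
    return np.sqrt(2.0) / sigma**2

def entropic_prior(sigma, alpha):
    """Unnormalized prior on sigma; exp(alpha * S) ~ sigma^alpha, so the
    density scales as sigma^(alpha - 2)."""
    return np.exp(alpha * gaussian_entropy(sigma)) * fisher_volume(sigma)

sigma = np.linspace(0.1, 5.0, 500)
for alpha in (0.0, 1.0, 2.0):
    p = entropic_prior(sigma, alpha)
    p /= np.trapz(p, sigma)                      # normalize on the grid
    mass = np.trapz(p[sigma <= 1.0], sigma[sigma <= 1.0])
    print(f"alpha={alpha}: prior mass at sigma <= 1 is {mass:.3f}")
```

At alpha = 0 this sketch reduces to the Jeffreys measure proportional to 1/sigma^2; increasing alpha shifts weight toward broader (higher-entropy) likelihoods, which is the qualitative effect the entropy term is meant to encode.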
Magnetic Field Amplification in Galaxy Clusters and its Simulation
We review the present theoretical and numerical understanding of magnetic
field amplification in cosmic large-scale structure, on length scales of galaxy
clusters and beyond. Structure formation drives compression and turbulence,
which amplify tiny magnetic seed fields to the microgauss values that are
observed in the intracluster medium. This process is intimately connected to
the properties of turbulence and the microphysics of the intracluster medium.
Additional roles are played by merger-induced shocks that sweep through the
intracluster medium and by motions induced by sloshing cool cores. The accurate
simulation of magnetic field amplification in clusters still poses a serious
challenge for simulations of cosmological structure formation. We review the
current literature on cosmological simulations that include magnetic fields and
outline theoretical and numerical challenges.
Comment: 60 pages, 19 figures
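To make the amplification argument concrete, here is a back-of-the-envelope Python toy model, not any simulation from the review: logistic growth of magnetic energy at a crude dynamo rate Gamma ~ v/l, saturating at an assumed fraction of the turbulent kinetic energy density. Every parameter value (density, velocity, eddy scale, seed field, saturation fraction) is an assumed, merely ICM-like number.

```python
import math

# Toy small-scale-dynamo model: exponential kinematic growth at rate
# Gamma ~ v / l, saturating near a fraction eps of the turbulent kinetic
# energy density. All values are assumed ICM-like numbers for illustration;
# none come from the simulations discussed in the review.

rho = 1e-27      # g/cm^3, ICM gas density (assumed)
v = 3e7          # cm/s, turbulent velocity ~ 300 km/s (assumed)
l = 3e22         # cm, driving scale ~ 10 kpc (assumed)
eps = 0.05       # saturation fraction of kinetic energy (assumed)
B_seed = 1e-18   # Gauss, tiny seed field (assumed)

E_kin = 0.5 * rho * v**2            # turbulent kinetic energy density, erg/cm^3
gamma = v / l                        # crude dynamo growth rate, 1/s
E_B = B_seed**2 / (8.0 * math.pi)    # magnetic energy density, erg/cm^3

gyr = 3.15e16                        # seconds per Gyr
dt = 1e-3 * gyr
steps = int(10 * gyr / dt)           # integrate for 10 Gyr
for _ in range(steps):
    # logistic form: pure exponential while E_B << eps * E_kin, then saturation
    E_B += dt * 2.0 * gamma * E_B * (1.0 - E_B / (eps * E_kin))

B = math.sqrt(8.0 * math.pi * E_B)   # back to field strength, Gauss
print(f"field after 10 Gyr: {B * 1e6:.2f} microgauss")
```

With these assumed numbers the field saturates at a few tenths of a microgauss within roughly a gigayear; the point of the toy is only that an exponential growth phase comfortably bridges the many orders of magnitude between seed fields and the observed microgauss values within a cluster lifetime.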
Personality traits and mental disorders
Peer reviewed