On holographic dark-energy models
Different holographic dark-energy models are studied from a unifying point of
view. We compare models for which the Hubble scale, the future event horizon or
a quantity proportional to the Ricci scale are taken as the infrared cutoff
length. We demonstrate that the mere definition of the holographic dark-energy
density generally implies an interaction with the dark-matter component. We
discuss the relation between the equation-of-state parameter and the energy
density ratio of both components for each of the choices, as well as the
possibility of non-interacting and scaling solutions. Parameter estimations for
all three cutoff options are performed with the help of a Bayesian statistical
analysis, using data from supernovae type Ia and the history of the Hubble
parameter. The ΛCDM model is the clear winner of the analysis.
According to the Bayesian Information Criterion (BIC), all holographic models
should be considered as ruled out, since the difference ΔBIC to the
corresponding ΛCDM value is large. According to the Akaike Information
Criterion (AIC), however, the ΔAIC values we find for models with
Hubble-scale and Ricci-scale cutoffs are small, indicating that they may still
be competitive. As we show for the example of the Ricci-scale case, the use
of certain priors, reducing the number of free parameters to that of the
ΛCDM model, may also result in a competitive holographic model.
Comment: 37 pages, 11 figures, 3 tables, statistical analysis improved,
accepted for publication in Phys. Rev.
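The model comparison above rests on the standard definitions AIC = 2k − 2 ln L_max and BIC = k ln N − 2 ln L_max, where k is the number of free parameters, N the number of data points, and L_max the best-fit likelihood. A minimal sketch of computing ΔAIC and ΔBIC relative to a reference model; the model names, parameter counts, and log-likelihood values below are made-up illustrations, not the paper's results:

```python
import math

def aic(k, log_lmax):
    # Akaike Information Criterion: AIC = 2k - 2 ln L_max
    return 2 * k - 2 * log_lmax

def bic(k, n, log_lmax):
    # Bayesian Information Criterion: BIC = k ln N - 2 ln L_max
    return k * math.log(n) - 2 * log_lmax

# Hypothetical inputs for illustration only: sample size n and,
# per model, the pair (k, ln L_max) from a best fit.
n = 580
models = {"LCDM": (2, -270.0), "Hubble": (3, -270.5), "Ricci": (3, -271.0)}

base_aic = aic(*models["LCDM"])
base_bic = bic(models["LCDM"][0], n, models["LCDM"][1])
for name, (k, ll) in models.items():
    d_aic = aic(k, ll) - base_aic
    d_bic = bic(k, n, ll) - base_bic
    print(f"{name}: dAIC = {d_aic:.2f}, dBIC = {d_bic:.2f}")
```

Because BIC penalizes each extra parameter by ln N rather than 2, it disfavors the larger models more strongly than AIC does for any sizeable data set, which is why the two criteria can disagree as they do here.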
Data Assimilation: A Mathematical Introduction
These notes provide a systematic mathematical treatment of the subject of
data assimilation.
Inverse Problems and Data Assimilation
These notes are designed with the aim of providing a clear and concise
introduction to the subjects of Inverse Problems and Data Assimilation, and
their inter-relations, together with citations to some relevant literature in
this area. The first half of the notes is dedicated to studying the Bayesian
framework for inverse problems. Techniques such as importance sampling and
Markov Chain Monte Carlo (MCMC) methods are introduced; these methods have the
desirable property that in the limit of an infinite number of samples they
reproduce the full posterior distribution. Since these methods are often
computationally intensive to implement, especially in high-dimensional
problems, approximations of the posterior by a Dirac or a Gaussian
distribution are discussed. The second half of the notes covers data
assimilation. This refers to a particular class of inverse problems in which
the unknown parameter is the initial condition of a dynamical system, and in
the stochastic dynamics case the subsequent states of the system, and the data
comprises partial and noisy observations of that (possibly stochastic)
dynamical system. We will also demonstrate that methods developed in data
assimilation may be employed to study generic inverse problems, by introducing
an artificial time to generate a sequence of probability measures interpolating
from the prior to the posterior.
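The MCMC approach described above can be made concrete with a random-walk Metropolis sampler for a toy Bayesian inverse problem. Everything in this sketch is an illustrative assumption (identity forward map, Gaussian prior and noise, the specific numbers), not an example from the notes:

```python
import math
import random

random.seed(0)

# Toy inverse problem: unknown u with prior N(0, 1), identity forward
# map G(u) = u, and one noisy observation y = G(u_true) + noise,
# where the noise has standard deviation sigma.
u_true, sigma = 1.5, 0.3
y = u_true + 0.1  # fixed "observed" value, for reproducibility

def log_posterior(u):
    # log prior + log likelihood, up to an additive constant
    return -0.5 * u ** 2 - 0.5 * ((y - u) / sigma) ** 2

# Random-walk Metropolis: in the limit of infinitely many samples the
# chain reproduces the full posterior distribution.
u, step, samples = 0.0, 0.5, []
for _ in range(20000):
    prop = u + step * random.gauss(0.0, 1.0)
    if math.log(random.random()) < log_posterior(prop) - log_posterior(u):
        u = prop  # accept the proposal
    samples.append(u)  # on rejection, the current state is repeated

burned = samples[5000:]  # discard burn-in
posterior_mean = sum(burned) / len(burned)
# For this conjugate Gaussian setup the exact posterior mean is
# y / (1 + sigma^2), so the chain can be checked against it.
print(f"MCMC mean: {posterior_mean:.3f}, exact: {y / (1 + sigma**2):.3f}")
```

The Gaussian setup is chosen so the posterior is known in closed form; in a realistic data-assimilation problem the forward map would be a dynamical system and the same accept/reject loop would apply unchanged.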
Free Energy Methods for Bayesian Inference: Efficient Exploration of Univariate Gaussian Mixture Posteriors
Because of their multimodality, mixture posterior distributions are difficult
to sample with standard Markov chain Monte Carlo (MCMC) methods. We propose a
strategy to enhance the sampling of MCMC in this context, using a biasing
procedure which originates from computational Statistical Physics. The
principle is first to choose a "reaction coordinate", that is, a "direction" in
which the target distribution is multimodal. In a second step, the marginal
log-density of the reaction coordinate with respect to the posterior
distribution is estimated; minus this quantity is called "free energy" in the
computational Statistical Physics literature. To this end, we use adaptive
biasing Markov chain algorithms which adapt their targeted invariant
distribution on the fly, in order to overcome sampling barriers along the
chosen reaction coordinate. Finally, we perform an importance sampling step in
order to remove the bias and recover the true posterior. The efficiency factor
of the importance sampling step can easily be estimated \emph{a priori} once
the bias is known, and appears to be rather large for the test cases we
considered. A crucial point is the choice of the reaction coordinate. One
standard choice (used for example in the classical Wang-Landau algorithm) is
minus the log-posterior density. We discuss other choices. We show in
particular that the hyper-parameter that determines the order of magnitude of
the variance of each component is both a convenient and an efficient reaction
coordinate. We also show how to adapt the method to compute the evidence
(marginal likelihood) of a mixture model. We illustrate our approach by
analyzing two real data sets.
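The final reweighting step described above is standard self-normalized importance sampling: samples drawn from the biased density π(x) e^{A(ξ(x))} receive weights e^{−A(ξ(x))}, so that weighted averages recover expectations under the true posterior π. A self-contained toy sketch with a one-dimensional standard-normal target and a quadratic bias of our own choosing (none of it from the paper):

```python
import math
import random

random.seed(0)

# Target "posterior": pi(x) proportional to exp(-x^2 / 2), i.e. N(0, 1).
# Reaction coordinate xi(x) = x, with biasing potential A(x) = x^2 / 4.
# The biased density pi(x) * exp(A(x)) is proportional to exp(-x^2 / 4),
# i.e. N(0, 2), so we can sample it directly in this toy case.
def A(x):
    return x * x / 4.0  # chosen bias; it flattens (widens) the target

samples = [random.gauss(0.0, math.sqrt(2.0)) for _ in range(200000)]
weights = [math.exp(-A(x)) for x in samples]  # remove the bias

# Self-normalized importance sampling estimate of E[x^2] under pi
# (the exact value for N(0, 1) is 1).
wsum = sum(weights)
second_moment = sum(w * x * x for w, x in zip(weights, samples)) / wsum

# Efficiency factor of the reweighting step: effective sample size / N,
# computable a priori once the bias (hence the weights) is known.
ess_fraction = wsum ** 2 / (sum(w * w for w in weights) * len(samples))
print(f"E[x^2] ~ {second_moment:.3f}, ESS fraction ~ {ess_fraction:.2f}")
```

In the paper's setting the biased samples come from an adaptive biasing Markov chain rather than a direct draw, but the reweighting and the effective-sample-size diagnostic are the same.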