Why Bayesian "evidence for H1" in one condition and Bayesian "evidence for H0" in another condition does not mean good-enough Bayesian evidence for a difference between the conditions
Psychologists are often interested in whether an independent variable has a different effect in condition A than in condition B. To test such a question, one needs to directly compare the effect of that variable in the two conditions (i.e., test the interaction). Yet many researchers tend to stop when they find a significant test in one condition and a nonsignificant test in the other condition, deeming this as sufficient evidence for a difference between the two conditions. In this Tutorial, we aim to raise awareness of this inferential mistake when Bayes factors are used with conventional cutoffs to draw conclusions. For instance, some researchers might falsely conclude that there must be good-enough evidence for the interaction if they find good-enough Bayesian evidence for the alternative hypothesis, H1, in condition A and good-enough Bayesian evidence for the null hypothesis, H0, in condition B. The case study we introduce highlights that ignoring the test of the interaction can lead to unjustified conclusions and demonstrates that the principle that any assertion about the existence of an interaction necessitates the direct comparison of the conditions is as true for Bayesian as it is for frequentist statistics. We provide an R script of the analyses of the case study and a Shiny app that can be used with a 2 x 2 design to develop intuitions on this issue, and we introduce a rule of thumb with which one can estimate the sample size one might need to have a well-powered design.
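The pattern the Tutorial warns about is easy to reproduce numerically. The sketch below is a hypothetical 2 x 2 simulation (not the paper's R script or case study) that uses the BIC approximation to the Bayes factor to test the effect within each condition and the interaction; the effect sizes, cell size, and seed are all invented for illustration:

```python
import numpy as np

def bf10_bic(rss0, rss1, n, k0, k1):
    """Approximate Bayes factor BF10 for nested linear models via BIC:
    BF10 ~= exp((BIC0 - BIC1) / 2), with BIC = n ln(RSS/n) + k ln(n)."""
    bic0 = n * np.log(rss0 / n) + k0 * np.log(n)
    bic1 = n * np.log(rss1 / n) + k1 * np.log(n)
    return np.exp((bic0 - bic1) / 2.0)

def rss(y, X):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

def bf_effect(ctrl, trt):
    """BF10 for a mean difference within one condition (effect model
    vs. intercept-only model)."""
    y = np.concatenate([ctrl, trt])
    x = np.concatenate([np.zeros_like(ctrl), np.ones_like(trt)])
    X1 = np.column_stack([np.ones_like(y), x])   # intercept + effect
    X0 = X1[:, :1]                               # intercept only
    return bf10_bic(rss(y, X0), rss(y, X1), len(y), 1, 2)

# Invented 2 x 2 design: a large effect in condition A, a small one in B
rng = np.random.default_rng(1)
n_cell = 40
a_ctrl, a_trt = rng.normal(0, 1, n_cell), rng.normal(0.9, 1, n_cell)
b_ctrl, b_trt = rng.normal(0, 1, n_cell), rng.normal(0.25, 1, n_cell)

# Interaction test: full 2 x 2 model vs. model without the interaction term
y = np.concatenate([a_ctrl, a_trt, b_ctrl, b_trt])
cond = np.repeat([0.0, 0.0, 1.0, 1.0], n_cell)        # A vs. B
treat = np.tile(np.repeat([0.0, 1.0], n_cell), 2)     # control vs. treatment
X_add = np.column_stack([np.ones_like(y), cond, treat])
X_full = np.column_stack([X_add, cond * treat])
bf_inter = bf10_bic(rss(y, X_add), rss(y, X_full), len(y), 3, 4)

print(f"BF10 in condition A: {bf_effect(a_ctrl, a_trt):.2f}")
print(f"BF10 in condition B: {bf_effect(b_ctrl, b_trt):.2f}")
print(f"BF10 for the interaction: {bf_inter:.2f}")
```

Comparing the three printed Bayes factors side by side makes the point: only the third one speaks to the interaction, and it need not clear a conventional cutoff just because the first two point in opposite directions.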
Bayesian Analysis of Inflation II: Model Selection and Constraints on Reheating
We discuss the model selection problem for inflationary cosmology. We couple
ModeCode, a publicly-available numerical solver for the primordial perturbation
spectra, to the nested sampler MultiNest, in order to efficiently compute
Bayesian evidence. Particular attention is paid to the specification of
physically realistic priors, including the parametrization of the
post-inflationary expansion and associated thermalization scale. It is
confirmed that while present-day data tightly constrains the properties of the
power spectrum, it cannot usefully distinguish between the members of a large
class of simple inflationary models. We also compute evidence using a simulated
Planck likelihood, showing that while Planck will have more power than WMAP to
discriminate between inflationary models, it will not definitively address the
inflationary model selection problem on its own. However, Planck will place
very tight constraints on any model with more than one observationally-distinct
inflationary regime -- e.g. the large- and small-field limits of the hilltop
inflation model -- and put useful limits on different reheating scenarios for a
given model.

Comment: ModeCode package available from
http://zuserver2.star.ucl.ac.uk/~hiranya/ModeCode/ModeCode (requires CosmoMC
and MultiNest); to be published in PRD. Typos fixed.
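For intuition on the quantity ModeCode and MultiNest are computing, here is a deliberately tiny evidence calculation: a one-dimensional toy with a Gaussian likelihood and a uniform prior, integrated by brute-force Monte Carlo rather than nested sampling (all numbers are invented for illustration):

```python
import numpy as np
from math import erf, sqrt

# Toy evidence calculation: a single Gaussian measurement y with known
# sigma, and a uniform prior on theta over [a, b].  The evidence is
#   Z = (1/(b-a)) * integral_a^b N(y; theta, sigma) dtheta,
# the same object nested samplers estimate for much harder problems.
y, sigma, a, b = 1.3, 0.5, -2.0, 2.0

def likelihood(theta):
    return np.exp(-0.5 * ((y - theta) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Brute-force Monte Carlo over the prior (fine in 1D, hopeless in many
# dimensions -- which is why nested sampling exists)
rng = np.random.default_rng(0)
theta = rng.uniform(a, b, 200_000)
z_mc = likelihood(theta).mean()          # prior is flat, so Z ~= mean of L

# Analytic answer for comparison
Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
z_exact = (Phi((b - y) / sigma) - Phi((a - y) / sigma)) / (b - a)

print(f"Monte Carlo Z = {z_mc:.4f}, exact Z = {z_exact:.4f}")
```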
Attempts to detect retrotransposition and de novo deletion of Alus and other dispersed repeats at specific loci in the human genome
Dispersed repeat elements contribute to genome instability by de novo insertion and unequal recombination between repeats. To study the dynamics of these processes, we have developed single DNA molecule approaches to detect de novo insertions at a single locus and Alu-mediated deletions at two different loci in human genomic DNA. Validation experiments showed these approaches could detect insertions and deletions at frequencies below 10^-6 per cell. However, bulk analysis of germline (sperm) and somatic DNA showed no evidence for genuine mutant molecules, placing upper limits on the insertion and deletion rates of 2 x 10^-7 and 3 x 10^-7, respectively, in the individuals tested. Such rearrangements at these loci therefore occur at a rate lower than that detectable by the most sensitive methods currently available.
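Upper limits of this kind follow from observing zero events in a large number of molecules. A minimal sketch of that arithmetic, with an invented molecule count rather than the paper's actual one:

```python
from math import log

# "Rule of three"-style bound: if zero mutants are seen among N
# molecules, the (1 - alpha) Poisson upper limit on the per-molecule
# rate is mu_up = -ln(alpha) / N  (~= 3/N for alpha = 0.05).
# The molecule count below is illustrative, not the paper's data.
def upper_limit(n_molecules, n_events=0, alpha=0.05):
    if n_events != 0:
        raise NotImplementedError("sketch covers the zero-event case only")
    return -log(alpha) / n_molecules

print(f"95% upper limit: {upper_limit(1.5e7):.1e} per molecule")
```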
XAFS spectroscopy. I. Extracting the fine structure from the absorption spectra
Three independent techniques are used to separate the fine structure from the
absorption spectra, in which the background function is approximated by (i) a
smoothing spline, for which we propose a new, reliable criterion for
determining the smoothing parameter and a method for improving stability with
respect to variation of k_min; (ii) an interpolation spline with varied knots;
(iii) the line obtained from Bayesian smoothing. The last method takes various
prior information into account and includes a natural way to determine the
errors of the XAFS extraction. Particular attention has been given to the
estimation of uncertainties in XAFS data. Experimental noise is shown to be
essentially smaller than the errors of the background approximation, and it is
the latter that determines the variances of structural parameters in
subsequent fitting.

Comment: 16 pages, 7 figures; for the freeware XAFS analysis program, see
http://www.crosswinds.net/~klmn/viper.htm
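Technique (i) can be illustrated generically with SciPy's smoothing spline (this is not the paper's or VIPER's implementation; the synthetic spectrum and smoothing parameter below are invented):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Background removal with a smoothing spline, in the spirit of
# technique (i): approximate the smooth background through a noisy
# oscillatory absorption signal and keep the residual as the fine
# structure.  The synthetic "spectrum" is purely illustrative.
E = np.linspace(1.0, 10.0, 400)
background = 2.0 + 0.3 * E - 0.01 * E ** 2
rng = np.random.default_rng(5)
signal = background + 0.05 * np.sin(4 * np.pi * E) + rng.normal(0, 0.005, E.size)

# The smoothing factor s bounds the sum of squared residuals; choosing
# it near the expected oscillation + noise power makes the spline track
# the slow background while ignoring the fine structure.
mu0 = UnivariateSpline(E, signal, s=0.6)

chi = signal - mu0(E)   # the extracted fine structure
print(f"residual std = {chi.std():.4f}")
```

Choosing the smoothing parameter well is exactly the hard part the abstract's criterion addresses; here it is simply set by hand.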
Determining the Neutrino Mass Hierarchy with Cosmology
The combination of current large scale structure and cosmic microwave
background (CMB) anisotropies data can place strong constraints on the sum of
the neutrino masses. Here we show that future cosmic shear experiments, in
combination with CMB constraints, can provide the statistical accuracy required
to answer questions about differences in the mass of individual neutrino
species. Allowing for the possibility that the masses are non-degenerate, we
combine Fisher matrix forecasts for a weak lensing survey like Euclid with
those for the forthcoming Planck experiment. Under the assumption that the
neutrino mass splitting is described by a normal hierarchy, we find that the
combination of Planck and Euclid may reach sufficient sensitivity to put a
constraint on the mass of a single species. Using a Bayesian evidence
calculation, we find that such future experiments could provide strong
evidence for either a normal or an inverted neutrino hierarchy. Finally, we
show that if a particular neutrino hierarchy is assumed then this could bias
cosmological parameter constraints, for example the dark energy equation of
state parameter, by > 1\sigma, and the sum of masses by 2.3\sigma.

Comment: 9 pages, 6 figures, 3 tables
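The forecast machinery is simple at its core: independent experiments add at the Fisher-matrix level, and marginalised errors come from the inverse. A toy two-parameter sketch with made-up matrices (these are not real Planck or Euclid forecasts):

```python
import numpy as np

# Combining independent experiments at the Fisher-matrix level: the
# matrices simply add, F_tot = F_a + F_b, and the marginalised 1-sigma
# error on parameter i is sqrt((F_tot^-1)_ii).  The 2x2 matrices below
# are invented toys (think: sum of neutrino masses, w).
F_planck = np.array([[40.0,  8.0],
                     [ 8.0,  5.0]])
F_euclid = np.array([[25.0, -3.0],
                     [-3.0, 30.0]])

def marginalised_errors(F):
    """Marginalised 1-sigma errors from a Fisher matrix."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

for name, F in [("Planck", F_planck), ("Euclid", F_euclid),
                ("combined", F_planck + F_euclid)]:
    print(name, marginalised_errors(F))
```

Because adding a positive-definite matrix can only shrink the inverse's diagonal, the combined errors are never worse than either experiment alone, which is why such combinations are forecast jointly.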
Constructing smooth potentials of mean force, radial distribution functions and probability densities from sampled data
In this paper a method of obtaining smooth analytical estimates of
probability densities, radial distribution functions and potentials of mean
force from sampled data in a statistically controlled fashion is presented. The
approach is general and can be applied to any density of a single random
variable. The method outlined here avoids the use of histograms, which require
the specification of a physical parameter (bin size) and tend to give noisy
results. The technique is an extension of the Berg-Harris method [B.A. Berg and
R.C. Harris, Comp. Phys. Comm. 179, 443 (2008)], which is typically inaccurate
for radial distribution functions and potentials of mean force due to a
non-uniform Jacobian factor. In addition, the standard method often requires a
large number of Fourier modes to represent radial distribution functions, which
tends to lead to oscillatory fits. It is shown that the issues of poor sampling
due to a Jacobian factor can be resolved using a biased resampling scheme,
while the requirement of a large number of Fourier modes is mitigated through
an automated piecewise construction approach. The method is demonstrated by
analyzing the radial distribution functions in an energy-discretized water
model. In addition, the fitting procedure is illustrated on three more
applications for which the original Berg-Harris method is not suitable, namely,
a random variable with a discontinuous probability density, a density with long
tails, and the distribution of the first arrival times of a diffusing particle
to a sphere, which has both long tails and short-time structure. In all cases,
the resampled, piecewise analytical fit outperforms the histogram and the
original Berg-Harris method.

Comment: 14 pages, 15 figures. To appear in J. Chem. Phys.
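A generic orthogonal-series density estimator conveys the histogram-free idea (this is a plain cosine-series sketch, not the Berg-Harris method or the paper's piecewise extension; the target density and sample size are invented):

```python
import numpy as np

# Orthogonal-series density estimate on [0, 1]: the basis
# phi_k(x) = sqrt(2) cos(k pi x) is orthonormal, and the coefficients
# c_k = E[phi_k(X)] are estimated directly by sample means, so no bin
# size ever has to be chosen.
def fit_density(samples, n_modes=10):
    coeffs = [np.sqrt(2) * np.mean(np.cos(k * np.pi * samples))
              for k in range(1, n_modes + 1)]
    def fhat(x):
        out = np.ones_like(np.asarray(x, dtype=float))
        for k, c in enumerate(coeffs, start=1):
            out += c * np.sqrt(2) * np.cos(k * np.pi * x)
        return out
    return fhat

# Samples from f(x) = 2x on [0, 1] via inverse-CDF sampling (X = sqrt(U))
rng = np.random.default_rng(3)
x = np.sqrt(rng.uniform(size=100_000))
fhat = fit_density(x)
print(fhat(np.array([0.2, 0.5, 0.8])))  # should be near [0.4, 1.0, 1.6]
```

The abstract's point about oscillatory fits shows up here too: raising n_modes reduces bias but lets sampling noise in the high-order coefficients wiggle the estimate, which is what the paper's piecewise construction mitigates.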
Tests of Bayesian Model Selection Techniques for Gravitational Wave Astronomy
The analysis of gravitational wave data involves many model selection
problems. The most important example is the detection problem of selecting
between the data being consistent with instrument noise alone, or instrument
noise and a gravitational wave signal. The analysis of data from ground based
gravitational wave detectors is mostly conducted using classical statistics,
and methods such as the Neyman-Pearson criteria are used for model selection.
Future space based detectors, such as the \emph{Laser Interferometer Space
Antenna} (LISA), are expected to produce rich data streams containing the
signals from many millions of sources. Determining the number of sources that
are resolvable, and the most appropriate description of each source poses a
challenging model selection problem that may best be addressed in a Bayesian
framework. An important class of LISA sources are the millions of low-mass
binary systems within our own galaxy, tens of thousands of which will be
detectable. Not only are the number of sources unknown, but so are the number
of parameters required to model the waveforms. For example, a significant
subset of the resolvable galactic binaries will exhibit orbital frequency
evolution, while a smaller number will have measurable eccentricity. In the
Bayesian approach to model selection one needs to compute the Bayes factor
between competing models. Here we explore various methods for computing Bayes
factors in the context of determining which galactic binaries have measurable
frequency evolution. The methods explored include a Reverse Jump Markov Chain
Monte Carlo (RJMCMC) algorithm, Savage-Dickey density ratios, the Schwarz-Bayes
Information Criterion (BIC), and the Laplace approximation to the model
evidence. We find good agreement between all of the approaches.

Comment: 11 pages, 6 figures
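Of the methods listed, the Savage-Dickey density ratio is the easiest to demonstrate: for a nested parameter, BF01 equals the posterior density at the nested value divided by the prior density there. A conjugate-Gaussian toy (all numbers invented) where the ratio can be checked against the exact evidence ratio:

```python
import numpy as np

def normpdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Savage-Dickey toy model: M1 has a free parameter theta ~ N(0, tau^2);
# M0 fixes theta = 0.  With known noise variance sigma2 and sample mean
# ybar of n points, the model is conjugate, so the Savage-Dickey ratio
# can be checked against the exact evidence ratio.
tau2, sigma2, n, ybar = 1.0, 4.0, 25, 0.6

# Posterior of theta under M1 (conjugate Gaussian update)
v = 1.0 / (1.0 / tau2 + n / sigma2)
m = v * n * ybar / sigma2

# Savage-Dickey: BF01 = posterior density at theta = 0 over prior
# density at theta = 0
bf01_sd = normpdf(0.0, m, v) / normpdf(0.0, 0.0, tau2)

# Exact evidence ratio from the marginal distributions of ybar
bf01_exact = (normpdf(ybar, 0.0, sigma2 / n)
              / normpdf(ybar, 0.0, tau2 + sigma2 / n))

print(f"Savage-Dickey BF01 = {bf01_sd:.5f}, exact BF01 = {bf01_exact:.5f}")
```

In the LISA setting the posterior density at the nested value would come from MCMC samples rather than a closed form, but the identity being exploited is the same.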
Extended Heat-Fluctuation Theorems for a System with Deterministic and Stochastic Forces
Heat fluctuations over a time \tau in a non-equilibrium stationary state and
in a transient state are studied for a simple system with deterministic and
stochastic components: a Brownian particle dragged through a fluid by a
harmonic potential which is moved with constant velocity. Using a Langevin
equation, we find the exact Fourier transform of the distribution of these
fluctuations for all \tau. By a saddle-point method we obtain analytical
results for the inverse Fourier transform, which, for not too small \tau, agree
very well with numerical results from a sampling method as well as from the
fast Fourier transform algorithm. Due to the interaction of the deterministic
part of the motion of the particle in the mechanical potential with the
stochastic part of the motion caused by the fluid, the conventional heat
fluctuation theorem is, for infinite and for finite \tau, replaced by an
extended fluctuation theorem that differs noticeably and measurably from it. In
particular, for large fluctuations, the ratio of the probability for absorption
of heat (by the particle from the fluid) to the probability to supply heat (by
the particle to the fluid) is much larger here than in the conventional
fluctuation theorem.

Comment: 23 pages, 6 figures. Figures are now in color, Eq. (67) was corrected
and a footnote was added on the d-dimensional case.
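The setup is easy to simulate. A minimal Euler-Maruyama sketch of the dragged particle, with invented parameter values in natural units (this only estimates the mean heat over a window tau, not the full distribution or the saddle-point analysis):

```python
import numpy as np

# Overdamped Langevin dynamics of a Brownian particle in a harmonic trap
# moving at constant velocity v:
#   gamma dx = -k (x - v t) dt + sqrt(2 gamma kT) dW,
# started in the co-moving steady state.  The heat given to the fluid
# over a time tau is Q = W - dU, with W the work done by the moving trap.
# All parameter values are illustrative, not the paper's.
gamma, k, kT, v, tau = 1.0, 1.0, 1.0, 1.0, 5.0
dt = 1e-3
n_steps, n_traj = int(tau / dt), 2000

rng = np.random.default_rng(7)
# Steady-state initial condition: the particle lags gamma*v/k behind the
# trap centre, with thermal spread kT/k
x = rng.normal(-gamma * v / k, np.sqrt(kT / k), n_traj)
x0 = x.copy()
work = np.zeros(n_traj)
for i in range(n_steps):
    t = i * dt
    force = -k * (x - v * t)
    work += force * v * dt   # dW = (dU/dt) dt = -k (x - v t) v dt
    x += force / gamma * dt + np.sqrt(2 * kT * dt / gamma) * rng.normal(size=n_traj)

dU = 0.5 * k * (x - v * tau) ** 2 - 0.5 * k * x0 ** 2
heat = work - dU
print(f"mean heat = {heat.mean():.2f}  (expected ~ gamma v^2 tau = {gamma*v*v*tau})")
```

In steady state the mean work rate is gamma*v^2 and the mean energy change vanishes, so the mean heat over tau should come out near gamma*v^2*tau; the interesting physics of the paper lives in the tails of the heat distribution, which this sketch does not resolve.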
Gravitational oscillations in multidimensional anisotropic model with cosmological constant and their contributions into the energy of vacuum
We study classical oscillations of the background metric in a
multidimensional anisotropic Kasner model during the de Sitter stage. We
obtain the dependence of the fluctuations on the dimension of a space-time
with infinite expansion. Stability of the model can be achieved when the
number of space-like dimensions is four or more. We calculate the
contributions to the density of "vacuum energy" provided by the proper
oscillations of the background metric and compare them with the contribution
from the cosmological creation of particles due to the expansion. It turns
out that the contribution of the gravitational oscillations of the metric to
the density of "vacuum energy" should play a significant role in the de
Sitter stage.