Joining Forces of Bayesian and Frequentist Methodology: A Study for Inference in the Presence of Non-Identifiability
Increasingly complex applications involve large datasets in combination with
non-linear and high-dimensional mathematical models. In this context,
statistical inference is a challenging issue that calls for pragmatic
approaches that take advantage of both Bayesian and frequentist methods. The elegance of Bayesian methodology lies in propagating the information content of experimental data and prior assumptions into the posterior probability distribution of model predictions. However, in complex applications the experimental data and prior assumptions may constrain the posterior probability distribution insufficiently. In these situations, Bayesian Markov chain Monte Carlo sampling can be infeasible. From a frequentist point
of view, insufficient experimental data and prior assumptions can be interpreted as non-identifiability. The profile likelihood approach can detect non-identifiability and resolve it iteratively through experimental design, thereby constraining the posterior probability distribution further until Markov chain Monte Carlo sampling can be applied reliably. Using an application
from cell biology, we compare the two approaches and show that applying them in succession facilitates a realistic assessment of uncertainty in model predictions.
Comment: Article to appear in Phil. Trans. Roy. Soc.
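As a concrete illustration of the frequentist half of this programme, the following toy sketch (our own construction, not the authors' code) shows how a flat profile likelihood exposes structural non-identifiability in a model y = a*b*x, where the data constrain only the product a*b; all numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 2.0 * 1.5 * x + rng.normal(0, 0.1, x.size)   # true a = 2.0, b = 1.5

def neg_log_lik(a, b, sigma=0.1):
    # Gaussian negative log-likelihood (up to a constant) for y = a*b*x.
    r = y - a * b * x
    return 0.5 * np.sum((r / sigma) ** 2)

# Profile likelihood for a: re-optimize the nuisance parameter b at each
# fixed value of a.
a_grid = np.linspace(0.5, 5.0, 50)
profile = [minimize_scalar(lambda b: neg_log_lik(a, b)).fun for a in a_grid]

# A profile that never rises above the 95% chi-square threshold
# (Delta(-log L) = 1.92) signals structural non-identifiability: the data
# constrain only the product a*b, and MCMC over (a, b) would wander.
print("profile range:", max(profile) - min(profile))
print("non-identifiable:", max(profile) - min(profile) < 1.92)
```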
Determination of the Joint Confidence Region of Optimal Operating Conditions in Robust Design by Bootstrap Technique
Robust design has been widely recognized as a leading method in reducing
variability and improving quality. Most of the engineering statistics literature focuses on finding "point estimates" of the optimum operating
conditions for robust design. Various procedures for calculating point
estimates of the optimum operating conditions are considered. Although this
point estimation procedure is important for continuous quality improvement, the
immediate question is "how accurate are these optimum operating conditions?"
The answer is to consider interval estimation for a single variable or
joint confidence regions for multiple variables.
In this paper, with the help of the bootstrap technique, we develop
procedures for obtaining joint "confidence regions" for the optimum operating
conditions. Two different procedures using Bonferroni and multivariate normal
approximation are introduced. The proposed methods are illustrated and
substantiated using a numerical example.
Comment: Two tables, three figures
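The general bootstrap idea can be sketched under our own toy assumptions (a separable quadratic response surface and pairs resampling); this is not the paper's procedure, but it shows how both a Bonferroni rectangle and a multivariate-normal ellipse can be formed from bootstrap draws of the estimated optimum.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 60
X = rng.uniform(-1, 1, (n, 2))                       # two control factors
y = 3 - (X[:, 0] - 0.3)**2 - (X[:, 1] + 0.2)**2 + rng.normal(0, 0.2, n)

def fit_optimum(X, y):
    # Separable quadratic response surface:
    # y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2
    D = np.column_stack([np.ones(len(y)), X, X**2])
    b = np.linalg.lstsq(D, y, rcond=None)[0]
    # Stationary point: x_i* = -b_i / (2 * b_{i+2})
    return np.array([-b[1] / (2 * b[3]), -b[2] / (2 * b[4])])

opt_hat = fit_optimum(X, y)

# Resample (X_i, y_i) pairs and refit the optimum each time.
B = 2000
boot = np.array([fit_optimum(X[idx], y[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(B))])

alpha = 0.05
# (1) Bonferroni rectangle: each coordinate gets a (1 - alpha/2) interval.
lo, hi = np.percentile(boot, [100*alpha/4, 100*(1 - alpha/4)], axis=0)

# (2) Multivariate-normal ellipse from the bootstrap covariance matrix.
S_inv = np.linalg.inv(np.cov(boot.T))
d2 = np.einsum('ij,jk,ik->i', boot - opt_hat, S_inv, boot - opt_hat)
print("point estimate:", opt_hat)
print("Bonferroni region:", list(zip(lo, hi)))
print("fraction of draws in 95% ellipse:", (d2 <= chi2.ppf(0.95, 2)).mean())
```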
Application of Bayesian model averaging to measurements of the primordial power spectrum
Cosmological parameter uncertainties are often stated assuming a particular
model, neglecting the model uncertainty, even when Bayesian model selection is
unable to identify a conclusive best model. Bayesian model averaging is a
method for assessing parameter uncertainties in situations where there is also
uncertainty in the underlying model. We apply model averaging to the estimation
of the parameters associated with the primordial power spectra of curvature and
tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave background data from WMAP, ACBAR,
BOOMERanG and CBI, plus large-scale structure data from the SDSS DR7. We find
that the model-averaged 95% credible interval for the spectral index using all
of the data is 0.940 < n_s < 1.000, where n_s is specified at a pivot scale
0.015 Mpc^{-1}. For the tensors, model averaging can tighten the credible upper limit, depending on prior assumptions.
Comment: 7 pages with 7 figures included
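The mechanics of model averaging can be sketched as follows; the posterior shapes and log-evidence values below are placeholders, not results from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical posterior samples of n_s under two competing models, e.g.
# with and without tensors (placeholder Gaussians, not real chains).
ns_model1 = rng.normal(0.963, 0.012, 50000)
ns_model2 = rng.normal(0.975, 0.015, 50000)

# Log-evidences as reported by a nested sampler such as MultiNest
# (illustrative values only).
logZ = np.array([-1420.5, -1421.3])
w = np.exp(logZ - logZ.max())
w /= w.sum()                       # posterior model probabilities

# Draw from the model-averaged posterior: pick a model with probability w,
# then draw a parameter value from that model's posterior.
pick = rng.choice(2, size=100000, p=w)
samples = np.where(pick == 0,
                   rng.choice(ns_model1, 100000),
                   rng.choice(ns_model2, 100000))

lo, hi = np.percentile(samples, [2.5, 97.5])
print(f"model-averaged 95% credible interval: {lo:.3f} < n_s < {hi:.3f}")
```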
A Bayesian spatio-temporal model of panel design data: airborne particle number concentration in Brisbane, Australia
This paper outlines a methodology for semi-parametric spatio-temporal
modelling of data which is dense in time but sparse in space, obtained from a
split panel design, the most feasible approach to covering space and time with
limited equipment. The data are hourly averaged particle number concentration (PNC) and were collected as part of the Ultrafine Particles from Transport Emissions and Child Health (UPTECH) project. Two weeks of continuous
measurements were taken at each of a number of government primary schools in
the Brisbane Metropolitan Area. The monitoring equipment was moved to each school sequentially. The school data are augmented by data from long-term monitoring stations at three locations in Brisbane, Australia.
Fitting the model helps describe the spatial and temporal variability at a
subset of the UPTECH schools and the long-term monitoring sites. The temporal
variation is modelled hierarchically with penalised random walk terms, one
common to all sites and a term accounting for the remaining temporal trend at
each site. Parameter estimates and their uncertainty are computed in a
computationally efficient approximate Bayesian inference environment, R-INLA.
The temporal part of the model explains daily and weekly cycles in PNC at the
schools, which can be used to estimate the exposure of school children to
ultrafine particles (UFPs) emitted by vehicles. At each school and long-term
monitoring site, peaks in PNC can be attributed to the morning and afternoon
rush hour traffic and new particle formation events. The spatial component of the model describes the variation in mean PNC between schools and within each school ground. It is shown how the spatial model can be
expanded to identify spatial patterns at the city scale with the inclusion of
more spatial locations.
Comment: Draft of this paper presented at ISBA 2012 as a poster; part of the UPTECH project
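The penalised random-walk idea behind the temporal terms can be sketched in a few lines; the sketch below uses a second-order random walk with fixed hyperparameters on synthetic hourly data, whereas the paper fits the full hierarchical model with R-INLA.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 168                                   # one week of hourly observations
t = np.arange(T)
daily = np.sin(2 * np.pi * t / 24)        # synthetic daily cycle (log-PNC stand-in)
y = daily + rng.normal(0, 0.5, T)

# RW2 prior: precision Q = tau * D2' D2, with D2 the second-difference matrix.
D2 = np.diff(np.eye(T), n=2, axis=0)
tau, sigma2 = 50.0, 0.25                  # hyperparameters fixed here; INLA
Q = tau * D2.T @ D2                       # would integrate over them

# Gaussian likelihood => posterior mean of the latent trend in closed form:
# (Q + I/sigma2) mu = y / sigma2
mu = np.linalg.solve(Q + np.eye(T) / sigma2, y / sigma2)
print("residual sd after smoothing:", np.std(y - mu).round(3))
```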
Synthetic LISA: Simulating Time Delay Interferometry in a Model LISA
We report on three numerical experiments on the implementation of Time-Delay
Interferometry (TDI) for LISA, performed with Synthetic LISA, a C++/Python
package that we developed to simulate the LISA science process at the level of
scientific and technical requirements. Specifically, we study the laser-noise
residuals left by first-generation TDI when the LISA armlengths have a
realistic time dependence; we characterize the armlength-measurement accuracies needed for effective laser-noise cancellation in both
first- and second-generation TDI; and we estimate the quantization and
telemetry bitdepth needed for the phase measurements. Synthetic LISA generates
synthetic time series of the LISA fundamental noises, as filtered through all
the TDI observables; it also provides a streamlined module to compute the TDI
responses to gravitational waves according to a full model of TDI, including
the motion of the LISA array and the temporal and directional dependence of the
armlengths. We discuss the theoretical model that underlies the simulation, its
implementation, and its use in future investigations on system characterization
and data-analysis prototyping for LISA.
Comment: 18 pages, 14 EPS figures, REVTeX 4. Accepted PRD version. See http://www.vallis.org/syntheticlisa for information on the Synthetic LISA software package
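The laser-noise cancellation that first-generation TDI achieves can be demonstrated with a toy unequal-arm Michelson model: static, integer-sample armlengths and a single laser. Synthetic LISA itself handles time-dependent armlengths and fractional delays, so this sketch only conveys the principle.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100000
p = rng.normal(size=N)            # white laser phase noise (arbitrary units)
L2, L3 = 160, 170                 # one-way delays in samples (unequal arms)

def delay(x, d):
    # Integer-sample delay operator: (D_d x)(t) = x(t - d).
    out = np.zeros_like(x)
    out[d:] = x[:-d]
    return out

# Round-trip phase measurements along each arm (single central laser).
s1 = delay(p, 2 * L2) - p
s2 = delay(p, 2 * L3) - p

# Unequal-arm Michelson combination X: re-delaying each signal by the other
# arm's round trip makes the laser noise traverse both arms in both orders,
# so it cancels exactly for static armlengths.
X = s1 - s2 + delay(s2, 2 * L2) - delay(s1, 2 * L3)

print("rms laser noise in s1:  ", s1.std().round(3))
print("max residual in X:      ", np.abs(X[2 * (L2 + L3):]).max())  # exactly 0
```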
Retrodiction as a tool for micromaser field measurements
We use retrodictive quantum theory to describe cavity field measurements by
successive atomic detections in the micromaser. We calculate the state of the
micromaser cavity field prior to detection of sequences of atoms in either the
excited or ground state, for atoms that are initially prepared in the excited
state. This provides the probability operator measure (POM) elements that describe such sequences of measurements.
Comment: 20 pages, 4(8) figures
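A schematic construction of such POM elements in a truncated Fock basis is sketched below, with our own toy parameters and operator conventions rather than the paper's: the element for a detection sequence is built by composing the per-atom field operators, and the retrodicted state is the normalised POM element.

```python
import numpy as np

nmax = 30                               # Fock-space truncation
n = np.arange(nmax)
phi = 0.4                               # accumulated Rabi phase g * tau (toy value)

# Field operators for one atom crossing (Jaynes-Cummings, atom in |e>):
# detection in |e> leaves n unchanged, detection in |g> adds one photon.
A_e = np.diag(np.cos(phi * np.sqrt(n + 1)))
A_g = np.diag(np.sin(phi * np.sqrt(n[:-1] + 1)), k=-1)   # |n> -> |n+1>

def pom_element(outcomes):
    """POM element for a detection sequence, e.g. ('e', 'g', 'e')."""
    K = np.eye(nmax)
    for o in outcomes:
        K = (A_e if o == 'e' else A_g) @ K    # later atoms act later
    return K.conj().T @ K

# Retrodicted (pre-measurement) field state for the sequence g, g:
Pi = pom_element(('g', 'g'))
rho_retr = Pi / np.trace(Pi)
print("retrodicted mean photon number:", (n @ np.diag(rho_retr)).round(3))
```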
The effects of an alpha-2-adrenoceptor agonist, antagonist, and their combination on the blood insulin, glucose, and glucagon concentrations in insulin sensitive and dysregulated horses
Alpha-2-adrenoceptor agonists are sedatives that can cause fluctuations in serum insulin and blood glucose (BG) concentrations in horses. The objectives of this study were to investigate the effects of detomidine and vatinoxan on BG, insulin, and glucagon concentrations in horses with and without insulin dysregulation (ID). In a blinded cross-over design, eight horses with ID and eight horses without ID were assigned to each of four treatments: detomidine (0.02 mg/kg; DET), vatinoxan (0.2 mg/kg; VAT), detomidine + vatinoxan (DET + VAT), and saline control (SAL). Blood samples were taken at 0, 1, 2, 4, 6, and 8 h. Change from baseline was used as the response in modelling, and the differences between treatments were evaluated with repeated measures analysis of covariance.
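The analysis layout described (change from baseline as response, repeated measures over horses) might be set up as in the following sketch; the column names, effect sizes, and data are invented for illustration, and a random intercept per horse stands in for the repeated-measures structure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
horses = [f"h{i}" for i in range(16)]
treatments = ["SAL", "DET", "VAT", "DET+VAT"]
hours = [1, 2, 4, 6, 8]

rows = []
for h in horses:
    for trt in treatments:                 # cross-over: every horse, every treatment
        baseline = rng.normal(5.0, 0.5)    # hypothetical BG at 0 h (mmol/L)
        for t in hours:
            delta = (1.5 if "DET" in trt else 0.0) * np.exp(-t / 3) \
                    + rng.normal(0, 0.3)   # made-up treatment effect
            rows.append(dict(horse=h, treatment=trt, hour=t,
                             baseline=baseline, delta=delta))
df = pd.DataFrame(rows)

# Mixed model: fixed effects for treatment, hour and the baseline covariate;
# a random intercept per horse handles the repeated measures.
fit = smf.mixedlm("delta ~ C(treatment) * C(hour) + baseline",
                  df, groups=df["horse"]).fit()
print(fit.summary())
```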
The Kinetic Sunyaev-Zel'dovich Effect from Radiative Transfer Simulations of Patchy Reionization
We present the first calculation of the kinetic Sunyaev-Zel'dovich (kSZ)
effect due to the inhomogeneous reionization of the universe based on detailed
large-scale radiative transfer simulations of reionization. The resulting sky
power spectra peak at l=2000-8000 with maximum values of l^2 C_l ~ 1x10^{-12}. The peak scale is determined by the typical size of the
ionized regions and roughly corresponds to the ionized bubble sizes observed in
our simulations, ~5-20 Mpc. The kSZ anisotropy signal from reionization
dominates the primary CMB signal above l=3000. This predicted kSZ signal at
arcminute scales is sufficiently strong to be detectable by upcoming
experiments such as the Atacama Cosmology Telescope and the South Pole Telescope, which are expected to have ~1' resolution and ~μK sensitivity. The extended
and patchy nature of the reionization process results in a boost of the peak
signal in power by approximately one order of magnitude compared to a uniform
reionization scenario, while roughly tripling the signal compared with that
based upon the assumption of gradual but spatially uniform reionization. At
large scales the patchy kSZ signal depends largely on the ionizing source
efficiencies and the large-scale velocity fields: sources which produce photons
more efficiently yield correspondingly higher signals. The introduction of
sub-grid gas clumping in the radiative transfer simulations produces
significantly more power at small scales, and more non-Gaussian features, but
has little effect at large scales. The patchy nature of the reionization
process roughly doubles the total observed kSZ signal for l~3000-10^4 compared
to non-patchy scenarios with the same total electron-scattering optical depth.
Comment: 14 pages, 13 figures (some in color), submitted to Ap
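For readers unfamiliar with how such band powers are quoted, the following flat-sky sketch estimates l^2 C_l from a mock map; the map, normalisation details, and binning are illustrative stand-ins, not the paper's radiative-transfer-based pipeline.

```python
import numpy as np

rng = np.random.default_rng(6)
npix, fov = 512, np.radians(5.0)          # pixels and field of view (rad)

# Mock "patchy" map: white noise with power suppressed above l ~ 4000,
# standing in for a simulated kSZ map (illustration only).
kx = np.fft.fftfreq(npix, d=fov / npix) * 2 * np.pi
ell = np.hypot(*np.meshgrid(kx, kx))
fourier = np.fft.fft2(rng.normal(size=(npix, npix)))
dT = np.real(np.fft.ifft2(fourier * np.exp(-0.5 * (ell / 4000.0)**2)))

# Flat-sky estimator: C_l ~ |FFT(dT)|^2 * (pixel area)^2 / map area,
# azimuthally averaged in bands of l.
P2d = np.abs(np.fft.fft2(dT) * (fov / npix)**2)**2 / fov**2
bins = np.arange(500, 10000, 500)
idx = np.digitize(ell.ravel(), bins)
cl = np.array([P2d.ravel()[idx == i].mean() for i in range(1, len(bins))])
lmid = 0.5 * (bins[1:] + bins[:-1])
for l, c in zip(lmid, cl):
    print(f"l = {l:6.0f}   l^2 C_l = {l * l * c:.3e}")
```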