Sequential Implementation of Monte Carlo Tests with Uniformly Bounded Resampling Risk
This paper introduces an open-ended sequential algorithm for computing the
p-value of a test using Monte Carlo simulation. It guarantees that the
resampling risk, the probability of a different decision than the one based on
the theoretical p-value, is uniformly bounded by an arbitrarily small constant.
Previously suggested sequential or non-sequential algorithms, using a bounded
sample size, do not have this property. Although the algorithm is open-ended,
the expected number of steps is finite, except when the p-value is on the
threshold between rejecting and not rejecting. The algorithm is suitable as a
standard for implementing tests that require (re-)sampling. It can also be used
in other situations: to check whether a test is conservative, to iteratively
implement double bootstrap tests, and to determine the sample size required for
a certain power.
Comment: Major revision; 15 pages, 4 figures
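The stopping idea behind an open-ended sequential Monte Carlo test can be sketched as follows. This is a simplified stand-in, not the paper's exact algorithm: it keeps resampling until a Hoeffding-type confidence sequence for the p-value lies entirely on one side of the threshold, with a total error budget `eps` playing the role of the bound on the resampling risk. The generator `exceeds` and all numbers below are illustrative.

```python
import math
import random

def sequential_mc_test(exceeds, alpha=0.05, eps=1e-3, max_steps=100_000):
    """Sequentially estimate a Monte Carlo p-value, stopping once a
    confidence sequence for p excludes the threshold alpha.
    `exceeds()` returns True when a resampled statistic is at least as
    extreme as the observed one."""
    hits = 0
    for n in range(1, max_steps + 1):
        hits += exceeds()
        phat = hits / n
        # union-bounded Hoeffding radius: total error over all steps <= eps
        r = math.sqrt(math.log(2 * n * (n + 1) / eps) / (2 * n))
        if phat - r > alpha:
            return "do not reject", n, phat
        if phat + r < alpha:
            return "reject", n, phat
    return "undecided", max_steps, phat

random.seed(0)
# toy example: the true p-value is 0.5, far above alpha = 0.05,
# so the procedure should stop quickly without rejecting
decision, steps, phat = sequential_mc_test(lambda: random.random() < 0.5)
```

Because the stopping boundaries widen only logarithmically, the expected number of steps is finite whenever the true p-value is not exactly at the threshold, mirroring the property described above.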
Patchy He II reionization and the physical state of the IGM
We present a Monte-Carlo model of He II reionization by QSOs and its effect
on the thermal state of the clumpy intergalactic medium (IGM). The model
assumes that patchy reionization develops as a result of the discrete
distribution of QSOs. It includes various recipes for the propagation of the
ionizing photons, and treats photo-heating self-consistently. The model
provides the fraction of He III, the mean temperature in the IGM, and the He II
mean optical depth -- all as a function of redshift. It also predicts the
evolution of the local temperature versus density relation during reionization.
Our findings are as follows: The fraction of He III increases gradually until
it becomes close to unity at . The He II mean optical depth
decreases from at to at .
The mean temperature rises gradually between and and
declines slowly at lower redshifts. The model predicts a flattening of the
temperature-density relation with significant increase in the scatter during
reionization at . Towards the end of reionization the scatter is
reduced and a tight relation is re-established. This scatter should be
incorporated in the analysis of the Lyα forest at . Comparison
with observational results of the optical depth and the mean temperature at
moderate redshifts constrains several key physical parameters.
Comment: 18 pages, 9 figures; changed content. Accepted for publication in MNRAS
Minimax Estimation of a Normal Mean Vector for Arbitrary Quadratic Loss and Unknown Covariance Matrix
Let $X$ be an observation from a $p$-variate normal distribution ($p \ge 3$) with mean vector $\theta$ and unknown positive definite covariance matrix $\Sigma$. It is desired to estimate $\theta$ under the quadratic loss $L(\delta, \theta, \Sigma) = (\delta - \theta)^t Q (\delta - \theta) / \mathrm{tr}(Q\Sigma)$, where $Q$ is a known positive definite matrix. Estimators of the following form are considered:
$$\delta_c(X, W) = \Bigl(I - \frac{c\,\alpha\, Q^{-1} W^{-1}}{X^t W^{-1} X}\Bigr) X,$$
where $W$ is a $p \times p$ random matrix with a Wishart$(\Sigma, n)$ distribution (independent of $X$), $\alpha$ is the minimum characteristic root of $QW/(n - p - 1)$, and $c$ is a positive constant. For appropriate values of $c$, $\delta_c$ is shown to be minimax and better than the usual estimator $\delta_0(X) = X$.
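The estimator is straightforward to evaluate numerically. Below is a minimal sketch; the choices of p, n, Q, Σ and the observation are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
p, n = 5, 20
Q = np.diag(np.arange(1.0, p + 1.0))   # known positive definite loss matrix (assumed)
Sigma = np.eye(p)                      # true covariance, unknown to the estimator

X = rng.multivariate_normal(np.zeros(p), Sigma) + 2.0  # one observation
# W ~ Wishart(Sigma, n): sum of n outer products of N(0, Sigma) draws
W = sum(np.outer(z, z) for z in rng.multivariate_normal(np.zeros(p), Sigma, size=n))

def delta_c(X, W, Q, c):
    """Shrinkage estimator (I - c*alpha*Q^{-1}W^{-1} / (X' W^{-1} X)) X."""
    Winv = np.linalg.inv(W)
    # alpha: minimum characteristic root of QW/(n - p - 1)
    alpha = np.linalg.eigvals(Q @ W / (n - p - 1)).real.min()
    shrink = c * alpha / (X @ Winv @ X)
    return (np.eye(p) - shrink * np.linalg.inv(Q) @ Winv) @ X
```

Setting c = 0 recovers the usual estimator δ0(X) = X; positive c shrinks the observation in a data- and loss-dependent direction.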
Similarity between persons and related problems of profile analysis, report No. 2
Study performed under Contract N6ori-07135 with the Bureau of Naval Research.
Coherent frequentism
By representing the range of fair betting odds according to a pair of
confidence set estimators, dual probability measures on parameter space called
frequentist posteriors secure the coherence of subjective inference without any
prior distribution. The closure of the set of expected losses corresponding to
the dual frequentist posteriors constrains decisions without arbitrarily
forcing optimization under all circumstances. This decision theory reduces to
those that maximize expected utility when the pair of frequentist posteriors is
induced by an exact or approximate confidence set estimator or when an
automatic reduction rule is applied to the pair. In such cases, the resulting
frequentist posterior is coherent in the sense that, as a probability
distribution of the parameter of interest, it satisfies the axioms of the
decision-theoretic and logic-theoretic systems typically cited in support of
the Bayesian posterior. Unlike the p-value, the confidence level of an interval
hypothesis derived from such a measure is suitable as an estimator of the
indicator of hypothesis truth since it converges in sample-space probability to
1 if the hypothesis is true or to 0 otherwise under general conditions.Comment: The confidence-measure theory of inference and decision is explicitly
extended to vector parameters of interest. The derivation of upper and lower
confidence levels from valid and nonconservative set estimators is formalize
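The convergence claim can be illustrated in the simplest Gaussian case: the confidence level assigned to an interval hypothesis by a Gaussian confidence measure tends to 1 or 0 according to whether the hypothesis is true. The interval [0.4, 0.6], the sample sizes and the parameter values below are illustrative assumptions.

```python
import random
from statistics import NormalDist, mean

Phi = NormalDist().cdf

def interval_confidence(xbar, se, a, b):
    """Confidence level assigned to the interval hypothesis a <= theta <= b
    by the Gaussian confidence measure centred at xbar with spread se."""
    return Phi((b - xbar) / se) - Phi((a - xbar) / se)

random.seed(1)

def conf_at_n(theta, n, a=0.4, b=0.6):
    # n iid N(theta, 1) observations; the confidence measure is N(xbar, 1/n)
    xs = [random.gauss(theta, 1.0) for _ in range(n)]
    return interval_confidence(mean(xs), 1.0 / n ** 0.5, a, b)

inside = conf_at_n(0.5, 10_000)   # hypothesis true: confidence approaches 1
outside = conf_at_n(0.8, 10_000)  # hypothesis false: confidence approaches 0
```

In contrast, a p-value for the same hypothesis would not converge to the truth indicator under the null, which is the distinction the abstract draws.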
Inference for bounded parameters
The estimation of signal frequency count in the presence of background noise
has had much discussion in the recent physics literature, and Mandelkern [1]
brings the central issues to the statistical community, leading in turn to
extensive discussion by statisticians. The primary focus however in [1] and the
accompanying discussion is on the construction of a confidence interval. We
argue that the likelihood function and p-value function provide a
comprehensive presentation of the information available from the model and the
data. This is illustrated for Gaussian and Poisson models with lower bounds for
the mean parameter.
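For the Poisson case, a p-value (significance) function traces, for each candidate signal mean μ ≥ 0, the probability of counts at least as large as observed given a known background b. The observed count and background below are made-up numbers for illustration, not values from the paper.

```python
import math

def poisson_sf(k, lam):
    """P(Y >= k) for Y ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam ** j / math.factorial(j) for j in range(k))

def pvalue_function(mu, y_obs, background):
    """Significance function for the bounded signal mean mu >= 0:
    probability of observing at least y_obs counts under Poisson(mu + b)."""
    return poisson_sf(y_obs, mu + background)

# hypothetical data: 12 observed counts over an expected background of 3
curve = [pvalue_function(mu, 12, 3.0) for mu in (0.0, 2.0, 5.0, 10.0)]
```

The function is monotone in μ, so it presents the evidence for every value of the bounded parameter at once rather than committing to a single interval.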
De-contamination of cosmological 21-cm maps
We present a method for extracting the expected cosmological 21-cm signal
from the epoch of reionization, taking into account contaminating radiations
and random instrumental noise. The method is based on the maximum a-posteriori
probability (MAP) formalism and employs the coherence of the contaminating
radiation along the line-of-sight and the three-dimensional correlations of the
cosmological signal. We test the method using a detailed and comprehensive
modeling of the cosmological 21-cm signal and the contaminating radiation. The
signal is obtained using a high resolution N-body simulation where the gas is
assumed to trace the dark matter and is reionized by stellar radiation computed
from semi-analytic galaxy formation recipes. We model contaminations to the
cosmological signal from synchrotron and free-free galactic foregrounds and
extragalactic sources including active galactic nuclei, radio haloes and
relics, synchrotron and free-free emission from star forming galaxies, and
free-free emission from dark matter haloes and the intergalactic medium. We
provide tests of the reconstruction method for several rms values of
instrumental noise from to 250 mK. For low instrumental noise,
the recovered signal, along individual lines-of-sight, fits the true
cosmological signal with a mean rms difference of
for mK, and for mK.
The one-dimensional power spectrum is nicely reconstructed for all values of
considered here, while the reconstruction of the two-dimensional
power spectrum and the Minkowski functionals is good only for noise levels of
the order of a few mK.
Comment: 19 pages, 17 figures; accepted for publication in MNRAS
Detection and extraction of signals from the epoch of reionization using higher-order one-point statistics
Detecting redshifted 21-cm emission from neutral hydrogen in the early Universe promises to give direct constraints on the epoch of reionization (EoR). It will, though, be very challenging to extract the cosmological signal (CS) from foregrounds and noise which are orders of magnitude larger. Fortunately, the signal has some characteristics which differentiate it from the foregrounds and noise, and we suggest that using the correct statistics may tease out signatures of reionization. We generate mock data cubes simulating the output of the Low Frequency Array (LOFAR) EoR experiment. These cubes combine realistic models for Galactic and extragalactic foregrounds and the noise with three different simulations of the CS. We fit out the foregrounds, which are smooth in the frequency direction, to produce residual images in each frequency band. We denoise these images and study the skewness of the one-point distribution in the images as a function of frequency. We find that, under sufficiently optimistic assumptions, we can recover the main features of the redshift evolution of the skewness in the 21-cm signal. We argue that some of these features -- such as a dip at the onset of reionization, followed by a rise towards its later stages -- may be generic, and give us a promising route to a statistical detection of reionization.
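The pipeline described above, fitting out smooth foregrounds in frequency and then taking the skewness of the residuals, can be sketched on a toy cube. The foreground shape, the skewed signal model and the cube dimensions below are all invented for illustration and are much simpler than the LOFAR mock data.

```python
import numpy as np

rng = np.random.default_rng(3)
nfreq, npix = 32, 2000
freq = np.linspace(0.0, 1.0, nfreq)

# smooth (low-order polynomial) foreground, plus a positively skewed signal
fg = 100.0 + 30.0 * freq[:, None] - 10.0 * freq[:, None] ** 2
signal = rng.exponential(1.0, (nfreq, npix)) - 1.0   # zero mean, skewness 2
cube = fg + signal + 0.1 * rng.standard_normal((nfreq, npix))

# fit out the smooth frequency behaviour pixel by pixel (cubic in frequency)
V = np.vander(freq, 4)                    # design matrix for a cubic
coef, *_ = np.linalg.lstsq(V, cube, rcond=None)
residual = cube - V @ coef

def skewness(a):
    a = a - a.mean()
    return (a ** 3).mean() / (a ** 2).mean() ** 1.5

# one-point skewness of the residual image in each frequency band
skew_per_band = [skewness(residual[i]) for i in range(nfreq)]
```

The smooth foreground is absorbed by the fit, so the residual skewness tracks the skewed signal, which is the statistic whose redshift evolution the paper studies.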
Measuring Program Outcome
The Progress Evaluation Scales (PES) provide an efficient measuring device for evaluating current functioning, setting treatment goals, and assessing change over time in clinically relevant aspects of personal, social, and community adjustment. The PES can be completed by patients, significant others, and therapists, making it possible to obtain various points of view of the outcome of mental health services. This article describes the seven domains measured by the PES and the underlying dimensions they were designed to tap, and presents the generalizability, validity, and usefulness of the scales as applied to an adult mental health center population.
A frequentist framework of inductive reasoning
Reacting against the limitation of statistics to decision procedures, R. A.
Fisher proposed for inductive reasoning the use of the fiducial distribution, a
parameter-space distribution of epistemological probability transferred
directly from limiting relative frequencies rather than computed according to
the Bayes update rule. The proposal is developed as follows using the
confidence measure of a scalar parameter of interest. (With the restriction to
one-dimensional parameter space, a confidence measure is essentially a fiducial
probability distribution free of complications involving ancillary statistics.)
A betting game establishes a sense in which confidence measures are the only
reliable inferential probability distributions. The equality between the
probabilities encoded in a confidence measure and the coverage rates of the
corresponding confidence intervals ensures that the measure's rule for
assigning confidence levels to hypotheses is uniquely minimax in the game.
Although a confidence measure can be computed without any prior distribution,
previous knowledge can be incorporated into confidence-based reasoning. To
adjust a p-value or confidence interval for prior information, the confidence
measure from the observed data can be combined with one or more independent
confidence measures representing previous agent opinion. (The former confidence
measure may correspond to a posterior distribution with frequentist matching of
coverage probabilities.) The representation of subjective knowledge in terms of
confidence measures rather than prior probability distributions preserves
approximate frequentist validity.
Comment: Major revision
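The combination of independent confidence measures can be illustrated with the normal-based recipe of Singh, Xie and Strawderman from the confidence-distribution literature. This choice of rule, and every number below, is an assumption for illustration; the paper may combine the measures differently.

```python
from statistics import NormalDist

N = NormalDist()

def normal_cd(x, s):
    """Confidence distribution for a normal mean from one source:
    H(theta) = Phi((theta - x) / s)."""
    return lambda theta: N.cdf((theta - x) / s)

def combine(h1, h2):
    """Singh-Xie-Strawderman combination of two independent confidence
    distributions: Phi((Phi^{-1} h1 + Phi^{-1} h2) / sqrt(2))."""
    return lambda t: N.cdf((N.inv_cdf(h1(t)) + N.inv_cdf(h2(t))) / 2 ** 0.5)

h_data = normal_cd(1.0, 0.5)    # confidence measure from the observed data
h_prior = normal_cd(0.0, 0.5)   # independent measure encoding prior opinion
h_comb = combine(h_data, h_prior)

# locate the median of the combined measure by bisection
# (bracket kept narrow so the cdf values stay strictly inside (0, 1))
lo, hi = -2.0, 3.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if h_comb(mid) < 0.5 else (lo, mid)
median = (lo + hi) / 2
```

With equal spreads the combined measure is centred midway between the two sources, showing how previous agent opinion pulls the data-based measure without ever introducing a prior probability distribution.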