A class of optimal tests for symmetry based on local Edgeworth approximations
The objective of this paper is to provide, for the problem of univariate
symmetry (with respect to specified or unspecified location), a concept of
optimality, and to construct tests achieving such optimality. This requires
embedding symmetry into adequate families of asymmetric (local) alternatives.
We construct such families by considering non-Gaussian generalizations of
classical first-order Edgeworth expansions indexed by a measure of skewness
such that (i) location, scale and skewness play well-separated roles
(diagonality of the corresponding information matrices) and (ii) the classical
tests based on the Pearson–Fisher coefficient of skewness are optimal in the
vicinity of Gaussian densities.

Published in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International
Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
DOI: http://dx.doi.org/10.3150/10-BEJ298
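
As a concrete point of reference for claim (ii), here is a minimal Python
sketch (not code from the paper) of the classical test of symmetry based on
the Pearson–Fisher coefficient of skewness, using the textbook result that
sqrt(n/6)*b1 is asymptotically standard normal under Gaussian sampling, the
regime in which the paper establishes local optimality.

    import numpy as np
    from scipy import stats

    def pearson_fisher_symmetry_test(x):
        # Sample skewness b1 = m3 / m2**1.5; under a Gaussian null,
        # sqrt(n/6) * b1 is asymptotically N(0, 1).  Illustrative only.
        x = np.asarray(x, dtype=float)
        d = x - x.mean()
        b1 = np.mean(d ** 3) / np.mean(d ** 2) ** 1.5
        z = np.sqrt(x.size / 6.0) * b1
        return b1, z, 2.0 * stats.norm.sf(abs(z))  # two-sided p-value

    rng = np.random.default_rng(0)
    print(pearson_fisher_symmetry_test(rng.normal(size=500)))       # symmetric
    print(pearson_fisher_symmetry_test(rng.exponential(size=500)))  # right-skewed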
Divergence from, and Convergence to, Uniformity of Probability Density Quantiles
The probability density quantile (pdQ) carries essential information
regarding shape and tail behavior of a location-scale family. Convergence of
repeated applications of the pdQ mapping to the uniform distribution is
investigated and new fixed point theorems are established. The Kullback-Leibler
divergences from uniformity of these pdQs are mapped and found to be
ingredients in power functions of optimal tests for uniformity against
alternative shapes.
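
To make the pdQ mapping concrete, the following numerical sketch (my own,
assuming the standard definition f*(u) = f(Q(u)) / ∫ f(Q(t)) dt over (0,1)
from the pdQ literature, not the paper's code) applies the mapping twice to
the exponential distribution and reports the Kullback-Leibler divergence from
uniformity after each application; the divergence shrinks, consistent with
convergence toward the uniform fixed point.

    import numpy as np
    from scipy import integrate, interpolate, stats

    def pdq_on_grid(pdf, ppf, u):
        # f*(u) = f(Q(u)) / c with c = integral of f(Q(t)) over (0, 1).
        fq = pdf(ppf(u))
        return fq / integrate.trapezoid(fq, u)

    def kl_from_uniform(fstar, u):
        # KL divergence of the pdQ from the uniform density on (0, 1).
        return integrate.trapezoid(fstar * np.log(fstar), u)

    u = np.linspace(1e-4, 1.0 - 1e-4, 4001)

    # First application: pdQ of the exponential distribution.
    f1 = pdq_on_grid(stats.expon.pdf, stats.expon.ppf, u)
    print("KL(pdQ || uniform):  ", kl_from_uniform(f1, u))

    # Second application: rebuild the cdf and quantile function of f1
    # numerically, then map it through pdQ again.
    F1 = integrate.cumulative_trapezoid(f1, u, initial=0.0)
    F1 /= F1[-1]
    q1 = interpolate.interp1d(F1, u, bounds_error=False, fill_value=(u[0], u[-1]))
    p1 = interpolate.interp1d(u, f1, bounds_error=False, fill_value=(f1[0], f1[-1]))
    f2 = pdq_on_grid(p1, q1, u)
    print("KL(pdQ^2 || uniform):", kl_from_uniform(f2, u))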
Harold Jeffreys's Theory of Probability Revisited
Published exactly seventy years ago, Jeffreys's Theory of Probability (1939)
has had a unique impact on the Bayesian community and is now considered to be
one of the main classics in Bayesian Statistics as well as the initiator of the
objective Bayes school. In particular, its advances on the derivation of
noninformative priors as well as on the scaling of Bayes factors have had a
lasting impact on the field. However, the book reflects the characteristics of
the time, especially in terms of mathematical rigor. In this paper we point out
the fundamental aspects of this reference work, especially the thorough
coverage of testing problems and the construction of both estimation and
testing noninformative priors based on functional divergences. Our major aim
here is to help modern readers navigate this difficult text and concentrate
on the passages that are still relevant today.

Commented in [arXiv:1001.2967], [arXiv:1001.2968], [arXiv:1001.2970],
[arXiv:1001.2975], [arXiv:1001.2985], [arXiv:1001.3073]; rejoinder in
[arXiv:0909.1008]. Published in Statistical Science
(http://www.imstat.org/sts/) by the Institute of Mathematical Statistics
(http://www.imstat.org). DOI: http://dx.doi.org/10.1214/09-STS284
Data-driven smooth tests when the hypothesis is composite
In recent years several authors have recommended smooth tests for testing goodness of fit. However, the number of components in the smooth test statistic should be chosen well; otherwise, considerable loss of power may occur. Schwarz's selection rule provides one such good choice. Earlier results on simple null hypotheses are extended here to composite hypotheses, which tend to be of more practical interest. For general composite hypotheses, consistency of the data-driven smooth tests holds at essentially any alternative. Monte Carlo experiments on testing exponentiality and normality show that the data-driven version of Neyman's test compares well to other, even specialized, tests over a wide range of alternatives.
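
Because the abstract compresses the construction, here is a minimal Python
sketch of the simple-hypothesis version of the data-driven test, assembled
from the standard ingredients in the smooth-test literature (not the paper's
code): Neyman components built from Legendre polynomials orthonormal on
(0,1), with the number of components selected by Schwarz's BIC-type rule.
The composite case treated in the paper additionally replaces unknown
parameters by estimates, with suitably adjusted components, before
transforming the data to (0,1).

    import numpy as np
    from scipy import special, stats

    def data_driven_neyman(u, d_max=10):
        # phi_j(u) = sqrt(2j + 1) * P_j(2u - 1): Legendre polynomials
        # orthonormal on (0, 1).  Components b_j = n**-0.5 * sum_i phi_j(u_i),
        # N_k = sum_{j <= k} b_j**2; Schwarz's rule maximizes N_k - k*log(n).
        u = np.asarray(u, dtype=float)
        n = u.size
        x = 2.0 * u - 1.0
        b = np.array([np.sqrt((2 * j + 1) * n) * special.eval_legendre(j, x).mean()
                      for j in range(1, d_max + 1)])
        Nk = np.cumsum(b ** 2)
        k = np.arange(1, d_max + 1)
        S = int(k[np.argmax(Nk - k * np.log(n))])  # Schwarz (BIC) selection
        # Under the null the selected dimension tends to 1, and the statistic
        # N_S is asymptotically chi-square with 1 degree of freedom.
        return S, Nk[S - 1], stats.chi2.sf(Nk[S - 1], df=1)

    rng = np.random.default_rng(1)
    print(data_driven_neyman(rng.uniform(size=200)))     # null: large p-value
    print(data_driven_neyman(rng.beta(2, 1, size=200)))  # alternative: small p-value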