Higher Accuracy for Bayesian and Frequentist Inference: Large Sample Theory for Small Sample Likelihood
Recent likelihood theory produces p-values that have remarkable accuracy
and wide applicability. The calculations use familiar tools such as maximum
likelihood values (MLEs), observed information and parameter rescaling. The
usual evaluation of such p-values is by simulations, and such simulations do
verify that the global distribution of the p-values is uniform(0, 1), to high
accuracy in repeated sampling. The derivation of the p-values, however,
asserts a stronger statement, that they have a uniform(0, 1) distribution
conditionally, given identified precision information provided by the data. We
take a simple regression example that involves exact precision information and
use large sample techniques to extract highly accurate information as to the
statistical position of the data point with respect to the parameter:
specifically, we examine various p-values and Bayesian posterior survivor
s-values for validity. With observed data we numerically evaluate the various
p-values and s-values, and we also record the related general formulas. We
then assess the numerical values for accuracy using Markov chain Monte Carlo
(McMC) methods. We also propose some third-order likelihood-based procedures
for obtaining means and variances of Bayesian posterior distributions, again
followed by McMC assessment. Finally we propose some adaptive McMC methods to
improve the simulation acceptance rates. All these methods are based on
asymptotic analysis that derives from the effect of additional data. And the
methods use simple calculations based on familiar maximizing values and related
informations. The example illustrates the general formulas and the ease of
calculations, while the McMC assessments demonstrate the numerical validity of
the p-values as percentage position of a data point. The example, however, is
very simple and transparent, and thus gives little indication that in a wide
generality of models the formulas do accurately separate information for almost
any parameter of interest, and then do give accurate p-value determinations
from that information. As illustration an enigmatic problem in the literature
is discussed and simulations are recorded; various examples in the literature
are cited.
Comment: Published at http://dx.doi.org/10.1214/07-STS240 in Statistical
Science (http://www.imstat.org/sts/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
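The repeated-sampling check described in the abstract (simulate data under the null, compute the p-value each time, and verify the collection is uniform(0, 1)) can be sketched as follows. The one-sample t-test, sample size, and replication count are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch: verify that null p-values are uniform(0, 1)
# in repeated sampling. Model and sizes are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_reps, n = 5000, 20
pvals = np.empty(n_reps)
for i in range(n_reps):
    x = rng.normal(loc=0.0, scale=1.0, size=n)      # null model: mean = 0
    pvals[i] = stats.ttest_1samp(x, popmean=0.0).pvalue

# A Kolmogorov-Smirnov test against uniform(0, 1) should find no evidence
# of departure when the p-values are exact:
ks = stats.kstest(pvals, "uniform")
```

A large KS p-value here is consistent with the global uniformity claim; the conditional uniformity asserted in the paper requires conditioning on the precision information and is not checked by this marginal simulation.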
Fermat, Schubert, Einstein, and Behrens-Fisher: The Probable Difference Between Two Means When σ₁² ≠ σ₂²
The history of the Behrens-Fisher problem and some approximate solutions are reviewed. In outlining relevant statistical hypotheses on the probable difference between two means, the importance of the Behrens-Fisher problem from a theoretical perspective is acknowledged, but it is concluded that this problem is irrelevant for applied research in psychology, education, and related disciplines. The focus is better placed on "shift in location" and, more importantly, "shift in location and change in scale" treatment alternatives.
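For context, the standard approximate solution in applied work (which this abstract argues is often beside the point) is Welch's unequal-variance t-test. A minimal sketch with made-up data:

```python
# Behrens-Fisher setting: compare two means when sigma_1^2 != sigma_2^2.
# Welch's approximate t-test is the common applied solution; the data
# below are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(loc=0.0, scale=1.0, size=25)  # group 1: sigma = 1
b = rng.normal(loc=0.0, scale=3.0, size=40)  # group 2: sigma = 3, same mean

# equal_var=False selects Welch's test, which does not pool the variances
# and uses the Welch-Satterthwaite approximate degrees of freedom.
t_stat, p_val = stats.ttest_ind(a, b, equal_var=False)
```
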
Some solutions to the multivariate Behrens-Fisher problem for dissimilarity-based analyses
The essence of the generalised multivariate Behrens-Fisher problem (BFP) is how to test the null hypothesis of equality of mean vectors for two or more populations when their dispersion matrices differ. Solutions to the BFP usually assume variables are multivariate normal and do not handle high-dimensional data. In ecology, species' count data are often high-dimensional, non-normal and heterogeneous. Also, interest lies in analysing compositional dissimilarities among whole communities in non-Euclidean (semi-metric or non-metric) multivariate space. Hence, dissimilarity-based tests by permutation (e.g., PERMANOVA, ANOSIM) are used to detect differences among groups of multivariate samples. Such tests are not robust, however, to heterogeneity of dispersions in the space of the chosen dissimilarity measure, most conspicuously for unbalanced designs. Here, we propose a modification to the PERMANOVA test statistic, coupled with either permutation or bootstrap resampling methods, as a solution to the BFP for dissimilarity-based tests. Empirical simulations demonstrate that the type I error remains close to nominal significance levels under classical scenarios known to cause problems for the unmodified test. Furthermore, the permutation approach is found to be more powerful than the (more conservative) bootstrap for detecting changes in community structure for real ecological datasets. The utility of the approach is shown through analysis of 809 species of benthic soft-sediment invertebrates from 101 sites in five areas spanning 1960 km along the Norwegian continental shelf, based on the Jaccard dissimilarity measure.
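The unmodified PERMANOVA machinery this abstract builds on can be sketched as a permutation test on a pseudo-F statistic computed directly from a distance matrix. This is the classical, dispersion-sensitive version, not the modified statistic the paper proposes, and the Euclidean toy data are illustrative only:

```python
# Sketch of a one-way PERMANOVA-style permutation test. The pseudo-F is
# computed from squared inter-point distances; the modified statistic for
# unequal dispersions described in the abstract is NOT implemented here.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def pseudo_f(D2, labels):
    """Pseudo-F from a squared distance matrix D2 and group labels."""
    n = len(labels)
    groups = np.unique(labels)
    a = len(groups)
    ss_total = D2[np.triu_indices(n, 1)].sum() / n
    ss_within = 0.0
    for g in groups:
        idx = np.where(labels == g)[0]
        sub = D2[np.ix_(idx, idx)]
        ss_within += sub[np.triu_indices(len(idx), 1)].sum() / len(idx)
    ss_among = ss_total - ss_within
    return (ss_among / (a - 1)) / (ss_within / (n - a))

rng = np.random.default_rng(2)
# Two groups of 15 samples in 4 dimensions, centroids shifted by 1 per axis.
x = np.vstack([rng.normal(0.0, 1.0, (15, 4)), rng.normal(1.0, 1.0, (15, 4))])
labels = np.array([0] * 15 + [1] * 15)
D2 = squareform(pdist(x, "euclidean")) ** 2

f_obs = pseudo_f(D2, labels)
perms = np.array([pseudo_f(D2, rng.permutation(labels)) for _ in range(999)])
p_value = (1 + (perms >= f_obs).sum()) / (1 + len(perms))
```

Swapping the Euclidean distances for a Jaccard (or other semi-metric) dissimilarity is what moves this from classical MANOVA territory into the dissimilarity-based setting the paper addresses.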
Robustness and Power Comparison of the Mood-Westenberg and Siegel-Tukey Tests
The Mood-Westenberg and Siegel-Tukey tests were examined to determine their robustness with respect to Type I error for detecting variance changes when their assumption of equal means was slightly violated, a condition that approaches the Behrens-Fisher problem. Monte Carlo methods were used via 34,606 variations of sample sizes, α levels, distributions/data sets, treatments modeled as a change in scale, and treatments modeled as a shift in means. The Siegel-Tukey test was the more robust, and was able to handle a more diverse set of conditions.
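The Siegel-Tukey test itself is not built into SciPy; a sketch of the usual construction follows: assign ranks to the pooled sample alternately from both extremes, then apply a Mann-Whitney U test to those scores. The function names are ours, and ties get no special treatment here:

```python
# Sketch of the Siegel-Tukey test for a difference in spread.
import numpy as np
from scipy import stats

def siegel_tukey_ranks(values):
    """Siegel-Tukey scores: rank 1 to the smallest value, ranks 2 and 3 to
    the two largest, ranks 4 and 5 to the next two smallest, and so on,
    working inward so extreme values receive low ranks."""
    order = np.argsort(values, kind="mergesort")
    n = len(values)
    ranks = np.empty(n)
    lo, hi, r, at_low, take = 0, n - 1, 1, True, 1
    while lo <= hi:
        for _ in range(take):
            if lo > hi:
                break
            if at_low:
                ranks[order[lo]] = r
                lo += 1
            else:
                ranks[order[hi]] = r
                hi -= 1
            r += 1
        at_low = not at_low
        take = 2  # after the single first rank, alternate ends in pairs
    return ranks

def siegel_tukey_test(x, y):
    """Two-sample test for unequal spread (assumes roughly equal centers,
    the very assumption whose violation the simulations above probe)."""
    ranks = siegel_tukey_ranks(np.concatenate([x, y]))
    return stats.mannwhitneyu(ranks[: len(x)], ranks[len(x):],
                              alternative="two-sided")
```

Because the group with greater spread collects more of the low (extreme-end) ranks, a shift in its rank sum signals a scale difference; as the abstract notes, that logic degrades when the means also differ.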
- …