Characterization of Bayes procedures for multiple endpoint problems and inadmissibility of the step-up procedure
The problem of multiple endpoint testing for k endpoints is treated as a 2^k
finite action problem. The loss function chosen is a vector loss function
consisting of two components. The two components lead to a vector risk. One
component of the vector risk is the false rejection rate (FRR), that is, the
expected number of false rejections. The other component is the false
acceptance rate (FAR), that is, the expected number of acceptances for which
the corresponding null hypothesis is false. This loss function is more
stringent than the positive linear combination loss function of Lehmann [Ann.
Math. Statist. 28 (1957) 1-25] and Cohen and Sackrowitz [Ann. Statist. 33
(2005) 126-144] in the sense that the class of admissible rules is larger for this
vector risk formulation than for the linear combination risk function. In other
words, fewer procedures are inadmissible for the vector risk formulation. The
statistical model assumed is that the vector of variables Z is multivariate
normal with mean vector \mu and known intraclass covariance matrix \Sigma. The
endpoint hypotheses are H_i:\mu_i=0 vs K_i:\mu_i>0, i=1,...,k. A
characterization of all symmetric Bayes procedures and their limits is
obtained. The characterization leads to a complete class theorem. The complete
class theorem is used to provide a useful necessary condition for admissibility
of a procedure. The main result is that the step-up multiple endpoint procedure
is inadmissible.
Comment: Published at http://dx.doi.org/10.1214/009053604000000986 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
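The two components of the vector risk can be estimated by simulation. A minimal sketch, assuming a naive per-endpoint z-test rather than the paper's procedure (the mean vector, intraclass correlation, and cutoff are illustrative assumptions):

```python
# Hypothetical sketch (not the paper's procedure): Monte Carlo estimates of
# the two vector-risk components, FRR and FAR, for per-endpoint z-tests.
import numpy as np

rng = np.random.default_rng(0)
k, n_sim, z_crit = 5, 20000, 1.645            # 5 endpoints, one-sided 5% tests
mu = np.array([0.0, 0.0, 0.0, 1.0, 1.0])      # first three nulls H_i true
rho = 0.3                                     # intraclass correlation
Sigma = rho * np.ones((k, k)) + (1 - rho) * np.eye(k)

Z = rng.multivariate_normal(mu, Sigma, size=n_sim)
reject = Z > z_crit
null_true = (mu == 0)

frr = reject[:, null_true].sum(axis=1).mean()      # expected false rejections
far = (~reject)[:, ~null_true].sum(axis=1).mean()  # expected false acceptances
print(frr, far)
```

Each marginal test has size 0.05, so with three true nulls the FRR is near 0.15, while the FAR reflects the one-sided power at \mu_i = 1.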
Decision theory results for one-sided multiple comparison procedures
A resurgence of interest in multiple hypothesis testing has occurred in the
last decade. Motivated by studies in genomics, microarrays, DNA sequencing,
drug screening, clinical trials, bioassays, education and psychology,
statisticians have been devoting considerable research energy in an effort to
properly analyze multiple endpoint data. In response to new applications, new
criteria and new methodology, many ad hoc procedures have emerged. The
classical requirement has been to use procedures which control the strong
familywise error rate (FWE) at some predetermined level \alpha. That is, the
probability of any false rejection of a true null hypothesis should be less
than or equal to \alpha. Finding desirable and powerful multiple test
procedures is difficult under this requirement. One of the more recent ideas is
concerned with controlling the false discovery rate (FDR), that is, the
expected proportion of rejected hypotheses which are, in fact, true. Many
multiple test procedures do control the FDR. A much earlier approach to
multiple testing was formulated by Lehmann [Ann. Math. Statist. 23 (1952)
541-552 and 28 (1957) 1-25]. Lehmann's approach is decision theoretic and he
treats the multiple endpoints problem as a 2^k finite action problem when there
are k endpoints. This approach is appealing since unlike the FWE and FDR
criteria, the finite action approach pays attention to false acceptances as
well as false rejections.
Comment: Published at http://dx.doi.org/10.1214/009053604000000968 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
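The strong FWE requirement described above is usually met via the classical Bonferroni correction, testing each endpoint at level \alpha/k. A hedged sketch of why this controls the FWE (the all-nulls-true setup with uniform p-values is an assumption for illustration, not taken from the paper):

```python
# Illustrative sketch: Bonferroni controls the strong familywise error rate,
# P(any false rejection) <= alpha, by testing each endpoint at alpha / k.
import numpy as np

rng = np.random.default_rng(1)
k, alpha, n_sim = 10, 0.05, 50000
p = rng.uniform(size=(n_sim, k))          # all k nulls true: p-values ~ U(0,1)
any_false_rejection = (p < alpha / k).any(axis=1)
fwe = any_false_rejection.mean()          # estimated FWE, at most alpha
print(fwe)
```

With independent uniforms the true FWE here is 1 - (1 - \alpha/k)^k, slightly below \alpha, which illustrates the conservatism that makes powerful FWE-controlling procedures hard to find.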
A new multiple testing method in the dependent case
The most popular multiple testing procedures are stepwise procedures based on
p-values for individual test statistics. Included among these are the false
discovery rate (FDR) controlling procedures of Benjamini--Hochberg [J. Roy.
Statist. Soc. Ser. B 57 (1995) 289-300] and their offspring. Even for models
that entail dependent data, p-values based on marginal distributions are
used. Unlike such methods, the new method takes dependency into account at all
stages. Furthermore, the p-value procedures often lack an intuitive convexity
property, which is needed for admissibility. Still further, the new methodology
is computationally feasible. If the number of tests is large and the proportion
of true alternatives is less than, say, 25 percent, simulations demonstrate a
clear preference for the new methodology. Applications are detailed for models
such as testing treatments against control (or any intraclass correlation
model), testing for change points and testing means when correlation is
successive.
Comment: Published at http://dx.doi.org/10.1214/08-AOS616 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
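The Benjamini-Hochberg step-up procedure named above, the kind of marginal p-value procedure the new method is contrasted with, can be sketched in a few lines (a minimal implementation; the example p-values are invented for illustration):

```python
# Minimal sketch of the Benjamini--Hochberg step-up procedure on marginal
# p-values: reject the hypotheses with the kmax smallest p-values, where
# kmax is the largest i with p_(i) <= q * i / m.
import numpy as np

def bh_reject(pvals, q=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        kmax = np.max(np.nonzero(below)[0])
        reject[order[:kmax + 1]] = True
    return reject

print(bh_reject([0.001, 0.008, 0.039, 0.041, 0.6]))
```

At level q = 0.05 with five tests, the step-up thresholds are 0.01, 0.02, ..., 0.05, so only the first two hypotheses are rejected here.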
Multiple testing of two-sided alternatives with dependent data (Statistica Sinica 18)
Multiple testing procedures have become an integral element of analysis in many practical problems, and the development of sound procedures has become an important statistical issue. Many procedures have been suggested and many criteria of goodness have been used. Most recent procedures are stepwise in nature. Perhaps the most fundamental (and typically overlooked) issue is the behavior of the multiple testing procedure as it relates to each individual testing problem. In this paper we study two of the most popular stepwise procedures. We demonstrate that the individual tests they induce are inadmissible in some important two-sided testing models when correlation is present. That is, for each individual hypothesis testing problem, there exists a test whose size is less than or equal to that of the stepwise procedure's test and whose power is greater than or equal to that of the stepwise procedure's test, with some strict inequality. This means that the overall multiple testing procedure is inadmissible whenever a loss based on the number of Type I and Type II errors is used.
Tests for independence in contingency tables with ordered categories
Consider an r × c contingency table under the full multinomial model where each category is ordered. The problem is to test the null hypothesis of independence against the alternative that all local log odds ratios are nonnegative, with at least one local log odds ratio positive. We find the class of all tests that are simultaneously exact, unbiased, and admissible. The problem is of considerable interest to social scientists. Some discussion of specific tests is given.
Keywords: contingency table; exact test; unbiased test; similar test; Neyman structure; multivariate totally positive of order two; FKG inequality; admissibility; stochastic ordering
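The local log odds ratios appearing in the alternative are computed from adjacent 2 × 2 blocks of the table. A short sketch (the 3 × 3 table is an invented example, chosen so that every local log odds ratio is positive, i.e., a table lying inside the alternative):

```python
# Hedged illustration: local log odds ratios of an r x c table. The
# alternative in the abstract requires all of these to be >= 0, with at
# least one > 0.
import numpy as np

table = np.array([[20, 10,  5],
                  [10, 15, 10],
                  [ 5, 10, 20]], dtype=float)

# local log odds ratio at block (i, j):
#   log( n[i,j] * n[i+1,j+1] / (n[i,j+1] * n[i+1,j]) )
llor = (np.log(table[:-1, :-1]) + np.log(table[1:, 1:])
        - np.log(table[:-1, 1:]) - np.log(table[1:, :-1]))
print(llor)
```

For an r × c table there are (r-1)(c-1) local log odds ratios; here all four are positive, so the table exhibits the positive association the alternative describes.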
Wherefore similar tests?
Similarity of a test is often a necessary condition for a test to be unbiased (in particular for a test to be uniformly most powerful unbiased when such a test exists). Lehmann (Testing Statistical Hypotheses, 2nd Edition, Wiley, New York, 1986) describes the connection between similar tests and uniformly most powerful unbiased tests. The methods to achieve these properties as outlined in Lehmann are used extensively. In any case, an admissible similar test is frequently one that can be recommended for practical use. In some constrained parameter spaces, however, we show that admissible similar tests sometimes completely ignore the constraints. In some of these cases we call such tests constraint insensitive. The tests seem not to be intuitive and perhaps should not be used. On the other hand, there are models with constrained parameter spaces where similar tests do take into account the constraints. In these cases the admissible test is called constraint sensitive. We offer a systematic approach that enables one to determine whether an admissible similar test is constraint insensitive or not. The approach is applied to three classes of models involving order restricted parameters. The models include testing for homogeneity of parameters, testing subsets of parameters, and testing goodness of fit of a family of discrete distributions.
Keywords: order restricted inference; uniformly most powerful tests; constraint insensitive; complete sufficient statistics; interference in genetic maps
Two stage conditionally unbiased estimators of the selected mean
The problem is to estimate the mean of the selected population. The selection rule is to choose the population with the largest sample mean when such sample means are calculated from the first stage sample. An estimator of the selected mean is unbiased if its expected value equals the expected value of the selected mean. We seek conditionally unbiased estimators of the selected mean given the ordering of the set of sample means based on the first stage sample. Conditionally unbiased estimators are of course unconditionally unbiased. For several distributions, such as the normal with unknown mean, and the binomial, no conditionally unbiased estimators exist based on a one stage sample. We propose a two stage sample where observations at stage two are taken from the selected population only. Such a procedure has the advantage of yielding conditionally unbiased estimators and possibly enables a better allocation of available sample points. We find the uniformly minimum variance conditionally unbiased estimators (UMVCUE) for the normal case when the variance is known or when a common unknown variance is present. We also find the UMVCUE for the gamma case and indicate that the method is suitable for many other cases as well.
Keywords: unbiased estimators; two stage sample; selected mean; uniformly minimum variance conditionally unbiased estimator
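The advantage of the two-stage design is easy to see by simulation: the stage-two sample is independent of the selection event, so the stage-two mean of the selected population is conditionally unbiased, whereas the naive stage-one selected mean is biased upward. A hedged sketch for the normal case with known variance (sample sizes and seed are illustrative; the stage-two mean shown is conditionally unbiased but is not the UMVCUE derived in the paper):

```python
# Simulation sketch (illustrative assumptions, not the paper's UMVCUE):
# selecting the population with the largest stage-one mean biases that mean
# upward; an independent stage-two mean from the selected population is not
# biased by the selection.
import numpy as np

rng = np.random.default_rng(2)
k, n1, n2, n_sim = 3, 10, 10, 40000
mu = np.zeros(k)                          # equal means: selection bias is worst

xbar1 = rng.normal(mu, 1 / np.sqrt(n1), size=(n_sim, k))   # stage-one means
sel = xbar1.argmax(axis=1)                                  # selection rule
xbar2 = rng.normal(mu[sel], 1 / np.sqrt(n2))  # stage-two mean, selected pop.
naive = xbar1[np.arange(n_sim), sel]          # naive selected stage-one mean

print(naive.mean(), xbar2.mean())   # naive biased upward; stage-two near 0
```

With all true means equal to zero, the naive estimator averages near the expected maximum of three stage-one means, while the stage-two estimator averages near the true selected mean, zero.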
Admissibility of goodness of fit tests for discrete exponential families
Consider testing the composite hypothesis that a population belongs to a particular discrete exponential family, for example, the Poisson family with unknown parameter. We find a sufficient condition for the admissibility of goodness of fit tests. Included in the class of admissible tests is the usual chi-square test of goodness of fit.
Keywords: goodness of fit; admissibility; chi-square test
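The usual chi-square goodness-of-fit test can be sketched for the Poisson example (cell pooling, sample size, and seed are illustrative assumptions; \lambda is estimated by its MLE, the sample mean, so the statistic is referred to a chi-square distribution with cells - 1 - 1 degrees of freedom):

```python
# Hedged sketch: Pearson chi-square goodness-of-fit statistic for the
# composite hypothesis "the data are Poisson(lambda)", lambda estimated by
# the sample mean, with counts pooled into cells {0}, ..., {4}, {>= 5}.
import math
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.poisson(2.0, size=n)
lam = x.mean()                                   # MLE of lambda

cells = list(range(5))                           # cells {0}, ..., {4}
obs = np.array([np.sum(x == j) for j in cells] + [np.sum(x >= 5)])
pois = [math.exp(-lam) * lam**j / math.factorial(j) for j in cells]
probs = np.array(pois + [1.0 - sum(pois)])
exp = n * probs                                  # fitted expected counts

chi2 = float(np.sum((obs - exp) ** 2 / exp))     # compare to chi-square with
print(chi2)                                      # 6 - 1 - 1 = 4 deg. freedom
```

Under the null, the statistic is approximately chi-square with four degrees of freedom here, so values far above that distribution's upper quantiles indicate lack of fit.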