A comparison of the Benjamini-Hochberg procedure with some Bayesian rules for multiple testing
In the spirit of modeling inference for microarrays as multiple testing for
sparse mixtures, we present a similar approach to a simplified version of
quantitative trait loci (QTL) mapping. Unlike in the case of microarrays, where the
number of tests usually reaches tens of thousands, the number of tests
performed in scans for QTL usually does not exceed several hundred. However,
in typical cases, the sparsity of significant alternatives for QTL mapping
is in the same range as for microarrays. For methodological interest, as well
as some related applications, we also consider non-sparse mixtures. Using
simulations as well as theoretical observations we study false discovery rate
(FDR), power and misclassification probability for the Benjamini-Hochberg (BH)
procedure and its modifications, as well as for various parametric and
nonparametric Bayes and Parametric Empirical Bayes procedures. Our results
confirm the observation of Genovese and Wasserman (2002) that for small p the
misclassification error of BH is close to optimal in the sense of attaining the
Bayes oracle. This property is shared by some of the considered Bayes testing
rules, which in general perform better than BH for large or moderate values of p.
Comment: Published at http://dx.doi.org/10.1214/193940307000000158 in the IMS Collections (http://www.imstat.org/publications/imscollections.htm) by the Institute of Mathematical Statistics (http://www.imstat.org).
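The Benjamini-Hochberg step-up rule compared throughout this abstract is standard; a minimal Python sketch is given below for orientation (the nominal level q and the toy p-values are illustrative assumptions, not values used in the paper).

```python
import numpy as np

def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg step-up rule at nominal FDR level q.

    Sort the p-values, find the largest i with p_(i) <= i * q / m,
    and reject the hypotheses with the i smallest p-values.
    Returns a boolean mask over the original ordering.
    """
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)                      # indices sorting p ascending
    passed = p[order] <= q * np.arange(1, m + 1) / m
    rejected = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])      # largest index satisfying the bound
        rejected[order[:k + 1]] = True
    return rejected

# Toy example (illustrative p-values only)
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.27, 0.61], q=0.05))
```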
Asymptotic Bayes-optimality under sparsity of some multiple testing procedures
Within a Bayesian decision theoretic framework we investigate some asymptotic
optimality properties of a large class of multiple testing rules. A parametric
setup is considered, in which observations come from a normal scale mixture
model and the total loss is assumed to be the sum of losses for individual
tests. Our model can be used for testing point null hypotheses, as well as to
distinguish large signals from a multitude of very small effects. A rule is
defined to be asymptotically Bayes optimal under sparsity (ABOS), if within our
chosen asymptotic framework the ratio of its Bayes risk and that of the Bayes
oracle (a rule which minimizes the Bayes risk) converges to one. Our main
interest is in the asymptotic scheme where the proportion p of "true"
alternatives converges to zero. We fully characterize the class of fixed
threshold multiple testing rules which are ABOS, and hence derive conditions
for the asymptotic optimality of rules controlling the Bayesian False Discovery
Rate (BFDR). We finally provide conditions under which the popular
Benjamini-Hochberg (BH) and Bonferroni procedures are ABOS and show that for a
wide class of sparsity levels, the threshold of the former can be approximated
by a nonrandom threshold.
Comment: Published at http://dx.doi.org/10.1214/10-AOS869 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
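In the notation of the abstract, the optimality property being characterized can be summarized as follows (a paraphrase of the definition above, not the paper's exact statement):

```latex
% A multiple testing rule \delta is ABOS when, under the additive loss over the
% m individual tests, its Bayes risk matches that of the Bayes oracle in the
% sparse limit where the proportion p of true alternatives tends to zero.
\[
  R(\delta) \;=\; \sum_{i=1}^{m} \mathbb{E}\, L_i(\delta_i),
  \qquad
  \frac{R(\delta)}{R(\delta_{\mathrm{oracle}})} \;\longrightarrow\; 1
  \quad \text{as } p \to 0 .
\]
```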
Parameter tuning in pointwise adaptation using a propagation approach
This paper discusses the problem of adaptive estimation of a univariate
object like the value of a regression function at a given point or a linear
functional in a linear inverse problem. We consider an adaptive procedure
originating from Lepski [Theory Probab. Appl. 35 (1990) 454--466] that selects
in a data-driven way one estimate out of a given class of estimates ordered by
their variability. A serious problem with using this and similar procedures is
the choice of some tuning parameters like thresholds. Numerical results show
that the theoretically recommended proposals appear to be too conservative and
lead to a strong oversmoothing effect. A careful choice of the parameters of
the procedure is therefore crucial for obtaining reasonable estimation quality.
The main contribution of this paper is a new approach to choosing the
parameters of the procedure by enforcing a prescribed behavior of the resulting
estimate in the simple parametric situation. We establish a non-asymptotic
"oracle" bound, which shows that the estimation risk is, up to a logarithmic
multiplier, equal to the risk of the "oracle" estimate optimally selected from
the given family. A numerical study demonstrates good performance of the
resulting procedure in a number of simulated examples.
Comment: Published at http://dx.doi.org/10.1214/08-AOS607 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
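For orientation, here is a minimal Python sketch of a generic Lepski-type selection rule of the kind the abstract refers to. The critical values z[j] play the role of the tuning parameters whose calibration is the subject of the propagation approach; the acceptance test and names below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def lepski_select(estimates, std_devs, z):
    """Generic Lepski-type selection among estimates ordered by variability.

    estimates : point estimates ordered from least smoothed (largest variance)
                to most smoothed (smallest variance).
    std_devs  : corresponding standard deviations, assumed decreasing.
    z         : critical values z[j] -- the tuning parameters to be calibrated.

    Smoothing is increased as long as the current estimate stays within
    z[j] * std_devs[j] of every less smoothed estimate j; the last index
    passing this consistency check is selected.
    """
    theta = np.asarray(estimates, dtype=float)
    s = np.asarray(std_devs, dtype=float)
    selected = 0
    for k in range(1, len(theta)):
        if all(abs(theta[k] - theta[j]) <= z[j] * s[j] for j in range(k)):
            selected = k
        else:
            break
    return selected, theta[selected]
```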
Covariate-assisted ranking and screening for large-scale two-sample inference
Two-sample multiple testing has a wide range of applications. The conventional practice first reduces the original observations to a vector of p-values and then chooses a cutoff to adjust for multiplicity. However, this data reduction step could cause significant loss of information and thus lead to suboptimal testing procedures. We introduce a new framework for two-sample multiple testing by incorporating a carefully constructed auxiliary variable in inference to improve the power. A data-driven multiple-testing procedure is developed by employing a covariate-assisted ranking and screening (CARS) approach that optimally combines the information from both the primary and the auxiliary variables. The proposed CARS procedure is shown to be asymptotically valid and optimal for false discovery rate control. The procedure is implemented in the R package CARS. Numerical results confirm the effectiveness of CARS in false discovery rate control and show that it achieves substantial power gain over existing methods. CARS is also illustrated through an application to the analysis of a satellite imaging data set for supernova detection.
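For context, a small Python sketch of the two-sample setting described above follows. The standardized difference and sum used as primary and auxiliary statistics are an illustrative construction only, not the CARS estimator itself, which is available in the R package CARS.

```python
import numpy as np
from scipy import stats

def two_sample_statistics(x, y):
    """Illustrative primary/auxiliary statistics for two-sample multiple testing.

    x, y : arrays of shape (n_x, m) and (n_y, m) -- two samples over m features.
    Returns a standardized mean difference (primary statistic), a standardized
    mean sum (one possible auxiliary variable, used here purely as an example),
    and the conventional two-sided p-values derived from the primary statistic.
    """
    nx, m = x.shape
    ny, _ = y.shape
    se = np.sqrt(x.var(axis=0, ddof=1) / nx + y.var(axis=0, ddof=1) / ny)
    primary = (x.mean(axis=0) - y.mean(axis=0)) / se    # carries the signal of interest
    auxiliary = (x.mean(axis=0) + y.mean(axis=0)) / se  # side information ignored by p-values alone
    pvalues = 2 * stats.norm.sf(np.abs(primary))        # the "data reduction" step criticized above
    return primary, auxiliary, pvalues
```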