On stepdown control of the false discovery proportion
Consider the problem of testing multiple null hypotheses. A classical
approach to dealing with the multiplicity problem is to restrict attention to
procedures that control the familywise error rate (FWER), the probability of
even one false rejection. However, if the number of hypotheses $s$ is large,
control of the FWER is so stringent that the ability of a procedure which
controls the FWER to detect false null hypotheses is limited. Consequently, it
is desirable to consider other measures of error control. We will consider
methods based on control of the false discovery proportion (FDP), defined by
the number of false rejections divided by the total number of rejections
(defined to be 0 if there are no rejections). The false discovery rate proposed
by Benjamini and Hochberg (1995) controls $E({\rm FDP})$. Here, we construct
methods such that, for any $\gamma$ and $\alpha$,
$P\{{\rm FDP} > \gamma\} \le \alpha$. Based on $p$-values of individual tests,
we consider stepdown procedures that control the FDP, without imposing
dependence assumptions on the joint distribution of the $p$-values. A greatly
improved version of a method given in Lehmann and Romano (2005) is derived and
generalized to provide a means by which any sequence of nondecreasing constants
can be rescaled to ensure control of the FDP. We also provide a stepdown
procedure that controls the FDR under a dependence assumption.
Comment: Published at http://dx.doi.org/10.1214/074921706000000383 in the IMS
Lecture Notes--Monograph Series
(http://www.imstat.org/publications/lecnotes.htm) by the Institute of
Mathematical Statistics (http://www.imstat.org)
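To make the stepdown idea concrete, here is a minimal Python sketch of an FDP-controlling stepdown procedure of this type. The critical constants follow the form $\alpha_i = (\lfloor\gamma i\rfloor + 1)\alpha / (s + \lfloor\gamma i\rfloor + 1 - i)$ associated with Lehmann and Romano (2005); the function and variable names are illustrative, and this is a sketch of the generic mechanism rather than the paper's improved procedure.

```python
import numpy as np

def lr_stepdown_fdp(pvals, gamma=0.1, alpha=0.05):
    """Stepdown procedure aiming at P(FDP > gamma) <= alpha,
    using Lehmann-Romano-type critical constants."""
    p = np.sort(np.asarray(pvals))
    s = len(p)
    i = np.arange(1, s + 1)
    k = np.floor(gamma * i) + 1            # floor(gamma * i) + 1
    crit = k * alpha / (s + k - i)         # alpha_i, nondecreasing in i
    # Stepdown: walk up the ordered p-values and stop at the first failure.
    n_reject = 0
    for p_i, c_i in zip(p, crit):
        if p_i <= c_i:
            n_reject += 1
        else:
            break
    return n_reject   # the n_reject smallest p-values are rejected

# Example: lr_stepdown_fdp(np.random.default_rng(0).uniform(size=100))
```

Note that with $\gamma = 0$ the constants reduce to $\alpha/(s + 1 - i)$, i.e., the Holm stepdown procedure for FWER control.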
Estimating False Discovery Proportion Under Arbitrary Covariance Dependence
Multiple hypothesis testing is a fundamental problem in high-dimensional
inference, with wide applications in many scientific fields. In genome-wide
association studies, tens of thousands of tests are performed simultaneously to
determine whether any SNPs are associated with some traits, and those tests are
correlated. When test statistics are correlated, false discovery control
becomes very challenging under arbitrary dependence. In this paper, we
propose a novel method based on principal factor approximation, which
subtracts out the common dependence and substantially weakens the correlation
structure, to deal with an arbitrary dependence structure. We derive an
approximate expression for the false discovery proportion (FDP) in large-scale
multiple testing when a common threshold is used, and we provide a consistent
estimate of the realized FDP. This result has important applications in
controlling the FDR and FDP. Our estimate of the realized FDP compares
favorably with the approach of Efron (2007), as demonstrated in simulated
examples. Our approach is further illustrated by some real data applications.
We also propose a dependence-adjusted procedure, which is more powerful than
the fixed-threshold procedure.
Comment: 51 pages, 7 figures. arXiv admin note: substantial text overlap with
arXiv:1012.439
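The difficulty the paper addresses can be seen in a small simulation: under a single common factor, the realized FDP at a fixed threshold is far more variable than under independence, even when its mean is similar. The sketch below is illustrative only; the one-factor model, the loading `rho`, and all names are assumptions made for the demonstration, not the paper's principal factor approximation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
m, m1, t = 1000, 50, 0.01   # total tests, non-nulls, fixed p-value threshold
rho = 0.5                    # loading on a single common factor (illustrative)

fdps = []
for _ in range(500):
    w = rng.standard_normal()                        # shared factor
    z = rho * w + np.sqrt(1 - rho**2) * rng.standard_normal(m)
    z[:m1] += 3.0                                    # shift the non-null statistics
    p = 2 * norm.sf(np.abs(z))                       # two-sided p-values
    rej = p <= t
    v = rej[m1:].sum()                               # false rejections (true nulls)
    fdps.append(v / max(rej.sum(), 1))               # realized FDP, 0 if no rejections

print(f"FDP across replications: mean {np.mean(fdps):.3f}, sd {np.std(fdps):.3f}")
# Setting rho = 0 recovers the independent case, with a visibly smaller sd.
```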
Stepup procedures for control of generalizations of the familywise error rate
Consider the multiple testing problem of testing null hypotheses
$H_1,\ldots,H_s$. A classical approach to dealing with the multiplicity problem
is to restrict attention to procedures that control the familywise error rate
(FWER), the probability of even one false rejection. But if $s$ is
large, control of the FWER is so stringent that the ability of a
procedure that controls the FWER to detect false null hypotheses is
limited. It is therefore desirable to consider other measures of error control.
This article considers two generalizations of the FWER. The first is
the $k$-FWER, in which one is willing to tolerate $k$ or more false
rejections for some fixed $k \ge 1$. The second is based on the false discovery
proportion (FDP), defined to be the number of false rejections
divided by the total number of rejections (and defined to be 0 if there are no
rejections). Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995)
289--300] proposed control of the false discovery rate (FDR), by
which they meant that, for fixed $\alpha$, $E({\rm FDP}) \le \alpha$. Here,
we consider control of the FDP in the sense that, for fixed $\gamma$
and $\alpha$, $P\{{\rm FDP} > \gamma\} \le \alpha$. Beginning with any
nondecreasing sequence of constants and $p$-values for the individual tests, we
derive stepup procedures that control each of these two measures of error
control without imposing any assumptions on the dependence structure of the
$p$-values. We use our results to point out a few interesting connections with
some closely related stepdown procedures. We then compare and contrast two
FDP-controlling procedures obtained using our results with the
stepup procedure for control of the FDR of Benjamini and Yekutieli
[Ann. Statist. 29 (2001) 1165--1188].
Comment: Published at http://dx.doi.org/10.1214/009053606000000461 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
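For reference, the generic stepup and stepdown rules discussed here are easy to state in code: given nondecreasing constants $\alpha_1 \le \cdots \le \alpha_s$, a stepup procedure rejects the $i^*$ smallest $p$-values with $i^* = \max\{i : p_{(i)} \le \alpha_i\}$, while a stepdown procedure stops at the first ordered $p$-value exceeding its constant. The Python sketch below uses the Benjamini-Hochberg constants $i\alpha/s$ purely as an example; it implements the generic rules, not the rescaled constants derived in the article.

```python
import numpy as np

def stepup(pvals, crit):
    """Reject the i* smallest p-values, i* = max{i : p_(i) <= crit_i}."""
    p = np.sort(np.asarray(pvals))
    hits = np.nonzero(p <= crit)[0]
    return 0 if hits.size == 0 else int(hits[-1]) + 1

def stepdown(pvals, crit):
    """Reject while the ordered p-values stay at or below the constants."""
    p = np.sort(np.asarray(pvals))
    below = p <= crit
    return len(p) if below.all() else int(np.argmin(below))

s, alpha = 20, 0.05
bh_const = np.arange(1, s + 1) * alpha / s       # Benjamini-Hochberg constants
pvals = np.random.default_rng(2).uniform(size=s)
print(stepup(pvals, bh_const), stepdown(pvals, bh_const))
# With the same constants, stepup rejects at least as many hypotheses as stepdown.
```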
A stochastic process approach to false discovery control
This paper extends the theory of false discovery rates (FDR) pioneered by
Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300].
We develop a framework in which the False Discovery Proportion (FDP)--the
number of false rejections divided by the number of rejections--is treated as a
stochastic process. After obtaining the limiting distribution of the process,
we demonstrate the validity of a class of procedures for controlling the False
Discovery Rate (the expected FDP). We construct a confidence envelope for the
whole FDP process. From these envelopes we derive confidence thresholds for
controlling the quantiles of the distribution of the FDP, as well as for
controlling the number of false discoveries. We also investigate methods for
estimating the p-value distribution.
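Viewing the FDP as a process indexed by the rejection threshold is straightforward to emulate in a simulation where the truth is known; the sketch below (all names and parameters are illustrative) computes one realization of the process $\mathrm{FDP}(t) = V(t)/\max(R(t), 1)$ over a grid of thresholds.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
m, m1 = 500, 50                     # tests, non-nulls (known in simulation)
z = rng.standard_normal(m)
z[:m1] += 2.5                       # signal for the non-nulls
p = norm.sf(z)                      # one-sided p-values
null = np.arange(m) >= m1           # indicator of true nulls

ts = np.linspace(0.001, 0.2, 200)   # grid of rejection thresholds t
fdp_path = np.array([(p[null] <= t).sum() / max((p <= t).sum(), 1) for t in ts])
# fdp_path[j] is the realized FDP process at threshold ts[j]; repeating the
# simulation traces out the fluctuations that the limiting distribution describes.
```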
Asymptotic properties of false discovery rate controlling procedures under independence
We investigate the performance of a family of multiple comparison procedures
for strong control of the False Discovery Rate (FDR). The FDR
is the expected False Discovery Proportion (FDP),
that is, the expected fraction of false rejections among all rejected
hypotheses. A number of refinements to the original Benjamini-Hochberg
procedure [1] have been proposed, to increase power by estimating the
proportion of true null hypotheses, either implicitly, leading to one-stage
adaptive procedures [4, 7], or explicitly, leading to two-stage adaptive (or
plug-in) procedures [2, 21]. We use a variant of the stochastic process
approach proposed by Genovese and Wasserman [11] to study the fluctuations of
the FDP achieved with each of these procedures around its
expectation, for independent tested hypotheses. We introduce a framework for
the derivation of generic Central Limit Theorems for the FDP of
these procedures, characterizing the associated regularity conditions, and
comparing the asymptotic power of the various procedures. We interpret recently
proposed one-stage adaptive procedures [4, 7] as fixed points in the iteration
of well-known two-stage adaptive procedures [2, 21].
Comment: Published at http://dx.doi.org/10.1214/08-EJS207 in the Electronic
Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
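As a concrete illustration of the two-stage (plug-in) idea, the sketch below estimates the proportion of true nulls with a Storey-type estimator and then runs Benjamini-Hochberg at the inflated level $\alpha/\hat\pi_0$. This is a common form of plug-in procedure in this literature; whether it matches the specific procedures cited as [2, 21] is not claimed, and all names are illustrative.

```python
import numpy as np

def storey_pi0(pvals, lam=0.5):
    """Storey-type estimate of the proportion of true null hypotheses."""
    p = np.asarray(pvals)
    est = (p > lam).sum() / (len(p) * (1.0 - lam))
    return min(1.0, max(est, 1.0 / len(p)))   # clip away from 0 and above 1

def bh(pvals, alpha):
    """Benjamini-Hochberg stepup at level alpha; returns the number of rejections."""
    p = np.sort(np.asarray(pvals))
    s = len(p)
    hits = np.nonzero(p <= np.arange(1, s + 1) * alpha / s)[0]
    return 0 if hits.size == 0 else int(hits[-1]) + 1

def plugin_bh(pvals, alpha=0.05, lam=0.5):
    """Two-stage plug-in procedure: BH at level alpha / pi0_hat."""
    return bh(pvals, alpha / storey_pi0(pvals, lam))

# When pi0_hat < 1, the plug-in procedure rejects at least as much as plain BH.
```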
Interproximal Distance Analysis of Stereolithographic Casts Made by CAD-CAM Technology: An in Vitro Study
Statement of problem: The accuracy of interproximal distances of definitive casts made by computer-aided design and computer-aided manufacturing (CAD-CAM) technology is not yet known.
Purpose: The purpose of this in vitro study was to compare the interproximal distances of stereolithographic casts made by CAD-CAM technology with those of stone casts made by the conventional method.
Material and methods: Dentoform teeth were prepared for a single ceramic crown on the maxillary left central incisor, a 3-unit fixed dental prosthesis (FDP) on the second premolar for a metal-ceramic crown, and a maxillary right first molar for a metal crown. Twenty digital intraoral impressions were made on the dentoform with an intraoral digital impression scanner. The digital impression files were used to fabricate 20 sets of stereolithographic casts: 10 definitive casts for the single ceramic crown and 10 definitive casts for the FDP. Furthermore, 20 stone casts were made by the conventional method using polyvinyl siloxane impression material with a custom tray. Each definitive cast, whether stereolithographic or stone, consisted of removable die-sectioned casts (DC) and nonsectioned solid casts (SC). Measurements of the interproximal distance of each cast were made using CAD software to provide mean ±standard deviation (SD) values. Data were first analyzed by repeated measures analysis of variance (ANOVA), using the method of cast fabrication (stone or stereolithography) as one within-subject factor and the cast type (DC or SC) as another within-subject factor. Post hoc analyses were performed to investigate the differences between stone and stereolithographic casts depending upon the results from the repeated measures ANOVA (α=.05).
Results: Analysis of interproximal distances showed the mean ±SD value of the single ceramic crown group was 31.2 ±24.5 μm for stone casts and 261.0 ±116.1 μm for stereolithographic casts, whereas the mean ±SD value for the FDP group was 46.0 ±35.0 μm for stone casts and 292.8 ±216.6 μm for stereolithographic casts. For both the single ceramic crown and the FDP groups, there were significant differences in interproximal distances between stereolithographic casts and stone casts (P<.001). In addition, comparisons of DC with SC for stone and stereolithographic casts in the single ceramic crown and FDP groups demonstrated statistically significant differences in interproximal distances between DC and SC stereolithographic casts only for the FDP group (P<.001).
Conclusions: For both the single ceramic crown and the FDP groups, the stereolithographic cast group showed significantly larger interproximal distances than the stone cast group. In terms of the comparison between DC and SC, DC stereolithographic casts showed significantly larger interproximal values than SC stereolithographic casts only for the FDP group.
