Large-Scale Multiple Testing of Composite Null Hypotheses Under Heteroskedasticity
Heteroskedasticity poses several methodological challenges in designing valid
and powerful procedures for simultaneous testing of composite null hypotheses.
In particular, the conventional practice of standardizing or re-scaling
heteroskedastic test statistics in this setting may severely affect the power
of the underlying multiple testing procedure. Additionally, when the
inferential parameter of interest is correlated with the variance of the test
statistic, methods that ignore this dependence may fail to control the type I
error at the desired level. We propose a new Heteroskedasticity Adjusted
Multiple Testing (HAMT) procedure that avoids data reduction by
standardization, and directly incorporates the side information from the
variances into the testing procedure. Our approach relies on an improved
nonparametric empirical Bayes deconvolution estimator that offers a practical
strategy for capturing the dependence between the inferential parameter of
interest and the variance of the test statistic. We develop theory to show that
HAMT is asymptotically valid and optimal for FDR control. Simulation results
demonstrate that HAMT outperforms existing procedures with substantial power
gain across many settings at the same FDR level. The method is illustrated on
an application involving the detection of engaged users on a mobile game app.
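The power cost of standardization can be seen in a small simulation. The sketch below is not the HAMT procedure itself (which builds on a nonparametric empirical Bayes deconvolution estimator); it is a minimal, hypothetical illustration of using the variance as side information, comparing one pooled Benjamini-Hochberg pass on standardized p-values against a variance-stratified pass (all parameter values are illustrative assumptions):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy two-group setup (values hypothetical): non-nulls concentrate in the
# high-variance stratum, so the effect size is correlated with the variance.
n = 4000
sigma = np.where(rng.random(n) < 0.5, 0.5, 2.0)
is_nonnull = (sigma > 1.0) & (rng.random(n) < 0.2)
theta = np.where(is_nonnull, 6.0, 0.0)
x = theta + sigma * rng.normal(size=n)

# Conventional route: standardize everything, then run one pooled BH pass.
p = 2.0 * norm.sf(np.abs(x) / sigma)

def bh_reject(pvals, alpha=0.1):
    """Benjamini-Hochberg step-up: reject the k smallest p-values, where k
    is the largest index with p_(k) <= alpha * k / m."""
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= alpha * np.arange(1, m + 1) / m
    k = int(np.nonzero(below)[0].max()) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

pooled = bh_reject(p)

# Side-information route: run BH separately within each variance stratum,
# so the signal-rich noisy stratum is not diluted by the clean null one.
stratified = np.zeros(n, dtype=bool)
for s in (0.5, 2.0):
    mask = sigma == s
    stratified[mask] = bh_reject(p[mask])

print("pooled rejections:", int(pooled.sum()))
print("stratified rejections:", int(stratified.sum()))
```

When the non-nulls cluster in the noisy stratum, the stratified pass typically rejects more hypotheses at the same nominal FDR level, which is the kind of gain from variance side information that HAMT formalizes.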
Harnessing The Collective Wisdom: Fusion Learning Using Decision Sequences From Diverse Sources
Learning from the collective wisdom of crowds enhances the transparency of
scientific findings by incorporating diverse perspectives into the
decision-making process. Synthesizing such collective wisdom is related to the
statistical notion of fusion learning from multiple data sources or studies.
However, fusing inferences from diverse sources is challenging since
cross-source heterogeneity and potential data-sharing complicate statistical
inference. Moreover, studies may rely on disparate designs, employ widely
different modeling techniques for inferences, and prevailing data privacy norms
may forbid sharing even summary statistics across the studies for an overall
analysis. In this paper, we propose an Integrative Ranking and Thresholding
(IRT) framework for fusion learning in multiple testing. IRT operates under the
setting where from each study a triplet is available: the vector of binary
accept-reject decisions on the tested hypotheses, the study-specific False
Discovery Rate (FDR) level and the hypotheses tested by the study. Under this
setting, IRT constructs an aggregated, nonparametric, and discriminatory
measure of evidence against each null hypothesis, which facilitates ranking the
hypotheses in the order of their likelihood of being rejected. We show that IRT
guarantees an overall FDR control under arbitrary dependence between the
evidence measures as long as the studies control their respective FDR at the
desired levels. Furthermore, IRT synthesizes inferences from diverse studies
irrespective of the underlying multiple testing algorithms employed by them.
While the proofs of our theoretical statements are elementary, IRT is extremely
flexible, and a comprehensive numerical study demonstrates that it is a
powerful framework for pooling inferences. Comment: 29 pages and 10 figures. Under review at a journal.
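As a rough illustration of the triplet-based setting, the following hypothetical sketch aggregates accept-reject vectors from several studies into a single evidence score and ranks hypotheses by it. The FDR-weighted rejection count used here is an illustrative stand-in, not the paper's actual IRT evidence measure, and the study data are invented:

```python
from collections import defaultdict

# Hypothetical study outputs: each study reports the hypotheses it tested,
# its binary accept/reject decisions, and the FDR level it targeted.
studies = [
    {"tested": ["H1", "H2", "H3"], "reject": {"H1", "H3"}, "fdr": 0.05},
    {"tested": ["H2", "H3", "H4"], "reject": {"H3"},       "fdr": 0.10},
    {"tested": ["H1", "H3", "H4"], "reject": {"H1", "H3"}, "fdr": 0.05},
]

evidence = defaultdict(float)
for s in studies:
    for h in s["tested"]:
        if h in s["reject"]:
            # A rejection at a stricter FDR level counts as stronger evidence.
            evidence[h] += 1.0 - s["fdr"]

# Rank hypotheses by aggregated evidence; a thresholding step would then
# pick a cutoff designed to preserve an overall FDR guarantee.
ranking = sorted(evidence, key=evidence.get, reverse=True)
print(ranking)
```

The point of the sketch is that only the decision triplets are needed: no p-values, summary statistics, or raw data cross study boundaries.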
Nonparametric Empirical Bayes Estimation on Heterogeneous Data
The simultaneous estimation of many parameters based on data collected from
corresponding studies is a key research problem that has received renewed
attention in the high-dimensional setting. Many practical situations involve
heterogeneous data where heterogeneity is captured by a nuisance parameter.
Effectively pooling information across samples while correctly accounting for
heterogeneity presents a significant challenge in large-scale estimation
problems. We address this issue by introducing the "Nonparametric Empirical
Bayes Structural Tweedie" (NEST) estimator, which efficiently estimates the
unknown effect sizes and properly adjusts for heterogeneity via a generalized
version of Tweedie's formula. For the normal means problem, NEST simultaneously
handles the two main selection biases introduced by heterogeneity: the
selection bias in the mean, which cannot be effectively corrected without also
correcting for the selection bias in the variance. Our theoretical results
show that NEST has strong asymptotic properties without requiring explicit
assumptions about the prior. Extensions to other two-parameter members of the
exponential family are discussed. Simulation studies show that NEST outperforms
competing methods, with substantial efficiency gains in many settings. The proposed
method is demonstrated on estimating the batting averages of baseball players
and Sharpe ratios of mutual fund returns. Comment: 66 pages including 33 pages
of main text, 5 pages of bibliography, and 29 pages of supplementary text.
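For context, the classical homoskedastic version of Tweedie's formula, E[theta | x] = x + sigma^2 (d/dx) log f(x) with f the marginal density of x, can be sketched in a few lines. This toy example estimates f with a kernel density estimate on simulated data; it does not attempt the heterogeneous-variance generalization that NEST provides:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Simulated homoskedastic normal-means problem (values hypothetical):
# theta_i ~ N(0, 1), observed x_i ~ N(theta_i, sigma^2).
n, sigma = 5000, 0.5
theta = rng.normal(0.0, 1.0, n)
x = theta + rng.normal(0.0, sigma, n)

# Tweedie's formula needs the score of the marginal density of x.
# Estimate the density with a KDE and differentiate its log numerically.
kde = gaussian_kde(x)
eps = 1e-4
score = (np.log(kde(x + eps)) - np.log(kde(x - eps))) / (2 * eps)
theta_hat = x + sigma**2 * score

# The shrunken estimates should beat the naive estimate theta_hat = x.
mse_naive = np.mean((x - theta) ** 2)
mse_tweedie = np.mean((theta_hat - theta) ** 2)
print(mse_naive, mse_tweedie)
```

Note that no explicit prior is specified anywhere: the formula works directly off the estimated marginal density, which is the empirical Bayes appeal that NEST extends to the two-parameter setting.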
Mass Cytometric Analysis of HIV Entry, Replication, and Remodeling in Tissue CD4+ T Cells
To characterize susceptibility to HIV infection, we phenotyped infected tonsillar T cells by single-cell mass cytometry and created comprehensive maps to identify which subsets of CD4+ T cells support HIV fusion and productive infection. By comparing HIV-fused and HIV-infected cells through dimensionality reduction, clustering, and statistical approaches to account for viral perturbations, we identified a subset of memory CD4+ T cells that support HIV entry but not viral gene expression. These cells express high levels of CD127, the IL-7 receptor, and are believed to be long-lived lymphocytes. In HIV-infected patients, CD127-expressing cells preferentially localize to extrafollicular lymphoid regions with limited viral replication. Thus, CyTOF-based phenotyping, combined with analytical approaches to distinguish between selective infection and receptor modulation by viruses, can be used as a discovery tool.