Doubly robust confidence sequences for sequential causal inference
This paper derives time-uniform confidence sequences (CS) for causal effects
in experimental and observational settings. A confidence sequence for a target
parameter $\psi$ is a sequence of confidence intervals $(C_t)_{t=1}^{\infty}$
such that every one of these intervals simultaneously captures $\psi$ with high
probability. Such CSs provide valid statistical inference for $\psi$ at
arbitrary stopping times, unlike classical fixed-time confidence intervals
which require the sample size to be fixed in advance. Existing methods for
constructing CSs focus on the nonasymptotic regime where certain assumptions
(such as known bounds on the random variables) are imposed, while doubly robust
estimators of causal effects rely on (asymptotic) semiparametric theory. We use
sequential versions of central limit theorem arguments to construct
large-sample CSs for causal estimands, with a particular focus on the average
treatment effect (ATE) under nonparametric conditions. These CSs allow analysts
to update inferences about the ATE in light of new data, and experiments can be
continuously monitored, stopped, or continued for any data-dependent reason,
all while controlling the type-I error. Finally, we describe how these CSs
readily extend to other causal estimands and estimators, providing a new
framework for sequential causal inference in a wide array of problems.
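The asymptotic-CS idea can be illustrated with a minimal sketch (not the paper's doubly robust construction): in a randomized experiment with known propensity 1/2, inverse-propensity-weighted pseudo-outcomes have mean equal to the ATE, and a Gaussian-mixture boundary of the kind used in the asymptotic confidence-sequence literature gives a time-uniform interval around their running mean. The function name, the tuning parameter `rho`, and the simulated data are all illustrative assumptions, not the paper's code.

```python
import numpy as np

def asymp_cs(x, alpha=0.05, rho=0.5):
    """Two-sided asymptotic confidence sequence for the mean of x_1..x_t.

    Uses a Gaussian-mixture boundary; validity here is asymptotic and
    time-uniform (a large-sample sketch, not a finite-sample guarantee).
    """
    x = np.asarray(x, dtype=float)
    t = np.arange(1, len(x) + 1)
    mean = np.cumsum(x) / t
    # running standard deviation, with a small numerical floor
    sq = np.cumsum(x**2) / t
    sd = np.sqrt(np.maximum(sq - mean**2, 1e-12))
    radius = sd * np.sqrt(2 * (t * rho**2 + 1) / (t**2 * rho**2)
                          * np.log(np.sqrt(t * rho**2 + 1) / alpha))
    return mean - radius, mean + radius

# Randomized experiment with known propensity 1/2: the IPW pseudo-outcomes
# phi_i = (A_i - (1 - A_i)) * Y_i / 0.5 have expectation equal to the ATE.
rng = np.random.default_rng(0)
n = 5000
a = rng.integers(0, 2, n)                # treatment assignment
y = rng.normal(loc=1.0 * a, scale=1.0)   # true ATE = 1.0
phi = (2 * a - 1) * y / 0.5
lo, hi = asymp_cs(phi)
```

Because the interval is valid uniformly over time, the analyst may stop at any data-dependent moment and report the current interval; the width shrinks roughly like $\sqrt{\log t / t}$.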
Almost the Best of Three Worlds: Risk, Consistency and Optional Stopping for the Switch Criterion in Nested Model Selection
We study the switch distribution, introduced by Van Erven et al. (2012),
applied to model selection and subsequent estimation. While switching was known
to be strongly consistent, here we show that it achieves minimax optimal
parametric risk rates up to a $\log\log n$ factor when comparing two nested
exponential families, partially confirming a conjecture by Lauritzen (2012) and
Cavanaugh (2012) that switching behaves asymptotically like the Hannan-Quinn
criterion. Moreover, like Bayes factor model selection but unlike standard
significance testing, when one of the models represents a simple hypothesis,
the switch criterion defines a robust null hypothesis test, meaning that its
Type-I error probability can be bounded irrespective of the stopping rule.
Hence, switching is consistent, insensitive to optional stopping and almost
minimax risk optimal, showing that, Yang's (2005) impossibility result
notwithstanding, it is possible to 'almost' combine the strengths of AIC and
Bayes factor model selection. Comment: To appear in Statistica Sinica.
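The "robust null hypothesis test" property has a simple prototype (a generic likelihood-ratio illustration, not the switch criterion itself): under a simple null, the running likelihood ratio is a nonnegative martingale with mean one, so by Ville's inequality, rejecting as soon as it exceeds 1/alpha bounds the Type-I error by alpha under any stopping rule. The distributions, horizon, and seed below are illustrative.

```python
import numpy as np

def lr_test_path(x, mu1=1.0):
    """Running likelihood ratio for H0: N(0,1) vs H1: N(mu1,1).

    Under H0 this is a nonnegative martingale with expectation 1, so by
    Ville's inequality P(sup_t LR_t >= 1/alpha) <= alpha: the rule
    'reject as soon as LR_t >= 1/alpha' is valid under ANY stopping rule.
    """
    loglr = np.cumsum(mu1 * x - mu1**2 / 2)  # log density ratio, cumulated
    return np.exp(loglr)

rng = np.random.default_rng(1)
alpha = 0.05
n_reps, horizon = 2000, 500
rejections = 0
for _ in range(n_reps):
    x = rng.standard_normal(horizon)        # data generated under H0
    if lr_test_path(x).max() >= 1 / alpha:  # continuous monitoring
        rejections += 1
rate = rejections / n_reps                  # stays below alpha
```

Standard significance tests lose this guarantee under continuous monitoring; martingale-based tests of the kind above, like the switch criterion, do not.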
Sequential and Adaptive Inference Based on Martingale Concentration
Randomized experiments hold a well-deserved place at the top of the hierarchy of scientific evidence, and as such have received a great deal of attention from the statistical research community. In the simplest setting, a fixed group of subjects is available to the experimenter, who assigns one of two treatments to each subject via randomization, then observes corresponding outcomes. The goal is to draw inference about the effect of the experimental treatment on the observed outcome.

Classical, frequentist statistical inference provides a powerful set of tools for this fixed-sample setting. We begin with an observed sample of some deterministic size and seek procedures which yield valid hypothesis tests, p-values, and confidence intervals---for example, a t-test of the null hypothesis that the experimental treatment has no effect, on average, or a corresponding confidence interval for the average treatment effect. The fixed-sample paradigm demands that we plan the experiment ahead of time, including the size of the experimental sample and the exact hypotheses to be tested, and that we adhere rigidly to this plan.

In contrast, modern data analysis demands adaptivity. In particular, often the sample we choose to analyze is itself selected on the basis of observed data. For example, in an online A/B test, we may observe an ongoing stream of visitors enrolled into an experiment, so that the experimental sample is growing over time. The final experimental sample will include all of the visitors observed up to the time we decide to stop the experiment. The decision to stop could be made adaptively, by monitoring observed results and stopping early if a strong effect is observed, later if not. This is the realm of sequential, as opposed to fixed-sample, analysis.

There are many other kinds of adaptivity that arise in practice. A second example is in the analysis of nonrandomized, or observational, studies of causal effects.
In testing for statistical evidence of an effect, we may choose to focus on a subpopulation which we believe to be highly affected by the treatment of interest. For example, in studying the effect of fish consumption on mercury levels in the blood, we may focus on individuals whose diets are especially high in fish. Classical statistics requires that we define precisely which diets will be classified as "especially high in fish" before we analyze outcomes, but experimenters may prefer for this choice to be guided by the observed outcomes themselves.

In both of the above examples---the sequential stopping of a randomized experiment and the adaptive choice of subgroup in an observational study---the use of fixed-sample methods, which do not account for adaptivity, will lead to violations of statistical guarantees such as false positive control. These violations are commonly included under the label "p-hacking" and have received much blame for the lack of reproducibility in various fields of scientific research. Fortunately, alternative statistical methods are available, methods that explicitly account for adaptivity to yield robust inference while placing fewer restrictions on the researcher. Such methods are the ultimate aim of the present work.

This thesis develops a framework for constructing sequential and adaptive statistical procedures by taking advantage of the time-uniform concentration properties of certain martingales. Chapter 1 begins by laying out a mathematical framework for the derivation of time-uniform concentration inequalities for various classes of martingales. This framework unifies and strengthens a plethora of results from the exponential concentration literature and provides a toolbox for developing sequential and adaptive statistical procedures.
The remaining three chapters develop such procedures.

Chapter 2 builds upon the techniques of Chapter 1 to develop uniform concentration bounds which are somewhat more analytically and computationally complex but are much more useful for statistical applications. We frame these methods in terms of confidence sequences, that is, sequences of confidence intervals that are uniformly valid over an unbounded time horizon. One of the key results of this work is an empirical-Bernstein confidence sequence which provides a time-uniform, nonparametric, and non-asymptotic analogue of the t-test applicable to any distribution with bounded support. We explore applications to sequential estimation of average treatment effects in a randomized experiment, our first example above, as well as sequential estimation of a covariance matrix.

Chapter 3 applies ideas from Chapters 1 and 2 to develop methods for the two related problems of estimating quantiles and estimating the entire cumulative distribution function, based on i.i.d. samples. We present confidence sequences for these estimands which are valid uniformly over time for any distribution, and we explore applications to A/B testing and best-arm identification when objectives are based on quantiles rather than means. Finally, Chapter 4 explores an application of uniform martingale concentration to the second example given above, the adaptive choice of subgroup within the analysis of an observational study. We introduce Rosenbaum's sensitivity analysis framework for observational studies, and show how our procedure yields qualitative improvements over existing methods within this framework.

The martingale-based inferential methods we explore in this work trace their origins to Abraham Wald's work on the sequential probability ratio test during the 1940s, as well as to pioneering extensions developed in the late 1960s and early 1970s by Herbert Robbins, Donald Darling, David Siegmund, and Tze Leung Lai, not to mention many others.
However, despite the decades of relevant literature, we believe most of the potential of the core ideas has yet to be realized. The key to unlocking this potential, we hope, is a fuller understanding of the nonparametric applicability of these methods, a detailed study of their implementation and tuning in practice, and an exploration of their utility beyond the sequential setting. While we propose several procedures that have immediate practical utility, we hope the larger contribution of the work will be as a first step towards a deeper appreciation of the power of martingale-based methods for adaptive inference, and ultimately towards the development of a new class of statistical procedures which permit the kinds of adaptivity contemporary data analysts desire.
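The time-uniform concentration idea described above can be sketched with the simplest member of the family (an illustration only, not the thesis's sharper stitched or mixture bounds): for i.i.d. $X_i \in [0,1]$, a fixed-$\lambda$ Hoeffding supermartingale combined with Ville's inequality yields a confidence sequence valid at all times simultaneously, though with this fixed $\lambda$ its width does not shrink to zero. The function and parameter names are illustrative.

```python
import numpy as np

def hoeffding_cs(x, alpha=0.05, lam=0.1):
    """Time-uniform two-sided confidence sequence for the mean of
    i.i.d. X_i in [0, 1], via a fixed-lambda Hoeffding supermartingale
    and Ville's inequality (alpha/2 spent on each side).

    The radius lam/8 + log(2/alpha)/(lam*t) holds for ALL t at once,
    but plateaus at lam/8; the stitched/mixture boundaries developed in
    the thesis shrink to zero at the optimal rate instead.
    """
    x = np.asarray(x, dtype=float)
    t = np.arange(1, len(x) + 1)
    mean = np.cumsum(x) / t
    radius = lam / 8 + np.log(2 / alpha) / (lam * t)
    # clip to the known support [0, 1]
    return np.maximum(mean - radius, 0.0), np.minimum(mean + radius, 1.0)

rng = np.random.default_rng(2)
x = rng.binomial(1, 0.3, 10000)  # Bernoulli(0.3) stream, true mean 0.3
lo, hi = hoeffding_cs(x)
```

Because the guarantee is uniform over time, the interval may be consulted after every observation without inflating the error probability, which is exactly the property fixed-sample intervals lack.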
Empirical processes for recurrent and transient random walks in random scenery
In this paper, we are interested in the asymptotic behaviour of the sequence
of processes $(W_n)_{n\geq 1}$ with \begin{equation*}
W_n(s,t):=\sum_{k=1}^{\lfloor nt\rfloor}\big(1_{\{\xi_{S_k}\leq s\}}-s\big)
\end{equation*} where $(\xi_y)_{y\in\mathbb{Z}^d}$ is a sequence of independent
random variables uniformly distributed on $[0,1]$ and $(S_k)_{k\geq 1}$
is a random walk evolving in $\mathbb{Z}^d$, independent of the $\xi$'s. In
Wendler (2016), the case where $(S_k)_{k\geq 1}$ is a recurrent random
walk in $\mathbb{Z}$ such that $(n^{-\frac{1}{\alpha}}S_n)_{n\geq 1}$ converges in
distribution to a stable distribution of index $\alpha$, with $\alpha\in(1,2]$,
has been investigated. Here, we consider the cases where $(S_k)_{k\geq 1}$ is either: a) a transient random walk in $\mathbb{Z}^d$, b) a recurrent
random walk in $\mathbb{Z}^d$ such that $(n^{-\frac{1}{\alpha}}S_n)_{n\geq 1}$
converges in distribution to a stable distribution of index …