Semiparametric theory
In this paper we give a brief review of semiparametric theory, using as a
running example the common problem of estimating an average causal effect.
Semiparametric models allow at least part of the data-generating process to be
unspecified and unrestricted, and can often yield robust estimators that
nonetheless behave similarly to those based on parametric likelihood
assumptions, e.g., fast rates of convergence to normal limiting distributions.
We discuss the basics of semiparametric theory, focusing on influence
functions.
Comment: arXiv admin note: text overlap with arXiv:1510.0474
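As a concrete illustration of an influence-function-based estimator for the running example, the sketch below implements the augmented inverse-probability-weighted (AIPW) estimator of the average causal effect on simulated data. The data-generating process is illustrative, and the true nuisance functions are plugged in for simplicity; in practice they would be estimated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated observational data: covariate X, binary treatment A, outcome Y.
n = 5000
X = rng.normal(size=n)
propensity = 1 / (1 + np.exp(-X))           # P(A=1 | X)
A = rng.binomial(1, propensity)
Y = 2 * A + X + rng.normal(size=n)          # true average causal effect = 2

# True nuisance functions, plugged in for illustration; estimating them
# flexibly instead preserves consistency if either one is correct
# ("double robustness").
mu1 = 2 + X                                 # E[Y | A=1, X]
mu0 = X                                     # E[Y | A=0, X]
pi = propensity

# AIPW estimator: the sample mean of the (uncentered) efficient influence
# function, with a sqrt(n)-rate normal limiting distribution.
phi = A / pi * (Y - mu1) - (1 - A) / (1 - pi) * (Y - mu0) + (mu1 - mu0)
ate_hat = phi.mean()
se_hat = phi.std(ddof=1) / np.sqrt(n)
print(ate_hat, se_hat)
```

The standard error here comes directly from the sample variance of the influence function values, which is the usual route to Wald-type confidence intervals in this framework.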
Donsker theorems for diffusions: Necessary and sufficient conditions
We consider the empirical process G_t of a one-dimensional diffusion with
finite speed measure, indexed by a collection of functions F. By the central
limit theorem for diffusions, the finite-dimensional distributions of G_t
converge weakly to those of a zero-mean Gaussian random process G. We prove
that the weak convergence G_t\Rightarrow G takes place in \ell^{\infty}(F) if
and only if the limit G exists as a tight, Borel measurable map. The proof
relies on majorizing measure techniques for continuous martingales.
Applications include the weak convergence of the local time density estimator
and the empirical distribution function on the full state space.
Comment: Published at http://dx.doi.org/10.1214/009117905000000152 in the Annals of Probability (http://www.imstat.org/aop/) by the Institute of Mathematical Statistics (http://www.imstat.org)
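A small simulation can illustrate the limiting behavior described above. The sketch below uses an Euler-Maruyama discretization of an Ornstein-Uhlenbeck diffusion, chosen here as a convenient example with finite speed measure (the choice of process and all numerical settings are illustrative, not from the paper), and checks that the empirical distribution function, i.e. the time average of indicators, tracks the stationary CDF:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

# Euler-Maruyama simulation of the Ornstein-Uhlenbeck diffusion
# dX_t = -X_t dt + sqrt(2) dW_t, whose stationary law is N(0, 1).
T, dt = 1000.0, 0.01
n = int(T / dt)
x = np.empty(n)
x[0] = 0.0
noise = rng.normal(scale=np.sqrt(2 * dt), size=n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - x[i] * dt + noise[i]

# Empirical distribution function at u: the time average of 1{X_s <= u}.
# The CLT for diffusions says sqrt(T) * (F_T(u) - Phi(u)) is asymptotically
# Gaussian; here we only check that F_T tracks the stationary CDF Phi.
u = 0.5
F_T = np.mean(x <= u)
Phi_u = 0.5 * (1 + erf(u / sqrt(2)))
print(F_T, Phi_u)
```

The paper's Donsker-type result is about this convergence holding uniformly over a whole class of indexing functions F, which is the part the simulation cannot show; the sketch only exhibits the one-dimensional convergence.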
Convergence rates of posterior distributions for noniid observations
We consider the asymptotic behavior of posterior distributions and Bayes
estimators based on observations that are not required to be independent or
identically distributed. We give general results on the rate of convergence
of the posterior measure relative to distances derived from a testing
criterion. We then specialize our results to independent, nonidentically
distributed observations, Markov processes, stationary Gaussian time series and
the white noise model. We apply our general results to several examples of
infinite-dimensional statistical models including nonparametric regression with
normal errors, binary regression, Poisson regression, an interval censoring
model, Whittle estimation of the spectral density of a time series and a
nonlinear autoregressive model.
Comment: Published at http://dx.doi.org/10.1214/009053606000001172 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
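One of the examples above, the white noise model, admits a short numerical illustration of posterior contraction in its Gaussian sequence form. The sketch below uses a conjugate smoothness prior with smoothness level alpha = 1 and a truth on a Sobolev-type boundary (all of these modeling choices are illustrative assumptions, not the paper's construction), and checks that the posterior risk shrinks at roughly the rate n^(-2*alpha/(2*alpha+1)):

```python
import numpy as np

rng = np.random.default_rng(2)

def posterior_risk(n, alpha=1.0, N=2000):
    """Posterior risk in the Gaussian sequence (white noise) model
    Y_i = theta_i + eps_i / sqrt(n), under the conjugate prior
    theta_i ~ N(0, i^(-(2*alpha + 1)))."""
    i = np.arange(1, N + 1, dtype=float)
    theta0 = i ** (-(alpha + 0.5))            # alpha-smooth truth (illustrative)
    y = theta0 + rng.normal(size=N) / np.sqrt(n)
    prior_prec = i ** (2 * alpha + 1)         # 1 / prior variance
    post_var = 1.0 / (n + prior_prec)
    post_mean = n * post_var * y
    # Posterior expected squared distance to the truth.
    return np.sum((post_mean - theta0) ** 2 + post_var)

# For alpha = 1 the risk should shrink roughly like n^(-2/3), so increasing
# n by a factor 1000 should cut the risk by roughly a factor 100.
r_small, r_large = posterior_risk(10**3), posterior_risk(10**6)
print(r_small, r_large, r_small / r_large)
```

The point of the sketch is only the rate: the posterior concentrates around the truth at the minimax rate for this smoothness class, which is the kind of statement the paper's general theorems deliver.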
Posterior convergence rates of Dirichlet mixtures at smooth densities
We study the rates of convergence of the posterior distribution for Bayesian
density estimation with Dirichlet mixtures of normal distributions as the
prior. The true density is assumed to be twice continuously differentiable. The
bandwidth is given a sequence of priors obtained by scaling a single prior by
an appropriate order. To handle this problem, we derive a new
general rate theorem by considering a countable covering of the parameter space
whose prior probabilities satisfy a summability condition together with certain
individual bounds on the Hellinger metric entropy. We apply this new general
theorem on posterior convergence rates by computing bounds for Hellinger
(bracketing) entropy numbers for the involved class of densities, the error in
the approximation of a smooth density by normal mixtures and the concentration
rate of the prior. The best obtainable rate of convergence of the posterior
turns out to be equivalent to the well-known frequentist rate for integrated
mean squared error up to a logarithmic factor.
Comment: Published at http://dx.doi.org/10.1214/009053606000001271 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
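The approximation step above, bounding the error when a smooth density is replaced by a normal mixture, can be illustrated numerically. The sketch below uses the logistic density as a stand-in twice-differentiable density (an illustrative assumption, not from the paper) and measures the Hellinger distance between f and its normal-mixture smoothing f_h = f * N(0, h^2), which is itself a continuous normal mixture with mixing density f; for a twice-differentiable f this distance decays like h^2:

```python
import numpy as np

dx = 0.01
x = np.arange(-20, 20, dx)
f = np.exp(-x) / (1 + np.exp(-x)) ** 2        # logistic density (smooth truth)

def hellinger_to_mixture(h):
    """Hellinger distance between f and its normal-mixture smoothing f_h."""
    m = int(round(6 * h / dx))                # kernel support: +/- 6 std devs
    k = dx * np.arange(-m, m + 1)
    kernel = np.exp(-0.5 * (k / h) ** 2)
    kernel /= kernel.sum() * dx               # normalize to integrate to 1
    f_h = np.convolve(f, kernel, mode="same") * dx
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(f * f_h)) * dx))

# Halving the bandwidth should cut the distance by about (1/2)^2 = 4.
h1, h2 = hellinger_to_mixture(0.4), hellinger_to_mixture(0.2)
print(h1, h2, h1 / h2)   # ratio should be close to (0.4 / 0.2)^2 = 4
```

This h^2 bias order is what gets traded off against the entropy and prior-concentration bounds in the rate theorem, producing the near-frequentist-optimal rate quoted above.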
Adaptive nonparametric confidence sets
We construct honest confidence regions for a Hilbert space-valued parameter
in various statistical models. The confidence sets can be centered at arbitrary
adaptive estimators, and have a diameter that adapts optimally to a given
selection of models. The latter adaptation is necessarily limited in scope. We
review the notion of adaptive confidence regions, and relate the optimal rates
of the diameter of adaptive confidence regions to the minimax rates for testing
and estimation. Applications include the finite normal mean model, the white
noise model, density estimation and regression with random design.
Comment: Published at http://dx.doi.org/10.1214/009053605000000877 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
Adaptive posterior contraction rates for the horseshoe
We investigate the frequentist properties of Bayesian procedures for
estimation based on the horseshoe prior in the sparse multivariate normal means
model. Previous theoretical results assumed that the sparsity level, that is,
the number of signals, was known. We drop this assumption and characterize the
behavior of the maximum marginal likelihood estimator (MMLE) of a key parameter
of the horseshoe prior. We prove that the MMLE is an effective estimator of the
sparsity level, in the sense that it leads to (near) minimax optimal estimation
of the underlying mean vector generating the data. Besides this empirical Bayes
procedure, we consider the hierarchical Bayes method of putting a prior on the
unknown sparsity level as well. We show that both Bayesian techniques lead to
rate-adaptive optimal posterior contraction, which implies that the horseshoe
posterior is a good candidate for generating rate-adaptive credible sets.
Comment: arXiv admin note: substantial text overlap with arXiv:1607.0189
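The shrinkage behavior behind these results can be illustrated with a one-dimensional quadrature. The sketch below (parameter values and the quadrature grid are illustrative choices) computes the horseshoe posterior mean in the normal means model by integrating out the half-Cauchy local scale, showing near-total shrinkage of a small observation and only mild shrinkage of a large one; the global scale tau plays the role of the sparsity level that the MMLE or the hierarchical prior adapts to:

```python
import numpy as np

def horseshoe_posterior_mean(y, tau):
    """Posterior mean of theta in y ~ N(theta, 1), with the horseshoe prior
    theta | lam ~ N(0, (lam * tau)^2), lam ~ half-Cauchy(0, 1), computed by
    quadrature over the local scale lam."""
    lam = np.linspace(1e-4, 200.0, 200001)    # quadrature grid (illustrative)
    s2 = 1 + (lam * tau) ** 2                 # marginal variance of y given lam
    # p(y | lam) * p(lam), up to constants that cancel in the ratio below.
    w = np.exp(-0.5 * y**2 / s2) / np.sqrt(s2) / (1 + lam**2)
    shrink = (lam * tau) ** 2 / s2            # E[theta | y, lam] = shrink * y
    return y * np.sum(w * shrink) / np.sum(w)

tau = 0.1                                     # small global scale: sparse regime
small = horseshoe_posterior_mean(0.5, tau)    # noise-level observation
large = horseshoe_posterior_mean(6.0, tau)    # clear signal
print(small, large)
```

The small observation is shrunk nearly to zero while the large one stays close to its observed value; this signal-noise separation, tuned through tau, is what drives the (near) minimax adaptive contraction described in the abstract.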