Bayesian adaptation
Given the need for low-assumption inferential methods in infinite-dimensional
settings, Bayesian adaptive estimation via a prior distribution that depends
neither on the regularity of the function to be estimated nor on the sample
size is valuable. We elucidate relationships among the main approaches followed
to design priors for minimax-optimal rate-adaptive estimation, while shedding
light on the underlying ideas. Comment: 20 pages, Propositions 3 and 5 added
Bayes and empirical Bayes: do they merge?
Bayesian inference is attractive for its coherence and good frequentist
properties. However, it is a common experience that eliciting an honest prior
may be difficult and, in practice, people often take an {\em empirical Bayes}
approach, plugging empirical estimates of the prior hyperparameters into the
posterior distribution. Even if not rigorously justified, the underlying idea
is that, when the sample size is large, empirical Bayes leads to inferential
answers "similar" to those of a proper Bayesian analysis. Yet, precise
mathematical results seem to be missing. In
this work, we give a more rigorous justification in terms of merging of Bayes
and empirical Bayes posterior distributions. We consider two notions of
merging: Bayesian weak merging and frequentist merging in total variation.
Since weak merging is related to consistency, we provide sufficient conditions
for consistency of empirical Bayes posteriors. Also, we show that, under
regularity conditions, the empirical Bayes procedure asymptotically selects the
value of the hyperparameter for which the prior most favors the "truth".
Examples include empirical Bayes density estimation with Dirichlet process
mixtures. Comment: 27 pages
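The plug-in idea described above can be sketched in a simple beta-binomial setting. This is a hypothetical illustration, not an example from the paper: the prior hyperparameters are estimated by the method of moments from the observed proportions and then substituted into each unit's posterior.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model: theta_i ~ Beta(a, b), x_i | theta_i ~ Binomial(n, theta_i)
a_true, b_true, n, m = 4.0, 6.0, 50, 200
theta = rng.beta(a_true, b_true, m)
x = rng.binomial(n, theta)

# Empirical Bayes step: estimate the prior hyperparameters (a, b) from the
# data via method of moments on the observed proportions p_i = x_i / n.
# (Crude approximation Var(p) ~ mean*(1-mean)/(a+b+1), for illustration.)
p = x / n
mean, var = p.mean(), p.var()
s = mean * (1 - mean) / var - 1          # estimate of a + b
a_hat, b_hat = mean * s, (1 - mean) * s

# Plug the estimates into each unit's posterior:
# theta_i | x_i ~ Beta(a_hat + x_i, b_hat + n - x_i)
post_mean = (a_hat + x) / (a_hat + b_hat + n)
```

The plug-in posterior means shrink the raw proportions toward the overall mean, which is the behavior the merging results in the paper make precise.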
Posterior concentration rates for empirical Bayes procedures, with applications to Dirichlet Process mixtures
In this paper we provide general conditions to check on the model and the
prior to derive posterior concentration rates for data-dependent priors (or
empirical Bayes approaches). We aim at providing conditions that are close to
the conditions provided in the seminal paper by Ghosal and van der Vaart
(2007a). We then apply the general theorem to two different settings: the
estimation of a density using Dirichlet process mixtures of Gaussian random
variables with base measure depending on some empirical quantities and the
estimation of the intensity of a counting process under the Aalen model. A
simulation study for inhomogeneous Poisson processes also illustrates our
results. In the former case we also derive some results on the estimation of
the mixing density and on the deconvolution problem. In the latter, we provide
a general theorem on posterior concentration rates for counting processes with
Aalen multiplicative intensity with priors that do not depend on the data. Comment: With supplementary material
On nonparametric estimation of a mixing density via the predictive recursion algorithm
Nonparametric estimation of a mixing density based on observations from the
corresponding mixture is a challenging statistical problem. This paper surveys
the literature on a fast, recursive estimator based on the predictive recursion
algorithm. After introducing the algorithm and giving a few examples, I
summarize the available asymptotic convergence theory, describe an important
semiparametric extension, and highlight two interesting applications. I
conclude with a discussion of several recent developments in this area and some
open problems. Comment: 22 pages, 5 figures. Comments welcome at
https://www.researchers.one/article/2018-12-
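The recursion at the heart of the algorithm can be sketched on a grid. This is a minimal illustration under assumed choices (a standard normal kernel, a uniform initial guess, and weights w_i ∝ i^{-0.67}); the update is f_i = (1 - w_i) f_{i-1} + w_i k(x_i | ·) f_{i-1} / ∫ k(x_i | u) f_{i-1}(u) du.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: X_i ~ ∫ N(x; theta, 1) f(theta) dtheta, where the true
# mixing distribution puts equal mass at -2 and +2 (for illustration only).
m = 2000
theta_true = rng.choice([-2.0, 2.0], m)
x = theta_true + rng.normal(size=m)

# Predictive recursion on a grid over theta
grid = np.linspace(-6, 6, 601)
dg = grid[1] - grid[0]
f = np.ones_like(grid)
f /= f.sum() * dg                        # uniform initial guess, integrates to 1
for i, xi in enumerate(x, start=1):
    w = (i + 1) ** -0.67                 # slowly decaying weight sequence
    k = np.exp(-0.5 * (xi - grid) ** 2) / np.sqrt(2 * np.pi)  # N(x; theta, 1)
    num = k * f
    f = (1 - w) * f + w * num / (num.sum() * dg)

# f now approximates the mixing density; its mass concentrates near ±2
```

Each update is a single pass over the grid, which is what makes the estimator fast relative to iterative schemes such as the nonparametric MLE.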
Sparse covariance estimation in heterogeneous samples
Standard Gaussian graphical models (GGMs) implicitly assume that the
conditional independence among variables is common to all observations in the
sample. However, in practice, observations are usually collected from
heterogeneous populations where such an assumption is not satisfied, leading in
turn to nonlinear relationships among variables. To tackle these problems we
explore mixtures of GGMs; in particular, we consider both infinite mixture
models of GGMs and infinite hidden Markov models with GGM emission
distributions. Such models allow us to divide a heterogeneous population into
homogeneous groups, with each cluster having its own conditional independence
structure. The main advantage of considering infinite mixtures is that they
allow us to easily estimate the number of subpopulations in the
sample. As an illustration, we study the trends in exchange rate fluctuations
in the pre-Euro era. This example demonstrates that the models are very
flexible while providing interesting insights into real-life applications.
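The core idea, that each latent group carries its own conditional-independence structure, can be sketched with a finite stand-in for the infinite mixture. Everything below is a hypothetical illustration: a two-component Gaussian mixture fitted by EM, with each cluster's graph read off from the zero pattern of its estimated precision matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two groups with different precision matrices, hence different
# conditional-independence graphs (illustrative data, not from the paper).
n, p = 300, 3
prec_a = np.array([[2.0, 0.8, 0.0], [0.8, 2.0, 0.0], [0.0, 0.0, 2.0]])
prec_b = np.array([[2.0, 0.0, 0.8], [0.0, 2.0, 0.0], [0.8, 0.0, 2.0]])
X = np.vstack([
    rng.multivariate_normal(np.full(p, -2.0), np.linalg.inv(prec_a), n),
    rng.multivariate_normal(np.full(p, 2.0), np.linalg.inv(prec_b), n),
])

def log_gauss(X, mu, cov):
    """Log density of N(mu, cov) evaluated at the rows of X."""
    d = X - mu
    prec = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (np.einsum("ij,jk,ik->i", d, prec, d)
                   + logdet + X.shape[1] * np.log(2 * np.pi))

# EM for a 2-component Gaussian mixture (a finite stand-in; the paper's
# infinite mixture would let the number of clusters be inferred).
K = 2
pi = np.full(K, 1.0 / K)
mu = X[rng.choice(len(X), K, replace=False)]
cov = np.stack([np.cov(X.T)] * K)
for _ in range(100):
    logr = np.stack([np.log(pi[k]) + log_gauss(X, mu[k], cov[k])
                     for k in range(K)], axis=1)
    logr -= logr.max(axis=1, keepdims=True)
    r = np.exp(logr)
    r /= r.sum(axis=1, keepdims=True)    # responsibilities (E-step)
    nk = r.sum(axis=0)                   # M-step below
    pi = nk / len(X)
    mu = (r.T @ X) / nk[:, None]
    for k in range(K):
        d = X - mu[k]
        cov[k] = (r[:, k, None] * d).T @ d / nk[k] + 1e-6 * np.eye(p)

# Each cluster's conditional-independence graph is the zero pattern of its
# precision matrix (crude thresholding, purely for illustration).
graphs = [(np.abs(np.linalg.inv(cov[k])) > 0.3).astype(int) for k in range(K)]
```

The thresholding step is a placeholder for the sparsity-inducing priors the paper actually uses; the point of the sketch is only that the two recovered graphs need not agree across clusters.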