Asymptotic Accuracy of Bayesian Estimation for a Single Latent Variable
In data science and machine learning, hierarchical parametric models, such as
mixture models, are often used. They contain two kinds of variables: observable
variables, which represent the parts of the data that can be directly measured,
and latent variables, which represent the underlying processes that generate
the data. Although there has been an increase in research on the estimation
accuracy for observable variables, the theoretical analysis of estimating
latent variables has not been thoroughly investigated. In a previous study, we
determined the accuracy of a Bayes estimation for the joint probability of the
latent variables in a dataset, and we proved that the Bayes method is
asymptotically more accurate than the maximum-likelihood method. However, the
accuracy of the Bayes estimation for a single latent variable remains unknown.
In the present paper, we derive the asymptotic expansions of the error
functions, which are defined by the Kullback-Leibler divergence, for two types
of single-variable estimations when the statistical regularity is satisfied.
Our results indicate that the accuracies of the Bayes and maximum-likelihood
methods are asymptotically equivalent and clarify that the Bayes method is only
advantageous for multivariable estimations.
Comment: 28 pages, 3 figures
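As a toy illustration of the single-latent-variable setting (not the paper's derivation), one can compare a plug-in maximum-likelihood estimate and a crude Bayes-averaged estimate of the latent posterior p(z | x) in a two-component Gaussian mixture, measuring error by the Kullback-Leibler divergence. The model choices, known labels for fitting, and Monte Carlo averaging below are all simplifying assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-component Gaussian mixture: z ~ Bernoulli(0.5), x | z ~ N(mu_z, 1),
# with true means mu_0 = -1, mu_1 = +1 (mixing weight and variance known).
mu_true = np.array([-1.0, 1.0])
n = 500
z = rng.integers(0, 2, size=n)
x = rng.normal(mu_true[z], 1.0)

def latent_posterior(x0, mu):
    """p(z = 1 | x0) for given component means mu = (mu_0, mu_1)."""
    log_p = -0.5 * (x0 - mu) ** 2          # log N(x0 | mu_z, 1) up to a constant
    p = np.exp(log_p - log_p.max())
    return p[1] / p.sum()

# Plug-in ML estimate of the means (labels treated as known, purely to keep
# the sketch short; a real mixture fit would use EM).
mu_ml = np.array([x[z == 0].mean(), x[z == 1].mean()])

x_new = 0.3                                 # a single new observation
p_true = latent_posterior(x_new, mu_true)
p_ml = latent_posterior(x_new, mu_ml)

# Crude Bayes average: with a flat prior, each mean has posterior
# N(sample mean, 1/n_k); average the latent posterior over draws of mu.
n0, n1 = (z == 0).sum(), (z == 1).sum()
mu_samps = np.column_stack([
    rng.normal(mu_ml[0], 1 / np.sqrt(n0), 2000),
    rng.normal(mu_ml[1], 1 / np.sqrt(n1), 2000),
])
p_bayes = np.mean([latent_posterior(x_new, m) for m in mu_samps])

# KL divergence between Bernoulli latent posteriors: the error functional
# used in the abstract, specialized to a single binary latent variable.
def kl_bern(p, q):
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

print(kl_bern(p_true, p_ml), kl_bern(p_true, p_bayes))
```

For a sample of this size both divergences are small and of comparable magnitude, consistent with the abstract's claim that the two methods are asymptotically equivalent for a single latent variable.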
Asymptotically minimax empirical Bayes estimation of a sparse normal mean vector
For the important classical problem of inference on a sparse high-dimensional
normal mean vector, we propose a novel empirical Bayes model that admits a
posterior distribution with desirable properties under mild conditions. In
particular, our empirical Bayes posterior distribution concentrates on balls,
centered at the true mean vector, with squared radius proportional to the
minimax rate, and its posterior mean is an asymptotically minimax estimator. We
also show that, asymptotically, the support of our empirical Bayes posterior
has roughly the same effective dimension as the true sparse mean vector.
Simulation from our empirical Bayes posterior is straightforward, and our
numerical results demonstrate the quality of our method compared to others
having similar large-sample properties.
Comment: 18 pages, 3 figures, 3 tables
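The paper's empirical Bayes posterior is not reproduced here; as a hedged point of comparison, the sketch below implements the classical hard-thresholding estimator at the universal threshold sqrt(2 log n), which is also known to attain the minimax rate over sparse classes up to constants. The sparsity level, signal strength, and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse normal means model: y_i = theta_i + eps_i, eps_i ~ N(0, 1),
# with only s of the n coordinates of theta nonzero.
n, s = 1000, 10
theta = np.zeros(n)
theta[:s] = 7.0                       # strong signals, chosen for illustration
y = theta + rng.normal(size=n)

# Hard thresholding at the universal threshold sqrt(2 log n): a classical
# rate-minimax estimator, used here only as a baseline comparator.
t = np.sqrt(2 * np.log(n))
theta_hat = np.where(np.abs(y) > t, y, 0.0)

loss = np.sum((theta_hat - theta) ** 2)   # squared-error loss
support = np.count_nonzero(theta_hat)     # effective dimension of the estimate
print(loss, support)
```

The recovered support size stays close to the true sparsity s, mirroring the abstract's point that a good procedure's effective dimension should roughly match that of the true sparse mean vector.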
Group invariant inferred distributions via noncommutative probability
One may consider three types of statistical inference: Bayesian, frequentist,
and group invariance-based. The focus here is on the last method. We consider
the Poisson and binomial distributions in detail to illustrate a group
invariance method for constructing inferred distributions on parameter spaces
given observed results. These inferred distributions are obtained without using
Bayes' method and in particular without using a joint distribution of random
variable and parameter. In the Poisson and binomial cases, the final formulas
for inferred distributions coincide with the formulas for Bayes posteriors with
uniform priors.
Comment: Published at http://dx.doi.org/10.1214/074921706000000563 in the IMS Lecture Notes--Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org)
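For the Poisson case, the coincidence noted in the abstract is easy to check numerically: with a flat prior on the rate λ and an observed count x, the normalized function λ^x e^{-λ} is exactly the Gamma(x + 1, 1) density. The grid check below is an illustrative sketch of that identity, not the paper's group-theoretic construction; the grid bounds and observed count are arbitrary choices.

```python
import math
import numpy as np

x_obs = 4                                  # an observed Poisson count

# Unnormalized posterior under a flat prior: the likelihood lam**x * exp(-lam)
lam = np.linspace(1e-6, 40.0, 200001)
dlam = lam[1] - lam[0]
unnorm = lam ** x_obs * np.exp(-lam)
post = unnorm / (unnorm.sum() * dlam)      # normalize by a Riemann sum

# Gamma(x + 1, rate 1) density, the claimed Bayes-posterior form
gamma_pdf = lam ** x_obs * np.exp(-lam) / math.gamma(x_obs + 1)

print(np.max(np.abs(post - gamma_pdf)))    # near zero: the densities agree
```

The binomial case works the same way: a flat prior on the success probability yields the Beta(x + 1, n - x + 1) posterior.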
