The ongoing, unprecedented exponential growth of available computing power
has radically transformed the methods of statistical inference. What used to be
a small minority of statisticians advocating the use of priors and strict
adherence to Bayes' theorem is now becoming the norm across disciplines. The
direction of this evolution is clear: the trend is towards more realistic,
flexible and complex likelihoods characterized by an ever-increasing number of
parameters. This gives the old question of what the prior should be a new
central importance in the modern Bayesian theory of inference.
Entropic priors provide one answer to the problem of prior selection. The
general definition of an entropic prior has existed since 1988, but it was not
until 1998 that entropic priors were found to provide a new notion of complete
ignorance. This paper reintroduces the family of entropic priors as minimizers
of mutual information between the data and the parameters, as in
[rodriguez98b], but with a small change and a correction. The general formalism
is then applied to two large classes of models: discrete probabilistic networks
and univariate finite mixtures of Gaussians. It is also shown how to perform
inference by efficiently sampling the corresponding posterior distributions.
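A sketch of the form such a prior typically takes may help fix ideas; the notation and normalization below are assumptions, not quotations from the paper. Writing $I(\theta:\theta_0)$ for the Kullback-Leibler divergence from the sampling distribution at $\theta$ to that at a reference value $\theta_0$, and $g(\theta)$ for the Fisher information matrix, the entropic prior is usually written as
\[
  \pi(\theta \mid \alpha, \theta_0) \;\propto\; e^{-\alpha\, I(\theta:\theta_0)}\,\sqrt{\det g(\theta)},
  \qquad
  I(\theta:\theta_0) = \int p(x \mid \theta)\,\log\frac{p(x \mid \theta)}{p(x \mid \theta_0)}\,dx,
\]
with $\alpha > 0$ controlling how strongly the prior concentrates around $\theta_0$; taking $\alpha \to 0$ recovers the Jeffreys prior $\propto \sqrt{\det g(\theta)}$. The characterization as a minimizer of the mutual information between data and parameters is what singles out this exponential-of-relative-entropy form.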
Comment: 24 pages, 3 figures. Presented at MaxEnt2001, APL, Johns Hopkins University, August 4-9, 2001. See also http://omega.albany.edu:8008