3,137 research outputs found
Entropy Concentration and the Empirical Coding Game
We give a characterization of Maximum Entropy/Minimum Relative Entropy
inference by providing two `strong entropy concentration' theorems. These
theorems unify and generalize Jaynes' `concentration phenomenon' and Van
Campenhout and Cover's `conditional limit theorem'. The theorems characterize
exactly in what sense a prior distribution Q conditioned on a given constraint,
and the distribution P, minimizing the relative entropy D(P ||Q) over all
distributions satisfying the constraint, are `close' to each other. We then
apply our theorems to establish the relationship between entropy concentration
and a game-theoretic characterization of Maximum Entropy Inference due to
Topsoe and others.
Comment: A somewhat modified version of this paper was published in Statistica
Neerlandica 62(3), pages 374-392, 2008.
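As a concrete companion to this abstract, here is a minimal numerical sketch (my own illustration, not code from the paper) of the minimizer being characterized: on a finite outcome space, the P minimizing D(P||Q) subject to a mean constraint E_P[f] = t is an exponential tilting of the prior Q, with the tilting parameter chosen to meet the constraint. The function name and the dice example are illustrative assumptions.

import numpy as np
from scipy.optimize import brentq

# Sketch: minimize D(P||Q) over distributions P on a finite set subject to
# the moment constraint E_P[f] = t. The minimizer is the exponentially
# tilted prior P_lam(x) proportional to Q(x) * exp(lam * f(x)).
def min_relative_entropy(Q, f, t):
    def constraint_gap(lam):
        w = Q * np.exp(lam * f)
        return np.dot(f, w / w.sum()) - t
    lam = brentq(constraint_gap, -50.0, 50.0)  # solve E_{P_lam}[f] = t
    P = Q * np.exp(lam * f)
    return P / P.sum()

# Jaynes-style dice example: uniform prior on {1,...,6}, mean constrained to 4.5.
Q = np.full(6, 1 / 6)
f = np.arange(1, 7, dtype=float)
print(min_relative_entropy(Q, f, t=4.5))  # tilts mass toward the higher faces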
On derivations with respect to finite sets of smooth functions
The purpose of this paper is to show that functions that derivate the
two-variable product function and one of the exponential, trigonometric or
hyperbolic functions are also standard derivations. The more general problem
considered is to describe finite sets of differentiable functions such that
derivations with respect to such a set are automatically standard derivations.
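As a gloss on the terminology (my summary of the standard notions in this area, not text from the paper; in particular, the third identity is my reading of what `derivates' means here): a standard derivation is an additive function $d\colon\mathbb{R}\to\mathbb{R}$ obeying the Leibniz rule, and $d$ derivates a differentiable function $\varphi$ when it satisfies the analogous chain-rule identity:
\begin{equation*}
d(x+y)=d(x)+d(y),\qquad d(xy)=x\,d(y)+y\,d(x),\qquad d(\varphi(x))=\varphi'(x)\,d(x).
\end{equation*}
Derivating the two-variable product function is thus exactly the Leibniz rule, and the question is when derivating one further smooth function, such as the exponential, already forces $d$ to be a standard derivation.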
On the equality problem of generalized Bajraktarevi\'c means
The purpose of this paper is to investigate the equality problem of
generalized Bajraktarevi\'c means, i.e., to solve the functional equation
\begin{equation}\label{E0}\tag{*}
f^{(-1)}\bigg(\frac{p_1(x_1)f(x_1)+\dots+p_n(x_n)f(x_n)}{p_1(x_1)+\dots+p_n(x_n)}\bigg)=g^{(-1)}\bigg(\frac{q_1(x_1)g(x_1)+\dots+q_n(x_n)g(x_n)}{q_1(x_1)+\dots+q_n(x_n)}\bigg),
\end{equation} which holds for all $x=(x_1,\dots,x_n)\in I^n$, where $n\geq 2$,
$I$ is a nonempty open real interval, the unknown functions
$f,g\colon I\to\mathbb{R}$ are strictly monotone, $f^{(-1)}$ and $g^{(-1)}$ denote
their generalized left inverses, respectively, and
$p=(p_1,\dots,p_n)\colon I\to\mathbb{R}_+^n$ and
$q=(q_1,\dots,q_n)\colon I\to\mathbb{R}_+^n$ are also unknown functions. This
equality problem in the symmetric two-variable (i.e., when $n=2$) case was
already investigated and solved under sixth-order regularity assumptions by
Losonczi in 1999. In the nonsymmetric two-variable case, assuming three times
differentiability of $f$ and $g$, and the existence of $i\in\{1,2\}$ such that
either $p_i$ is twice continuously differentiable and $p_{3-i}$ is continuous
on $I$, or $p_i$ is twice differentiable and $p_{3-i}$ is once differentiable
on $I$, we prove that \eqref{E0} holds if and only if there exist four
constants $a,b,c,d\in\mathbb{R}$ with $ad\neq bc$ such that \begin{equation*}
cf+d>0,\qquad
g=\frac{af+b}{cf+d},\qquad\mbox{and}\qquad q_\ell=(cf+d)p_\ell\qquad
(\ell\in\{1,\dots,n\}). \end{equation*} In the case $n\geq 3$, we obtain the
same conclusion with weaker regularity assumptions. Namely, we suppose that
$f$ and $g$ are three times differentiable, $p_1$ is continuous, and there exist
$i,j,k\in\{1,\dots,n\}$ with $i\neq j\neq k\neq i$ such that $p_i$, $p_j$, $p_k$
are differentiable.
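A quick numerical sanity check of this characterization (a sketch of my own; the generator $f$, the weight functions $p_\ell$, and the constants are arbitrary illustrative choices, not from the paper): build $g$ and the $q_\ell$ from $f$ and the $p_\ell$ via the stated formulas and verify that the two generalized Bajraktarević means coincide.

import numpy as np

def bajraktarevic(h, h_inv, weights, x):
    # Generalized Bajraktarevic mean: h_inv of the weight-averaged h-values.
    w = np.array([w_i(x_i) for w_i, x_i in zip(weights, x)])
    vals = np.array([h(x_i) for x_i in x])
    return h_inv(np.dot(w, vals) / w.sum())

# Illustrative choices on I = (0, inf); f is strictly monotone, so its
# generalized left inverse is the ordinary inverse here.
f, f_inv = np.log, np.exp
p = [lambda t: 1.0 + t, lambda t: 2.0, lambda t: np.sqrt(t)]  # n = 3 weights

a, b, c, d = 2.0, 1.0, 1.0, 3.0            # ad - bc = 5 != 0
g = lambda t: (a * f(t) + b) / (c * f(t) + d)       # c*f + d > 0 on our sample
g_inv = lambda s: f_inv((d * s - b) / (a - c * s))  # inverse of the Moebius image
q = [lambda t, p_l=p_l: (c * f(t) + d) * p_l(t) for p_l in p]

x = (0.8, 2.5, 7.0)
print(bajraktarevic(f, f_inv, p, x))  # the two means agree up to rounding
print(bajraktarevic(g, g_inv, q, x))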
Inconsistency of Bayesian Inference for Misspecified Linear Models, and a Proposal for Repairing It
We empirically show that Bayesian inference can be inconsistent under
misspecification in simple linear regression problems, both in a model
averaging/selection and in a Bayesian ridge regression setting. We use the
standard linear model, which assumes homoskedasticity, whereas the data are
heteroskedastic, and observe that the posterior puts its mass on ever more
high-dimensional models as the sample size increases. To remedy the problem, we
equip the likelihood in Bayes' theorem with an exponent called the learning
rate, and we propose the Safe Bayesian method to learn the learning rate from
the data. SafeBayes tends to select small learning rates as soon as the standard
posterior is not `cumulatively concentrated', and its results on our data are
quite encouraging.
Comment: 70 pages, 20 figures.
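To make the tempering idea concrete, here is a minimal sketch (my own, not the authors' code) of a generalized posterior for Bayesian ridge regression in which the likelihood carries an exponent eta: eta = 1 recovers the standard posterior, while SafeBayes would instead choose eta from the data. The function name and the heteroskedastic toy data are illustrative assumptions.

import numpy as np

# Conjugate Gaussian linear model with prior beta ~ N(0, tau2 * I) and
# likelihood N(y | X beta, sigma2 * I) raised to the power eta. The
# resulting tempered posterior is Gaussian with the precision/mean below.
def tempered_ridge_posterior(X, y, eta, sigma2=1.0, tau2=1.0):
    p = X.shape[1]
    precision = eta * X.T @ X / sigma2 + np.eye(p) / tau2
    cov = np.linalg.inv(precision)
    mean = eta * cov @ X.T @ y / sigma2
    return mean, cov

# Toy heteroskedastic data fit with a homoskedastic working model,
# mirroring the paper's misspecification setup (the exact design is made up).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta_true = np.array([1.0, 0.0, -2.0])
y = X @ beta_true + rng.normal(scale=np.abs(X[:, 0]))  # noise scale depends on X
print(tempered_ridge_posterior(X, y, eta=0.5)[0])      # posterior mean at eta=0.5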