The Role of Mutual Information in Variational Classifiers
Overfitting is a well-known phenomenon in which a model mimics a particular
instance of data too closely (or exactly) and may therefore fail to predict
future observations reliably. In practice, this behaviour is controlled by
various, sometimes heuristic, regularization techniques, which are motivated
by the development of upper bounds on the generalization error. In this work,
we study the generalization error of classifiers relying on stochastic
encodings trained with the cross-entropy loss,
which is often used in deep learning for classification problems. We derive
bounds on the generalization error, showing that there exists a regime where the
generalization error is bounded by the mutual information between input
features and the corresponding representations in the latent space, which are
randomly generated according to the encoding distribution. Our bounds provide
an information-theoretic understanding of generalization in the so-called class
of variational classifiers, which are regularized by a Kullback-Leibler (KL)
divergence term. These results provide theoretical grounds for the highly
popular KL term in variational inference methods, which has already been
recognized to act effectively as a regularization penalty. We further observe
connections with well-studied notions such as Variational Autoencoders,
Information Dropout,
Information Bottleneck and Boltzmann Machines. Finally, we perform numerical
experiments on the MNIST and CIFAR datasets and show that mutual information
is indeed highly representative of the behaviour of the generalization error.
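As a concrete illustration of the setting above, the sketch below trains a
classifier with a stochastic (Gaussian) encoding on a cross-entropy loss plus
a KL regularizer. It is a minimal PyTorch sketch, not the paper's code: the
layer sizes, the standard-normal prior, and the weight beta are illustrative
assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalClassifier(nn.Module):
    """Classifier with a stochastic encoding z ~ q(z|x), assumed Gaussian."""
    def __init__(self, in_dim=784, latent_dim=32, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(256, latent_dim)   # log-variance of q(z|x)
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Representation randomly generated from the encoding distribution
        # (reparameterization trick).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.classifier(z), mu, logvar

def variational_loss(logits, labels, mu, logvar, beta=1e-3):
    # Cross-entropy term plus the KL(q(z|x) || N(0, I)) regularizer discussed
    # in the abstract; the weight beta is an illustrative assumption.
    ce = F.cross_entropy(logits, labels)
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    return ce + beta * kl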
Conditional Mutual Information Neural Estimator
Several recent works in communication systems have proposed to leverage the
power of neural networks in the design of encoders and decoders. In this
approach, these blocks can be tailored to maximize the transmission rate based
on aggregated samples from the channel. Motivated by the fact that, in many
communication schemes, the achievable transmission rate is determined by a
conditional mutual information term, this paper focuses on neural-based
estimators for this information-theoretic quantity. Our results are based on
variational bounds for the KL-divergence and, in contrast to some previous
works, we provide a mathematically rigorous lower bound. However, compared
with the unconditional mutual information, additional challenges emerge due
to the presence of a conditional density function, which we address here.

Comment: To be presented at ICASSP 202
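For intuition, the sketch below implements a generic neural variational lower
bound (Donsker-Varadhan form of the KL-divergence) applied to conditional
mutual information I(X;Y|Z). It is not the authors' estimator: the critic
architecture is an illustrative assumption, and the batch-wise permutation
used to mimic samples in which Y is decoupled from X given Z is only a crude
approximation of the conditional resampling whose rigorous treatment is the
challenge raised in the abstract.

import torch
import torch.nn as nn

class Critic(nn.Module):
    """Scalar critic T(x, y, z) used inside the variational KL bound."""
    def __init__(self, dim_x, dim_y, dim_z, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_y + dim_z, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, y, z):
        return self.net(torch.cat([x, y, z], dim=-1)).squeeze(-1)

def cmi_lower_bound(critic, x, y, z):
    # Donsker-Varadhan-style estimate: E_joint[T] - log E_product[exp(T)].
    joint = critic(x, y, z).mean()
    # Crude stand-in for samples where y is drawn independently of x given z:
    # permute y across the batch. Sampling rigorously from p(y|z) is the main
    # additional difficulty introduced by the conditional density.
    y_shuffled = y[torch.randperm(y.size(0))]
    n = torch.tensor(float(y.size(0)))
    product = torch.logsumexp(critic(x, y_shuffled, z), dim=0) - torch.log(n)
    return joint - product  # maximize this quantity over the critic's parameters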