Jensen-Shannon Information Based Characterization of the Generalization Error of Learning Algorithms
Generalization error bounds are critical to understanding the performance of machine learning models. In this work, we propose a new information-theoretic generalization error upper bound applicable to supervised learning scenarios. We show that our general bound can be specialized to recover various previous bounds. We also show that, under some conditions, our general bound specializes to a new bound involving the Jensen-Shannon information between a random variable modelling the set of training samples and another random variable modelling the hypothesis. We further prove that our bound can be tighter than mutual information-based bounds under some conditions.
Comment: Accepted at the ITW 2020 conference.
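For context, the mutual-information baseline that this abstract compares against is the standard bound of Xu and Raginsky: if the loss is \sigma-sub-Gaussian, S is the training set of n samples, and W is the hypothesis, then

\[
\bigl|\mathbb{E}[\mathrm{gen}(S,W)]\bigr| \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(S;W)}.
\]

The paper's variant replaces the mutual information I(S;W) with the Jensen-Shannon information, i.e. the Jensen-Shannon divergence between the joint distribution P_{S,W} and the product of marginals P_{S} \otimes P_{W}; the sketch above is only the standard baseline, and the exact constants and conditions of the new bound are the paper's.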
Tighter Expected Generalization Error Bounds via Convexity of Information Measures
Generalization error bounds are essential to understanding machine learning algorithms. This paper presents novel expected generalization error upper bounds based on the average joint distribution between the output hypothesis and each input training sample. Multiple generalization error upper bounds based on different information measures are provided, including Wasserstein distance, total variation distance, KL divergence, and Jensen-Shannon divergence. Due to the convexity of the information measures, the proposed bounds in terms of Wasserstein distance and total variation distance are shown to be tighter than their counterparts based on individual samples in the literature. An example is provided to demonstrate the tightness of the proposed generalization error bounds.
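The convexity step the abstract relies on can be made explicit. For any divergence D that is jointly convex in its two arguments (Wasserstein distance, total variation distance, and KL divergence all are), Jensen's inequality gives, writing P_{W,Z_i} for the joint distribution of the hypothesis and the i-th training sample and \mu for the data distribution (notation assumed here for illustration):

\[
D\!\left(\frac{1}{n}\sum_{i=1}^{n} P_{W,Z_i},\; P_{W} \otimes \mu\right) \;\le\; \frac{1}{n}\sum_{i=1}^{n} D\!\left(P_{W,Z_i},\; P_{W} \otimes \mu\right).
\]

A bound stated in terms of the average joint distribution on the left is therefore never looser than the corresponding average of individual-sample bounds on the right.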
Information-Theoretic Bounds on the Moments of the Generalization Error of Learning Algorithms
Generalization error bounds are critical to understanding the performance of machine learning models. In this work, building upon a new bound on the expected value of an arbitrary function of the population and empirical risk of a learning algorithm, we offer a more refined analysis of the generalization behaviour of machine learning models based on a characterization of (bounds on) their generalization error moments. We discuss how the proposed bounds, which also encompass new bounds on the expected generalization error, relate to existing bounds in the literature. We also discuss how the proposed generalization error moment bounds can be used to construct new high-probability generalization error bounds.
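The final step mentioned above, from moment bounds to high-probability bounds, is a standard Markov-inequality argument (sketched here; it need not match the paper's exact derivation). If the p-th moment of the generalization error is bounded by M_p, then for any \epsilon > 0,

\[
\Pr\bigl(\lvert\mathrm{gen}(S,W)\rvert \ge \epsilon\bigr) \;\le\; \frac{\mathbb{E}\bigl[\lvert\mathrm{gen}(S,W)\rvert^{p}\bigr]}{\epsilon^{p}} \;\le\; \frac{M_{p}}{\epsilon^{p}},
\]

so with probability at least 1 - \delta the generalization error is at most (M_p / \delta)^{1/p}, and the choice of p can then be optimized.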