Shaping the learning landscape in neural networks around wide flat minima
Learning in Deep Neural Networks (DNN) takes place by minimizing a non-convex
high-dimensional loss function, typically by a stochastic gradient descent
(SGD) strategy. The learning process is observed to find good
minimizers without getting stuck in local critical points, and such
minimizers are often satisfactory at avoiding overfitting. How these two
features can be kept under control in nonlinear devices composed of millions of
tunable connections is a profound and far-reaching open question. In this paper
we study basic non-convex one- and two-layer neural network models which learn
random patterns, and derive a number of basic geometrical and algorithmic
features that suggest some answers. We first show that the error loss function
presents a few extremely wide flat minima (WFM), which coexist with narrower
minima and critical points. We then show that the minimizers of the
cross-entropy loss function overlap with the WFM of the error loss. We also
show examples of learning devices for which WFM do not exist. From the
algorithmic perspective, we derive entropy-driven greedy and message-passing
algorithms that focus their search on wide flat regions of minimizers. In the
case of SGD and cross-entropy loss, we show that a slow reduction of the norm
of the weights along the learning process also leads to WFM. We corroborate the
results by a numerical study of the correlations between the volumes of the
minimizers, their Hessian, and their generalization performance on real data.
Comment: 37 pages (16 main text), 10 figures (7 main text).
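The "volumes of the minimizers" mentioned above can be probed numerically.
Below is a minimal sketch (in PyTorch; an illustration, not the authors'
estimator) that perturbs the weights with Gaussian noise of increasing scale
and records the training error: a wide flat minimum is one whose error stays
near the baseline for comparatively large scales. The helper `error_on_train`
is an assumption, standing in for any routine that returns the fraction of
misclassified training patterns.

```python
import copy
import torch

def flatness_profile(model, error_on_train, sigmas, n_samples=10):
    """Mean training error of the perturbed model at each noise scale sigma.
    `error_on_train` is an assumed helper (fraction of training errors)."""
    baseline = error_on_train(model)
    profile = []
    for sigma in sigmas:
        errs = []
        for _ in range(n_samples):
            noisy = copy.deepcopy(model)
            with torch.no_grad():
                for p in noisy.parameters():
                    p.add_(sigma * torch.randn_like(p))  # isotropic perturbation
            errs.append(error_on_train(noisy))
        profile.append(sum(errs) / n_samples)
    return baseline, profile
```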
Entropic gradient descent algorithms and wide flat minima
The properties of flat minima in the empirical risk landscape of neural
networks have been debated for some time. Increasing evidence suggests that
they possess better generalization capabilities than sharp ones. First,
we discuss Gaussian mixture classification models and show analytically that
there exist Bayes optimal pointwise estimators which correspond to minimizers
belonging to wide flat regions. These estimators can be found by applying
maximum flatness algorithms either directly on the classifier (which is
norm-independent) or on the differentiable loss function used in learning. Next, we
extend the analysis to the deep learning scenario by extensive numerical
validations. Using two algorithms, Entropy-SGD and Replicated-SGD, that
explicitly include in the optimization objective a non-local flatness measure
known as local entropy, we consistently improve the generalization error for
common architectures (e.g., ResNet, EfficientNet). An easy-to-compute flatness
measure shows a clear correlation with test accuracy.
Comment: updated version focusing on numerical experiments.
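For concreteness, here is a minimal sketch of an Entropy-SGD-style outer step
in PyTorch (after Chaudhari et al.; the hyperparameters and the `batches`
iterator are illustrative assumptions, not the paper's exact setup). The inner
SGLD loop explores the neighborhood of the current weights, and the outer
update moves the weights toward the average of that exploration, which biases
the search toward wide flat (high local entropy) regions.

```python
import copy
import torch

def entropy_sgd_step(model, loss_fn, batches, eta=0.1, gamma=0.03,
                     inner_lr=0.1, noise=1e-4, L=5):
    """One outer step: SGLD on a fast copy coupled to the slow weights, then
    move the slow weights toward the running mean mu of the fast ones (a proxy
    for the negative local-entropy gradient). Hyperparameters are assumptions."""
    fast = copy.deepcopy(model)
    mu = [p.detach().clone() for p in fast.parameters()]
    for _ in range(L):
        xb, yb = next(batches)                   # assumed (inputs, targets) pairs
        fast.zero_grad()
        loss_fn(fast(xb), yb).backward()
        with torch.no_grad():
            for p_f, p_s, m in zip(fast.parameters(), model.parameters(), mu):
                grad = p_f.grad + gamma * (p_f - p_s)   # coupling to slow weights
                p_f.add_(-inner_lr * grad + noise * torch.randn_like(p_f))
                m.mul_(0.75).add_(0.25 * p_f)           # running average of fast copy
    with torch.no_grad():
        for p_s, m in zip(model.parameters(), mu):
            p_s.add_(-eta * gamma * (p_s - m))          # outer (slow) update
```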
Learning Generative Models with Sinkhorn Divergences
The ability to compare two degenerate probability distributions (i.e. two
probability distributions supported on two distinct low-dimensional manifolds
living in a much higher-dimensional space) is a crucial problem arising in the
estimation of generative models for high-dimensional observations such as those
arising in computer vision or natural language. It is known that optimal
transport metrics can represent a cure for this problem, since they were
specifically designed as an alternative to information divergences to handle
such problematic scenarios. Unfortunately, training generative machines using
OT raises formidable computational and statistical challenges, because of (i)
the computational burden of evaluating OT losses, (ii) the instability and lack
of smoothness of these losses, and (iii) the difficulty of robustly estimating
these losses and their gradients in high dimension. This paper presents the first
tractable computational method to train large scale generative models using an
optimal transport loss, and tackles these three issues by relying on two key
ideas: (a) entropic smoothing, which turns the original OT loss into one that
can be computed using Sinkhorn fixed point iterations; (b) algorithmic
(automatic) differentiation of these iterations. These two approximations
result in a robust and differentiable approximation of the OT loss with
streamlined GPU execution. Entropic smoothing generates a family of losses
interpolating between Wasserstein (OT) and Maximum Mean Discrepancy (MMD), thus
allowing one to find a sweet spot that leverages the geometry of OT and the favorable
high-dimensional sample complexity of MMD which comes with unbiased gradient
estimates. The resulting computational architecture nicely complements standard
deep network generative models with a stack of extra layers implementing the
loss function.
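The two key ideas lend themselves to a compact implementation. The sketch
below (a simplified stand-alone loss in PyTorch, hedged: not the paper's full
training pipeline) computes the entropically smoothed OT cost between two
point clouds with log-domain Sinkhorn fixed-point iterations, idea (a); since
every step is an ordinary differentiable tensor operation, autograd
backpropagates through the unrolled iterations, idea (b).

```python
import math
import torch

def sinkhorn_loss(x, y, eps=0.1, n_iters=100):
    """Entropy-regularized OT cost between uniform point clouds x (n,d), y (m,d).
    Differentiable w.r.t. x and y via autograd through the unrolled iterations."""
    n, m = x.shape[0], y.shape[0]
    C = torch.cdist(x, y) ** 2                    # pairwise squared distances
    log_mu = torch.full((n,), -math.log(n))       # uniform source weights
    log_nu = torch.full((m,), -math.log(m))       # uniform target weights
    f = torch.zeros(n)
    g = torch.zeros(m)
    for _ in range(n_iters):                      # Sinkhorn fixed-point iterations
        f = -eps * torch.logsumexp((g[None, :] - C) / eps + log_nu[None, :], dim=1)
        g = -eps * torch.logsumexp((f[:, None] - C) / eps + log_mu[:, None], dim=0)
    # transport plan P_ij = exp((f_i + g_j - C_ij)/eps) * mu_i * nu_j
    log_P = (f[:, None] + g[None, :] - C) / eps + log_mu[:, None] + log_nu[None, :]
    return (log_P.exp() * C).sum()                # <P, C>, the smoothed OT cost
```

Large eps pushes the loss toward the MMD end of the interpolation, small eps
toward the Wasserstein end, which is the sweet-spot trade-off described above.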
Bregman Voronoi Diagrams: Properties, Algorithms and Applications
The Voronoi diagram of a finite set of objects is a fundamental geometric
structure that subdivides the embedding space into regions, each region
consisting of the points that are closer to a given object than to the others.
We may define many variants of Voronoi diagrams depending on the class of
objects, the distance functions and the embedding space. In this paper, we
investigate a framework for defining and building Voronoi diagrams for a broad
class of distance functions called Bregman divergences. Bregman divergences
include not only the traditional (squared) Euclidean distance but also various
divergence measures based on entropic functions. Accordingly, Bregman Voronoi
diagrams make it possible to define information-theoretic Voronoi diagrams in statistical
parametric spaces based on the relative entropy of distributions. We define
several types of Bregman diagrams, establish correspondences between those
diagrams (using the Legendre transformation), and show how to compute them
efficiently. We also introduce extensions of these diagrams, e.g., k-order and
k-bag Bregman Voronoi diagrams, as well as Bregman triangulations of a set of
points and their connection with Bregman Voronoi diagrams. We show that these
triangulations capture many of the properties of the celebrated Delaunay
triangulation. Finally, we give some applications of Bregman Voronoi diagrams
which are of interest in the context of computational geometry and machine
learning.
Comment: Extends the proceedings abstract of SODA 2007 (46 pages, 15 figures).
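As a small illustration of the information-theoretic case mentioned above, the
sketch below (illustrative only, not the paper's algorithms) labels points on
the probability simplex by their first-type Bregman Voronoi cell, using the
relative entropy, i.e., the Bregman divergence D_F(x, c) = F(x) - F(c) -
<grad F(c), x - c> generated by the negative Shannon entropy
F(x) = sum_i x_i log x_i.

```python
import numpy as np

def kl_divergence(x, c):
    """Bregman divergence of F(x) = sum(x*log(x)) restricted to the simplex:
    D_F(x, c) = sum(x * log(x / c)), the relative entropy (KL)."""
    return np.sum(x * np.log(x / c))

def bregman_voronoi_labels(points, sites, divergence=kl_divergence):
    """First-type cells: each point x goes to the site c minimizing D(x, c).
    Note the asymmetry of the divergence; swapping arguments gives the
    second-type diagram."""
    return np.array([np.argmin([divergence(x, c) for c in sites])
                     for x in points])
```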
Categorization of interestingness measures for knowledge extraction
Finding interesting association rules is an important and active research
field in data mining. The algorithms of the Apriori family are based on two
rule extraction measures, support and confidence. Although these two measures
have the virtue of being algorithmically fast, they generate a prohibitive
number of rules, most of which are redundant and irrelevant. It is therefore
necessary to use additional measures that filter out uninteresting rules. Many
survey studies of interestingness measures have since been carried out from
several points of view. Various studies have sought to identify "good"
properties of rule extraction measures, and these properties have been
assessed on 61 measures. The purpose of this paper is twofold: first, to
extend the set of measures and properties under study and to formalize the
properties proposed in the literature; second, in light of this formal study,
to categorize the studied measures. The resulting categories help users
efficiently select one or more appropriate measures during the knowledge
extraction process. Evaluating the properties on the 61 measures enabled us to
identify seven classes of measures, obtained using two different clustering
techniques.
Comment: 34 pages, 4 figures.
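For reference, here is a minimal sketch (a hypothetical helper, not code from
the paper) of the two Apriori measures plus lift, a classic example of the
kind of additional measure used to filter rules that support and confidence
alone would keep. Transactions and itemsets are plain Python sets.

```python
def rule_measures(transactions, A, B):
    """Support, confidence, and lift of the association rule A -> B.
    Assumes A occurs in at least one transaction and B in at least one."""
    n = len(transactions)
    n_A = sum(1 for t in transactions if A <= t)        # A is a subset of t
    n_B = sum(1 for t in transactions if B <= t)
    n_AB = sum(1 for t in transactions if (A | B) <= t)
    support = n_AB / n
    confidence = n_AB / n_A              # estimate of P(B | A)
    lift = confidence / (n_B / n)        # P(B | A) / P(B); 1 means independence
    return support, confidence, lift

# e.g. rule_measures([{"bread", "milk"}, {"bread"}, {"milk"}],
#                    {"bread"}, {"milk"})  ->  (1/3, 1/2, 3/4)
```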
Classification and Verification of Online Handwritten Signatures with Time Causal Information Theory Quantifiers
We present a new approach for online handwritten signature classification and
verification based on descriptors stemming from Information Theory. The
proposal uses the Shannon Entropy, the Statistical Complexity, and the Fisher
Information evaluated over the Bandt and Pompe symbolization of the horizontal
and vertical coordinates of signatures. These six features are easy and fast to
compute, and they are the input to a One-Class Support Vector Machine
classifier. The results produced surpass state-of-the-art techniques that
employ higher-dimensional feature spaces which often require specialized
software and hardware. We assess the consistency of our proposal with respect
to the size of the training sample, and we also use it to classify the
signatures into meaningful groups.
Comment: Submitted to PLOS One.
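Of the six descriptors, the permutation entropy is the simplest to reproduce.
Below is a minimal sketch of the Bandt and Pompe symbolization followed by the
normalized Shannon entropy of the resulting ordinal-pattern histogram; the
embedding dimension d and delay tau are illustrative assumptions, and the
Statistical Complexity and Fisher Information, computed from the same
histogram, are omitted here.

```python
import math
from collections import Counter

def permutation_entropy(series, d=4, tau=1):
    """Bandt-Pompe: map each length-d window to the permutation that sorts it,
    then return the Shannon entropy of the pattern frequencies, normalized to
    [0, 1] by log(d!)."""
    patterns = [tuple(sorted(range(d), key=lambda k: series[i + k * tau]))
                for i in range(len(series) - (d - 1) * tau)]
    counts = Counter(patterns)
    total = len(patterns)
    H = -sum((c / total) * math.log(c / total) for c in counts.values())
    return H / math.log(math.factorial(d))
```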
Financial markets: very noisy information processing
We report new results on the impact of noise on information processing, with application to financial markets. These results quantify the trade-off between the amount of data and the noise level in the data, and they provide estimates of a learning system's performance as a function of the noise level. We use these results to derive a method for detecting changes in market volatility from period to period, and we successfully apply it to the four major foreign exchange (FX) markets. The results hold for linear as well as nonlinear learning models and algorithms, and for different noise models.