Global versus Localized Generative Adversarial Nets
In this paper, we present a novel localized Generative Adversarial Net (GAN)
to learn on the manifold of real data. Compared with the classic GAN that {\em
globally} parameterizes a manifold, the Localized GAN (LGAN) uses local
coordinate charts to parameterize distinct local geometry of how data points
can transform at different locations on the manifold. Specifically, around each
point there exists a {\em local} generator that can produce data following
diverse patterns of transformations on the manifold. The local nature of
LGAN enables local generators to adapt to and directly access the local
geometry without the need to invert the generator of a global GAN. Furthermore, it
can prevent the manifold from being locally collapsed to a dimensionally
deficient tangent subspace by imposing an orthonormality prior between
tangents. This provides a geometric approach to alleviating mode collapse at
least locally on the manifold by imposing independence between data
transformations in different tangent directions. We also demonstrate that the
LGAN can be applied to train a robust classifier that prefers locally
consistent classification decisions on the manifold, and that the resulting
regularizer is closely related to the Laplace-Beltrami operator. Our
experiments show that the proposed LGANs can not only produce diverse image
transformations but also deliver superior classification performance.
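As a concrete illustration of the orthonormality prior described above, here is a minimal sketch, not the authors' code: a toy local generator G(x, z) with G(x, 0) = x, whose tangents at x are the columns of the Jacobian dG/dz at z = 0, penalized toward orthonormality. The architecture and the names toy_generator and ortho_penalty are illustrative assumptions.

```python
# Hedged sketch of the orthonormality prior on local tangents.
import jax
import jax.numpy as jnp

def toy_generator(x, z, params):
    # Stand-in local generator G(x, z): perturbs the data point x by a
    # nonlinear function of the local coordinates z, with G(x, 0) = x.
    W1, W2 = params
    h = jnp.tanh(W1 @ z)
    return x + W2 @ h

def ortho_penalty(x, params):
    # Tangents at x are the columns of the Jacobian dG/dz at z = 0.
    d_z = params[0].shape[1]
    J = jax.jacobian(lambda z: toy_generator(x, z, params))(jnp.zeros(d_z))
    # Penalize deviation of J^T J from the identity so the tangent
    # directions stay independent and the local chart keeps full rank,
    # discouraging a locally collapsed, rank-deficient tangent subspace.
    JtJ = J.T @ J
    return jnp.sum((JtJ - jnp.eye(d_z)) ** 2)

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = (jax.random.normal(k1, (8, 3)), jax.random.normal(k2, (16, 8)))
x = jax.random.normal(k3, (16,))
print(ortho_penalty(x, params))  # add to the GAN loss as a regularizer
```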
Geometric Numerical Integration of the Assignment Flow
The assignment flow is a smooth dynamical system that evolves on an
elementary statistical manifold and performs contextual data labeling on a
graph. We derive and introduce the linear assignment flow that evolves
nonlinearly on the manifold, but is governed by a linear ODE on the tangent
space. Various numerical schemes adapted to the mathematical structure of these
two models are designed and studied for the geometric numerical integration of
both flows: embedded Runge-Kutta-Munthe-Kaas schemes for the nonlinear flow,
adaptive Runge-Kutta schemes and exponential integrators for the linear flow.
All algorithms are parameter-free, except for a tolerance value that
controls adaptive step-size selection by monitoring the local integration
error, or for the fixed dimension of the Krylov subspace approximation. These
algorithms provide a basis for applying the assignment flow to machine learning
scenarios beyond supervised labeling, including unsupervised labeling and
learning from controlled assignment flows.
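To make the geometric integration concrete, below is a minimal sketch of a geometric Euler step on a single probability simplex, the simplest scheme of the family discussed above. It assumes the standard exponential-type lifting map exp_p(u) = p * e^u / <p, e^u> used in the assignment-flow literature, whose derivative at u = 0 is the replicator operator; the vector field, step size, and function names are illustrative, not the paper's implementation.

```python
# Hedged sketch: one geometric Euler step on the probability simplex.
import jax.numpy as jnp

def lift(p, u):
    # exp_p(u) = p * e^u / <p, e^u>; its derivative at u = 0 is the
    # replicator operator R_p = Diag(p) - p p^T, so one lifted step
    # integrates the ODE  p' = R_p F(p)  to first order.
    q = p * jnp.exp(u)
    return q / jnp.sum(q)

def geometric_euler_step(p, F, h):
    # Take the step on the tangent space and map back to the manifold,
    # so the iterate never leaves the interior of the simplex.
    return lift(p, h * F(p))

# Illustrative constant vector field: the flow concentrates mass on the
# entry with the largest "fitness", mimicking a labeling decision.
a = jnp.array([0.2, 1.0, 0.5])
p = jnp.ones(3) / 3.0
for _ in range(100):
    p = geometric_euler_step(p, lambda q: a, 0.1)
print(p)  # mass concentrates on index 1
```

The higher-order Runge-Kutta-Munthe-Kaas schemes of the paper refine this pattern by composing several tangent-space evaluations before a single lift.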
Differential geometric regularization for supervised learning of classifiers
We study the problem of supervised learning for both binary and multiclass classification from a unified geometric perspective. In particular, we propose a geometric regularization technique to find the submanifold corresponding to an estimator of the class probability P(y|\vec x). The regularization term measures the volume of this submanifold, based on the intuition that overfitting produces rapid local oscillations and hence a large volume of the estimator. This technique can be applied to regularize any classification function that satisfies two requirements: first, an estimator of the class probability can be obtained; second, first and second derivatives of the class probability estimator can be calculated. In experiments, we apply our regularization technique to standard loss functions for classification; our RBF-based implementation compares favorably to widely used regularization methods for both binary and multiclass classification.
http://proceedings.mlr.press/v48/baia16.pdf
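For intuition, here is a minimal sketch of the volume regularizer in the binary case: the graph {(x, f(x))} of a class-probability estimator f is a hypersurface whose volume element is sqrt(1 + |grad f|^2), and the penalty averages it over the inputs. The logistic estimator and all names here are illustrative stand-ins, not the paper's RBF-based implementation.

```python
# Hedged sketch of a graph-volume penalty for a binary classifier.
import jax
import jax.numpy as jnp

def f(params, x):
    # Toy class-probability estimator (logistic model); the technique
    # applies to any estimator with first and second derivatives.
    w, b = params
    return jax.nn.sigmoid(jnp.dot(w, x) + b)

def volume_penalty(params, xs):
    # Volume element of the graph hypersurface: sqrt(1 + |grad f|^2),
    # approximated by a Monte Carlo average over the training inputs.
    grad_f = jax.vmap(jax.grad(f, argnums=1), in_axes=(None, 0))(params, xs)
    return jnp.mean(jnp.sqrt(1.0 + jnp.sum(grad_f ** 2, axis=-1)))

key = jax.random.PRNGKey(1)
xs = jax.random.normal(key, (128, 2))
params = (jnp.array([2.0, -1.0]), 0.5)
print(volume_penalty(params, xs))  # add to the classification loss
```

Minimizing this penalty by automatic differentiation involves second derivatives of f, which matches the abstract's second requirement on the estimator.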