Learning Theory and Approximation
The main goal of this workshop – the third of its type at the MFO – has been to blend mathematical results from statistical learning theory and approximation theory, strengthening both disciplines and exploiting synergies to address current research questions. Learning theory aims at modeling unknown functional relations and data structures from samples in an automatic manner. Approximation theory is closely connected to the further development of learning theory, in particular to the design of new, useful algorithms and to the theoretical understanding of existing methods. Conversely, the study of learning theory gives rise to interesting theoretical problems for approximation theory, such as the approximation and sparse representation of functions or the construction of rich reproducing kernel Hilbert spaces on general metric spaces. This workshop concentrated on the following recent topics: pitchfork bifurcation of dynamical systems arising from mathematical foundations of cell development; regularized kernel-based learning in the Big Data situation; deep learning; convergence rates of learning and online learning algorithms; numerical refinement algorithms for learning; and statistical robustness of regularized kernel-based learning.
Geometric Numerical Integration of the Assignment Flow
The assignment flow is a smooth dynamical system that evolves on an
elementary statistical manifold and performs contextual data labeling on a
graph. We derive the linear assignment flow, which still evolves
nonlinearly on the manifold but is governed by a linear ODE on the tangent
space. Various numerical schemes adapted to the mathematical structure of these
two models are designed and studied, for the geometric numerical integration of
both flows: embedded Runge-Kutta-Munthe-Kaas schemes for the nonlinear flow,
adaptive Runge-Kutta schemes and exponential integrators for the linear flow.
All algorithms are parameter-free, except for a tolerance value that
controls adaptive step-size selection by monitoring the local integration
error, or the dimension of the Krylov subspace approximation. These
algorithms provide a basis for applying the assignment flow to machine learning
scenarios beyond supervised labeling, including unsupervised labeling and
learning from controlled assignment flows.
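The geometric integration idea above can be illustrated with a minimal sketch: an explicit geometric Euler step that advances row-stochastic assignments through a softmax-style retraction, so every iterate stays on the (relative interior of the) assignment manifold. The similarity field `F` below is a hypothetical toy stand-in for the paper's context-dependent similarities, and `lift` is one common retraction choice, not necessarily the exact map used by the authors.

```python
import numpy as np

def lift(W, V):
    # Softmax-style retraction applied row-wise:
    # Lift_W(V) = W * exp(V) / <W, exp(V)>.
    # Subtracting the row max stabilizes the exponential.
    U = W * np.exp(V - V.max(axis=1, keepdims=True))
    return U / U.sum(axis=1, keepdims=True)

def geometric_euler(W0, F, h=0.1, steps=50):
    # Explicit geometric Euler: expressing the update through the
    # retraction keeps each iterate a valid row-stochastic assignment.
    W = W0.copy()
    for _ in range(steps):
        W = lift(W, h * F(W))
    return W

# Toy similarity field (hypothetical): amplify each row's largest entry,
# mimicking the label-selection behavior of the assignment flow.
F = lambda W: np.log(np.clip(W, 1e-12, None))

# Four nodes, three labels; node 0 starts slightly biased toward label 0.
W0 = np.full((4, 3), 1.0 / 3.0)
W0[0, 0] += 0.1
W0[0] /= W0[0].sum()
W = geometric_euler(W0, F, h=0.5, steps=100)
```

Because the update never leaves the manifold, no projection or renormalization step is needed after integration; this is the structural property that the Runge-Kutta-Munthe-Kaas and exponential-integrator schemes in the abstract preserve at higher order.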