A representer theorem for deep kernel learning
In this paper we provide a finite-sample and an infinite-sample representer theorem for the concatenation of (linear combinations of) kernel functions of reproducing kernel Hilbert spaces. These results serve as a mathematical foundation for the analysis of machine learning algorithms based on compositions of functions. As a direct consequence, in the finite-sample case the corresponding infinite-dimensional minimization problems can be recast as (nonlinear) finite-dimensional minimization problems, which can be tackled with nonlinear optimization algorithms. Moreover, we show how concatenated machine learning problems can be reformulated as neural networks and how our representer theorem applies to a broad class of state-of-the-art deep learning methods.
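For orientation, recall the classical single-layer representer theorem, which the paper generalizes; the notation below (kernel $k$, loss $L$, data $(x_i, y_i)_{i=1}^n$) is chosen here for illustration and is not quoted from the paper:

    \min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} L\bigl(y_i, f(x_i)\bigr) + \lambda \|f\|_{\mathcal{H}}^2
    \qquad \Longrightarrow \qquad
    f^*(\cdot) = \sum_{i=1}^{n} \alpha_i \, k(\cdot, x_i).

A finite-sample result of this type for a concatenation $f = f_L \circ \cdots \circ f_1$ with $f_\ell \in \mathcal{H}_\ell$ plausibly takes the layer-wise form

    f_\ell^*(\cdot) = \sum_{i=1}^{n} \alpha_{\ell,i} \, k_\ell\bigl(\cdot, (f_{\ell-1}^* \circ \cdots \circ f_1^*)(x_i)\bigr),

so that optimizing over the finitely many coefficients $\alpha_{\ell,i}$ replaces the infinite-dimensional problem, exactly as the abstract describes.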
Variational Monte Carlo - Bridging concepts of machine learning and high dimensional partial differential equations
A statistical learning approach for parametric PDEs related to uncertainty quantification is derived. The method is based on the minimization of an empirical risk over a selected model class, and it is shown to be applicable to a broad range of problems. A general, unified convergence analysis is derived, which accounts for both the approximation error and the statistical error; in this way, theoretical results from numerical analysis and statistics are combined. Numerical experiments illustrate the performance of the method with the model class of hierarchical tensors.
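In outline, and with notation chosen here for illustration rather than taken from the paper: if $y \mapsto u(y)$ denotes the parameter-to-solution map and $y_1, \dots, y_n$ are i.i.d. samples of the parameters, the estimator minimizes an empirical risk over a model class $\mathcal{M}$,

    u_n := \operatorname*{arg\,min}_{v \in \mathcal{M}} \frac{1}{n} \sum_{i=1}^{n} \bigl\| u(y_i) - v(y_i) \bigr\|^2,

and the convergence analysis splits the total error into an approximation term, governed by how well $\mathcal{M}$ (here, hierarchical tensor formats) can represent $u$, and a statistical term that decays as the sample size $n$ grows.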
Error analysis of regularized and unregularized least-squares regression on discretized function spaces
In this thesis, we analyze a variant of the least-squares regression method which operates on subsets of finite-dimensional vector spaces. In the first part, we focus on a regression problem that is constrained to a ball of finite radius in the search space. We derive an upper bound on the overall error by coupling the ball radius to the resolution of the search space. In the second part, the corresponding penalized Lagrangian dual problem is considered in order to establish probabilistic results on the well-posedness of the underlying minimization problem. Furthermore, we examine the limit case in which the penalty term vanishes, and we improve on our error estimates from the first part for the special case of noiseless function reconstruction. Subsequently, our theoretical foundation is used to obtain novel convergence results for regression algorithms based on sparse grids with linear splines and on Fourier polynomial spaces on hyperbolic crosses. We conclude the thesis with several numerical examples, comparing the observed error behavior to our theoretical results.
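To make the setting concrete, here is a minimal sketch of penalized least-squares regression on a finite-dimensional function space, with a generic basis and the Euclidean norm of the coefficients standing in for the function-space penalty; the sparse-grid and hyperbolic-cross constructions of the thesis are not reproduced here:

    import numpy as np

    def penalized_least_squares(x, y, basis, lam):
        """Minimize (1/n) * sum_i (f(x_i) - y_i)^2 + lam * ||c||^2
        over f = sum_j c_j * basis[j]; returns the coefficients c.
        The basis and penalty are illustrative choices, not the
        thesis's construction."""
        A = np.column_stack([phi(x) for phi in basis])  # n x m design matrix
        n, m = A.shape
        # Normal equations of the penalized problem:
        # (A^T A / n + lam * I) c = A^T y / n
        return np.linalg.solve(A.T @ A / n + lam * np.eye(m), A.T @ y / n)

    # Illustration: noisy samples of a smooth function, small Fourier basis.
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, 200)
    y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)
    basis = [np.ones_like]
    basis += [(lambda t, k=k: np.sin(2 * np.pi * k * t)) for k in (1, 2)]
    basis += [(lambda t, k=k: np.cos(2 * np.pi * k * t)) for k in (1, 2)]
    c = penalized_least_squares(x, y, basis, lam=1e-3)

The limit lam -> 0 recovers the unregularized problem studied in the second part of the thesis; there, well-posedness hinges on the design matrix A having full column rank, which is where probabilistic arguments enter.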