Schwarz Iterative Methods: Infinite Space Splittings
We prove the convergence of greedy and randomized versions of Schwarz
iterative methods for solving linear elliptic variational problems based on
infinite space splittings of a Hilbert space. For the greedy case, we show a
squared error decay rate for elements of an approximation space related to the
underlying splitting. For the randomized case, we show an expected squared
error decay rate on a class depending on the probability distribution.
Comment: Revised version, accepted in Constr. Approx.
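The greedy and randomized iterations described above can be mimicked in a finite toy setting. The sketch below (an illustrative analogue only; the paper treats *infinite* splittings of a Hilbert space) solves a symmetric positive definite system by exact corrections on one-dimensional coordinate subspaces, picking the subspace either greedily by largest residual or uniformly at random. All names and parameter choices here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite SPD system A x = b as a toy stand-in for the elliptic
# variational problem; the coordinate directions play the role of a
# splitting into one-dimensional subspaces.
n = 20
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # symmetric positive definite
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)

def subspace_correction(greedy, steps=4000):
    x = np.zeros(n)
    for _ in range(steps):
        r = b - A @ x                      # current residual
        if greedy:
            i = int(np.argmax(np.abs(r)))  # greedy pick: largest residual entry
        else:
            i = rng.integers(n)            # randomized pick: uniform subspace
        x[i] += r[i] / A[i, i]             # exact solve on the i-th subspace
    return x

err_greedy = np.linalg.norm(subspace_correction(True) - x_star)
err_random = np.linalg.norm(subspace_correction(False) - x_star)
```

With one-dimensional splittings this reduces to Gauss-Southwell and randomized Gauss-Seidel, respectively; both drive the error to zero, with the greedy rule typically at least as fast per step.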
Stochastic subspace correction in Hilbert space
We consider an incremental approximation method for solving variational
problems in infinite-dimensional Hilbert spaces, where in each step a randomly
and independently selected subproblem from an infinite collection of
subproblems is solved. We show that convergence rates for the expectation of
the squared error can be guaranteed under weaker conditions than previously
established in [Constr. Approx. 44:1 (2016), 121-139]. A connection to the
theory of learning algorithms in reproducing kernel Hilbert spaces is revealed. Comment: 15 pages
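The connection to learning in reproducing kernel Hilbert spaces can be illustrated in a small toy problem: running the one-random-subproblem-per-step scheme on a kernel ridge regression system, where the i-th subproblem lives on the span of a single kernel section, yields an online-flavoured kernel learning algorithm. This is a hedged sketch, not the paper's construction; the kernel, ridge parameter, and step count are assumed choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Regression data; the hypothesis space is the RKHS of a Gaussian kernel
X = np.linspace(0.0, 1.0, 25)
y = np.exp(-X) * np.cos(4.0 * X)

def k(a, b, eps=8.0):
    return np.exp(-(eps * (a - b)) ** 2)

lam = 0.1                                         # ridge parameter (assumed)
A = k(X[:, None], X[None, :]) + lam * np.eye(25)  # kernel ridge system A c = y

# Stochastic subspace correction: each step picks one training point at
# random and solves the subproblem on span{k(x_i, .)} exactly, i.e. it
# updates a single dual coefficient so the regularized residual at x_i
# vanishes.
c = np.zeros(25)
for _ in range(20000):
    i = rng.integers(25)
    c[i] += (y[i] - A[i] @ c) / A[i, i]

c_direct = np.linalg.solve(A, y)                  # batch kernel ridge solution
```

The incremental iterate converges to the batch kernel ridge coefficients, which is the sense in which the subspace correction scheme behaves like a learning algorithm in the RKHS.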
Kernel-based stochastic collocation for the random two-phase Navier-Stokes equations
In this work, we apply stochastic collocation methods with radial kernel
basis functions for an uncertainty quantification of the random incompressible
two-phase Navier-Stokes equations. Our approach is non-intrusive and we use the
existing fluid dynamics solver NaSt3DGPF to solve the incompressible two-phase
Navier-Stokes equations for each given realization. We are able to empirically
show that the resulting kernel-based stochastic collocation is highly
competitive in this setting and even outperforms some other standard methods.
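The non-intrusive collocation pipeline can be sketched in one dimension: evaluate the expensive model once per collocation node, build a radial-kernel interpolant over the stochastic parameter, and estimate moments from the cheap surrogate. In the sketch below a scalar toy function stands in for a functional of the flow solution; in the real pipeline each evaluation would be one run of the flow solver (e.g. NaSt3DGPF). The node count and shape parameter are assumed, untuned choices.

```python
import numpy as np

# Toy quantity of interest u(xi), standing in for a functional of the
# two-phase flow solution at random parameter xi.
def model(xi):
    return np.sin(2 * np.pi * xi) + 0.3 * xi ** 2

# Collocation nodes in the stochastic parameter domain [0, 1]
nodes = np.linspace(0.0, 1.0, 12)
values = model(nodes)                    # one (expensive) solve per node

# Gaussian radial kernel interpolant s(x) = sum_j c_j * phi(|x - x_j|)
eps = 10.0                               # shape parameter (assumed choice)
def phi(r):
    return np.exp(-(eps * r) ** 2)

K = phi(np.abs(nodes[:, None] - nodes[None, :]))
coeffs = np.linalg.solve(K, values)      # interpolation conditions at the nodes

def surrogate(x):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return phi(np.abs(x[:, None] - nodes[None, :])) @ coeffs

# Non-intrusive UQ: estimate E[u] from the cheap surrogate on a dense grid
xs = np.linspace(0.0, 1.0, 2001)
mean_est = surrogate(xs).mean()
mean_ref = model(xs).mean()
```

The method is non-intrusive because the solver is only ever called as a black box at the collocation nodes; all post-processing happens on the kernel surrogate.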
A representer theorem for deep kernel learning
In this paper we provide a finite-sample and an infinite-sample representer
theorem for the concatenation of (linear combinations of) kernel functions of
reproducing kernel Hilbert spaces. These results serve as mathematical
foundation for the analysis of machine learning algorithms based on
compositions of functions. As a direct consequence in the finite-sample case,
the corresponding infinite-dimensional minimization problems can be recast into
(nonlinear) finite-dimensional minimization problems, which can be tackled with
nonlinear optimization algorithms. Moreover, we show how concatenated machine
learning problems can be reformulated as neural networks and how our
representer theorem applies to a broad class of state-of-the-art deep learning
methods.
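The finite-sample statement can be made concrete: each layer of the concatenation admits an expansion over kernel sections at the training inputs, so the infinite-dimensional problem collapses to finitely many coefficients (an inner vector beta and an outer vector alpha below). This is a hypothetical two-layer sketch with Gaussian kernels; for brevity the inner coefficients are frozen at a random initialization and only the outer layer is fitted by ridge regression, whereas full deep kernel learning would optimize both.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D regression data
X = np.linspace(-1.0, 1.0, 30)
y = np.sin(3.0 * X) + 0.1 * rng.standard_normal(30)

def k_gauss(a, b, eps=2.0):
    return np.exp(-(eps * (a[:, None] - b[None, :])) ** 2)

# Representer theorem: the inner-layer function is a finite combination
# of kernel sections at the 30 training points, parametrized by beta.
beta = 0.1 * rng.standard_normal(30)     # inner coefficients (random init,
                                         # kept fixed here for brevity)
h = k_gauss(X, X) @ beta                 # inner-layer outputs h(x_i)
h = (h - h.mean()) / h.std()             # normalize scale (assumed choice)

# Outer layer: kernel ridge regression on the inner-layer outputs; again
# the solution is a finite combination of kernel sections, via alpha.
lam = 1e-3
K2 = k_gauss(h, h)
alpha = np.linalg.solve(K2 + lam * np.eye(30), y)

pred = K2 @ alpha                        # f(x_i) = sum_j alpha_j k2(h(x_i), h(x_j))
train_mse = float(np.mean((pred - y) ** 2))
```

The point of the sketch is the finite parametrization (alpha, beta) of an a priori infinite-dimensional problem, not the fit quality of this particular random inner layer.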
Outline of a tense system for German (using the example of German language instruction for international students at German universities)
International students at German universities face a number of problems with the traditional German tense system, which is modeled on Latin, because the relationships between a grammatical tense form and its temporal meanings are not always logically transparent. After a critical overview of how the tense system is presented in several relevant (practice) grammars and textbooks, the author proposes an outline of a German tense system in which the classical division into six tense forms is abandoned in favor of a more user-friendly system. Typical communicative tasks of students are then matched, by way of example, with tense forms in the form of usage rules, preferences, and options.