
    Schwarz Iterative Methods: Infinite Space Splittings

    We prove the convergence of greedy and randomized versions of Schwarz iterative methods for solving linear elliptic variational problems based on infinite space splittings of a Hilbert space. For the greedy case, we show a squared error decay rate of $O((m+1)^{-1})$ for elements of an approximation space $\mathcal{A}_1$ related to the underlying splitting. For the randomized case, we show an expected squared error decay rate of $O((m+1)^{-1})$ on a class $\mathcal{A}_{\infty}^{\pi}\subset \mathcal{A}_1$ depending on the probability distribution.
    Comment: Revised version, accepted in Constr. Approx.
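    For orientation, a schematic form of the iteration behind these rates (a sketch under standard subspace correction assumptions, not quoted from the paper): with a splitting of the Hilbert space $V$ into subspaces $\{V_i\}_{i\ge 1}$ and $P_i$ the $a$-orthogonal projection onto $V_i$, one step reads
    $$ u^{(m+1)} = u^{(m)} + \omega_m\, P_{i_m}\big(u - u^{(m)}\big), $$
    where the correction is computable from the residual functional and the index $i_m$ is either chosen greedily (maximizing the energy norm of the correction) or drawn at random from a distribution $\pi$. The stated rates then take the form $\|u-u^{(m)}\|^2 \le C\,(m+1)^{-1}$ for $u\in\mathcal{A}_1$ in the greedy case and $\mathbb{E}\,\|u-u^{(m)}\|^2 \le C\,(m+1)^{-1}$ for $u\in\mathcal{A}_{\infty}^{\pi}$ in the randomized case, with a constant $C$ depending on $u$ and the splitting.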

    Stochastic subspace correction methods and fault tolerance

    We present convergence results in expectation for stochastic subspace correction schemes and their accelerated versions to solve symmetric positive-definite variational problems, and discuss their potential for achieving fault tolerance in an unreliable compute network. We employ the standard overlapping domain decomposition algorithm for PDE discretizations to discuss the latter aspect.
    Comment: 33 pages, 6 figures
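    A minimal illustrative sketch of this idea (not the paper's implementation): a stochastic subspace correction loop over overlapping blocks of a 1D Poisson discretization, where a local solve occasionally "fails" (an unreliable node) and is simply skipped.

    import numpy as np

    # Illustrative sketch: stochastic subspace correction for a 1D Poisson
    # model problem with overlapping blocks; a failed local solve is dropped.
    rng = np.random.default_rng(0)

    n = 200                                     # interior grid points
    h = 1.0 / (n + 1)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    b = np.ones(n)                              # right-hand side f = 1
    u_exact = np.linalg.solve(A, b)

    # overlapping index blocks ("subdomains")
    block, overlap = 25, 5
    subdomains = [np.arange(max(0, s - overlap), min(n, s + block + overlap))
                  for s in range(0, n, block)]

    u = np.zeros(n)
    fail_prob = 0.2                             # chance that a node does not respond
    for step in range(2000):
        idx = subdomains[rng.integers(len(subdomains))]   # random subspace choice
        if rng.random() < fail_prob:
            continue                            # fault: drop this correction
        r = b - A @ u                           # global residual
        u[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])   # local solve

    err = float(np.sqrt((u - u_exact) @ A @ (u - u_exact)))
    print(f"energy-norm error after faulty iterations: {err:.3e}")

    In this toy model a dropped update merely slows the iteration down roughly in proportion to the success probability, which is the kind of qualitative behaviour the expectation-based analysis addresses.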

    Stochastic subspace correction in Hilbert space

    We consider an incremental approximation method for solving variational problems in infinite-dimensional Hilbert spaces, where in each step a randomly and independently selected subproblem from an infinite collection of subproblems is solved. We show that convergence rates for the expectation of the squared error can be guaranteed under weaker conditions than previously established in [Constr. Approx. 44:1 (2016), 121-139]. A connection to the theory of learning algorithms in reproducing kernel Hilbert spaces is revealed.
    Comment: 15 pages

    Optimally rotated coordinate systems for adaptive least-squares regression on sparse grids

    For low-dimensional data sets with a large number of data points, standard kernel methods are usually no longer feasible for regression. Besides simple linear models or involved heuristic deep learning models, grid-based discretizations of larger (kernel) model classes lead to algorithms which naturally scale linearly in the number of data points. For moderate-dimensional or high-dimensional regression tasks, these grid-based discretizations suffer from the curse of dimensionality. Here, sparse grid methods have proven to circumvent this problem to a large extent. In this context, space- and dimension-adaptive sparse grids, which can detect and exploit a given low effective dimensionality of nominally high-dimensional data, are particularly successful. They nevertheless rely on an axis-aligned structure of the solution and exhibit issues for data with predominantly skewed and rotated coordinates. In this paper we propose a preprocessing approach for these adaptive sparse grid algorithms that determines an optimized, problem-dependent coordinate system and, thus, reduces the effective dimensionality of a given data set in the ANOVA sense. We provide numerical examples on synthetic data as well as real-world data to show how an adaptive sparse grid least squares algorithm benefits from our preprocessing method.
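    For illustration only, a preprocessing step in the same spirit, using a plain PCA rotation of the inputs as a simple stand-in for the paper's ANOVA-optimized, problem-dependent rotation; fit_sparse_grid_regressor below is a placeholder name for any adaptive sparse grid least-squares code.

    import numpy as np

    # Illustrative preprocessing sketch: rotate the inputs before handing them
    # to a grid-based regressor.  PCA is used here only as a stand-in for the
    # paper's ANOVA-optimized rotation.
    def rotate_inputs(X):
        """Return centered inputs expressed in an orthonormal eigenbasis."""
        Xc = X - X.mean(axis=0)
        # right singular vectors of the centered data define the rotated axes
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt.T, Vt

    # usage sketch (placeholder regressor):
    # X_rot, R = rotate_inputs(X_train)
    # model = fit_sparse_grid_regressor(X_rot, y_train)
    # y_pred = model.predict((X_test - X_train.mean(axis=0)) @ R.T)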

    Kernel-based stochastic collocation for the random two-phase Navier-Stokes equations

    In this work, we apply stochastic collocation methods with radial kernel basis functions for uncertainty quantification of the random incompressible two-phase Navier-Stokes equations. Our approach is non-intrusive and we use the existing fluid dynamics solver NaSt3DGPF to solve the incompressible two-phase Navier-Stokes equations for each given realization. We are able to empirically show that the resulting kernel-based stochastic collocation is highly competitive in this setting and even outperforms other standard methods.
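    A minimal non-intrusive collocation sketch of this workflow, with a cheap placeholder function quantity_of_interest standing in for a NaSt3DGPF two-phase flow run: evaluate the quantity of interest at sampled parameter realizations, fit a radial kernel (RBF) surrogate, then estimate statistics from the surrogate.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(1)

    def quantity_of_interest(xi):
        # placeholder for "run the flow solver at parameters xi, return a scalar QoI"
        return np.sin(3.0 * xi[0]) + 0.5 * xi[1] ** 2

    # 1) sample collocation points in a (here 2-dimensional) stochastic parameter space
    xi_train = rng.uniform(-1.0, 1.0, size=(40, 2))
    q_train = np.array([quantity_of_interest(xi) for xi in xi_train])

    # 2) fit a radial kernel surrogate on the collocation data
    surrogate = RBFInterpolator(xi_train, q_train)

    # 3) estimate statistics of the QoI by cheap Monte Carlo on the surrogate
    xi_mc = rng.uniform(-1.0, 1.0, size=(100_000, 2))
    q_mc = surrogate(xi_mc)
    print(f"mean ~ {q_mc.mean():.4f},  std ~ {q_mc.std():.4f}")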

    A representer theorem for deep kernel learning

    In this paper we provide a finite-sample and an infinite-sample representer theorem for the concatenation of (linear combinations of) kernel functions of reproducing kernel Hilbert spaces. These results serve as a mathematical foundation for the analysis of machine learning algorithms based on compositions of functions. As a direct consequence in the finite-sample case, the corresponding infinite-dimensional minimization problems can be recast into (nonlinear) finite-dimensional minimization problems, which can be tackled with nonlinear optimization algorithms. Moreover, we show how concatenated machine learning problems can be reformulated as neural networks and how our representer theorem applies to a broad class of state-of-the-art deep learning methods.
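    In the simplest two-layer, scalar-valued case (a schematic statement only; the paper treats general multilayer, vector-valued concatenations), the result says that a regularized empirical risk minimizer over $f = f_2 \circ f_1$, with each $f_l$ in an RKHS with kernel $k_l$ and data $x_1,\dots,x_N$, admits representations
    $$ f_1^{*}(\cdot) = \sum_{i=1}^{N} \beta_i\, k_1(x_i, \cdot), \qquad f_2^{*}(\cdot) = \sum_{i=1}^{N} \alpha_i\, k_2\big(f_1^{*}(x_i), \cdot\big), $$
    so that the infinite-dimensional problem collapses to a (nonlinear) optimization over the finitely many coefficients $\alpha_i, \beta_i$.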

    Outline of a tense system for German (using the example of German language courses for international students at German universities)

    International students at German universities have a number of problems with the traditional German tense system, which is modeled on Latin, because the relations between a grammatical tense form and its temporal meanings are not always logically comprehensible. After a brief critical review of how the tense system is presented in several relevant (practice) grammars and textbooks, the author proposes a tense system for German in which the classical division into six tense forms is abandoned in favor of a more user-friendly system. Tense forms are then assigned, by way of example, to typical communicative tasks of students in the form of usage rules, preferences, and options.