
    Schwarz Iterative Methods: Infinite Space Splittings

    We prove the convergence of greedy and randomized versions of Schwarz iterative methods for solving linear elliptic variational problems based on infinite space splittings of a Hilbert space. For the greedy case, we show a squared error decay rate of $O((m+1)^{-1})$ for elements of an approximation space $\mathcal{A}_1$ related to the underlying splitting. For the randomized case, we show an expected squared error decay rate of $O((m+1)^{-1})$ on a class $\mathcal{A}_{\infty}^{\pi}\subset \mathcal{A}_1$ depending on the probability distribution. Comment: Revised version, accepted in Constr. Approx.
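
    To make the iteration concrete, here is a minimal sketch of a greedy one-dimensional subspace-correction (Schwarz) step for a finite symmetric positive definite system A u = f. The column splitting D, the toy Laplacian, and the step count are illustrative assumptions; the paper works with infinite splittings of a Hilbert space.

        # Minimal sketch (illustration only, not the paper's Hilbert-space setting):
        # greedy subspace correction for A u = f, A symmetric positive definite,
        # with the splitting given by the columns of D.
        import numpy as np

        def greedy_schwarz(A, f, D, m_steps):
            """At each step pick the column of D giving the largest energy decrease,
            then solve the one-dimensional subproblem exactly (a line search)."""
            u = np.zeros_like(f)
            den = np.einsum('ij,jk,ki->i', D.T, A, D)   # (d_i, A d_i) for each column
            for _ in range(m_steps):
                r = f - A @ u                            # current residual
                num = (D.T @ r) ** 2                     # decrease ~ (d_i, r)^2 / (d_i, A d_i)
                i = np.argmax(num / den)                 # greedy choice of subspace
                d = D[:, i]
                u = u + (d @ r) / den[i] * d             # exact local solve on span{d_i}
            return u

        # toy usage: 1D Laplacian with the coordinate splitting
        n = 50
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        u = greedy_schwarz(A, np.ones(n), np.eye(n), 500)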

    An L1 Penalty Method for General Obstacle Problems

    We construct an efficient numerical scheme for solving obstacle problems in divergence form. The numerical method is based on a reformulation of the obstacle in terms of an L1-like penalty on the variational problem. The reformulation is an exact regularizer in the sense that for a large (but finite) penalty parameter, we recover the exact solution. Our formulation is applied to classical elliptic obstacle problems as well as some related free boundary problems, for example the two-phase membrane problem and the Hele-Shaw model. One advantage of the proposed method is that the free boundary inherent in the obstacle problem arises naturally in our energy minimization without any need for problem-specific or complicated discretization. In addition, our scheme also works for nonlinear variational inequalities arising from convex minimization problems. Comment: 20 pages, 18 figures
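
    A minimal 1D sketch of the penalty idea: the constraint u >= phi is replaced by the nonsmooth term mu * integral of max(phi - u, 0), which becomes exact once mu exceeds a finite threshold. The grid, load, obstacle, penalty weight, and the use of a generic smooth optimizer are assumptions for illustration, not the paper's scheme.

        # Minimal 1D sketch of an L1-type exact penalty for the obstacle problem
        #   min  1/2 int |u'|^2 - int f u  + mu * int max(phi - u, 0),   u(0)=u(1)=0.
        import numpy as np
        from scipy.optimize import minimize

        n, mu = 100, 50.0                       # penalty parameter, assumed large enough
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1.0 - h, n)
        f = -10.0 * np.ones(n)                  # downward load
        phi = 0.2 - 4.0 * (x - 0.5) ** 2        # obstacle, active near the middle

        def energy(u):
            du = np.diff(np.concatenate(([0.0], u, [0.0]))) / h
            dirichlet = 0.5 * h * np.sum(du ** 2)
            load = h * np.sum(f * u)
            penalty = mu * h * np.sum(np.maximum(phi - u, 0.0))
            return dirichlet - load + penalty

        u = minimize(energy, np.maximum(phi, 0.0), method='L-BFGS-B').x
        contact = phi - u > -1e-6               # approximate contact set / free boundary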

    Stochastic subspace correction in Hilbert space

    We consider an incremental approximation method for solving variational problems in infinite-dimensional Hilbert spaces, where in each step a randomly and independently selected subproblem from an infinite collection of subproblems is solved. We show that convergence rates for the expectation of the squared error can be guaranteed under weaker conditions than previously established in [Constr. Approx. 44:1 (2016), 121-139]. A connection to the theory of learning algorithms in reproducing kernel Hilbert spaces is revealed. Comment: 15 pages
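
    The randomized counterpart of the greedy sketch above, again for a finite splitting (the distribution pi and the toy setup are assumptions; the paper treats infinite collections of subproblems): at each step an index is drawn i.i.d. from pi and only that subproblem is solved exactly.

        # Minimal sketch (finite splitting; pi is an assumed sampling distribution):
        # randomized subspace correction for A u = f.
        import numpy as np

        def randomized_subspace_correction(A, f, D, pi, m_steps, seed=0):
            rng = np.random.default_rng(seed)
            u = np.zeros_like(f)
            for _ in range(m_steps):
                i = rng.choice(D.shape[1], p=pi)        # random subproblem selection
                d = D[:, i]
                r = f - A @ u
                u = u + (d @ r) / (d @ A @ d) * d       # exact local solve on span{d_i}
            return u

        # toy usage: same Laplacian as above, uniform distribution over coordinates
        n = 50
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        u = randomized_subspace_correction(A, np.ones(n), np.eye(n), np.full(n, 1 / n), 2000)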

    Approximation results and subspace correction algorithms for implicit variational inequalities

    This paper deals with the mathematical analysis and the subspace approximation of a system of variational inequalities representing a unified approach to several quasistatic contact problems in elasticity. Using an implicit time discretization scheme and some estimates, convergence properties of the incremental solutions and existence results are presented for a class of abstract implicit evolution variational inequalities involving a nonlinear operator. To solve the corresponding semi-discrete and fully discrete problems, some general subspace correction algorithms are proposed, for which global convergence is analyzed and error estimates are established.

    Internal and subspace correction approximations of implicit variational inequalities

    The aim of this paper is to study the existence of solutions and some approximations for a class of implicit evolution variational inequalities that represents a generalization of several quasistatic contact problems in elasticity. Using appropriate estimates for the incremental solutions, the existence of a continuous solution and convergence results are proved for a corresponding internal approximation and a backward difference scheme. To solve the fully discrete problems, general additive subspace correction algorithms are considered, for which global convergence is proved and some error estimates are established.
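
    As a rough illustration of how such incremental schemes are typically organized (a generic sketch, not the algorithm of either paper: the obstacle-type constraint u >= g, the coordinate splitting, and the sweep count are assumptions), each backward-difference step yields a constrained problem that is solved by projected subspace-correction sweeps.

        # Generic sketch: incremental time stepping where every step is a constrained
        # quadratic problem solved by projected Gauss-Seidel sweeps (1D subspaces).
        import numpy as np

        def incremental_solve(A, loads, g, times, n_sweeps=50):
            n = A.shape[0]
            u = np.zeros(n)
            history = []
            for t, f in zip(times, loads):
                for _ in range(n_sweeps):                    # subspace correction sweeps
                    for i in range(n):
                        r = f[i] - A[i] @ u + A[i, i] * u[i] # residual without diagonal term
                        u[i] = max(r / A[i, i], g[i])        # local solve, then projection
                history.append(u.copy())
            return history

        # toy usage: 1D Laplacian, load ramped in time, fixed obstacle at -0.05
        n = 30
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        times = np.linspace(0.1, 1.0, 10)
        sols = incremental_solve(A, [-0.01 * t * np.ones(n) for t in times], np.full(n, -0.05), times)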

    Exit time asymptotics for small noise stochastic delay differential equations

    Dynamical system models with delayed dynamics and small noise arise in a variety of applications in science and engineering. In many applications, stable equilibrium or periodic behavior is critical to a well-functioning system. Sufficient conditions for the stability of equilibrium points or periodic orbits of certain deterministic dynamical systems with delayed dynamics are known, and it is of interest to understand the sample path behavior of such systems under the addition of small noise. We consider a small noise stochastic delay differential equation (SDDE) with coefficients that depend on the history of the process over a finite delay interval. We obtain asymptotic estimates, as the noise vanishes, on the time it takes a solution of the stochastic equation to exit a bounded domain that is attracted to a stable equilibrium point or periodic orbit of the corresponding deterministic equation. To obtain these asymptotics, we prove a sample path large deviation principle (LDP) for the SDDE that is uniform over initial conditions in bounded sets. The proof of the uniform sample path LDP uses a variational representation for exponential functionals of strong solutions of the SDDE. We anticipate that the overall approach may be useful in proving uniform sample path LDPs for a broad class of infinite-dimensional small noise stochastic equations. Comment: 39 pages
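
    For intuition about the regime being analyzed, here is a small Monte Carlo sketch for a toy scalar SDDE with a stable delayed-feedback equilibrium; the model, domain, and parameters are assumptions and unrelated to the paper's general coefficients. As the noise level shrinks, the empirical exit time from a bounded neighborhood of the equilibrium grows sharply.

        # Toy sketch: Euler-Maruyama for the scalar small-noise SDDE
        #   dX(t) = -X(t - tau) dt + eps dW(t),
        # whose deterministic part has a stable equilibrium at 0 for tau < pi/2.
        # We record the time to exit the domain (-1, 1), censored at t_max.
        from collections import deque
        import numpy as np

        def exit_time(eps, tau=1.0, dt=0.01, t_max=2000.0, seed=1):
            rng = np.random.default_rng(seed)
            lag = int(round(tau / dt))
            hist = deque([0.0] * lag, maxlen=lag)    # X(s) for s in [t - tau, t)
            x, t = 0.0, 0.0
            while t < t_max:
                drift = -hist[0]                     # delayed feedback -X(t - tau)
                x += drift * dt + eps * np.sqrt(dt) * rng.standard_normal()
                hist.append(x)
                t += dt
                if abs(x) >= 1.0:
                    return t
            return t_max                             # no exit observed (censored)

        for eps in (0.8, 0.6, 0.4):
            times = [exit_time(eps, seed=k) for k in range(10)]
            print(eps, np.mean(times))               # exit times blow up as eps -> 0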

    Analysis of Schwarz methods for a hybridizable discontinuous Galerkin discretization

    Schwarz methods are attractive parallel solvers for the large-scale linear systems obtained when partial differential equations are discretized. For hybridizable discontinuous Galerkin (HDG) methods, this is a relatively new field of research, because HDG methods impose continuity across elements using a Robin condition, while classical Schwarz solvers use Dirichlet transmission conditions. Robin conditions are used in optimized Schwarz methods to obtain faster convergence than classical Schwarz methods, even without overlap, when the Robin parameter is well chosen. We present in this paper a rigorous convergence analysis of Schwarz methods for the concrete case of the hybridizable interior penalty (IPH) method. We show that the penalization parameter needed for convergence of IPH leads to slow convergence of the classical additive Schwarz method, and we propose a modified solver which leads to much faster convergence. Our analysis is entirely at the discrete level and thus holds for arbitrary interfaces between two subdomains. We then generalize the method to the case of many subdomains, including cross points, and obtain a new class of preconditioners for Krylov subspace methods which exhibit better convergence properties than the classical additive Schwarz preconditioner. We illustrate our results with numerical experiments. Comment: 25 pages, 5 figures, 3 tables, accepted for publication in SINUM
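
    For reference, a minimal sketch of a classical one-level additive Schwarz preconditioner used inside a Krylov method, on overlapping blocks of a 1D finite-difference Laplacian. The subdomain layout and problem are assumptions for illustration and do not reflect the HDG/IPH discretization or the modified solver analyzed in the paper.

        # Classical additive Schwarz preconditioner: sum of local solves on
        # overlapping index blocks, wrapped as a LinearOperator and passed to CG.
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n, nsub, overlap = 200, 8, 4
        A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
        b = np.ones(n)

        # overlapping index blocks covering 0..n-1, with pre-factorized local matrices
        size = n // nsub
        blocks = [np.arange(max(0, k * size - overlap), min(n, (k + 1) * size + overlap))
                  for k in range(nsub)]
        local_solve = [spla.factorized(sp.csc_matrix(A[idx, :][:, idx])) for idx in blocks]

        def additive_schwarz(r):
            r = np.asarray(r).ravel()
            z = np.zeros_like(r)
            for idx, solve in zip(blocks, local_solve):
                z[idx] += solve(r[idx])              # sum of local subdomain corrections
            return z

        M = spla.LinearOperator((n, n), matvec=additive_schwarz)
        u, info = spla.cg(A, b, M=M)                 # preconditioned Krylov (CG) solve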