    The DPG-star method

    This article introduces the DPG-star (from now on, denoted DPG^*) finite element method. It is a method that is in some sense dual to the discontinuous Petrov-Galerkin (DPG) method. The DPG methodology can be viewed as a means to solve an overdetermined discretization of a boundary value problem. In the same vein, the DPG^* methodology is a means to solve an underdetermined discretization. These two viewpoints are developed by embedding the same operator equation into two different saddle-point problems. The analyses of the two problems have many common elements. Comparisons to other methods in the literature round out the newly garnered perspective. Notably, DPG^* and DPG methods can be seen as generalizations of $\mathcal{L}\mathcal{L}^\ast$ and least-squares methods, respectively. A priori error analysis and a posteriori error control for the DPG^* method are considered in detail. Reports of several numerical experiments are provided which demonstrate the essential features of the new method. A notable difference between the results from the DPG^* and DPG analyses is that the convergence rates of the former are limited by the regularity of an extraneous Lagrange multiplier variable.
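
    A schematic of the two saddle-point embeddings mentioned above may help fix ideas. The notation below (operator B : U -> V', bilinear form b(w, v) = <Bw, v>, data f and g, multiplier \lambda) is illustrative and not taken verbatim from the paper: the DPG viewpoint minimizes the residual of an overdetermined discretization, while the DPG^* viewpoint computes a minimum-norm solution of an underdetermined one, which is where the Lagrange multiplier referred to at the end of the abstract enters.

        % Overdetermined (DPG) viewpoint: residual minimization in V',
        % u = argmin_w ||f - Bw||_{V'}, written as a saddle-point system
        \begin{align*}
          (\varepsilon, v)_V + b(u, v) &= f(v) && \forall\, v \in V, \\
          b(w, \varepsilon)            &= 0    && \forall\, w \in U.
        \end{align*}
        % Underdetermined (DPG^*) viewpoint: minimum-norm solution of B^* v = g,
        % with a Lagrange multiplier \lambda \in U enforcing the constraint
        \begin{align*}
          (v, \delta v)_V + b(\lambda, \delta v) &= 0                 && \forall\, \delta v \in V, \\
          b(\delta\lambda, v)                    &= g(\delta\lambda)  && \forall\, \delta\lambda \in U.
        \end{align*}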

    Natural preconditioners for saddle point systems

    The solution of quadratic or locally quadratic extremum problems subject to linear(ized) constraints gives rise to linear systems in saddle point form. This is true in both the continuous and discrete settings: saddle point systems arise from the discretization of partial differential equation problems, such as those describing electromagnetic phenomena or incompressible flow, and also from, for example, the widely used sequential quadratic programming approach to nonlinear optimization. This article concerns iterative solution methods for these problems and, in particular, shows how the problem formulation leads to natural preconditioners which guarantee rapid convergence of the relevant iterative methods. These preconditioners are related to the original extremum problem, and their effectiveness, in terms of rapidity of convergence, is established here via a proof of general bounds on the eigenvalues of the preconditioned saddle point matrix, on which iteration convergence depends.
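
    A minimal numerical sketch of the idea, assuming a model saddle point matrix K = [[A, B^T], [B, 0]] with A symmetric positive definite and B of full row rank; the matrices, sizes, and the Schur-complement-based block-diagonal preconditioner below are illustrative, not taken from the article:

        # Sketch: MINRES on a model saddle point system with a block-diagonal
        # "natural" preconditioner P = diag(A, S), S = B A^{-1} B^T.
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n, m = 200, 50                                     # primal / constraint dimensions
        rng = np.random.default_rng(0)

        A = sp.diags(np.linspace(1.0, 10.0, n))            # SPD (1,1)-block
        B = (sp.random(m, n, density=0.1, random_state=0) + sp.eye(m, n)).tocsr()

        K = sp.bmat([[A, B.T], [B, None]], format="csr")   # symmetric indefinite system
        rhs = np.concatenate([rng.standard_normal(n), np.zeros(m)])

        Alu = spla.splu(A.tocsc())                         # factor the (1,1)-block
        S = B @ Alu.solve(B.T.toarray())                   # dense Schur complement (m is small here)

        def apply_pinv(r):
            # Apply P^{-1} blockwise: A-solve on the primal part, S-solve on the multiplier part.
            return np.concatenate([Alu.solve(r[:n]), np.linalg.solve(S, r[n:])])

        M = spla.LinearOperator(K.shape, matvec=apply_pinv)
        x, info = spla.minres(K, rhs, M=M)
        print("converged" if info == 0 else f"minres info = {info}")

    With the exact blocks used here, the preconditioned matrix has only three distinct eigenvalues (a classical observation), so MINRES converges in a handful of iterations; in practice A and S are replaced by cheap spectrally equivalent approximations, which is where eigenvalue bounds of the kind proved in the article come in.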

    Analytic Regularity and GPC Approximation for Control Problems Constrained by Linear Parametric Elliptic and Parabolic PDEs

    This paper deals with linear-quadratic optimal control problems constrained by a parametric or stochastic elliptic or parabolic PDE. We address the (difficult) case that the state equation depends on a countable number of parameters, i.e., on $\sigma_j$ with $j \in \mathbb{N}$, and that the PDE operator may depend non-affinely on the parameters. We consider tracking-type functionals and distributed as well as boundary controls. Building on recent results in [CDS1, CDS2], we show that the state and the control are analytic as functions of these parameters $\sigma_j$. We establish sparsity of generalized polynomial chaos (gpc) expansions of both state and control in terms of the stochastic coordinate sequence $\sigma = (\sigma_j)_{j \ge 1}$ of the random inputs, and prove convergence rates of best $N$-term truncations of these expansions. Such truncations are the key for subsequent computations, since they do not assume that the stochastic input data have a finite expansion. In the follow-up paper [KS2], we explain two methods by which such best $N$-term truncations can be computed in practice: by greedy-type algorithms as in [SG, Gi1], or by multilevel Monte Carlo methods as in [KSS]. In conjunction with the adaptive wavelet Galerkin schemes developed in [DK, GK, K], the sparsity result allows for sparse, adaptive tensor discretizations of control problems constrained by linear elliptic and parabolic PDEs; see [KS2].
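
    A schematic of the objects involved, with notation that is illustrative rather than taken verbatim from the paper: the parametric state (and analogously the control) is expanded in tensorized gpc polynomials, a best $N$-term truncation keeps the $N$ coefficients of largest norm, and $p$-summability of the coefficient norms yields an algebraic convergence rate via a Stechkin-type estimate.

        % gpc expansion over finitely supported multi-indices \nu
        \[
          u(x,\sigma) = \sum_{\nu \in \mathcal{F}} u_\nu(x)\, P_\nu(\sigma),
          \qquad P_\nu(\sigma) = \prod_{j \ge 1} P_{\nu_j}(\sigma_j).
        \]
        % Best N-term truncation: retain the N coefficients of largest norm
        \[
          u_N(x,\sigma) = \sum_{\nu \in \Lambda_N} u_\nu(x)\, P_\nu(\sigma),
          \qquad \#\Lambda_N = N.
        \]
        % Stechkin-type estimate: if (\|u_\nu\|)_\nu \in \ell^p, then
        \[
          \| u - u_N \| \lesssim N^{-s},
          \qquad s = \tfrac{1}{p} - 1 \ \text{ or } \ \tfrac{1}{p} - \tfrac{1}{2},
        \]
        % depending on the norm in which the truncation error is measured.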

    Some Preconditioning Techniques for Saddle Point Problems

    Saddle point problems arise frequently in many applications in science and engineering, including constrained optimization, mixed finite element formulations of partial differential equations, circuit analysis, and so forth. Indeed, the formulation of most problems with constraints gives rise to saddle point systems. This paper provides a concise overview of iterative approaches for the solution of such systems, which are of particular importance in the context of large scale computation. In particular, we describe some of the most useful preconditioning techniques for Krylov subspace solvers applied to saddle point problems, including block and constrained preconditioners. The work of Michele Benzi was supported in part by the National Science Foundation grant DMS-0511336.
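
    As a minimal sketch of a constrained preconditioner, one of the two families named in the abstract, consider the following; the matrices and the choice G = diag(A) are illustrative assumptions, not taken from the paper:

        # Sketch: GMRES on K = [[A, B^T], [B, 0]] with a constrained preconditioner
        # P = [[G, B^T], [B, 0]], G = diag(A); the constraint blocks are kept exact.
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n, m = 300, 80
        rng = np.random.default_rng(1)

        A = sp.random(n, n, density=0.02, random_state=1)
        A = (A + A.T) + sp.diags(np.full(n, 5.0))           # symmetric (1,1)-block
        B = (sp.random(m, n, density=0.05, random_state=2) + sp.eye(m, n)).tocsr()

        K = sp.bmat([[A, B.T], [B, None]], format="csc")
        rhs = np.concatenate([rng.standard_normal(n), np.zeros(m)])

        G = sp.diags(A.diagonal())                          # cheap approximation of A
        P = sp.bmat([[G, B.T], [B, None]], format="csc")    # constrained preconditioner
        P_lu = spla.splu(P)                                 # factor once, reuse each iteration

        M = spla.LinearOperator(K.shape, matvec=P_lu.solve)
        x, info = spla.gmres(K, rhs, M=M, restart=50)
        print("converged" if info == 0 else f"gmres info = {info}")

    Because P reproduces the constraint blocks of K exactly, the preconditioned matrix has an eigenvalue 1 of high multiplicity (2m in the standard analysis), with the remaining eigenvalues governed by how well G approximates A on the nullspace of B.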