
    Banach space projections and Petrov-Galerkin estimates

    We sharpen the classic a priori error estimate of Babuška for Petrov-Galerkin methods on a Banach space. In particular, we do so by (i) introducing a new constant, called the Banach-Mazur constant, to describe the geometry of a normed vector space; (ii) showing that, for a nontrivial projection $P$, the Banach-Mazur constant can be used to improve upon the naive estimate $\| I - P \| \leq 1 + \| P \|$; and (iii) applying that improved estimate to the Petrov-Galerkin projection operator. This generalizes and extends a 2003 result of Xu and Zikatanov for the special case of Hilbert spaces. Comment: 9 pages; v2: added a new section on the application to $L^p$ and Sobolev spaces
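    For context, the Hilbert-space result alluded to at the end of the abstract (Xu and Zikatanov, 2003) replaces the triangle-inequality bound by an equality; a minimal statement, for a bounded linear projection $P$ with $0 \neq P \neq I$:
    \[
    \| I - P \| \leq 1 + \| P \| \quad \text{(any normed space, triangle inequality)},
    \qquad
    \| I - P \| = \| P \| \quad \text{(Hilbert space)}.
    \]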

    Discretization of Linear Problems in Banach Spaces: Residual Minimization, Nonlinear Petrov-Galerkin, and Monotone Mixed Methods

    This work presents a comprehensive discretization theory for abstract linear operator equations in Banach spaces. The fundamental starting point of the theory is the idea of residual minimization in dual norms, and its inexact version using discrete dual norms. It is shown that this development, in the case of strictly convex reflexive Banach spaces with strictly convex duals, gives rise to a class of nonlinear Petrov-Galerkin methods and, equivalently, abstract mixed methods with monotone nonlinearity. Crucial in the formulation of these methods is the (nonlinear) bijective duality map. Under the Fortin condition, we prove discrete stability of the abstract inexact method, and subsequently carry out a complete error analysis. As part of our analysis, we prove new bounds for best-approximation projectors, which involve constants depending on the geometry of the underlying Banach space. The theory generalizes and extends the classical Petrov-Galerkin method as well as existing residual-minimization approaches, such as the discontinuous Petrov-Galerkin method. Comment: 43 pages, 2 figures
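    As a brief, hedged sketch of the setting (the notation $B : U \to V'$ for the operator, $f \in V'$ for the data, and $U_h \subset U$, $V_h \subset V$ for the discrete trial and test spaces is ours, not taken from the paper), residual minimization in the dual norm and its inexact, computable variant using a discrete dual norm read:
    \[
    u_h = \operatorname*{arg\,min}_{w_h \in U_h} \| f - B w_h \|_{V'},
    \qquad
    \| r \|_{(V_h)'} = \sup_{0 \neq v_h \in V_h} \frac{\langle r, v_h \rangle}{\| v_h \|_V}.
    \]
    Outside the Hilbert setting the duality map realizing such norms is nonlinear, which is why the resulting Petrov-Galerkin methods are nonlinear.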

    First order least squares method with weakly imposed boundary condition for convection dominated diffusion problems

    We present and analyze a first order least squares method for convection dominated diffusion problems, which provides a robust $L^2$ a priori error estimate for the scalar variable even if the given data $f$ is only in $L^2$. The novel theoretical approach is to rewrite the method in the framework of the discontinuous Petrov-Galerkin (DPG) method, and then to show numerical stability by using a key equation discovered by J. Gopalakrishnan and W. Qiu [Math. Comp. 83 (2014), pp. 537-552]. This new approach gives an alternative way to carry out the numerical analysis of least squares methods for a large class of differential equations. We also show that the condition number of the global matrix is independent of the diffusion coefficient. A key feature of the method is that there is no empirically chosen stabilization parameter. In addition, the Dirichlet boundary condition is weakly imposed. Numerical experiments verify our theoretical results and, in particular, show that our way of weakly imposing the Dirichlet boundary condition is essential to the design of least squares methods: numerical solutions on subdomains away from interior or boundary layers have remarkable accuracy even on coarse, unstructured quasi-uniform meshes.
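    To fix ideas, a generic first-order reformulation of the convection-diffusion model $-\varepsilon \Delta u + \beta \cdot \nabla u = f$ (our notation; the paper's exact scaling, spaces, and boundary terms may differ) introduces the flux $\sigma = \varepsilon \nabla u$ and minimizes a residual functional of the form
    \[
    J(\sigma_h, u_h) = \| \sigma_h - \varepsilon \nabla u_h \|_{L^2}^2 + \| -\nabla \cdot \sigma_h + \beta \cdot \nabla u_h - f \|_{L^2}^2 + \text{(weakly imposed boundary term)}
    \]
    over the discrete spaces; the weak enforcement of the Dirichlet data corresponds to the boundary term rather than to constraining the trial space.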

    Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations

    We analyze the convergence of compressive sensing based sampling techniques for the efficient evaluation of functionals of solutions for a class of high-dimensional, affine-parametric, linear operator equations which depend on possibly infinitely many parameters. The proposed algorithms are based on so-called "non-intrusive" sampling of the high-dimensional parameter space, reminiscent of Monte Carlo sampling. In contrast to Monte Carlo, however, a functional of the parametric solution is then computed via compressive sensing methods from samples of functionals of the solution. A key ingredient in our analysis, of independent interest, consists in a generalization of recent results on the approximate sparsity of generalized polynomial chaos (gpc) representations of the parametric solution families, in terms of the gpc series with respect to tensorized Chebyshev polynomials. In particular, we establish sufficient conditions on the parametric inputs to the parametric operator equation such that the Chebyshev coefficients of the gpc expansion are contained in certain weighted $\ell_p$-spaces for $0 < p \leq 1$. Based on this we show that reconstructions of the parametric solutions computed from the sampled problems converge, with high probability, at the $L_2$, resp. $L_\infty$, convergence rates afforded by best $s$-term approximations of the parametric solution, up to logarithmic factors. Comment: revised version, 27 pages
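    The best $s$-term rates mentioned at the end rest on the standard Stechkin estimate, recalled here for orientation (this is the generic mechanism, not the paper's precise statement): if the Chebyshev gpc coefficient sequence $c = (c_\nu)$ lies in $\ell_p$ with $0 < p \leq 1$, then
    \[
    \sigma_s(c)_{\ell_2} := \min_{\# \operatorname{supp}(z) \leq s} \| c - z \|_{\ell_2} \leq s^{1/2 - 1/p} \| c \|_{\ell_p},
    \]
    so the sparser the coefficient sequence (smaller $p$), the faster $s$-term reconstructions converge.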