14 research outputs found

    Convergence Analysis of the Lowest Order Weakly Penalized Adaptive Discontinuous Galerkin Methods

    In this article, we prove convergence of weakly penalized adaptive discontinuous Galerkin methods. Unlike other works, we derive the contraction property for various discontinuous Galerkin methods assuming only that the stabilizing parameters are large enough to stabilize the method. A central idea in the analysis is to construct an auxiliary solution from the discontinuous Galerkin solution by a simple post-processing. Based on the auxiliary solution, we define the adaptive algorithm that drives the convergence of the adaptive discontinuous Galerkin methods.
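    The adaptive algorithm referred to here follows the usual solve-estimate-mark-refine loop. The sketch below is illustrative only, assuming hypothetical solve_dg, estimate, mark, and refine routines rather than code from the paper:

    # Minimal sketch of the standard adaptive loop; solve_dg, estimate,
    # mark and refine are hypothetical placeholders, not the paper's code.
    def adaptive_dg(mesh, tol, solve_dg, estimate, mark, refine):
        while True:
            u_h = solve_dg(mesh)          # DG solution on the current mesh
            eta = estimate(mesh, u_h)     # per-element a posteriori indicators
            if sum(e**2 for e in eta) ** 0.5 < tol:
                return u_h, mesh
            marked = mark(eta)            # e.g. bulk-chasing (Dörfler) marking
            mesh = refine(mesh, marked)   # local refinement of marked elements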

    Convergence of an adaptive mixed finite element method for general second order linear elliptic problems

    The convergence of an adaptive mixed finite element method for general second-order linear elliptic problems defined on simply connected bounded polygonal domains is analyzed in this paper. The main difficulties in the analysis are posed by the non-symmetric and indefinite form of the problem, along with the lack of the orthogonality property in mixed finite element methods. The key tools in the analysis are a posteriori error estimators, a quasi-orthogonality property, and quasi-discrete reliability, established using a representation formula for the lowest-order Raviart-Thomas solution in terms of the Crouzeix-Raviart solution of the problem. The adaptive marking for local refinement in each step is based on the edge-residual and volume-residual terms of the a posteriori estimator. Numerical experiments confirm the theoretical analysis.
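    The marking step described above is commonly realized by bulk-chasing (Dörfler) marking: refine the smallest set of elements whose squared indicators account for a fixed fraction of the total estimated error. A minimal sketch, with made-up indicator values for illustration:

    import numpy as np

    def dorfler_mark(eta, theta=0.5):
        # Mark a minimal set of elements whose squared indicators sum
        # to at least theta times the total (bulk-chasing marking).
        eta2 = np.asarray(eta) ** 2
        order = np.argsort(eta2)[::-1]                 # largest indicators first
        k = np.searchsorted(np.cumsum(eta2[order]), theta * eta2.sum()) + 1
        return order[:k]                               # indices of marked elements

    # Hypothetical per-element edge and volume residual contributions
    edge_res = np.array([0.10, 0.40, 0.05, 0.30])
    vol_res = np.array([0.20, 0.10, 0.02, 0.25])
    print(dorfler_mark(np.sqrt(edge_res**2 + vol_res**2), theta=0.6))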

    Convergence and optimality of the adaptive nonconforming linear element method for the Stokes problem

    In this paper, we analyze the convergence and optimality of a standard adaptive nonconforming linear element method for the Stokes problem. After establishing a special quasi-orthogonality property for both the velocity and the pressure in this saddle point problem, we introduce a new prolongation operator to carry through the discrete reliability analysis for the error estimator. We then use a specially defined interpolation operator to prove that, up to oscillation, the error can be bounded by the approximation error within a properly defined nonlinear approximation class. Finally, by introducing a new parameter-dependent error estimator, we prove the convergence and optimality estimates.

    Recurrent Neural Networks as Optimal Mesh Refinement Strategies

    We show that an optimal finite element mesh refinement algorithm for a prototypical elliptic PDE can be learned by a recurrent neural network with a fixed number of trainable parameters, independent of the desired accuracy and of the input size, i.e., the number of elements of the mesh. Moreover, for a general class of PDEs whose solutions are well approximated by deep neural networks, we show that an optimal mesh refinement strategy can be learned by recurrent neural networks. This includes problems for which no optimal adaptive strategy is yet known.
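    To make the idea concrete, the toy PyTorch sketch below (not the architecture from the paper) applies one recurrent cell to hypothetical per-element features, so the number of trainable parameters is fixed regardless of how many elements the mesh has:

    import torch
    import torch.nn as nn

    class RefinementRNN(nn.Module):
        # Toy recurrent policy: reads per-element features (e.g. a local error
        # indicator and the element size) as a sequence and outputs a refine
        # probability for each element. The same cell is reused at every step,
        # so the parameter count does not grow with the number of elements.
        def __init__(self, n_features=2, hidden=16):
            super().__init__()
            self.rnn = nn.GRU(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                  # x: (batch, n_elements, n_features)
            h, _ = self.rnn(x)
            return torch.sigmoid(self.head(h)).squeeze(-1)

    # Hypothetical features for 8 mesh elements; elements with probability > 0.5 get refined
    probs = RefinementRNN()(torch.rand(1, 8, 2))
    marked = (probs > 0.5).nonzero(as_tuple=True)[1]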