
    Higher order weakly over-penalized symmetric interior penalty methods

    In this paper we study higher order weakly over-penalized symmetric interior penalty methods for second-order elliptic boundary value problems in two dimensions. We derive hp error estimates in both the energy norm and the L2 norm and present numerical results that corroborate the theoretical results. © 2012 Elsevier B.V. All rights reserved.
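
    For orientation, the following is a minimal sketch of the classical symmetric interior penalty (SIP) bilinear form for the model problem -Δu = f with homogeneous Dirichlet data; it is shown only as background, since the weakly over-penalized variant studied in the paper uses a different (over-penalized) jump term, so the exact form the authors analyze may differ.

```latex
% Classical SIP bilinear form on a triangulation \mathcal{T}_h with edge set \mathcal{E}_h.
% {{.}} and [[.]] denote averages and jumps across an edge e, h_e is the edge length,
% and \sigma > 0 is the penalty parameter.
\[
  a_h(w,v) \;=\; \sum_{T\in\mathcal{T}_h}\int_T \nabla w\cdot\nabla v\,dx
  \;-\;\sum_{e\in\mathcal{E}_h}\int_e\Bigl(\{\!\{\partial_n w\}\!\}\,[\![v]\!]
  +\{\!\{\partial_n v\}\!\}\,[\![w]\!]\Bigr)\,ds
  \;+\;\sum_{e\in\mathcal{E}_h}\frac{\sigma}{h_e}\int_e [\![w]\!]\,[\![v]\!]\,ds .
\]
```

    Over-penalization corresponds to replacing the weight σ/h_e in the last sum by a stronger power of 1/h_e; the energy norm in which the hp estimates are stated is the norm induced by such a bilinear form.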

    Convergence Analysis of the Lowest Order Weakly Penalized Adaptive Discontinuous Galerkin Methods

    In this article, we prove convergence of the weakly penalized adaptive discontinuous Galerkin methods. Unlike other works, we derive the contraction property for various discontinuous Galerkin methods assuming only that the stabilizing parameters are large enough to stabilize the method. A central idea in the analysis is to construct an auxiliary solution from the discontinuous Galerkin solution by a simple post-processing. Based on the auxiliary solution, we define the adaptive algorithm that drives the convergence of the adaptive discontinuous Galerkin methods.
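
    As a rough illustration of the kind of adaptive loop whose convergence is analyzed here, below is a minimal Python skeleton of the standard solve-estimate-mark-refine cycle with Dörfler (bulk) marking. The names solve_dg, postprocess, estimate_error and refine are hypothetical placeholders for a DG solver, the auxiliary-solution post-processing, a local error estimator and a mesh refinement routine; the sketch is not the paper's specific algorithm.

```python
def doerfler_mark(indicators, theta=0.5):
    """Return the smallest set of elements whose squared indicators
    account for at least a fraction theta of the total."""
    order = sorted(range(len(indicators)), key=lambda i: indicators[i], reverse=True)
    total = sum(eta ** 2 for eta in indicators)
    marked, acc = [], 0.0
    for i in order:
        marked.append(i)
        acc += indicators[i] ** 2
        if acc >= theta * total:
            break
    return marked


def adaptive_dg(mesh, data, tol=1e-6, max_iter=50, theta=0.5):
    """Generic SOLVE -> ESTIMATE -> MARK -> REFINE loop (placeholder routines assumed)."""
    for _ in range(max_iter):
        u_dg = solve_dg(mesh, data)                      # DG solve on the current mesh
        u_aux = postprocess(u_dg, mesh)                  # auxiliary (post-processed) solution
        indicators = estimate_error(u_aux, mesh, data)   # one indicator per element
        if sum(eta ** 2 for eta in indicators) ** 0.5 < tol:
            break
        marked = doerfler_mark(indicators, theta)        # bulk marking
        mesh = refine(mesh, marked)                      # refine marked elements
    return u_dg, mesh
```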

    Robust Estimation of High-Dimensional Mean Regression

    Data subject to heavy-tailed errors are commonly encountered in various scientific fields, especially in the modern era of massive data. To address this problem, procedures based on quantile regression and Least Absolute Deviation (LAD) regression have been developed in recent years. These methods essentially estimate the conditional median (or quantile) function, which can be very different from the conditional mean function when the distributions are asymmetric and heteroscedastic. How can we efficiently estimate the mean regression function in the ultra-high dimensional setting when only the second moment exists? To solve this problem, we propose a penalized Huber loss with a diverging parameter to reduce the bias created by the traditional Huber loss. Such a penalized robust approximate quadratic (RA-quadratic) loss will be called RA-Lasso. In the ultra-high dimensional setting, where the dimensionality can grow exponentially with the sample size, our results reveal that the RA-Lasso estimator is consistent and attains the same rate as the optimal rate under the light-tail situation. We further study the computational convergence of RA-Lasso and show that the composite gradient descent algorithm indeed produces a solution that attains the same optimal rate after sufficiently many iterations. As a byproduct, we also establish a concentration inequality for estimating the population mean when only the second moment exists. We compare RA-Lasso with other regularized robust estimators based on quantile regression and LAD regression. Extensive simulation studies demonstrate the satisfactory finite-sample performance of RA-Lasso.
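
    A minimal numpy sketch of the composite (proximal) gradient descent iteration described above, applied to an l1-penalized Huber loss. The names and tuning choices here (alpha for the robustification parameter that the paper lets diverge, lam for the lasso penalty, a fixed 1/L step size) are illustrative assumptions rather than the paper's prescription.

```python
import numpy as np

def huber_grad(r, alpha):
    # Derivative of the Huber loss: slope r inside [-alpha, alpha],
    # slope +/- alpha outside, which caps the influence of outliers.
    return np.where(np.abs(r) <= alpha, r, alpha * np.sign(r))

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1, i.e. the lasso shrinkage step.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ra_lasso_cgd(X, y, alpha, lam, n_iter=500):
    """Composite gradient descent for
       min_beta (1/n) sum_i huber_alpha(y_i - x_i^T beta) + lam * ||beta||_1."""
    n, p = X.shape
    step = n / (np.linalg.norm(X, 2) ** 2)   # 1/L with L >= ||X||_2^2 / n
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = -X.T @ huber_grad(y - X @ beta, alpha) / n   # gradient of the smooth part
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Illustrative usage on synthetic data with heavy-tailed (Student-t) noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
beta_true = np.zeros(50)
beta_true[:5] = 2.0
y = X @ beta_true + rng.standard_t(df=3, size=200)
beta_hat = ra_lasso_cgd(X, y, alpha=2.0, lam=0.1)
```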

    An adaptive discontinuous finite volume method for elliptic problems

    An adaptive discontinuous finite volume method is developed and analyzed in this paper. We prove that the adaptive procedure achieves guaranteed error reduction in a mesh-dependent energy norm and has a linear convergence rate. Numerical results are also presented to illustrate the theoretical analysis.
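
    For reference, "guaranteed error reduction with a linear convergence rate" is typically formalized as a contraction of a quasi-error (energy error plus a scaled estimator) between consecutive adaptive steps; the statement below is a generic sketch of that notion, with the constants and the exact contracted quantity left unspecified since they are paper-dependent.

```latex
% Contraction of the quasi-error over adaptive iterations k:
% ||.||_E is the mesh-dependent energy norm, eta_k the error estimator,
% gamma > 0 a constant and 0 < rho < 1 the contraction factor.
\[
  \|u - u_{k+1}\|_{E}^{2} + \gamma\,\eta_{k+1}^{2}
  \;\le\; \rho\,\bigl(\|u - u_{k}\|_{E}^{2} + \gamma\,\eta_{k}^{2}\bigr)
  \quad\Longrightarrow\quad
  \|u - u_{k}\|_{E}^{2} + \gamma\,\eta_{k}^{2}
  \;\le\; \rho^{k}\bigl(\|u - u_{0}\|_{E}^{2} + \gamma\,\eta_{0}^{2}\bigr).
\]
```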

    Discontinuous Galerkin method for the spherically reduced BSSN system with second-order operators

    We present a high-order accurate discontinuous Galerkin method for evolving the spherically-reduced Baumgarte-Shapiro-Shibata-Nakamura (BSSN) system expressed in terms of second-order spatial operators. Our multi-domain method achieves global spectral accuracy and long-time stability on short computational domains. We discuss in detail both our scheme for the BSSN system and its implementation. After a theoretical and computational verification of the proposed scheme, we conclude with a brief discussion of issues likely to arise when one considers the full BSSN system. Comment: 35 pages, 6 figures, 1 table, uses revtex4. Revised in response to referee's report.

    Multiplicative Schwarz methods for discontinuous Galerkin approximations of elliptic problems

    In this paper we introduce and analyze some non-overlapping multiplicative Schwarz methods for discontinuous Galerkin (DG) approximations of elliptic problems. The construction of the Schwarz preconditioners is presented in a unified framework for a wide class of DG methods. For symmetric DG approximations we provide optimal convergence bounds for the corresponding error propagation operator, and we show that the resulting methods can be accelerated by using suitable Krylov space solvers. A discussion on the issue of preconditioning non-symmetric DG approximations of elliptic problems is also included. Extensive numerical experiments are provided to confirm the theoretical results and to assess the robustness and efficiency of the proposed preconditioners. © 2008 EDP Sciences, SMAI.
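
    A minimal algebraic sketch of one non-overlapping multiplicative Schwarz sweep: subdomain corrections are applied sequentially, each one using the residual updated by the previous corrections (this sequential update is what distinguishes the multiplicative from the additive variant). The index sets defining the subdomain blocks are assumed given, and this generic construction is not the DG-specific preconditioner analyzed in the paper. The error propagation operator mentioned in the abstract is the product (I - P_J)...(I - P_1) of the local correction operators.

```python
import numpy as np

def multiplicative_schwarz_sweep(A, b, x, blocks):
    """One multiplicative Schwarz sweep for A x = b.
    `blocks` is a list of index arrays, one per (non-overlapping) subdomain;
    each local problem is solved exactly and the global residual is refreshed
    before the next subdomain is visited."""
    for idx in blocks:
        r = b - A @ x                                 # residual after previous corrections
        A_loc = A[np.ix_(idx, idx)]                   # local subdomain matrix
        x[idx] += np.linalg.solve(A_loc, r[idx])      # local correction
    return x

# Illustrative usage: a small SPD system, two index blocks, applied as a
# stationary iteration (in practice the sweep would precondition a Krylov solver).
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8.0 * np.eye(8)
b = rng.standard_normal(8)
x = np.zeros(8)
blocks = [np.arange(0, 4), np.arange(4, 8)]
for _ in range(25):
    x = multiplicative_schwarz_sweep(A, b, x, blocks)
residual_norm = np.linalg.norm(b - A @ x)
```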