
    Final Report - Summer Visit 2011

    During my summer 2010 visit to LLNL, I worked on algebraic multilevel solvers for large sparse systems of linear equations arising from discretizations of partial differential equations. The particular solver of interest is based on ILU decomposition. The setup phase for this AMG solver is just the single ILU decomposition and its corresponding error matrix. Because the ILU uses a minimum degree or similar sparse matrix ordering, most of the fill-in, and hence most of the error, is concentrated in the lower right corner of the factored matrix. All of the major multigrid components - the smoother, the coarse level correction matrices, and the fine-to-coarse and coarse-to-fine rectangular transfer matrices - are defined in terms of various blocks of the ILU factorization. Although such a strategy is not likely to be optimal in terms of convergence properties, it has a relatively low setup cost, and therefore is useful in situations where setup costs for more traditional AMG approaches cannot be amortized over the solution of many linear systems using the same matrix. Such a situation arises in adaptive methods, where often just one linear system is solved at each step of an adaptive feedback loop, or in solving nonlinear equations by approximate Newton methods, where the approximate Jacobian might change substantially from iteration to iteration. In general terms, coarse levels are defined in terms of successively smaller lower right blocks of the matrix, typically decreasing geometrically in order. The most difficult issue was the coarse grid correction matrix. The preconditioner/smoother for a given level is just the corresponding lower right blocks of the ILU factorization. The coarse level matrix itself is just the Schur complement; this matrix is not known exactly from the ILU decomposition computed in the setup phase. Thus we approximate this matrix using various combinations of the preconditioning matrix and the error matrix. During my visit, several approximations of this type were implemented and tested. While some improved the convergence rate of the overall method, these gains had to be balanced against the additional costs involved in creating and applying these matrices. By this more stringent criterion, none of the improved approximations could be characterized as an unqualified success.
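    As a minimal sketch (not the LLNL code) of the setup just described, the snippet below factors a small test matrix with a hand-rolled ILU(0), forms the error matrix E = A - LU, and compares the exact Schur complement of the lower right block with one cheap approximation built only from ILU and error-matrix blocks. The test matrix, the coarse block size m, and the particular combination L_cc U_cc + E_cc are illustrative assumptions, not the approximations actually tested in the report.

        import numpy as np

        def ilu0(A):
            """ILU(0) on a dense array: keep only entries in the sparsity pattern of A."""
            n = A.shape[0]
            LU = A.astype(float).copy()
            pattern = A != 0
            for k in range(n - 1):
                for i in range(k + 1, n):
                    if pattern[i, k]:
                        LU[i, k] /= LU[k, k]
                        for j in range(k + 1, n):
                            if pattern[i, j]:
                                LU[i, j] -= LU[i, k] * LU[k, j]
            return np.tril(LU, -1) + np.eye(n), np.triu(LU)

        # Small 2-D Laplacian as a stand-in for a matrix already put into a
        # minimum-degree-like ordering; m is the size of the coarse (lower right) block.
        nx = 4
        T = 2.0 * np.eye(nx) - np.eye(nx, k=1) - np.eye(nx, k=-1)
        A = np.kron(np.eye(nx), T) + np.kron(T, np.eye(nx))
        n, m = A.shape[0], 4

        L, U = ilu0(A)
        E = A - L @ U                      # error matrix from the single ILU setup

        f, c = slice(0, n - m), slice(n - m, n)
        S_exact  = A[c, c] - A[c, f] @ np.linalg.solve(A[f, f], A[f, c])  # true Schur complement
        S_approx = L[c, c] @ U[c, c] + E[c, c]                            # one cheap ILU-based surrogate
        print(np.linalg.norm(S_exact - S_approx) / np.linalg.norm(S_exact))

    Variants that also fold in the off-diagonal blocks of E are just as easy to form; whether any of them pays for the extra cost of building and applying it is exactly the trade-off weighed in the report.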

    Polar Varieties and Efficient Real Equation Solving: The Hypersurface Case

    The objective of this paper is to show how the method recently proposed by Giusti, Heintz, Morais, Morgenstern, Pardo \cite{gihemorpar} can be applied to a case of real polynomial equation solving. Our main result concerns the problem of finding one representative point for each connected component of a real bounded smooth hypersurface. The algorithm in \cite{gihemorpar} yields a method for symbolically solving a zero-dimensional polynomial equation system in the affine (and toric) case. Its main feature is the use of adapted data structures: arithmetic networks and straight-line programs. The algorithm solves any affine zero-dimensional equation system in non-uniform sequential time that is polynomial in the length of the input description and an adequately defined {\em affine degree} of the equation system. Replacing the affine degree of the equation system by a suitably defined {\em real degree} of certain polar varieties associated to the input equation, which describes the hypersurface under consideration, and using straight-line program codification of the input and intermediate results, we obtain a method for the problem introduced above that is polynomial in the input length and the real degree.
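    As a small, deliberately naive illustration of the underlying geometric idea (not the straight-line-program algorithm of the paper): for a bounded smooth hypersurface f = 0, the system consisting of f together with the partial derivatives of f with respect to all but one variable cuts out the critical points of a coordinate projection - a polar variety - and this zero-dimensional set contains at least one point of every connected component. The example surface and the use of sympy below are my own choices.

        import sympy as sp

        x, y, z = sp.symbols('x y z', real=True)

        # Two concentric spheres (radii 1 and 2): a bounded smooth hypersurface
        # with two connected components.
        f = (x**2 + y**2 + z**2 - 1) * (x**2 + y**2 + z**2 - 4)

        # Zero-dimensional polar-variety system: critical points of the projection onto x.
        system = [f, sp.diff(f, y), sp.diff(f, z)]
        solutions = sp.solve(system, [x, y, z], dict=True)

        real_points = [s for s in solutions if all(sp.im(v) == 0 for v in s.values())]
        print(real_points)   # (+-1, 0, 0) and (+-2, 0, 0): two points on each component

    The contribution of the paper is that, with the input and intermediate results encoded as straight-line programs and processed by arithmetic networks, such systems can be handled within the complexity bound stated above, polynomial in the input length and the real degree of the associated polar varieties.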

    Convergence of simple adaptive Galerkin schemes based on h − h/2 error estimators

    We discuss several adaptive mesh-refinement strategies based on (h − h/2)-error estimation. This class of adaptive methods is particularly popular in practice since it is problem independent and requires virtually no implementation overhead. We prove that, under the saturation assumption, these adaptive algorithms are convergent. Our framework applies not only to finite element methods, but also yields a first convergence proof for adaptive boundary element schemes. For a finite element model problem, we extend the proposed adaptive scheme and prove convergence even if the saturation assumption fails to hold in general.
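    For orientation (the constant q and the energy norm \| \cdot \| below are my notation, following the standard (h − h/2) setting): if u_h and u_{h/2} denote the Galerkin solutions on a mesh and on its uniform refinement, the estimator is simply

        \eta := \| u_{h/2} - u_h \|,

    and the saturation assumption reads \| u - u_{h/2} \| \le q \, \| u - u_h \| for some fixed 0 \le q < 1. For a symmetric elliptic problem, Galerkin orthogonality on the nested spaces gives \| u - u_h \|^2 = \| u - u_{h/2} \|^2 + \eta^2, hence

        (1 - q^2) \, \| u - u_h \|^2 \;\le\; \eta^2 \;\le\; \| u - u_h \|^2,

    i.e. \eta is always efficient and is reliable under the saturation assumption.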