
    The cost of continuity: performance of iterative solvers on isogeometric finite elements

    In this paper we study how the use of a more continuous set of basis functions affects the cost of solving systems of linear equations resulting from a discretized Galerkin weak form. Specifically, we compare the performance of linear solvers when discretizing using C^0 B-splines, which span traditional finite element spaces, and C^{p-1} B-splines, which represent maximum continuity. We provide theoretical estimates for the increase in cost of the matrix-vector product as well as for the construction and application of black-box preconditioners. We accompany these estimates with numerical results and study their sensitivity to various grid parameters such as element size h and polynomial order of approximation p. Finally, we present timing results for a range of preconditioning options for the Laplace problem. We conclude that the matrix-vector product operation is at most 33p^2/8 times more expensive for the more continuous space, although for moderately low p this number is significantly reduced. Moreover, if static condensation is not employed, this number further reduces to at most a value of 8, even for high p. Preconditioning options can be up to p^3 times more expensive to set up, although this difference decreases significantly for some popular preconditioners such as incomplete LU factorization.
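A rough intuition for why maximal continuity raises the matrix-vector product cost: matvec work per row scales with that row's nonzero count, and with C^{p-1} continuity each interior basis function overlaps 2p+1 neighbours per direction. The sketch below is a back-of-envelope count under that standard tensor-product assumption, not the paper's 33p^2/8 analysis; the function name is ours.

```python
# Back-of-envelope sketch (not the paper's analysis): matvec cost per row
# scales with the nonzeros in that row. For a d-dimensional tensor-product
# B-spline space of degree p with maximal C^{p-1} continuity, each interior
# basis function overlaps 2p+1 neighbours per direction, so an interior
# stiffness-matrix row holds about (2p+1)^d nonzeros.
def nnz_per_row_max_continuity(p: int, d: int) -> int:
    return (2 * p + 1) ** d

for p in (1, 2, 3, 4):
    print(f"p={p}: ~{nnz_per_row_max_continuity(p, 3)} nonzeros per row in 3D")
```

The count grows like p^d, which is why the paper's per-row cost comparisons against the C^0 spaces depend so strongly on p.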

    The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2

    Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly the vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code denoted the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers described achieve the highest computational rates of the methods compared. The new iterative methods are not able to achieve computational rates as high as those of the vectorized direct solvers, but are best for well-conditioned problems which require fewer iterations to converge to the solution.

    Some Preconditioning Techniques for Saddle Point Problems

    Saddle point problems arise frequently in many applications in science and engineering, including constrained optimization, mixed finite element formulations of partial differential equations, circuit analysis, and so forth. Indeed, the formulation of most problems with constraints gives rise to saddle point systems. This paper provides a concise overview of iterative approaches for the solution of such systems, which are of particular importance in the context of large-scale computation. In particular, we describe some of the most useful preconditioning techniques for Krylov subspace solvers applied to saddle point problems, including block and constrained preconditioners. The work of Michele Benzi was supported in part by National Science Foundation grant DMS-0511336.
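The block preconditioners mentioned in this abstract can be sketched concretely. The following is a minimal, hypothetical example (random test matrices, exact Schur complement, all names ours, not code from the paper): MINRES applied to the saddle point system K = [[A, B^T], [B, 0]] with the "ideal" block-diagonal preconditioner diag(A, S), where S = B A^{-1} B^T. In practice both blocks would be approximated cheaply rather than inverted exactly.

```python
import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

# Minimal sketch of a block-diagonal preconditioner for the saddle point
# system K = [[A, B^T], [B, 0]] (hypothetical random test data).
rng = np.random.default_rng(0)
n, m = 40, 15
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)          # SPD (1,1) block
B = rng.standard_normal((m, n))      # full-rank constraint block
K = np.block([[A, B.T], [B, np.zeros((m, m))]])  # symmetric indefinite

# "Ideal" preconditioner P = diag(A, S) with the exact Schur complement
# S = B A^{-1} B^T; practical preconditioners approximate both inverses.
S = B @ np.linalg.solve(A, B.T)
Ainv, Sinv = np.linalg.inv(A), np.linalg.inv(S)   # fine at this toy size

P = LinearOperator(
    K.shape,
    matvec=lambda r: np.concatenate([Ainv @ r[:n], Sinv @ r[n:]]),
)

b = rng.standard_normal(n + m)
x, info = minres(K, b, M=P)          # MINRES handles symmetric indefinite K
assert info == 0                      # converged
```

With this exact block-diagonal preconditioner the preconditioned operator has only three distinct eigenvalues, so MINRES converges in a handful of iterations; the practical value of the techniques surveyed in the paper lies in approximating A and S cheaply while retaining fast convergence.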

    Sparse preconditioning for model predictive control

    We propose fast O(N) preconditioning, where N is the number of gridpoints on the prediction horizon, for the iterative solution of (non)linear systems appearing in model predictive control methods such as forward-difference Newton-Krylov methods. The Continuation/GMRES method for nonlinear model predictive control, suggested by T. Ohtsuka in 2004, is a specific application of the Newton-Krylov method, which uses the GMRES iterative algorithm to solve a forward difference approximation of the optimality equations on every time step. Comment: 6 pages, 5 figures, to appear in proceedings of the American Control Conference 2016, July 6-8, Boston, MA, USA. arXiv admin note: text overlap with arXiv:1509.0286
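The forward-difference Newton-Krylov mechanics referred to above can be sketched in a few lines: GMRES only needs Jacobian-vector products, and a forward difference J v ≈ (F(x + εv) − F(x))/ε supplies them without ever forming J. The residual F below is a toy componentwise nonlinearity, not Ohtsuka's optimality system, and the helper names are ours.

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

# Toy nonlinear residual (stand-in for the optimality equations; each
# component solves t^3 + 2t - 1 = 0).
def F(x):
    return x**3 + 2.0 * x - 1.0

def newton_krylov_step(x, eps=1e-7):
    """One Jacobian-free Newton step: GMRES solves J(x) dx = -F(x) using
    only forward-difference directional derivatives of F."""
    Fx = F(x)
    Jv = lambda v: (F(x + eps * v) - Fx) / eps   # J v without forming J
    J = LinearOperator((x.size, x.size), matvec=Jv)
    dx, info = gmres(J, -Fx)
    assert info == 0
    return x + dx

x = np.zeros(3)
for _ in range(20):
    x = newton_krylov_step(x)
```

Each GMRES iteration costs one extra residual evaluation, which is what makes preconditioning the inner linear solves, as the paper proposes, the main lever on overall cost.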

    Using Jacobi iterations and blocking for solving sparse triangular systems in incomplete factorization preconditioning

    When using incomplete factorization preconditioners with an iterative method to solve large sparse linear systems, each application of the preconditioner involves solving two sparse triangular systems. These triangular systems are challenging to solve efficiently on computers with high levels of concurrency. On such computers, it has recently been proposed to use Jacobi iterations, which are highly parallel, to approximately solve the triangular systems from incomplete factorizations. The effectiveness of this approach, however, is problem-dependent: the Jacobi iterations may not always converge quickly enough for all problems. Thus, as a necessary and important step to evaluate this approach, we experimentally test the approach on a large number of realistic symmetric positive definite problems. We also show that by using block Jacobi iterations, we can extend the range of problems for which such an approach can be effective. For block Jacobi iterations, it is essential for the blocking to be cognizant of the matrix structure.
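The pointwise Jacobi triangular sweep described above is simple to state: split L = D + E with D the diagonal and E the strictly lower part, and iterate x ← D^{-1}(b − E x). Below is a minimal sketch on hypothetical test data (function name ours); the paper's block variant would replace D with small diagonal blocks chosen to match the matrix structure.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve_triangular

def jacobi_trisolve(L, b, sweeps=20):
    """Approximately solve the sparse lower triangular system L x = b
    with Jacobi sweeps. Splitting L = D + E (D diagonal, E strictly
    lower), iterate x_{k+1} = D^{-1}(b - E x_k). Since D^{-1}E is
    nilpotent this is exact after at most n sweeps, and converges much
    sooner when L is close to diagonal, as for many incomplete factors."""
    d = L.diagonal()
    E = sp.tril(L, k=-1).tocsr()      # strictly lower part
    x = b / d                          # x_0 = D^{-1} b
    for _ in range(sweeps):
        x = (b - E @ x) / d            # fully parallel update
    return x

# Hypothetical well-conditioned lower triangular factor for testing.
n = 200
E0 = sp.tril(sp.random(n, n, density=0.02, random_state=1), k=-1).tocsr()
L = sp.eye(n, format="csr") + 0.05 * E0
b = np.random.default_rng(1).standard_normal(n)

x = jacobi_trisolve(L, b)
x_exact = spsolve_triangular(L, b, lower=True)
```

Every sweep is a sparse matvec plus a diagonal scaling, both embarrassingly parallel, which is exactly the property that motivates replacing the inherently sequential exact triangular solve on highly concurrent hardware.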