Block-diagonal and constraint preconditioners for nonsymmetric indefinite linear systems : Part I: Theory
We study block-diagonal preconditioners and an efficient variant of constraint preconditioners for general two-by-two block linear systems with zero (2,2)-block. We derive block-diagonal preconditioners from a splitting of the (1,1)-block of the matrix. From the resulting preconditioned system we derive a smaller, so-called related system that yields the solution of the original problem. Solving the related system corresponds to an efficient implementation of constraint preconditioning. We analyze the properties of both classes of preconditioned matrices, in particular their spectra. Using analytical results, we show that the related system matrix has the more favorable spectrum, which in many applications translates into faster convergence for Krylov subspace methods. We show that fast convergence depends mainly on the quality of the splitting, a topic for which a substantial body of theory exists. Our analysis also provides a number of new relations between block-diagonal preconditioners and constraint preconditioners. For constrained problems, solving the related system produces iterates that satisfy the constraints exactly, just as for systems with a constraint preconditioner. Finally, for the Lagrange multiplier formulation of a constrained optimization problem we show how scaling nonlinear constraints can dramatically improve the convergence for linear systems in a Newton iteration. Our theoretical results are confirmed by numerical experiments on a constrained optimization problem.
We consider the general, nonsymmetric, nonsingular case. Our only additional requirement is the nonsingularity of the Schur-complement-type matrix derived from the splitting that defines the preconditioners. In particular, the (1,2)-block need not equal the transposed (2,1)-block, and the (1,1)-block might be indefinite or even singular. This is the first paper in a two-part sequence. In the second paper we will study the use of our preconditioners in a variety of applications.
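The setting above can be sketched in a few lines of Python: build a small two-by-two block system with zero (2,2)-block, derive a block-diagonal preconditioner from a splitting of the (1,1)-block, and apply it within GMRES. This is a minimal illustration, not the paper's method or test problems: the sizes, the random blocks, and the diagonal splitting are all illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Small dense sketch of a two-by-two block system with zero (2,2)-block.
rng = np.random.default_rng(0)
n, m = 20, 5
A = 4.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n))  # nonsymmetric (1,1)-block
B = rng.standard_normal((n, m))                          # (1,2)-block
C = rng.standard_normal((m, n))                          # (2,1)-block, not equal to B.T
K = np.block([[A, B], [C, np.zeros((m, m))]])

# Splitting A = A_hat - R with A_hat = diag(A) (an illustrative choice), and the
# resulting Schur-complement-type matrix S_hat = C A_hat^{-1} B, assumed nonsingular.
A_hat = np.diag(np.diag(A))
S_hat = C @ np.linalg.solve(A_hat, B)

def apply_Pinv(v):
    """Apply the inverse of the block-diagonal preconditioner diag(A_hat, S_hat)."""
    return np.concatenate([np.linalg.solve(A_hat, v[:n]),
                           np.linalg.solve(S_hat, v[n:])])

M = LinearOperator((n + m, n + m), matvec=apply_Pinv)
b = rng.standard_normal(n + m)
x, info = gmres(K, b, M=M, restart=n + m)
```

The quality of the splitting A = A_hat - R governs how clustered the preconditioned spectrum is, and hence how fast GMRES converges.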
Multilevel Solvers for Unstructured Surface Meshes
Parameterization of unstructured surface meshes is of fundamental importance in many applications of digital geometry processing. Such parameterization approaches give rise to large and exceedingly ill-conditioned systems which are difficult or impossible to solve without the use of sophisticated multilevel preconditioning strategies. Since the underlying meshes are very fine to begin with, such multilevel preconditioners require mesh coarsening to build an appropriate hierarchy. In this paper we consider several strategies for the construction of hierarchies using ideas from mesh simplification algorithms used in the computer graphics literature. We introduce two novel hierarchy construction schemes and demonstrate their superior performance when used in conjunction with a multigrid preconditioner.
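The general idea of a two-level preconditioner built from a coarsening hierarchy can be sketched as follows, with a 1D Laplacian standing in for an ill-conditioned parameterization system and naive pairwise aggregation standing in for simplification-based coarsening. All sizes and the smoother weight are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg

# 1D Laplacian as a stand-in for an ill-conditioned parameterization system.
n = 64
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Crude hierarchy: aggregate pairs of neighboring unknowns into coarse dofs.
nc = n // 2
P = np.zeros((n, nc))          # prolongation: each coarse dof covers two fine dofs
for j in range(nc):
    P[2 * j, j] = 1.0
    P[2 * j + 1, j] = 1.0
Ac = P.T @ (A @ P)             # Galerkin coarse-level operator

Dinv = 1.0 / A.diagonal()
def two_level(r):
    """Additive two-level preconditioner: weighted Jacobi plus coarse correction."""
    return 0.5 * Dinv * r + P @ np.linalg.solve(Ac, P.T @ r)

M = LinearOperator((n, n), matvec=two_level)
b = np.ones(n)
x, info = cg(A, b, M=M)
```

A real multilevel solver would recurse over several levels built from the mesh hierarchy; the point here is only how the smoother and the coarse correction combine into one preconditioner application.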
Nonlinear multigrid based on local spectral coarsening for heterogeneous diffusion problems
This work develops a nonlinear multigrid method for diffusion problems discretized by cell-centered finite volume methods on general unstructured grids. The multigrid hierarchy is constructed algebraically using aggregation of degrees of freedom and spectral decomposition of reference linear operators associated with the aggregates. For rapid convergence, it is important that the resulting coarse spaces have good approximation properties. In our approach, the approximation quality can be directly improved by including more spectral degrees of freedom in the coarsening process. Further, by exploiting local coarsening and a piecewise-constant approximation when evaluating the nonlinear component, the coarse level problems are assembled and solved without ever revisiting the fine level, an essential element for multigrid algorithms to achieve optimal scalability. Numerical examples comparing the relative performance of the proposed nonlinear multigrid solvers with standard single-level approaches (Picard's and Newton's methods) are presented. Results show that the proposed solver consistently outperforms the single-level methods in both efficiency and robustness.
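The spectral coarsening step, building a coarse basis from a few eigenvectors of local operators associated with each aggregate, can be sketched as below. The 1D heterogeneous diffusion operator, the aggregate size, and the number of retained eigenvectors per aggregate are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.sparse import diags

# Reference linear operator: 1D diffusion with a jumpy coefficient field.
rng = np.random.default_rng(1)
n = 32
k = 10.0 ** rng.uniform(-2, 2, n + 1)          # heterogeneous coefficients
main = k[:-1] + k[1:]
A = diags([-k[1:-1], main, -k[1:-1]], [-1, 0, 1]).toarray()

# Aggregate degrees of freedom into contiguous blocks.
agg_size, n_evecs = 8, 2                        # coarsening parameters (assumptions)
blocks = [np.arange(i, i + agg_size) for i in range(0, n, agg_size)]

# Spectral coarsening: keep the lowest eigenvectors of each local operator.
cols = []
for idx in blocks:
    Aloc = A[np.ix_(idx, idx)]
    w, V = np.linalg.eigh(Aloc)
    for j in range(n_evecs):                    # more vectors -> better coarse space
        col = np.zeros(n)
        col[idx] = V[:, j]
        cols.append(col)
P = np.column_stack(cols)                       # prolongation / coarse basis
Ac = P.T @ A @ P                                # Galerkin coarse operator
```

Increasing `n_evecs` enlarges the coarse space and directly improves its approximation quality, which is the tunable trade-off the abstract describes.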
Preconditioners for Krylov subspace methods: An overview
When simulating a mechanism from science or engineering, or an industrial process, one is frequently required to construct a mathematical model, and then solve this model numerically. If accurate numerical solutions are necessary or desirable, this can involve solving large-scale systems of equations. One major class of solution methods is that of preconditioned iterative methods, involving preconditioners which are computationally cheap to apply while also capturing information contained in the linear system. In this article, we give a short survey of the field of preconditioning. We introduce a range of preconditioners for partial differential equations, followed by optimization problems, before discussing preconditioners constructed with less standard objectives in mind.
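As a toy illustration of the trade-off the survey describes, the sketch below compares GMRES iteration counts without preconditioning and with an incomplete LU preconditioner (SciPy's `spilu`) on a random diagonally dominant sparse matrix. The matrix, its density, and the sizes are arbitrary assumptions.

```python
import numpy as np
from scipy.sparse import eye, random as sprandom
from scipy.sparse.linalg import LinearOperator, gmres, spilu

rng = np.random.default_rng(2)
n = 200
# Diagonally dominant sparse test matrix (illustrative, not from the survey).
A = (eye(n) * 10 + sprandom(n, n, density=0.02, random_state=2)).tocsc()
b = rng.standard_normal(n)

counts = {"none": 0, "ilu": 0}
def cb(_):
    counts[key] += 1            # count GMRES inner iterations

key = "none"
x0, _ = gmres(A, b, callback=cb, callback_type="pr_norm")

key = "ilu"
ilu = spilu(A)                  # incomplete LU factorization of A
M = LinearOperator((n, n), matvec=ilu.solve)
x1, info = gmres(A, b, M=M, callback=cb, callback_type="pr_norm")
```

The preconditioned solve needs no more iterations than the unpreconditioned one, at the extra cost of computing and applying the incomplete factorization, which is exactly the "cheap to apply while capturing information" balance the article surveys.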
Hessian Matrix-Free Lagrange-Newton-Krylov-Schur-Schwarz Methods for Elliptic Inverse Problems
This study focuses on the solution of inverse problems for elliptic systems. The inverse problem is constructed as a PDE-constrained optimization, where the cost function is the L2 norm of the difference between the measured data and the predicted state variable, and the constraint is an elliptic PDE. Particular examples of the systems considered in this study are groundwater flow and radiation transport. The inverse problems are typically ill-posed due to error in measurements of the data. Regularization methods are employed to partially alleviate this problem. The PDE-constrained optimization is formulated as the minimization of a Lagrangian functional, formed from the regularized cost function and the discretized PDE, with respect to the parameters, the state variables, and the Lagrange multipliers. Our approach is known as an all-at-once method. An algorithm is proposed for an inverse problem that is capable of being extended to large scales. To overcome storage limitations, we develop a parallel preconditioned Newton-Krylov method employed in a Hessian-free manner. The preconditioners have an inner-outer structure, taking the form of a Schur complement (block factorization) at the outer level and Schwarz projections at the inner level. However, building an exact Schur complement is prohibitively expensive. Thus, we use Schur complement approximations, including the identity, probing, the Laplacian, the J operator, and a BFGS operator. For exact data the exact Schur complements are superior to the inexact approximations. However, for data with noise the inexact methods are competitive with or even better than the exact ones in every computational aspect. We also find that nonsymmetric forms of the Karush-Kuhn-Tucker matrices and preconditioners are competitive with or better than the symmetric forms that are commonly used in the optimization community.
In this study, iterative Tikhonov and Total Variation regularizations are proposed and compared to the standard regularizations and to each other. For exact data with jump discontinuities the standard and iterative Total Variation regularizations are superior to the standard and iterative Tikhonov regularizations. However, in the case of noisy data the proposed iterative Tikhonov regularizations are superior to the standard and iterative Total Variation methods. We also show that in some cases the iterative regularizations are better than the noniterative ones. To demonstrate the performance of the algorithm, including the effectiveness of the preconditioners and regularizations, synthetic one- and two-dimensional elliptic inverse problems are solved, and we also compare with other methodologies available in the literature. The proposed algorithm performs well with regard to robustness, reconstructs the parameter models effectively, and is easily implemented in the framework of the available parallel PDE software PETSc and the automatic differentiation software ADIC. The algorithm is also extendable to three-dimensional problems.
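The outer-level idea above, a Schur-complement (block factorization) preconditioner with an inexpensive inexact Schur complement, can be sketched on a small KKT-type system. Here the Hessian block, the constraint Jacobian, and the use of diag(H) as the cheap approximation are illustrative assumptions, not the paper's operators or its probing/Laplacian/BFGS approximations.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(3)
n, m = 30, 10
# H: regularized Hessian block (SPD after Tikhonov-style regularization);
# J: linearized PDE-constraint Jacobian. Both are random stand-ins.
Q = rng.standard_normal((n, n))
H = Q @ Q.T + np.eye(n)
J = rng.standard_normal((m, n))
K = np.block([[H, J.T], [J, np.zeros((m, m))]])   # KKT-type matrix

# Inexact Schur complement: replace H^{-1} by diag(H)^{-1} (a cheap choice).
S_approx = J @ np.diag(1.0 / np.diag(H)) @ J.T

def block_tri_solve(v):
    """Apply the inverse of the block triangular preconditioner
    P = [[H, J^T], [0, -S_approx]]."""
    y = np.linalg.solve(-S_approx, v[n:])
    x = np.linalg.solve(H, v[:n] - J.T @ y)
    return np.concatenate([x, y])

M = LinearOperator((n + m, n + m), matvec=block_tri_solve)
b = rng.standard_normal(n + m)
x, info = gmres(K, b, M=M, restart=n + m)
```

In a Hessian-free setting the products with H and J would come from matrix-vector routines rather than stored matrices, and the inner solves would themselves be approximated, e.g. by Schwarz projections.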