A class of nonsymmetric preconditioners for saddle point problems
For the iterative solution of saddle point problems, a nonsymmetric preconditioner is studied which, with respect to the upper-left block of the system matrix, can be seen as a variant of SSOR. An idealized situation, in which the SSOR is taken with respect to the skew-symmetric part plus the diagonal part of the upper-left block, is analyzed in detail. Since the action of the preconditioner involves the solution of a Schur complement system, an inexact form of the preconditioner can be of interest. This results in an inner-outer iterative process. Numerical experiments with the solution of linearized Navier-Stokes equations demonstrate the efficiency of the new preconditioner, especially when the upper-left block is far from symmetric.
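The SSOR mechanism the abstract builds on can be sketched generically. Assuming the classical splitting A = D + L + U and the textbook SSOR preconditioner M = (D + wL) D^{-1} (D + wU) up to a scalar, applying M^{-1} costs one forward and one backward triangular sweep. A minimal pure-Python sketch on a 2x2 SPD matrix with w = 1 (plain SSOR, not the paper's skew-symmetric variant):

```python
# Minimal sketch of SSOR preconditioning, assuming A = D + L + U
# (diagonal, strictly lower, strictly upper parts) and
# M = (D + w*L) D^{-1} (D + w*U) scaled by 1/(w*(2-w)).
# Plain SSOR on a small SPD matrix, not the paper's variant.

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
D = [A[0][0], A[1][1]]

def ssor_apply(r, w=1.0):
    # forward sweep: solve (D + w*L) y = r
    y0 = r[0] / D[0]
    y1 = (r[1] - w * A[1][0] * y0) / D[1]
    # backward sweep: solve (D + w*U) z = D*y
    t0, t1 = D[0] * y0, D[1] * y1
    z1 = t1 / D[1]
    z0 = (t0 - w * A[0][1] * z1) / D[0]
    s = w * (2.0 - w)   # SSOR scaling, equals 1 for w = 1
    return [s * z0, s * z1]

# preconditioned Richardson iteration: x <- x + M^{-1} (b - A x)
x = [0.0, 0.0]
for _ in range(60):
    r = [b[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    d = ssor_apply(r)
    x = [x[i] + d[i] for i in range(2)]
# x converges to A^{-1} b = [1/11, 7/11]
```

For w = 1 this is symmetric Gauss-Seidel, which converges as a stationary iteration for any SPD matrix; the abstract's variant replaces the sweeps with ones built from the skew-symmetric-plus-diagonal part of the upper-left block.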
A fast normal splitting preconditioner for attractive coupled nonlinear Schr\"odinger equations with fractional Laplacian
A linearly implicit conservative difference scheme is applied to discretize the attractive coupled nonlinear Schr\"odinger equations with fractional Laplacian. This yields complex symmetric linear systems whose matrices are indefinite and Toeplitz-plus-diagonal. Neither an efficient preconditioned iteration method nor a fast direct method is available for such systems. In this paper, we propose a novel matrix splitting iteration method based on a normal splitting of an equivalent real block form of the complex linear systems. The new iteration method converges unconditionally, and the quasi-optimal iteration parameter is deduced. The corresponding new preconditioner follows naturally; it is easy to construct and can be implemented efficiently via the fast Fourier transform. Theoretical analysis indicates that the eigenvalues of the preconditioned system matrix are tightly clustered. Numerical experiments show that the new preconditioner significantly accelerates the convergence of Krylov subspace iteration methods. In particular, the convergence behavior of the preconditioned GMRES iteration method is spatial mesh-size-independent and almost insensitive to the fractional order. Moreover, the linearly implicit conservative difference scheme, in conjunction with the preconditioned GMRES iteration method, conserves the discrete mass and energy to a given precision.
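The FFT implementability mentioned above rests on the fact that circulant matrices are diagonalized by the discrete Fourier transform, so a circulant system C x = b reduces to two transforms and a pointwise division. A dependency-free sketch, using a plain O(n^2) DFT in place of an FFT, with a made-up matrix and right-hand side:

```python
# Sketch: a circulant matrix C, with C[i][j] = c[(i - j) mod n] for first
# column c, satisfies C = F^{-1} diag(DFT(c)) F, so C x = b is solved by
# transforming b, dividing by the eigenvalues, and transforming back.
# An O(n^2) DFT is used here for clarity instead of an FFT.
import cmath

def dft(v, sign=-1):
    n = len(v)
    return [sum(v[j] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for j in range(n)) for k in range(n)]

def circulant_solve(c, b):
    n = len(c)
    lam = dft(c)                                 # eigenvalues of C
    xh = [bk / lk for bk, lk in zip(dft(b), lam)]
    return [xk / n for xk in dft(xh, sign=+1)]   # unnormalized inverse DFT

c = [4.0, 1.0, 0.0, 1.0]          # symmetric circulant; eigenvalues 6, 4, 2, 4
x_true = [1.0, 2.0, 3.0, 4.0]
b = [sum(c[(i - j) % 4] * x_true[j] for j in range(4)) for i in range(4)]
x = circulant_solve(c, b)
# x recovers [1, 2, 3, 4] up to floating-point rounding
```

With an FFT in place of the naive DFT, each preconditioner application costs O(n log n), which is what makes such preconditioners practical at scale.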
Diagonal and normal with Toeplitz-block splitting iteration method for space fractional coupled nonlinear Schr\"odinger equations with repulsive nonlinearities
By applying the linearly implicit conservative difference scheme proposed in [D.-L. Wang, A.-G. Xiao, W. Yang, J. Comput. Phys. 2014;272:670-681], the system of repulsive space fractional coupled nonlinear Schr\"odinger equations leads to a sequence of linear systems with complex symmetric and Toeplitz-plus-diagonal structure. In this paper, we propose the diagonal and normal with Toeplitz-block splitting iteration method to solve these linear systems. The new iteration method is proved to converge unconditionally, and the optimal iteration parameter is deduced. Naturally, this new iteration method leads to a diagonal and normal with circulant-block preconditioner which can be applied efficiently by fast algorithms. In theory, we provide sharp bounds for the eigenvalues of the discrete fractional Laplacian and its circulant approximation, and further analysis indicates that the spectrum of the preconditioned system matrix is tightly clustered. Numerical experiments show that the new preconditioner significantly improves the computational efficiency of Krylov subspace iteration methods. Moreover, the corresponding preconditioned GMRES method exhibits convergence behavior that is independent of the spatial mesh size and almost insensitive to the fractional order parameter.
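A circulant approximation of a Toeplitz matrix, the kind of substitution behind circulant-block preconditioners, can be sketched with Strang's classical construction, which copies the central diagonals of T into a circulant C. This is a generic stand-in for illustration, not necessarily the approximation analyzed in the paper:

```python
# Sketch of Strang's circulant approximation to a symmetric Toeplitz matrix:
# keep the diagonals with offset <= n//2 and wrap the remainder by symmetry.
# Generic illustration, not necessarily the paper's construction.

def toeplitz(col):
    # symmetric Toeplitz matrix from its first column
    n = len(col)
    return [[col[abs(i - j)] for j in range(n)] for i in range(n)]

def strang_first_column(col):
    n = len(col)
    return [col[k] if k <= n // 2 else col[n - k] for k in range(n)]

col = [2.0, -1.0, 0.0, 0.0]     # 1D Laplacian-like Toeplitz symbol, n = 4
T = toeplitz(col)
c = strang_first_column(col)
C = [[c[(i - j) % 4] for j in range(4)] for i in range(4)]
# T and C agree on the main and first off-diagonals; C differs only in the
# wrap-around corners, e.g. C[0][3] = -1 while T[0][3] = 0
```

Because C is circulant, its full spectrum is available from a single DFT of c, which is what enables the eigenvalue bounds and fast application mentioned in the abstract.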
GMRES-Accelerated ADMM for Quadratic Objectives
We consider the sequence acceleration problem for the alternating direction method of multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblem of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined within a Krylov subspace, and the Generalized Minimal RESidual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a $\kappa$-conditioned problem in $O(\sqrt{\kappa})$ iterations. We give theoretical justification and numerical evidence that the GMRES-accelerated variant consistently solves the same problem in $O(\kappa^{1/4})$ iterations for an order-of-magnitude reduction in iterations, despite a worst-case bound of $O(\sqrt{\kappa})$ iterations. The method is shown to be competitive against standard preconditioned Krylov subspace methods for saddle-point problems. The method is embedded within SeDuMi, a popular open-source solver for conic optimization written in MATLAB, and used to solve many large-scale semidefinite programs with error that decreases like $O(1/k^2)$, instead of $O(1/k)$, where $k$ is the iteration index.

Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on Optimization (SIOPT).
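The key structural fact, that for a quadratic objective the ADMM update is an affine map v <- T v + c whose fixed point solves the linear system (I - T) v = c, can be sketched with a toy map. A damped Richardson iteration on a 2x2 system stands in for the ADMM map (an assumption purely for illustration), and the fixed-point system is solved directly here rather than by GMRES:

```python
# Sketch: an affine iteration map phi(v) = T v + c has its fixed point at the
# solution of (I - T) v = c. We recover T and c by probing the black-box map,
# then solve the 2x2 fixed-point system in one shot instead of iterating.
# A damped Richardson map stands in for the ADMM map of the abstract.

def phi(v):
    # stand-in affine iteration map: v <- v + 0.2 * (b - A v)
    A = [[4.0, 1.0], [1.0, 3.0]]
    b = [1.0, 2.0]
    return [v[i] + 0.2 * (b[i] - sum(A[i][j] * v[j] for j in range(2)))
            for i in range(2)]

# probe the map: c = phi(0), and the columns of T are phi(e_j) - c
c = phi([0.0, 0.0])
T = [[phi([1.0, 0.0])[i] - c[i], phi([0.0, 1.0])[i] - c[i]] for i in range(2)]

# solve (I - T) v = c directly (2x2 Cramer's rule) -- one "accelerated" step
a00, a01 = 1.0 - T[0][0], -T[0][1]
a10, a11 = -T[1][0], 1.0 - T[1][1]
det = a00 * a11 - a01 * a10
v = [(c[0] * a11 - a01 * c[1]) / det, (a00 * c[1] - c[0] * a10) / det]
# v is the fixed point of phi, here A^{-1} b = [1/11, 7/11]
```

Replacing the direct solve with GMRES on (I - T) v = c is the natural large-scale analogue, and is the role GMRES plays in the accelerated variant described above.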
On block diagonal and block triangular iterative schemes and preconditioners for stabilized saddle point problems
We review the use of block diagonal and block lower/upper triangular splittings for constructing iterative methods and preconditioners for solving stabilized saddle point problems. We introduce new variants of these splittings and obtain new results on the convergence of the associated stationary iterations and new bounds on the eigenvalues of the corresponding preconditioned matrices. We further consider inexact versions as preconditioners for flexible Krylov subspace methods, and show experimentally that our techniques can be highly effective for solving linear systems of saddle point type arising from stabilized finite element discretizations of two model problems, one from incompressible fluid mechanics and the other from magnetostatics.
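A block lower-triangular splitting of this kind can be sketched in the smallest possible setting, assuming 1x1 blocks and made-up numbers (not one of the paper's model problems). For K = [[A, B^T], [B, -C]], the splitting matrix M = [[A, 0], [B, -S]] uses the stabilized Schur complement S = C + B A^{-1} B^T:

```python
# Sketch of a stationary iteration from a block lower-triangular splitting of
# a stabilized saddle-point matrix K = [[A, B^T], [B, -C]], with 1x1 blocks
# so the algebra stays readable. Numbers are made up for illustration.

A, B, C = 2.0, 1.0, 0.5
S = C + B * (1.0 / A) * B          # stabilized Schur complement, = 1.0 here

K = [[A, B], [B, -C]]
b = [3.0, 0.5]                     # exact solution is x = [1, 1]

def apply_M_inverse(r):
    # forward-substitute through M = [[A, 0], [B, -S]]
    y0 = r[0] / A
    y1 = (r[1] - B * y0) / (-S)
    return [y0, y1]

x = [0.0, 0.0]
for _ in range(3):
    r = [b[i] - sum(K[i][j] * x[j] for j in range(2)) for i in range(2)]
    d = apply_M_inverse(r)
    x = [x[i] + d[i] for i in range(2)]
# with the exact Schur complement the iteration matrix is nilpotent,
# so x = [1, 1] after two sweeps
```

With exact blocks the iteration finishes in two sweeps; the practical interest, as in the abstract, lies in inexact versions of A and S used as preconditioners inside flexible Krylov methods.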
Preconditioners for Krylov subspace methods: An overview
When simulating a mechanism from science or engineering, or an industrial process, one is frequently required to construct a mathematical model, and then solve this model numerically. If accurate numerical solutions are necessary or desirable, this can involve solving large-scale systems of equations. One major class of solution methods is that of preconditioned iterative methods, involving preconditioners which are computationally cheap to apply while also capturing information contained in the linear system. In this article, we give a short survey of the field of preconditioning. We introduce a range of preconditioners for partial differential equations, followed by optimization problems, before discussing preconditioners constructed with less standard objectives in mind.