126 research outputs found
Linearly Preconditioned Nonlinear Solvers for Phase Field Equations Involving p-Laplacian Terms
Phase field models are usually constructed to model certain interfacial dynamics. Numerical simulations of phase field models require long-time accuracy and stability, so it is necessary to develop efficient and highly accurate numerical methods. In particular, schemes that are unconditionally energy stable and unconditionally solvable, together with accurate discretizations and fast solvers, are desirable.
In this thesis, we describe and analyze preconditioned steepest descent (PSD) solvers for fourth- and sixth-order nonlinear elliptic equations that include p-Laplacian terms on periodic domains in two and three dimensions. Such nonlinear elliptic equations often arise from the time discretization of parabolic equations that model various biological and physical phenomena, in particular liquid crystals, thin film epitaxial growth, and phase transformations. The analyses of the schemes involve the characterization of the strictly convex energies associated with the equations. We first give a general framework for PSD in Hilbert spaces. Based on certain reasonable assumptions on the linear preconditioner, a geometric convergence rate is shown for the nonlinear PSD iteration. We then apply the general theory to the fourth- and sixth-order problems of interest, making use of Sobolev embedding and regularity results to confirm the appropriateness of our preconditioners for the regularized p-Laplacian problems. The results include a sharper theoretical convergence result for p-Laplacian systems than may be found in existing works. We demonstrate rigorously how to apply the theory in the finite-dimensional setting using finite difference discretization methods.
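As an illustration of the PSD framework described above, the following sketch applies it to a simple model problem. The second-order periodic operator N(u) = -u'' + u^3, the preconditioner P = I - d^2/dx^2, and the Fourier-spectral discretization are illustrative assumptions chosen for brevity; they stand in for, but are not, the thesis's fourth- and sixth-order equations.

```python
import numpy as np

# Model problem on a periodic 1-D grid: N(u) = -u'' + u^3 = f, which is the
# gradient of the strictly convex energy E(u) = sum h*( |u'|^2/2 + u^4/4 - f*u ).
n = 64
x = np.linspace(0, 2*np.pi, n, endpoint=False)
h = 2*np.pi/n
k = np.fft.fftfreq(n, d=1.0/n)               # integer wavenumbers

def lap(u):
    return np.real(np.fft.ifft(-(k**2)*np.fft.fft(u)))

def residual(u, f):
    return -lap(u) + u**3 - f                # N(u) - f

def energy(u, f):
    du = np.real(np.fft.ifft(1j*k*np.fft.fft(u)))
    return h*np.sum(0.5*du**2 + 0.25*u**4 - f*u)

def apply_Pinv(r):
    # linear preconditioner P = I - d^2/dx^2, inverted exactly in Fourier space
    return np.real(np.fft.ifft(np.fft.fft(r)/(1.0 + k**2)))

def psd_solve(f, u0, tol=1e-9, max_iter=500):
    u = u0.copy()
    for it in range(max_iter):
        r = residual(u, f)
        if np.linalg.norm(r) < tol:
            break
        d = -apply_Pinv(r)                   # preconditioned steepest descent direction
        alpha, E0, slope = 1.0, energy(u, f), h*np.dot(r, d)
        while energy(u + alpha*d, f) > E0 + 1e-4*alpha*slope and alpha > 1e-14:
            alpha *= 0.5                     # backtracking (Armijo) line search on E
        u = u + alpha*d
    return u, it

u_exact = np.sin(x)
f = residual(u_exact, np.zeros(n))           # manufactured right-hand side
u, iters = psd_solve(f, np.zeros(n))
print(iters, np.max(np.abs(u - u_exact)))
```

The geometric convergence rate hinges on the spectral equivalence between P and the linearized operator -d^2/dx^2 + 3u^2, which is the role the thesis's preconditioner analysis plays in the general Hilbert-space setting.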
Based on the PSD framework, we also propose two efficient and practical preconditioned nonlinear conjugate gradient (PNCG) solvers. The main idea of the preconditioned solvers is to use a linearized version of the nonlinear operator as a metric for choosing the initial search direction, and hybrid conjugate directions for the subsequent search directions. To make the proposed solvers and schemes more practical, we also investigate an adaptive time-stepping strategy for time-dependent problems.
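A minimal sketch of the PNCG idea on a small convex model problem follows. The quartic model energy, the choice M = A (the operator linearized at the initial guess) as the metric, and the particular hybrid rule beta = max(0, min(beta_FR, beta_PRP)) are assumptions for illustration, not the thesis's exact solvers.

```python
import numpy as np

# model problem: minimize E(x) = x'Ax/2 + sum(x^4)/4 - b'x
rng = np.random.default_rng(0)
n = 20
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n*np.eye(n)                    # SPD linear part
b = rng.standard_normal(n)

def energy(x):
    return 0.5*(x @ A @ x) + 0.25*np.sum(x**4) - b @ x

def grad(x):
    return A @ x + x**3 - b

Minv = np.linalg.inv(A)   # metric: linearized operator at the initial guess

def pncg(x0, tol=1e-10, max_iter=500):
    x = x0.copy()
    g = grad(x); z = Minv @ g
    d = -z                                    # initial preconditioned direction
    for it in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                        # safeguard: restart with -M^{-1} g
            d = -z
        alpha, E0, slope = 1.0, energy(x), g @ d
        while energy(x + alpha*d) > E0 + 1e-4*alpha*slope and alpha > 1e-14:
            alpha *= 0.5                      # backtracking Armijo line search
        x = x + alpha*d
        g_new = grad(x); z_new = Minv @ g_new
        # hybrid beta: nonnegative clip of min(Fletcher-Reeves, Polak-Ribiere)
        beta_fr = (g_new @ z_new)/(g @ z)
        beta_prp = ((g_new - g) @ z_new)/(g @ z)
        beta = max(0.0, min(beta_fr, beta_prp))
        d = -z_new + beta*d                   # hybrid conjugate direction
        g, z = g_new, z_new
    return x, it

x, iters = pncg(np.zeros(n))
print(iters, np.linalg.norm(grad(x)))
```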
Numerical simulations for some important physical applications – including thin film epitaxy with slope selection, the square phase field crystal model, and the functionalized Cahn-Hilliard equation – are carried out to verify the efficiency of the schemes and solvers.
Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure
The numerical solution of partial differential equations on high-dimensional
domains gives rise to computationally challenging linear systems. When using
standard discretization techniques, the size of the linear system grows
exponentially with the number of dimensions, making the use of classic
iterative solvers infeasible. During the last few years, low-rank tensor
approaches have been developed that mitigate this curse of
dimensionality by exploiting the underlying structure of the linear operator.
In this work, we focus on tensors represented in the Tucker and tensor train
formats. We propose two preconditioned gradient methods on the corresponding
low-rank tensor manifolds: A Riemannian version of the preconditioned
Richardson method as well as an approximate Newton scheme based on the
Riemannian Hessian. For the latter, considerable attention is given to the
efficient solution of the resulting Newton equation. In numerical experiments,
we compare the efficiency of our Riemannian algorithms with other established
tensor-based approaches such as a truncated preconditioned Richardson method
and the alternating linear scheme. The results show that our approximate
Riemannian Newton scheme is significantly faster in cases when the application
of the linear operator is expensive.
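A truncated preconditioned Richardson iteration of the kind used as a baseline above can be sketched in the matrix (order-2 tensor) case, where low-rank truncation is an SVD projection. The Lyapunov-type model problem, the chosen rank, and the exact eigenbasis preconditioner (standing in for a cheap approximate inverse) are illustrative assumptions.

```python
import numpy as np

n, rank = 50, 5
# 1-D Dirichlet Laplacian; the tensor-structured system is
# (A kron I + I kron A) vec(X) = vec(B), i.e. the matrix equation A X + X A = B
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
w, V = np.linalg.eigh(A)

def op(X):
    return A @ X + X @ A

def apply_precond(R):
    # exact inverse of the Kronecker-sum operator via the eigenbasis of A;
    # in practice this role is played by a cheap approximate inverse
    Rh = V.T @ R @ V
    return V @ (Rh/(w[:, None] + w[None, :])) @ V.T

def truncate(X, r):
    # rank-r truncation via SVD: projection back onto the low-rank format
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r]*s[:r]) @ Vt[:r]

u = np.linspace(0, 1, n)
B = np.outer(np.sin(np.pi*u), np.sin(np.pi*u))   # low-rank right-hand side

X = np.zeros((n, n))
for it in range(50):
    R = B - op(X)
    if np.linalg.norm(R) < 1e-10*np.linalg.norm(B):
        break
    X = truncate(X + apply_precond(R), rank)      # preconditioned step + truncation

rel_res = np.linalg.norm(B - op(X))/np.linalg.norm(B)
print(it, rel_res)
```

In the Tucker or tensor train setting the same structure applies, with the SVD replaced by the corresponding higher-order truncation and the Riemannian variants of the paper replacing the plain truncation step by a retraction on the manifold.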
Composing Scalable Nonlinear Algebraic Solvers
Most efficient linear solvers use composable algorithmic components, with the
most common model being the combination of a Krylov accelerator and one or more
preconditioners. A similar set of concepts may be used for nonlinear algebraic
systems, where nonlinear composition of different nonlinear solvers may
significantly improve the time to solution. We describe the basic concepts of
nonlinear composition and preconditioning and present a number of solvers
applicable to nonlinear partial differential equations. We have developed a
software framework in order to easily explore the possible combinations of
solvers. We show that the performance gains from using composed solvers can be
substantial compared with gains from standard Newton-Krylov methods.
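A minimal sketch of multiplicative nonlinear composition, assuming a 1-D Bratu model problem: a nonlinear Gauss-Seidel sweep is composed with an outer Newton step, x ← Newton(NGS(x)). The model problem and the parameter lambda are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

# 1-D Bratu problem: F(x)_i = 2 x_i - x_{i-1} - x_{i+1} - h^2 * lam * exp(x_i) = 0
n, lam = 50, 3.0
h = 1.0/(n + 1)

def F(x):
    xp = np.concatenate(([0.0], x, [0.0]))       # homogeneous Dirichlet padding
    return 2*x - xp[:-2] - xp[2:] - h*h*lam*np.exp(x)

def J(x):
    return (np.diag(2 - h*h*lam*np.exp(x))
            - np.diag(np.ones(n-1), 1) - np.diag(np.ones(n-1), -1))

def newton_step(x):
    return x - np.linalg.solve(J(x), F(x))

def ngs_sweep(x):
    # one nonlinear Gauss-Seidel sweep: solve each scalar equation in turn
    x = x.copy()
    for i in range(n):
        for _ in range(3):                        # a few scalar Newton updates
            xim = x[i-1] if i > 0 else 0.0
            xip = x[i+1] if i < n-1 else 0.0
            f = 2*x[i] - xim - xip - h*h*lam*np.exp(x[i])
            x[i] -= f/(2 - h*h*lam*np.exp(x[i]))
    return x

def composed_solve(x0, tol=1e-12, max_iter=50):
    # multiplicative composition: Newton applied to the Gauss-Seidel-
    # preconditioned iterate, x <- Newton(NGS(x))
    x = x0.copy()
    for it in range(max_iter):
        if np.linalg.norm(F(x)) < tol:
            break
        x = newton_step(ngs_sweep(x))
    return x, it

x, iters = composed_solve(np.zeros(n))
print(iters, np.linalg.norm(F(x)))
```

Swapping `ngs_sweep` for another inner solver, or forming additive rather than multiplicative combinations, gives the other compositions the paper explores.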
A high-order local discontinuous Galerkin method for the p-Laplace equation
We study the high-order local discontinuous Galerkin (LDG) method for the
p-Laplace equation. We reformulate our spatial discretization as an
equivalent convex minimization problem and use a preconditioned gradient
descent method as the nonlinear solver. For the first time, a weighted
preconditioner that provides -independent convergence is applied in the LDG
setting. For polynomial order , we rigorously establish the
solvability of our scheme and provide a priori error estimates in a
mesh-dependent energy norm. Our error estimates are measured in a distance
that differs from, and is not equivalent to, the one used in existing LDG results. For arbitrarily high-order
polynomials under the assumption that the exact solution has enough regularity,
the error estimates demonstrate the potential for high-order accuracy. Our
numerical results exhibit the desired convergence speed facilitated by the
preconditioner, and we observe best convergence rates in gradient variables in
alignment with linear LDG, and optimal rates in the primal variable when .
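The preconditioned gradient descent idea can be sketched on a 1-D finite-difference analogue of the convex minimization reformulation. The |Du|-dependent weighted stiffness preconditioner, the regularization parameter delta, and the finite-difference (rather than LDG) discretization are illustrative assumptions, not the paper's exact weighted preconditioner.

```python
import numpy as np

# 1-D p-Laplace model: minimize E(u) = (h/p)*sum|Du|^p - h*f.u with u(0)=u(1)=0
n, p, delta = 64, 4.0, 1e-4
h = 1.0/n
f = np.ones(n-1)                               # interior load, f = 1
B = (np.eye(n, n-1) - np.eye(n, n-1, k=-1))/h  # maps nodal u to interval slopes Du

def energy(u):
    return (h/p)*np.sum(np.abs(B @ u)**p) - h*(f @ u)

def grad_E(u):
    g = B @ u
    return h*(B.T @ (np.abs(g)**(p-2)*g)) - h*f

u = np.linalg.solve(h*(B.T @ B), h*f)          # start from the linear (p = 2) solution
for it in range(500):
    gE = grad_E(u)
    if np.linalg.norm(gE) < 1e-10:
        break
    g = B @ u
    w = (delta**2 + g**2)**((p-2)/2)           # |Du|-dependent weights
    Kw = h*(B.T @ (w[:, None]*B))              # weighted stiffness preconditioner
    d = -np.linalg.solve(Kw, gE)               # preconditioned descent direction
    alpha, E0, slope = 1.0, energy(u), gE @ d
    while energy(u + alpha*d) > E0 + 1e-4*alpha*slope and alpha > 1e-14:
        alpha *= 0.5                           # backtracking Armijo line search
    u = u + alpha*d
print(it, np.linalg.norm(grad_E(u)))
```

Reweighting the preconditioner by the current slopes is what keeps the iteration count insensitive to the degeneracy of the p-Laplacian; a fixed linear (p = 2) stiffness preconditioner converges much more slowly for large p.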
Convergence Analysis of the Fast Subspace Descent Methods for Convex Optimization Problems
The full approximation storage (FAS) scheme is a widely used multigrid method
for nonlinear problems. In this paper, a new framework to design and analyze
FAS-like schemes for convex optimization problems is developed. The new method,
the Fast Subspace Descent (FASD) scheme, which generalizes classical FAS, can
be recast as an inexact version of nonlinear multigrid methods based on space
decomposition and subspace correction. The local problem in each subspace can
be simplified to be linear and one gradient descent iteration (with an
appropriate step size) is enough to ensure a global linear (geometric)
convergence of FASD.