A Multigrid Optimization Algorithm for the Numerical Solution of Quasilinear Variational Inequalities Involving the p-Laplacian
In this paper we propose a multigrid optimization algorithm (MG/OPT) for the
numerical solution of a class of quasilinear variational inequalities of the
second kind. This approach is enabled by the fact that the solution of the
variational inequality is given by the minimizer of a nonsmooth energy
functional involving the p-Laplace operator. We propose a Huber
regularization of the functional and a finite element discretization for the
problem. Further, we analyze the regularity of the discretized energy
functional, and we are able to prove that its Jacobian is slantly
differentiable. This regularity property is useful to analyze the convergence
of the MG/OPT algorithm. In fact, we demonstrate that the algorithm is globally
convergent by using a mean value theorem for semismooth functions. Finally, we
apply the MG/OPT algorithm to the numerical simulation of the viscoplastic flow
of Bingham, Casson and Herschel-Bulkley fluids in a pipe. Several experiments
are carried out to show the efficiency of the proposed algorithm when solving
this kind of fluid mechanics problem.
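For intuition, Huber regularization replaces the absolute-value-like term in such energies with a C^1 function that is quadratic near the origin and linear beyond. A minimal sketch, with an illustrative smoothing parameter gamma (the paper's exact regularization and notation may differ):

```python
import numpy as np

def huber(t, gamma):
    """Huber approximation of |t|: quadratic for |t| < gamma, linear
    (slope 1) beyond.  Replacing |.| by this C^1 function is what makes
    the regularized energy differentiable."""
    a = np.abs(t)
    return np.where(a >= gamma, a - gamma / 2.0, a ** 2 / (2.0 * gamma))

def huber_prime(t, gamma):
    """Derivative of the Huber function: a clipped linear ramp.  It is
    Lipschitz, which underlies the slant differentiability used in the
    convergence analysis."""
    return np.clip(t / gamma, -1.0, 1.0)
```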
Globally Convergent Coderivative-Based Generalized Newton Methods in Nonsmooth Optimization
This paper proposes and justifies two globally convergent Newton-type methods
to solve unconstrained and constrained problems of nonsmooth optimization by
using tools of variational analysis and generalized differentiation. Both
methods are coderivative-based and employ generalized Hessians (coderivatives
of subgradient mappings) associated with objective functions, which are either
of class C^{1,1}, or are represented in the form of convex
composite optimization, where one of the terms may be extended-real-valued. The
proposed globally convergent algorithms are of two types. The first one extends
the damped Newton method and requires positive-definiteness of the generalized
Hessians for its well-posedness and efficient performance, while the other
algorithm is of the regularized Newton type, being well-defined when the
generalized Hessians are merely positive-semidefinite. The obtained convergence
rates for both methods are at least linear, but become superlinear under the
semismooth property of subgradient mappings. Problems of convex composite
optimization are investigated with and without the strong convexity assumption
on smooth parts of objective functions by implementing the machinery of
forward-backward envelopes. Numerical experiments are conducted for Lasso
problems and for box-constrained quadratic programs, providing performance
comparisons of the new algorithms with some other first-order and second-order
methods that are highly recognized in nonsmooth optimization.
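For a rough picture of the second algorithm, a regularized Newton step shifts a possibly only positive-semidefinite generalized Hessian to make the linear system solvable. A minimal finite-dimensional sketch, assuming user-supplied oracles f, grad, and hess_elem (returning one generalized Hessian); all names and constants are illustrative, not the paper's exact scheme:

```python
import numpy as np

def regularized_newton(f, grad, hess_elem, x0, mu=1.0, sigma=1e-4,
                       beta=0.5, tol=1e-8, max_iter=200):
    """Sketch of a regularized Newton iteration: hess_elem(x) returns
    one generalized Hessian at x (a coderivative element of the
    subgradient mapping); the mu * ||g|| shift keeps the system
    solvable when that matrix is merely positive-semidefinite."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:
            break
        H = hess_elem(x)
        d = np.linalg.solve(H + mu * gnorm * np.eye(x.size), -g)
        t = 1.0  # Armijo backtracking on the objective
        while t > 1e-12 and f(x + t * d) > f(x) + sigma * t * (g @ d):
            t *= beta
        x = x + t * d
    return x
```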
A Simple and Efficient Algorithm for Nonlinear Model Predictive Control
We present PANOC, a new algorithm for solving optimal control problems
arising in nonlinear model predictive control (NMPC). A usual approach to this
type of problem is sequential quadratic programming (SQP), which requires the
solution of a quadratic program at every iteration and, consequently, inner
iterative procedures. As a result, when the problem is ill-conditioned or the
prediction horizon is large, each outer iteration becomes computationally very
expensive. We propose a line-search algorithm that combines forward-backward
iterations (FB) and Newton-type steps over the recently introduced
forward-backward envelope (FBE), a continuous, real-valued, exact merit
function for the original problem. The curvature information of Newton-type
methods enables asymptotic superlinear rates under mild assumptions at the
limit point, and the proposed algorithm is based on very simple operations:
access to first-order information of the cost and dynamics and low-cost direct
linear algebra. No inner iterative procedure or Hessian evaluation is
required, making our approach computationally simpler than SQP methods. The
low-memory requirements and simple implementation make our method particularly
suited for embedded NMPC applications.
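A simplified sketch of the PANOC idea: evaluate the FBE, then line-search along a convex combination of the forward-backward residual and a Newton-type direction. The decrease test and constants below are illustrative simplifications of the paper's conditions:

```python
import numpy as np

def fbe(x, f, grad_f, g, prox_g, gamma):
    """Forward-backward envelope of f + g at x: a continuous,
    real-valued, exact merit function for the composite problem."""
    gf = grad_f(x)
    T = prox_g(x - gamma * gf, gamma)          # forward-backward step
    return f(x) + gf @ (T - x) + (x - T) @ (x - T) / (2 * gamma) + g(T)

def panoc_like(f, grad_f, g, prox_g, x0, gamma, direction,
               sigma=1e-4, tol=1e-8, max_iter=500):
    """Blend the forward-backward residual with a Newton-type direction,
    accepting the trial point once the FBE decreases sufficiently."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = x - prox_g(x - gamma * grad_f(x), gamma)   # FB residual
        if np.linalg.norm(r) <= tol:
            break
        d = direction(x, r)      # e.g. an L-BFGS step on the residual
        phi, tau = fbe(x, f, grad_f, g, prox_g, gamma), 1.0
        while True:
            x_new = x - (1 - tau) * r + tau * d
            if (fbe(x_new, f, grad_f, g, prox_g, gamma)
                    <= phi - sigma * (r @ r) / gamma) or tau < 1e-8:
                break
            tau /= 2             # tau -> 0 recovers the pure FB step
        x = x_new
    return x
```

Passing direction = lambda x, r: -r recovers the plain forward-backward iteration; the superlinear behavior described above requires a genuine Newton-type direction.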
A Bregman forward-backward linesearch algorithm for nonconvex composite optimization: superlinear convergence to nonisolated local minima
We introduce Bella, a locally superlinearly convergent Bregman
forward-backward splitting method for minimizing the sum of two nonconvex
functions, one satisfying a relative smoothness condition and the other
possibly nonsmooth. A key tool of our methodology is the Bregman
forward-backward envelope (BFBE), an exact and continuous penalty function with
favorable first- and second-order properties, and enjoying a nonlinear error
bound when the objective function satisfies a Łojasiewicz-type property. The
proposed algorithm is of linesearch type over the BFBE along candidate update
directions, and converges subsequentially to stationary points, globally under
a KL condition, and owing to the given nonlinear error bound can attain
superlinear convergence rates even when the limit point is a nonisolated
minimum, provided the directions are suitably selected.
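As intuition for the Bregman forward-backward step: with a trivial nonsmooth part and the Burg entropy kernel h(x) = -sum(log x_i) on the positive orthant, the step has a closed form. A sketch of that special case only (the paper's method adds the nonsmooth term and the linesearch over the BFBE):

```python
import numpy as np

def bregman_gradient_step(x, grad_f, gamma):
    """One Bregman (mirror) gradient step with the Burg entropy kernel
    h(x) = -sum(log x): solving grad_h(x_new) = grad_h(x) - gamma * grad_f(x),
    i.e. -1/x_new = -1/x - gamma * grad_f(x), gives the closed form below.
    Valid while 1 + gamma * x * grad_f(x) > 0 componentwise."""
    return x / (1.0 + gamma * x * grad_f(x))
```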
Generalized Newton's Method based on Graphical Derivatives
This paper concerns developing a numerical method of the Newton type to solve
systems of nonlinear equations described by nonsmooth continuous functions. We
propose and justify a new generalized Newton algorithm based on graphical
derivatives, which have never been used to derive a Newton-type method for
solving nonsmooth equations. Based on advanced techniques of variational
analysis and generalized differentiation, we establish the well-posedness of
the algorithm, its local superlinear convergence, and its global convergence of
the Kantorovich type. Our convergence results hold with no semismoothness
assumption, which is illustrated by examples. The algorithm and main results
obtained in the paper are compared with well-recognized semismooth and
B-differentiable versions of Newton's method for nonsmooth Lipschitzian
equations.
Global q-superlinear convergence of the infinite-dimensional Newton's method for the regularized p-Stokes equations
The motion of glaciers can be simulated with the p-Stokes equations. We
present an algorithm that solves these equations faster than the Picard
iteration. We do that by proving q-superlinear global convergence of the
infinite-dimensional Newton's method with Armijo step sizes to the solution of
these equations. We only have to add an arbitrarily small diffusion term for
this convergence result. We also consider approximations of exact step sizes.
Exact step sizes are possible because we reformulate the problem as minimizing
a convex functional. Next, we prove that the additional diffusion term only
causes minor differences in the solution compared to the original p-Stokes
equations. Finally, we test our algorithms on a reformulation of the experiment
ISMIP-HOM B. In this experiment, approximating exact step sizes for both the
Picard iteration and Newton's method outperforms the plain Picard iteration.
Newton's method with Armijo step sizes also converges faster than the Picard
iteration; however, the accuracy it reaches depends more strongly on the
resolution of the domain.
Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth
optimization problems in composite form. The algorithms are based on a
reformulation of the original nonsmooth problem as the unconstrained
minimization of a continuously differentiable function, namely the
forward-backward envelope (FBE). The first algorithm is based on a standard
line search strategy, whereas the second one combines the global efficiency
estimates of the corresponding first-order methods with fast asymptotic
convergence rates. Furthermore, both are computationally attractive
since each Newton iteration requires the approximate solution of a linear
system of usually small dimension.
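The "Newton-CG" part means the Newton system is only solved approximately by conjugate gradients, using Hessian-vector products instead of an explicit matrix. A minimal sketch with an assumed hess_vec oracle (here standing in for a Hessian of the FBE):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def truncated_newton_direction(hess_vec, g):
    """Approximately solve H d = -g by conjugate gradients, given only
    Hessian-vector products hess_vec(v) = H @ v; H is never formed
    explicitly."""
    n = g.size
    H = LinearOperator((n, n), matvec=hess_vec)
    d, _ = cg(H, -g)     # truncate: a few CG iterations often suffice
    return d
```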