On affine scaling inexact dogleg methods for bound-constrained nonlinear systems
Within the framework of affine scaling trust-region methods for bound-constrained problems, we discuss the use of an inexact dogleg method as a tool for simultaneously handling the trust-region and the bound constraints while seeking an approximate minimizer of the model. Focusing on bound-constrained systems of nonlinear equations, an inexact affine scaling method for large-scale problems, employing the inexact dogleg procedure, is described. Global convergence results are established without any Lipschitz assumption on the Jacobian matrix, and locally fast convergence is shown under standard assumptions. Convergence analysis is performed without specifying the scaling matrix used to handle the bounds, and a rather general class of scaling matrices is allowed in actual algorithms. Numerical results showing the performance of the method are also given.
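The abstract's key ingredient is a dogleg step that respects both the trust region and the bounds. The following is a minimal sketch of a classical dogleg step for the Gauss-Newton model of F(x) = 0, with crude step truncation at the box constraints in place of the paper's affine scaling matrix; the test problem and all names are illustrative, not taken from the paper.

```python
import numpy as np

def dogleg_step(J, F, delta, x, lo, hi):
    """One classical dogleg step for the Gauss-Newton model
    m(p) = 0.5 * ||F + J p||^2, truncated so that x + p stays in [lo, hi].
    A plain-vanilla sketch, not the paper's inexact affine-scaling variant."""
    g = J.T @ F                              # model gradient at p = 0
    p_gn = np.linalg.solve(J.T @ J, -g)      # Gauss-Newton step
    if np.linalg.norm(p_gn) <= delta:
        p = p_gn                             # full step fits inside the region
    else:
        Jg = J @ g
        t = (g @ g) / (Jg @ Jg)              # exact line search along -g
        p_c = -t * g                         # Cauchy point
        if np.linalg.norm(p_c) >= delta:
            p = -(delta / np.linalg.norm(g)) * g
        else:
            # walk from the Cauchy point toward the Gauss-Newton point
            # until the trust-region boundary is crossed
            d = p_gn - p_c
            a, b, c = d @ d, 2 * (p_c @ d), p_c @ p_c - delta**2
            tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
            p = p_c + tau * d
    # crude bound handling: scale the whole step back into the box
    alpha = 1.0
    for pi, xi, li, ui in zip(p, x, lo, hi):
        if pi > 0:
            alpha = min(alpha, (ui - xi) / pi)
        elif pi < 0:
            alpha = min(alpha, (li - xi) / pi)
    return max(alpha, 0.0) * p
```

Driving this step in a loop on a small bound-constrained system (e.g. x0² + x1² = 2, x0 = x1 on [0, 2]²) converges to the interior root (1, 1); the paper's method additionally updates the trust-region radius and scales the step with a bound-aware scaling matrix.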
Deflation for semismooth equations
Variational inequalities can in general support distinct solutions. In this
paper we study an algorithm for computing distinct solutions of a variational
inequality, without varying the initial guess supplied to the solver. The
central idea is the combination of a semismooth Newton method with a deflation
operator that eliminates known solutions from consideration. Given one root of
a semismooth residual, deflation constructs a new problem for which a
semismooth Newton method will not converge to the known root, even from the
same initial guess. This enables the discovery of other roots. We prove the
effectiveness of the deflation technique under the same assumptions that
guarantee locally superlinear convergence of a semismooth Newton method. We
demonstrate its utility on various finite- and infinite-dimensional examples
drawn from constrained optimization, game theory, economics and solid
mechanics.

Comment: 24 pages, 3 figures
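The deflation idea described above can be illustrated on a scalar root-finding problem. The sketch below uses a deflation operator of the form M(x) = prod over known roots r of (1/|x - r|^p + shift), in the style the abstract describes; the plain finite-difference Newton iteration here is a stand-in for the authors' semismooth Newton method, and the test function is invented for illustration.

```python
def deflate(f, roots, p=2, shift=1.0):
    """Deflated residual g(x) = M(x) f(x), where
    M(x) = prod over known roots r of (1/|x - r|^p + shift).
    M blows up as x -> r, so Newton cannot reconverge to a known root."""
    def g(x):
        m = 1.0
        for r in roots:
            m *= 1.0 / abs(x - r) ** p + shift
        return m * f(x)
    return g

def newton(g, x0, tol=1e-10, maxit=200, h=1e-7):
    """Plain Newton with a central finite-difference derivative;
    a stand-in for the semismooth Newton method of the paper."""
    x = x0
    for _ in range(maxit):
        gx = g(x)
        if abs(gx) < tol:
            break
        dg = (g(x + h) - g(x - h)) / (2.0 * h)
        x -= gx / dg
    return x

f = lambda x: x * (x - 1.0) * (x + 2.0)  # roots at 0, 1, -2
r1 = newton(f, 0.4)                      # first root found from x0 = 0.4
r2 = newton(deflate(f, [r1]), 0.4)       # same initial guess, deflated problem
```

From the same initial guess 0.4, the undeflated iteration finds the root at 0, while the deflated residual pushes Newton to the distinct root at 1, which is exactly the behavior the abstract claims for the variational-inequality setting.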
An L1 Penalty Method for General Obstacle Problems
We construct an efficient numerical scheme for solving obstacle problems in
divergence form. The numerical method is based on a reformulation of the
obstacle in terms of an L1-like penalty on the variational problem. The
reformulation is an exact regularizer in the sense that for large (but finite)
penalty parameter, we recover the exact solution. Our formulation is applied to
classical elliptic obstacle problems as well as some related free boundary
problems, for example the two-phase membrane problem and the Hele-Shaw model.
One advantage of the proposed method is that the free boundary inherent in the
obstacle problem arises naturally in our energy minimization without any need
for problem specific or complicated discretization. In addition, our scheme
also works for nonlinear variational inequalities arising from convex
minimization problems.

Comment: 20 pages, 18 figures
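A minimal one-dimensional sketch of the penalized energy can make the "exact regularizer" claim concrete. The discretization, obstacle, and penalty value below are all assumptions for illustration, not the paper's scheme: we minimize a finite-difference membrane energy plus the L1-like term mu * h * sum_i max(psi_i - u_i, 0) by coordinate descent, where each one-dimensional subproblem has a closed-form minimizer.

```python
import numpy as np

# 1D obstacle problem on [0, 1], u(0) = u(1) = 0, no load:
#   minimize E(u) = sum_i (u_{i+1} - u_i)^2 / (2h)
#                   + mu * h * sum_i max(psi_i - u_i, 0)
# The L1-like penalty is exact: once mu exceeds the obstacle's curvature
# (here mu >= 8 suffices), the minimizer satisfies u >= psi.
n = 50                                    # grid intervals (illustrative choice)
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
psi = 0.5 - 4.0 * (x - 0.5) ** 2          # parabolic obstacle, psi'' = -8
mu = 50.0                                 # penalty parameter, assumed large enough
u = np.zeros(n + 1)                       # boundary values stay zero

# Gauss-Seidel coordinate descent: in coordinate u_i the energy is
#   (a/2) * (u_i - b)^2 + c * max(psi_i - u_i, 0)
# with a = 2/h, b = (u_{i-1} + u_{i+1}) / 2, c = mu * h, whose minimizer is
#   u_i = b            if b >= psi_i,
#   u_i = min(b + c/a, psi_i)  otherwise.
a, c = 2.0 / h, mu * h
for _ in range(3000):
    for i in range(1, n):
        b = 0.5 * (u[i - 1] + u[i + 1])
        u[i] = b if b >= psi[i] else min(b + c / a, psi[i])
```

At convergence the free boundary emerges on its own: nodes in the contact region sit exactly on the obstacle while the rest of the membrane is piecewise linear above it, with no constraint enforced explicitly, which mirrors the advantage claimed in the abstract.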