
    Nonsmooth Newton methods for set-valued saddle point problems

    We present a new class of iterative schemes for large-scale set-valued saddle point problems as arising, e.g., from optimization problems in the presence of linear and inequality constraints. Our algorithms can be regarded either as nonsmooth Newton-type methods for the nonlinear Schur complement or as Uzawa-type iterations with active set preconditioners. Numerical experiments with a control-constrained optimal control problem and a discretized Cahn–Hilliard equation with obstacle potential illustrate the reliability and efficiency of the new approach.
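    As a toy illustration of the active-set flavor of such methods (my own sketch, not the paper's Schur-complement algorithm), a primal-dual active set iteration, which can be read as a nonsmooth Newton method, for the inequality-constrained QP min ½xᵀAx − bᵀx subject to x ≥ 0 might look like this; the parameter `c` and the stopping rule are assumptions:

    ```python
    import numpy as np

    def pdas_nonneg_qp(A, b, c=1.0, max_iter=50):
        """Primal-dual active set iteration for min 0.5*x'Ax - b'x s.t. x >= 0.

        KKT system: A x - b = lam, x >= 0, lam >= 0, x * lam = 0, with the
        complementarity condition reformulated as lam = max(0, lam - c*x).
        """
        n = len(b)
        x = np.zeros(n)
        lam = A @ x - b
        active = None
        for _ in range(max_iter):
            new_active = (lam - c * x) > 0        # predicted to sit at the bound x_i = 0
            if active is not None and np.array_equal(new_active, active):
                break                             # active set settled: KKT point found
            active = new_active
            inactive = ~active
            x = np.zeros(n)
            if inactive.any():                    # solve the reduced linear system
                x[inactive] = np.linalg.solve(A[np.ix_(inactive, inactive)], b[inactive])
            lam = A @ x - b
            lam[inactive] = 0.0                   # multiplier vanishes off the active set
        return x, lam
    ```

    Each iteration guesses which bound constraints are active, fixes those variables at the bound, and solves a plain linear system for the rest; the loop terminates once the guess reproduces itself.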

    Nonsmooth Schur-Newton methods for vector-valued Cahn-Hilliard equations

    We present globally convergent nonsmooth Schur–Newton methods for the solution of discrete vector-valued Cahn–Hilliard equations with logarithmic and obstacle potentials. The method solves the nonlinear set-valued saddle-point problems arising from discretization by implicit Euler methods in time and first-order finite elements in space without regularization. Efficiency and robustness of the convergence speed for vanishing temperature are illustrated by numerical experiments.

    Nonsmooth Schur-Newton methods for multicomponent Cahn-Hilliard systems

    We present globally convergent nonsmooth Schur–Newton methods for the solution of discrete multicomponent Cahn–Hilliard systems with logarithmic and obstacle potentials. The method solves the nonlinear set-valued saddle-point problems arising from discretization by implicit Euler methods in time and first-order finite elements in space without regularization. Efficiency and robustness of the convergence speed for vanishing temperature are illustrated by numerical experiments.

    Lagrange optimality system for a class of nonsmooth convex optimization

    In this paper, we revisit the augmented Lagrangian method for a class of nonsmooth convex optimization problems. We present the Lagrange optimality system of the augmented Lagrangian associated with the problems, and establish its connections with the standard optimality condition and the saddle point condition of the augmented Lagrangian, which provides a powerful tool for developing numerical algorithms. We apply a linear Newton method to the Lagrange optimality system to obtain a novel algorithm applicable to a variety of nonsmooth convex optimization problems arising in practical applications. Under suitable conditions, we prove the nonsingularity of the Newton system and the local convergence of the algorithm. Comment: 19 pages.
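    To make the "Newton method applied to a nonsmooth optimality system" idea concrete, here is a minimal sketch (my illustration, not the paper's algorithm) of a semismooth Newton method for the ℓ1-regularized least-squares problem min ½‖Ax−b‖² + μ‖x‖₁, applied to the proximal fixed-point residual F(x) = x − soft(x − t∇f(x), tμ). The step size `t` and the particular Clarke-Jacobian element are my choices, not prescriptions from the paper:

    ```python
    import numpy as np

    def soft(u, tau):
        # Soft-thresholding: the proximal operator of tau * |.|_1.
        return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

    def ssn_lasso(A, b, mu, t=None, tol=1e-10, max_iter=50):
        # Semismooth Newton on F(x) = x - soft(x - t*grad_f(x), t*mu) = 0,
        # where grad_f(x) = A^T (A x - b).
        m, n = A.shape
        H = A.T @ A
        if t is None:
            t = 1.0 / np.linalg.norm(H, 2)        # step within the smoothness bound
        x = np.zeros(n)
        for _ in range(max_iter):
            u = x - t * (H @ x - A.T @ b)
            F = x - soft(u, t * mu)
            if np.linalg.norm(F) < tol:
                break
            d = (np.abs(u) > t * mu).astype(float)   # element of the Clarke Jacobian of soft
            J = np.eye(n) - d[:, None] * (np.eye(n) - t * H)
            x = x - np.linalg.solve(J, F)            # (semismooth) Newton step
        return x
    ```

    The generalized Jacobian `J` is piecewise constant, so near the solution the iteration typically identifies the support and then converges in a single linear solve.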

    Nonsmooth Schur-Newton methods for nonsmooth saddle point problems

    We introduce and analyze nonsmooth Schur-Newton methods for a class of nonsmooth saddle point problems. The method can solve problems whose primal energy decomposes into a convex smooth part and a convex, separable, but nonsmooth part. It is based on nonsmooth Newton techniques for an equivalent unconstrained dual problem. Using this equivalence, we show that the method is globally convergent even for inexact evaluation of the linear subproblems.
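    A scalar caricature of the Schur-Newton idea (a hypothetical model problem of my own, not taken from the paper): eliminate the primal variable of min ½‖x−b‖² + μ‖x‖₁ subject to Σᵢ xᵢ = c via the soft-thresholding formula, and apply a nonsmooth Newton iteration to the resulting piecewise-linear equation in the multiplier `w`:

    ```python
    import numpy as np

    def soft(u, tau):
        return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

    def schur_newton_sum_constraint(b, mu, c, w0=0.0, tol=1e-12, max_iter=100):
        # Inner minimizer for fixed multiplier w:  x(w) = soft(b - w, mu).
        # Nonsmooth Newton on the scalar Schur-complement equation
        #   h(w) = sum(soft(b - w, mu)) - c = 0   (piecewise linear, nonincreasing).
        w = w0
        for _ in range(max_iter):
            u = b - w
            h = soft(u, mu).sum() - c
            if abs(h) < tol:
                break
            dh = -float(np.count_nonzero(np.abs(u) > mu))  # generalized derivative
            if dh == 0.0:
                w += np.sign(h) * mu    # safeguard: step off a flat piece of h
                continue
            w = w - h / dh              # Newton step
        x = soft(b - w, mu)
        return x, w
    ```

    Because `h` is piecewise linear, the Newton step is exact as soon as the iterate lands on the piece containing the root, mirroring the finite-termination behavior active-set-type methods often show on such problems.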

    Differential-Algebraic Equations and Beyond: From Smooth to Nonsmooth Constrained Dynamical Systems

    This article presents a summarizing view of differential-algebraic equations (DAEs) and analyzes how new application fields and the corresponding mathematical models lead to innovations both in theory and in numerical analysis for this problem class. Recent numerical methods for nonsmooth dynamical systems subject to unilateral contact and friction illustrate the topicality of this development. Comment: Preprint of a book chapter.
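    For readers new to DAEs, here is a minimal sketch (my toy example, not from the article) of implicit Euler applied to a semi-explicit index-1 DAE, where each time step solves a coupled system for the differential unknown `y` and the algebraic unknown `z` simultaneously:

    ```python
    import numpy as np

    def implicit_euler_dae(y0, h, steps):
        # Semi-explicit index-1 DAE:   y' = z,   0 = y + z - 1
        # (the algebraic constraint gives z = 1 - y, so the exact solution is
        #  y(t) = 1 - (1 - y0) * exp(-t)).
        # Implicit Euler couples both unknowns in one linear system per step:
        #   y_{n+1} - h * z_{n+1} = y_n
        #   y_{n+1} +     z_{n+1} = 1
        M = np.array([[1.0, -h],
                      [1.0, 1.0]])
        y = y0
        for _ in range(steps):
            y, z = np.linalg.solve(M, np.array([y, 1.0]))
        return y
    ```

    The key structural point: the algebraic equation is enforced at the new time level rather than integrated, which is exactly what distinguishes a DAE step from an ODE step.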

    A Bregman forward-backward linesearch algorithm for nonconvex composite optimization: superlinear convergence to nonisolated local minima

    We introduce Bella, a locally superlinearly convergent Bregman forward-backward splitting method for minimizing the sum of two nonconvex functions, one of which satisfies a relative smoothness condition while the other may be nonsmooth. A key tool of our methodology is the Bregman forward-backward envelope (BFBE), an exact and continuous penalty function with favorable first- and second-order properties that enjoys a nonlinear error bound when the objective function satisfies a Łojasiewicz-type property. The proposed algorithm performs a linesearch over the BFBE along candidate update directions. It converges subsequentially to stationary points, converges globally under a KL condition, and, owing to the nonlinear error bound, can attain superlinear convergence rates even when the limit point is a nonisolated minimum, provided the directions are suitably selected.
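    The underlying forward-backward template with a backtracking linesearch can be sketched in the Euclidean special case as follows (this is plain proximal-gradient with backtracking, a simplification of my own; Bella itself works with Bregman distances, the BFBE, and Newton-type directions):

    ```python
    import numpy as np

    def fbs_linesearch(f, grad_f, prox_g, x0, L0=1.0, eta=2.0, max_iter=200, tol=1e-8):
        # Forward-backward step  x+ = prox_{g/L}(x - grad_f(x)/L)  with
        # backtracking: L grows until the quadratic upper model of f holds at z.
        x = np.asarray(x0, dtype=float).copy()
        L = L0
        for _ in range(max_iter):
            g_x, f_x = grad_f(x), f(x)
            while True:
                z = prox_g(x - g_x / L, 1.0 / L)   # forward (gradient) + backward (prox)
                d = z - x
                if f(z) <= f_x + g_x @ d + 0.5 * L * (d @ d) + 1e-12:
                    break                          # descent model accepted
                L *= eta                           # otherwise shrink the step 1/L
            if np.linalg.norm(d) < tol:
                return z
            x = z
        return x
    ```

    A usage example on min ½‖x−b‖² + μ‖x‖₁, whose smooth part has Lipschitz gradient so the linesearch accepts immediately, is exercised in the assertions below; envelope-based methods like Bella replace the fixed-point map here with a linesearch on the BFBE along richer directions.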