
    The smoothing function of the nonsmooth matrix valued function

    Master of Science thesis

    A trust region-type normal map-based semismooth Newton method for nonsmooth nonconvex composite optimization

    We propose a novel trust region method for solving a class of nonsmooth and nonconvex composite-type optimization problems. The approach embeds inexact semismooth Newton steps for finding zeros of a normal map-based stationarity measure for the problem in a trust region framework. Based on a new merit function and acceptance mechanism, global convergence and transition to fast local q-superlinear convergence are established under standard conditions. In addition, we verify that the proposed trust region globalization is compatible with the Kurdyka-Łojasiewicz (KL) inequality, yielding finer convergence results. We further derive new normal map-based representations of the associated second-order optimality conditions that have direct connections to the local assumptions required for fast convergence. Finally, we study the behavior of our algorithm when the Hessian matrix of the smooth part of the objective function is approximated by BFGS updates. We successfully link the KL theory, properties of the BFGS approximations, and a Dennis-Moré-type condition to show superlinear convergence of the quasi-Newton version of our method. Numerical experiments on sparse logistic regression and image compression illustrate the efficiency of the proposed algorithm. (Comment: 56 pages.)
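
    A minimal sketch of the normal-map residual that such methods drive to zero, written here for the l1-regularized logistic regression setting mentioned in the experiments; the function names, the prox parameter tau, and the synthetic data are illustrative, and this is not the paper's algorithm, only the stationarity measure it is built around. A semismooth Newton or trust-region step would then act on a generalized derivative of this residual; the sketch stops at evaluating it.

        import numpy as np
        from scipy.special import expit  # numerically stable logistic sigmoid

        def prox_l1(z, t):
            """Soft-thresholding: proximal operator of t * ||.||_1."""
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def logistic_grad(x, A, b):
            """Gradient of the logistic loss f(x) = mean(log(1 + exp(-b * (A @ x))))."""
            m = A.shape[0]
            return -(A.T @ (b * expit(-b * (A @ x)))) / m

        def normal_map_residual(z, A, b, lam, tau):
            """F_nor(z) = grad f(prox(z)) + (z - prox(z)) / tau; its zeros z* give
            stationary points x* = prox(z*) of f(x) + lam * ||x||_1."""
            x = prox_l1(z, tau * lam)
            return logistic_grad(x, A, b) + (z - x) / tau

        # tiny usage example on synthetic data
        rng = np.random.default_rng(0)
        A = rng.standard_normal((50, 10))
        b = np.sign(rng.standard_normal(50))
        print(np.linalg.norm(normal_map_residual(np.zeros(10), A, b, lam=0.1, tau=1.0)))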

    Complete Characterizations of Local Weak Sharp Minima With Applications to Semi-Infinite Optimization and Complementarity

    In this paper we identify a favorable class of nonsmooth functions for which local weak sharp minima can be completely characterized in terms of normal cones and subdifferentials, or tangent cones and subderivatives, or their mixture in finite-dimensional spaces. The results obtained not only significantly extend previous ones in the literature, but also allow us to provide new types of criteria for local weak sharpness. Applications of the developed theory are given to semi-infinite programming and to semi-infinite complementarity problems.
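
    For context, the notion being characterized is standard in the variational analysis literature (recalled here in its usual form, not quoted from the paper): a point x̄ in the local solution set S is a local weak sharp minimizer of f with modulus κ > 0 on a neighborhood U of x̄ if

        f(x) \;\ge\; f(\bar{x}) + \kappa\,\operatorname{dist}(x, S) \qquad \text{for all } x \in U.

    When S is the single point x̄ this reduces to an ordinary sharp minimum; allowing a whole set of minimizers is what makes the normal-cone and subderivative characterizations nontrivial.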

    A smoothing projected Newton-type algorithm for semi-infinite programming


    Favorable Classes of Lipschitz Continuous Functions in Subgradient Optimization

    Clarke has given a robust definition of subgradients of arbitrary Lipschitz continuous functions f on R^n, but for purposes of minimization algorithms it seems essential that the subgradient multifunction ∂f have additional properties, such as certain special kinds of semicontinuity, which are not automatic consequences of f being Lipschitz continuous. This paper explores properties of ∂f that correspond to f being subdifferentially regular, another concept of Clarke's, and to f being a pointwise supremum of functions that are k times continuously differentiable.
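
    The two notions mentioned in the abstract read as follows in Clarke's standard form (recalled for context, not quoted from this paper): the generalized directional derivative and the subdifferential are

        f^{\circ}(x; d) = \limsup_{y \to x,\; t \downarrow 0} \frac{f(y + t d) - f(y)}{t},
        \qquad
        \partial f(x) = \{\, v \in \mathbb{R}^n : \langle v, d\rangle \le f^{\circ}(x; d) \ \text{for all } d \,\},

    and f is subdifferentially regular at x when the ordinary directional derivative f'(x; d) exists and equals f°(x; d) for every direction d.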

    Truncated Nonsmooth Newton Multigrid for phase-field brittle-fracture problems

    We propose the Truncated Nonsmooth Newton Multigrid Method (TNNMG) as a solver for the spatial problems of the small-strain brittle-fracture phase-field equations. TNNMG is a nonsmooth multigrid method that can solve biconvex, block-separably nonsmooth minimization problems in roughly the time of solving one linear system of equations. It exploits the variational structure inherent in the problem, and handles the pointwise irreversibility constraint on the damage variable directly, without penalization or the introduction of a local history field. Memory consumption is significantly lower compared to approaches based on direct solvers. In the paper we introduce the method and show how it can be applied to several established models of phase-field brittle fracture. We then prove convergence of the solver to a solution of the nonsmooth Euler-Lagrange equations of the spatial problem for any load and initial iterate. Numerical comparisons to an operator-splitting algorithm show a speed increase of more than one order of magnitude, without loss of robustness.
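
    The iteration structure described above can be sketched on a single level (no multigrid hierarchy) for a simple nonnegativity-constrained quadratic model problem; the code below is an illustrative sketch in that spirit, not the solver from the paper, and all names, tolerances and the model problem are made up.

        import numpy as np

        def tnnmg_like_step(A, b, x, lower=0.0):
            """One single-level iteration in the spirit of TNNMG for
            min 0.5 x'Ax - b'x  subject to  x >= lower (a pointwise constraint):
            nonlinear pre-smoothing, truncated correction, projected line search."""
            x = np.array(x, dtype=float)
            n = len(x)
            # 1) nonlinear pre-smoothing: one projected Gauss-Seidel sweep
            for i in range(n):
                r = b[i] - A[i] @ x + A[i, i] * x[i]
                x[i] = max(lower, r / A[i, i])
            # 2) truncated correction: linear solve restricted to the inactive set
            inactive = x > lower + 1e-12
            d = np.zeros(n)
            if inactive.any():
                d[inactive] = np.linalg.solve(A[np.ix_(inactive, inactive)],
                                              (b - A @ x)[inactive])
            # 3) projected line search keeps the iterate feasible and monotone
            energy = lambda y: 0.5 * y @ A @ y - b @ y
            x_best, e_best = x, energy(x)
            for t in (1.0, 0.5, 0.25, 0.125):
                y = np.maximum(lower, x + t * d)
                if energy(y) < e_best:
                    x_best, e_best = y, energy(y)
                    break
            return x_best

        # usage: random SPD system with a nonnegativity ("irreversibility"-like) constraint
        rng = np.random.default_rng(1)
        M = rng.standard_normal((20, 20))
        A, b, x = M @ M.T + 20 * np.eye(20), rng.standard_normal(20), np.zeros(20)
        for _ in range(30):
            x = tnnmg_like_step(A, b, x)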

    Nonsmooth dynamic optimization of systems with varying structure

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 357-365). Author: Mehmet Yunt.
    In this thesis, an open-loop numerical dynamic optimization method for a class of dynamic systems is developed. The structure of the governing equations of the systems under consideration changes depending on the values of the states, parameters and controls; such systems are therefore called systems with varying structure. They occur frequently in models of electric and hydraulic circuits, chemical processes, biological networks and machinery, so determining the parameters and controls that yield optimal performance has been an important research topic. Unlike dynamic optimization problems where the structure of the underlying system is constant, the dynamic optimization of systems with varying structure requires determining the optimal evolution of the system structure in time in addition to optimal parameters and controls. The underlying varying structure results in nonsmooth and discontinuous optimization problems. The nonsmooth single shooting method introduced in this thesis uses concepts from nonsmooth analysis and nonsmooth optimization to solve dynamic optimization problems involving systems with varying structure whose dynamics can be described by locally Lipschitz continuous ordinary or differential-algebraic equations. The method converts the infinite-dimensional dynamic optimization problem into a nonlinear program by parameterizing the controls. Unlike the state of the art, the method does not enumerate possible structures explicitly in the optimization and does not depend on a discretization of the dynamics. Instead, it uses a special integration algorithm to compute state trajectories and derivative information. As a result, the method produces more accurate solutions than the state of the art, for less effort, on problems where the underlying dynamics are highly nonlinear and/or stiff. The thesis develops substitutes for the gradient and the Jacobian of a function in case these quantities do not exist. These substitutes are set-valued maps, and elements of these maps need to be computed for optimization purposes. Differential equations are derived whose solutions furnish the necessary elements. These differential equations have discontinuities in time, and a numerical method for their solution is proposed based on state event location algorithms that detect these discontinuities. Necessary conditions of optimality for nonlinear programs are derived using these substitutes, and it is shown that nonsmooth optimization methods called bundle methods can be used to obtain solutions satisfying these necessary conditions. Case studies compare the method to the state of the art and investigate its complexity empirically.
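
    The state-event-location ingredient mentioned above can be illustrated with a toy system whose right-hand side switches when a state crosses a threshold; the sketch below only demonstrates the generic event-detection idea with scipy, it is not the integration algorithm developed in the thesis, and the model, threshold, and names are made up.

        from scipy.integrate import solve_ivp

        # structure 1: heater on; structure 2: heater off (switch at T = 30)
        def heater_on(t, T):
            return [5.0 - 0.1 * (T[0] - 20.0)]

        def heater_off(t, T):
            return [-0.1 * (T[0] - 20.0)]

        def switch_event(t, T):
            return T[0] - 30.0           # zero crossing marks the structure change
        switch_event.terminal = True      # stop the integration at the event
        switch_event.direction = 1.0      # only when T increases through 30

        # integrate structure 1 until the event is located, then restart with structure 2
        leg1 = solve_ivp(heater_on, (0.0, 10.0), [20.0], events=switch_event, max_step=0.1)
        t_switch = leg1.t_events[0][0]
        leg2 = solve_ivp(heater_off, (t_switch, 10.0), [leg1.y[0, -1]], max_step=0.1)
        print(f"structure change located at t = {t_switch:.3f}")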

    Matrix convex functions with applications to weighted centers for semidefinite programming

    In this paper, we develop various calculus rules for general smooth matrix-valued functions and for the class of matrix convex (or concave) functions first introduced by Loewner and Kraus in the 1930s. We then use these calculus rules and the matrix convex function -log X to study a new notion of weighted convex centers for semidefinite programming (SDP) and show that, with this definition, some known properties of weighted centers for linear programming can be extended to SDP. We also show how the calculus rules for matrix convex functions can be used in the implementation of barrier methods for optimization problems involving nonlinear matrix functions.
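
    One classical calculus rule in this area, the Daleckii-Krein divided-difference formula for the derivative of a smooth matrix-valued function of a symmetric argument, can be sketched as follows; the code is an illustration using f(X) = -log X (the matrix convex function from the weighted-center discussion) and is not taken from the paper.

        import numpy as np

        def matrix_function_and_derivative(X, f, fprime, H):
            """For symmetric X, return f(X) (f applied to the eigenvalues) and the
            Frechet derivative Df(X)[H] via the Daleckii-Krein formula."""
            lam, Q = np.linalg.eigh(X)
            fX = (Q * f(lam)) @ Q.T
            # Loewner matrix of first divided differences of f at the eigenvalues
            n = len(lam)
            L = np.empty((n, n))
            for i in range(n):
                for j in range(n):
                    L[i, j] = (fprime(lam[i]) if np.isclose(lam[i], lam[j])
                               else (f(lam[i]) - f(lam[j])) / (lam[i] - lam[j]))
            DfH = Q @ (L * (Q.T @ H @ Q)) @ Q.T
            return fX, DfH

        # usage with f(X) = -log X on a random symmetric positive definite matrix
        rng = np.random.default_rng(2)
        M = rng.standard_normal((4, 4))
        X = M @ M.T + 4.0 * np.eye(4)
        H = rng.standard_normal((4, 4)); H = 0.5 * (H + H.T)
        fX, DfH = matrix_function_and_derivative(X, lambda x: -np.log(x), lambda x: -1.0 / x, H)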

    Nonmonotone globalization for Anderson acceleration via adaptive regularization

    Anderson acceleration (AA) is a popular method for accelerating fixed-point iterations, but may suffer from instability and stagnation. We propose a globalization method for AA to improve stability and achieve unified global and local convergence. Unlike existing AA globalization approaches that rely on safeguarding operations and might hinder fast local convergence, we adopt a nonmonotone trust-region framework and introduce an adaptive quadratic regularization together with a tailored acceptance mechanism. We prove global convergence and show that our algorithm attains the same local convergence as AA under appropriate assumptions. The effectiveness of our method is demonstrated in several numerical experiments.
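
    For reference, the baseline scheme being globalized looks roughly as follows: a standard Anderson acceleration sketch with window size m, which omits the paper's nonmonotone trust-region test and adaptive regularization; the function names and the toy fixed-point map are illustrative.

        import numpy as np

        def anderson_accelerate(g, x0, m=5, tol=1e-10, max_iter=200):
            """Plain Anderson acceleration for the fixed-point iteration x <- g(x)."""
            xs = [np.asarray(x0, dtype=float)]
            gs = [g(xs[0])]
            for k in range(max_iter):
                r = gs[-1] - xs[-1]                  # current residual g(x) - x
                if np.linalg.norm(r) < tol:
                    break
                if k == 0:
                    x_next = gs[-1]                  # plain fixed-point step
                else:
                    mk = min(m, k)
                    R = np.column_stack([gs[i] - xs[i] for i in range(k - mk, k + 1)])
                    G = np.column_stack(gs[k - mk:k + 1])
                    dR, dG = np.diff(R, axis=1), np.diff(G, axis=1)
                    # least-squares mixing coefficients; this step is a typical
                    # source of the instability mentioned in the abstract
                    gamma, *_ = np.linalg.lstsq(dR, r, rcond=None)
                    x_next = gs[-1] - dG @ gamma
                xs.append(x_next)
                gs.append(g(x_next))
            return xs[-1]

        # usage: accelerate the contraction g(x) = cos(x) applied componentwise
        x_star = anderson_accelerate(np.cos, np.zeros(3))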

    Numerical Analysis of Algorithms for Infinitesimal Associated and Non-Associated Elasto-Plasticity

    The thesis studies nonlinear solution algorithms for problems in infinitesimal elastoplasticity and their numerical realization within a parallel computing framework. New algorithms, such as active set and augmented Lagrangian methods, are proposed and analyzed within a semismooth Newton setting. The analysis is often carried out in function space, which results in stable algorithms. Large-scale computer experiments demonstrate the efficiency of the new algorithms.
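
    A small-scale illustration of the active set / semismooth Newton connection is the classical primal-dual active set iteration for a nonnegativity-constrained quadratic; the sketch below is generic (finite-dimensional, with a made-up model problem and constant c), not the function-space algorithms analyzed in the thesis.

        import numpy as np

        def primal_dual_active_set(A, b, c=1.0, max_iter=50):
            """Semismooth Newton / primal-dual active set sketch for the KKT system of
            min 0.5 x'Ax - b'x  subject to  x >= 0:
                A x - b - lam = 0,   lam = max(0, lam - c * x)."""
            n = len(b)
            x, lam = np.zeros(n), np.zeros(n)
            for _ in range(max_iter):
                active = lam - c * x > 0                 # predicted active (constrained) set
                inactive = ~active
                x_new, lam_new = np.zeros(n), np.zeros(n)
                if inactive.any():
                    x_new[inactive] = np.linalg.solve(A[np.ix_(inactive, inactive)],
                                                      b[inactive])
                lam_new[active] = (A @ x_new - b)[active]
                if np.array_equal(active, lam_new - c * x_new > 0):
                    break                                # active set settled: converged
                x, lam = x_new, lam_new
            return x_new, lam_new

        # usage on a small symmetric positive definite system
        rng = np.random.default_rng(3)
        M = rng.standard_normal((10, 10))
        A, b = M @ M.T + 10.0 * np.eye(10), rng.standard_normal(10)
        x, lam = primal_dual_active_set(A, b)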