
    Globally Convergent Coderivative-Based Generalized Newton Methods in Nonsmooth Optimization

    This paper proposes and justifies two globally convergent Newton-type methods to solve unconstrained and constrained problems of nonsmooth optimization by using tools of variational analysis and generalized differentiation. Both methods are coderivative-based and employ generalized Hessians (coderivatives of subgradient mappings) associated with objective functions that are either of class $\mathcal{C}^{1,1}$ or are represented in the form of convex composite optimization, where one of the terms may be extended-real-valued. The proposed globally convergent algorithms are of two types. The first extends the damped Newton method and requires positive-definiteness of the generalized Hessians for its well-posedness and efficient performance, while the other algorithm is of regularized Newton type and is well-defined when the generalized Hessians are merely positive-semidefinite. The obtained convergence rates for both methods are at least linear, but become superlinear under the semismooth$^*$ property of subgradient mappings. Problems of convex composite optimization are investigated with and without the strong convexity assumption on the smooth parts of objective functions by implementing the machinery of forward-backward envelopes. Numerical experiments are conducted for Lasso problems and for box-constrained quadratic programs, providing performance comparisons of the new algorithms with other first-order and second-order methods that are highly recognized in nonsmooth optimization.
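    The regularized Newton idea described in this abstract can be illustrated with a toy sketch. The following is not the paper's coderivative-based algorithm; it is a plain smooth-quadratic version under the assumption that a gradient and a (possibly singular) positive-semidefinite Hessian are available, solving $(H + \mu\|g\|I)\,d = -g$ at each step so that the linear system stays nonsingular:

    ```python
    import numpy as np

    def regularized_newton(grad, hess, x0, mu=1.0, tol=1e-10, max_iter=100):
        """Regularized Newton iteration: solve (H + mu*||g||*I) d = -g at each
        step. The ||g||-scaled regularization keeps the system nonsingular when
        H is merely positive-semidefinite and vanishes as the iterates converge."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            gn = np.linalg.norm(g)
            if gn < tol:
                break
            H = hess(x)
            x = x + np.linalg.solve(H + mu * gn * np.eye(x.size), -g)
        return x

    # Quadratic f(x) = 0.5*x'Ax - b'x with a singular (PSD) Hessian,
    # where an unregularized Newton step would be undefined:
    A = np.array([[2.0, 0.0], [0.0, 0.0]])
    b = np.array([2.0, 0.0])
    x = regularized_newton(lambda x: A @ x - b, lambda x: A, x0=[5.0, 3.0])
    ```

    Since the regularization scales with the gradient norm, the perturbation of the Newton system disappears near a stationary point, which is what allows superlinear rates under additional assumptions.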

    First-order conditions for the optimal control of learning-informed nonsmooth PDEs

    In this paper we study the optimal control of a class of semilinear elliptic partial differential equations which have nonlinear constituents that are only accessible by data and are approximated by nonsmooth ReLU neural networks. The optimal control problem is studied in detail. In particular, the existence and uniqueness of the state equation are shown, and continuity as well as directional differentiability properties of the corresponding control-to-state map are established. Based on approximation capabilities of the pertinent networks, we address fundamental questions regarding approximating properties of the learning-informed control-to-state map and the solution of the corresponding optimal control problem. Finally, several stationarity conditions are derived based on different notions of generalized differentiability.
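    The directional differentiability mentioned in this abstract is already visible in a single scalar ReLU, which is a small illustrative example rather than anything from the paper itself: the one-sided directional derivative exists at every point, but at the kink it is nonlinear in the direction, so no (Gâteaux) derivative exists there.

    ```python
    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def relu_dir_deriv(x, d):
        """One-sided directional derivative relu'(x; d) = lim_{t->0+} (relu(x + t*d) - relu(x)) / t.
        It exists everywhere, but at x = 0 it equals max(d, 0), which is
        nonlinear in d: ReLU is directionally differentiable yet not
        Gateaux differentiable at the kink."""
        if x > 0:
            return d
        if x < 0:
            return 0.0
        return max(d, 0.0)

    # Finite-difference check of the one-sided limit at the kink:
    t = 1e-8
    fd = (relu(0.0 + t * 1.0) - relu(0.0)) / t  # matches relu_dir_deriv(0.0, 1.0)
    ```

    This is exactly the kind of nonsmoothness that forces the weaker stationarity notions derived in the paper in place of classical first-order conditions.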

    Optimal control of geometric partial differential equations

    Optimal control problems for geometric (evolutionary) partial differential inclusions are considered. The focus is on problems which, in addition to the nonlinearity due to geometric evolution, contain optimization-theoretic challenges because of non-smoothness. The latter might stem from energies containing non-smooth constituents such as obstacle-type potentials or terms modeling, e.g., pinning phenomena in microfluidics. Several techniques to remedy the resulting constraint degeneracy when deriving stationarity conditions are presented. A particular focus is on Yosida-type mollifications approximating the original degenerate problem by a sequence of nondegenerate nonconvex optimal control problems. This technique is also the starting point for the development of numerical solution schemes. In this context, dual-weighted residual based error estimates are also addressed to facilitate adaptive mesh refinement. Concerning the underlying state model, sharp and diffuse interface formulations are discussed. While the former always allows for accurately tracing interfacial motion, the latter model may be dictated by the underlying physical phenomenon, where near the interface mixed phases may exist, but it may also be used as an approximate model for (sharp) interface motion. In view of the latter, (sharp interface) limits of diffuse interface models are addressed. For the sake of presentation, this exposition confines itself to phase field type diffuse interface models and, moreover, develops the optimal control of either of the two interface models along model applications. More precisely, electro-wetting on dielectric is used in the sharp interface context, and the control of multiphase fluids involving spinodal decomposition highlights the phase field technique.
    Mathematically, the former leads to a Hele-Shaw flow with geometric boundary conditions involving a complementarity system due to contact line pinning, and the latter gives rise to a Cahn-Hilliard Navier-Stokes model including a non-smooth obstacle-type potential leading to a variational inequality constraint.
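    The Yosida-type mollification mentioned in this abstract can be illustrated on the simplest obstacle-type term. The sketch below is a generic textbook Moreau-Yosida smoothing of the indicator of the constraint $x \ge 0$, not the specific mollification used for the control problems here: the hard constraint is replaced by a smooth quadratic penalty on the violation, and the original constraint is recovered as the parameter $\gamma \to 0$.

    ```python
    import numpy as np

    def moreau_yosida_obstacle(x, gamma):
        """Moreau-Yosida mollification of the indicator of the constraint x >= 0:
        phi_gamma(x) = (1 / (2*gamma)) * dist(x, [0, inf))^2
                     = max(0, -x)^2 / (2*gamma).
        Continuously differentiable for gamma > 0; converges pointwise to the
        hard constraint (0 on the feasible set, +inf outside) as gamma -> 0."""
        return np.maximum(0.0, -x) ** 2 / (2.0 * gamma)

    def moreau_yosida_grad(x, gamma):
        # Derivative -max(0, -x) / gamma: Lipschitz with constant 1/gamma,
        # so standard smooth optimal-control machinery applies for each gamma.
        return -np.maximum(0.0, -x) / gamma
    ```

    Replacing the non-smooth obstacle by such a family of smooth penalties is what turns the degenerate problem into a sequence of nondegenerate (though nonconvex) control problems amenable to standard stationarity analysis.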


    Let's Make Block Coordinate Descent Go Fast: Faster Greedy Rules, Message-Passing, Active-Set Complexity, and Superlinear Convergence

    Block coordinate descent (BCD) methods are widely used for large-scale numerical optimization because of their cheap iteration costs, low memory requirements, amenability to parallelization, and ability to exploit problem structure. Three main algorithmic choices influence the performance of BCD methods: the block partitioning strategy, the block selection rule, and the block update rule. In this paper we explore all three of these building blocks and propose variations for each that can lead to significantly faster BCD methods. We (i) propose new greedy block-selection strategies that guarantee more progress per iteration than the Gauss-Southwell rule; (ii) explore practical issues like how to implement the new rules when using "variable" blocks; (iii) explore the use of message-passing to compute matrix or Newton updates efficiently on huge blocks for problems with a sparse dependency between variables; and (iv) consider optimal active manifold identification, which leads to bounds on the "active set complexity" of BCD methods and to superlinear convergence for certain problems with sparse solutions (and in some cases finite termination at an optimal solution). We support all of our findings with numerical results for the classic machine learning problems of least squares, logistic regression, multi-class logistic regression, label propagation, and L1-regularization.
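    The Gauss-Southwell rule that this abstract takes as its baseline can be sketched in its simplest single-coordinate form. This is the classic greedy rule for a least-squares objective, not the paper's faster block variants: at each iteration the coordinate with the largest-magnitude partial derivative is updated with its exact one-dimensional minimizer, and the gradient is refreshed with a cheap rank-one update.

    ```python
    import numpy as np

    def gs_coordinate_descent(A, b, max_iter=5000, tol=1e-10):
        """Coordinate descent with the greedy Gauss-Southwell rule for
        f(x) = 0.5 * ||A x - b||^2: update the coordinate whose partial
        derivative is largest in magnitude, using the exact 1-D minimizer."""
        AtA = A.T @ A
        diag = np.diag(AtA)
        x = np.zeros(A.shape[1])
        g = -(A.T @ b)                     # gradient of f at x = 0
        for _ in range(max_iter):
            i = int(np.argmax(np.abs(g)))  # Gauss-Southwell selection
            if abs(g[i]) < tol:
                break
            step = g[i] / diag[i]          # exact minimization along coordinate i
            x[i] -= step
            g -= step * AtA[:, i]          # rank-one gradient update, O(n) per step
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)
    x = gs_coordinate_descent(A, b)
    ```

    Compared with cyclic or random selection, the greedy rule spends every iteration on the currently steepest coordinate; the paper's contribution is selection and update rules that provably make even more progress per iteration than this baseline.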