A descent subgradient method using Mifflin line search for nonsmooth nonconvex optimization
We propose a descent subgradient algorithm for minimizing a real function,
assumed to be locally Lipschitz, but not necessarily smooth or convex. To find
an effective descent direction, the Goldstein subdifferential is approximated
through an iterative process. The method enjoys a new two-point variant of
Mifflin line search in which the subgradients are arbitrary. Thus, the line
search procedure is easy to implement. Moreover, in comparison to bundle
methods, the quadratic subproblems have a simple structure, and to handle
nonconvexity the proposed method requires no algorithmic modification. We study
the global convergence of the method and prove that any accumulation point of
the generated sequence is Clarke stationary, assuming that the objective is
weakly upper semismooth. We illustrate the efficiency and effectiveness of the
proposed algorithm on a collection of academic and semi-academic test problems.
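As a rough illustration of the bundle idea behind such methods, the sketch below approximates the Goldstein δ-subdifferential by enriching a set of subgradients collected inside the δ-ball and extracting the minimum-norm element of their convex hull from a small quadratic subproblem. This is a minimal sketch under generic oracle assumptions; the function names, parameters, and sufficient-decrease test are illustrative and are not the paper's Mifflin line search.

```python
import numpy as np
from scipy.optimize import minimize

def min_norm_element(G):
    """Minimum-norm point of conv{rows of G}: solve the small QP
       min ||G^T w||^2  s.t.  w >= 0, sum(w) = 1   (simplex constraint)."""
    m = G.shape[0]
    Q = G @ G.T
    res = minimize(lambda w: w @ Q @ w, np.full(m, 1.0 / m),
                   jac=lambda w: 2.0 * Q @ w,
                   bounds=[(0.0, 1.0)] * m,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
                   method="SLSQP")
    return res.x @ G

def goldstein_descent_direction(f, subgrad, x, delta=1e-2, c=0.5,
                                max_iter=30, tol=1e-8):
    """Enrich a bundle of subgradients taken inside the delta-ball around x
    until the minimum-norm convex combination g either certifies approximate
    delta-stationarity or yields a direction with sufficient decrease."""
    G = np.atleast_2d(subgrad(x))
    for _ in range(max_iter):
        g = min_norm_element(G)
        ng = np.linalg.norm(g)
        if ng < tol:
            return None                              # approximately delta-stationary
        d = -g / ng
        if f(x + delta * d) <= f(x) - c * delta * ng:
            return d                                 # sufficient decrease: accept d
        G = np.vstack([G, subgrad(x + delta * d)])   # enrich the bundle and retry
    return d
```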
First-Order Methods for Nonsmooth Nonconvex Functional Constrained Optimization with or without Slater Points
Constrained optimization problems where both the objective and constraints
may be nonsmooth and nonconvex arise across many learning and data science
settings. In this paper, we show a simple first-order method finds a feasible,
$\epsilon$-stationary point at a convergence rate of $O(\epsilon^{-4})$ without
relying on compactness or Constraint Qualification (CQ). When CQ holds, this
convergence is measured by approximately satisfying the Karush-Kuhn-Tucker
conditions. When CQ fails, we guarantee the attainment of weaker Fritz-John
conditions. As an illustrative example, our method stably converges on
piecewise quadratic SCAD regularized problems despite frequent violations of
constraint qualification. The considered algorithm is similar to those of
"Quadratically regularized subgradient methods for weakly convex optimization
with weakly convex constraints" by Ma et al. and "Stochastic first-order
methods for convex and nonconvex functional constrained optimization" by Boob
et al. (whose guarantees further assume compactness and CQ), iteratively taking
inexact proximal steps, computed via an inner loop applying a switching
subgradient method to a strongly convex constrained subproblem. Our
non-Lipschitz analysis of the switching subgradient method appears to be new
and may be of independent interest.
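For readers unfamiliar with the inner-loop machinery, the following minimal sketch shows the basic switching rule of a Polyak-style switching subgradient method for min f(x) s.t. g(x) <= 0. The paper instead applies this rule to strongly convex proximal subproblems inside an inner loop; all oracle signatures below are hypothetical.

```python
import numpy as np

def switching_subgradient(f, f_sub, g, g_sub, x0, n_steps=1000, tol=1e-3):
    """Switching subgradient method for min f(x) s.t. g(x) <= 0.
    f_sub/g_sub are hypothetical oracles returning one subgradient each."""
    x = x0.astype(float)
    best_x, best_f = x.copy(), np.inf
    for t in range(n_steps):
        if g(x) > tol:
            s = g_sub(x)                         # violated: step on the constraint
        else:
            s = f_sub(x)                         # near-feasible: step on the objective
            if f(x) < best_f:
                best_x, best_f = x.copy(), f(x)  # track best near-feasible iterate
        # diminishing step along the normalized subgradient
        x = x - (1.0 / np.sqrt(t + 1)) * s / max(np.linalg.norm(s), 1e-12)
    return best_x
```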
Optimizing condition numbers
In this paper we study the problem of minimizing condition numbers over a compact convex subset of the cone of symmetric positive semidefinite matrices. We show that the condition number is a Clarke regular strongly pseudoconvex function. We prove that a global solution of the problem can be approximated by an exact or an inexact solution of a nonsmooth convex program. This asymptotic analysis provides a valuable tool for designing an implementable algorithm for solving the problem of minimizing condition numbers.
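To make the problem concrete, here is a toy instance of our own devising (not from the paper): the condition number κ(A) = λ_max(A)/λ_min(A) minimized over a line segment between two symmetric positive definite matrices, a simple compact convex subset of the SPD cone.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cond_spd(A):
    """Condition number kappa(A) = lambda_max / lambda_min of an SPD matrix."""
    w = np.linalg.eigvalsh(A)   # eigenvalues in ascending order
    return w[-1] / w[0]

# Illustrative data: the segment {(1-t) A0 + t A1 : t in [0, 1]} is a
# compact convex subset of the SPD cone.
rng = np.random.default_rng(0)
M0, M1 = rng.standard_normal((2, 4, 4))
A0 = M0 @ M0.T + 4 * np.eye(4)
A1 = M1 @ M1.T + 4 * np.eye(4)

res = minimize_scalar(lambda t: cond_spd((1 - t) * A0 + t * A1),
                      bounds=(0.0, 1.0), method="bounded")
print(f"best t = {res.x:.3f}, kappa = {res.fun:.3f}")
```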
Algorithms for Difference-of-Convex (DC) Programs Based on Difference-of-Moreau-Envelopes Smoothing
In this paper we consider minimization of a difference-of-convex (DC)
function with and without linear constraints. We first study a smooth
approximation of a generic DC function, termed difference-of-Moreau-envelopes
(DME) smoothing, where both components of the DC function are replaced by their
respective Moreau envelopes. The resulting smooth approximation is shown to be
Lipschitz differentiable, to capture the stationary points and the local and
global minima of the original DC function, and to enjoy growth conditions such as
level-boundedness and coercivity, for broad classes of DC functions. We then
develop four algorithms for solving DC programs with and without linear
constraints based on the DME smoothing. In particular, for a smoothed DC
program without linear constraints, we show that the classic gradient descent
method as well as an inexact variant can obtain a stationary solution in the
limit with a convergence rate of $O(K^{-1/2})$, where $K$ is the
number of proximal evaluations of both components. Furthermore, when the DC
program is explicitly constrained in an affine subspace, we combine the
smoothing technique with the augmented Lagrangian function and derive two
variants of the augmented Lagrangian method (ALM), named LCDC-ALM and composite
LCDC-ALM, focusing on different structures of the DC objective function. We
show that both algorithms find an $\epsilon$-approximate stationary solution of
the original DC program in $O(\epsilon^{-2})$ iterations. Compared
to existing methods designed for linearly constrained weakly convex
minimization, the proposed ALM-based algorithms can be applied to a broader
class of problems, where the objective contains a nonsmooth concave component.
Finally, numerical experiments are presented to demonstrate the performance of
the proposed algorithms.
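A minimal sketch of the DME idea, assuming both proximal operators are available in closed form: since the Moreau envelope satisfies $\nabla e_{\lambda h}(x) = (x - \mathrm{prox}_{\lambda h}(x))/\lambda$, each gradient step on the smoothed function costs one prox evaluation per DC component. The toy DC function f(x) = ‖x‖₁ − ‖x‖₂ below is our choice for illustration, not an example from the paper.

```python
import numpy as np

def moreau_grad(prox, x, lam):
    """Gradient of the Moreau envelope e_{lam*h}: (x - prox_{lam*h}(x)) / lam."""
    return (x - prox(x, lam)) / lam

def prox_l1(x, lam):
    """Prox of lam*||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_l2(x, lam):
    """Prox of lam*||.||_2 (block soft-thresholding)."""
    nx = np.linalg.norm(x)
    return x * max(1.0 - lam / nx, 0.0) if nx > 0 else x

def dme_gradient_descent(x0, lam=0.5, step=0.25, n_iter=200):
    """Gradient descent on the DME smoothing of f = ||x||_1 - ||x||_2:
    F_lam(x) = e_{lam f1}(x) - e_{lam f2}(x); each iteration costs one
    prox evaluation of each DC component."""
    x = x0.copy()
    for _ in range(n_iter):
        grad = moreau_grad(prox_l1, x, lam) - moreau_grad(prox_l2, x, lam)
        x = x - step * grad
    return x

print(dme_gradient_descent(np.array([3.0, -1.5, 0.2])))
```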
An adaptive sampling sequential quadratic programming method for nonsmooth stochastic optimization with upper-$\mathcal{C}^2$ objective
We propose an optimization algorithm that incorporates adaptive sampling for
stochastic nonsmooth nonconvex optimization problems with upper-$\mathcal{C}^2$
objective functions. Upper-$\mathcal{C}^2$ is a weakly concave property that
exists naturally in many applications, particularly certain classes of
solutions to parametric optimization problems, e.g., the recourse function in
stochastic programming and projections onto closed sets. Our algorithm is a stochastic
sequential quadratic programming (SQP) method extended to nonsmooth problems
with upper-$\mathcal{C}^2$ objectives and is globally convergent in expectation
with bounded algorithmic parameters. The capabilities of our algorithm are
demonstrated by solving a joint production, pricing and shipment problem, as
well as a realistic optimal power flow problem as used in current power grid
industry practice.
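The adaptive-sampling ingredient can be illustrated with a norm-test heuristic: grow the batch until the estimated standard error of the averaged stochastic subgradient is small relative to its norm. The sketch below is a generic illustration with hypothetical oracle names (`grad_i` returns one per-sample subgradient), not the paper's SQP method.

```python
import numpy as np

def adaptive_sample_gradient(grad_i, x, n_start=8, n_max=4096, theta=0.5, rng=None):
    """Norm-test style adaptive sampling: grow the batch until the estimated
    standard error of the averaged stochastic (sub)gradient is at most
    theta times its norm. grad_i(x, rng) is a hypothetical one-sample oracle."""
    rng = np.random.default_rng() if rng is None else rng
    n = n_start
    while True:
        G = np.array([grad_i(x, rng) for _ in range(n)])  # n stochastic subgradients
        gbar = G.mean(axis=0)
        se = np.sqrt(G.var(axis=0, ddof=1).sum() / n)     # std. error of the mean
        if se <= theta * np.linalg.norm(gbar) or n >= n_max:
            return gbar, n
        n *= 2                                            # double and re-draw (simple, not sample-efficient)
```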