An Inexact Successive Quadratic Approximation Method for Convex L-1 Regularized Optimization
We study a Newton-like method for the minimization of an objective function
that is the sum of a smooth convex function and an l-1 regularization term.
This method, which is sometimes referred to in the literature as a proximal
Newton method, computes a step by minimizing a piecewise quadratic model of the
objective function. In order to make this approach efficient in practice, it is
imperative to perform this inner minimization inexactly. In this paper, we give
inexactness conditions that guarantee global convergence and that can be used
to control the local rate of convergence of the iteration. Our inexactness
conditions are based on a semi-smooth function that represents a (continuous)
measure of the optimality conditions of the problem, and that embodies the
soft-thresholding iteration. We give careful consideration to the algorithm
employed for the inner minimization, and report numerical results on two test
sets originating in machine learning.
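To make the optimality measure and the inexact inner solve concrete, the following is a minimal NumPy sketch, not the authors' implementation: the measure is the norm of the residual of one soft-thresholding (ISTA) step, and the inner solver runs ISTA on the piecewise quadratic model until the model's own measure falls below a fraction eta of the outer measure. The function names, the unit outer step, and this particular stopping fraction are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (the soft-thresholding map).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def opt_measure(x, g, lam, alpha=1.0):
    # Continuous optimality measure for min f(x) + lam*||x||_1:
    # the norm of the residual of one soft-thresholding (ISTA) step.
    # It vanishes exactly at minimizers.
    return np.linalg.norm(x - soft_threshold(x - alpha * g, alpha * lam))

def inexact_prox_newton(grad_f, hess_f, lam, x0, eta=0.1, tol=1e-6,
                        max_outer=50, max_inner=500):
    # Outer loop: form the piecewise quadratic model
    #   q(d) = g'd + 0.5 d'Hd + lam*||x + d||_1
    # and minimize it inexactly with ISTA steps on the model.
    x = x0.copy()
    for _ in range(max_outer):
        g, H = grad_f(x), hess_f(x)
        outer_res = opt_measure(x, g, lam)
        if outer_res <= tol:
            break
        step = 1.0 / np.linalg.norm(H, 2)   # 1/L for the quadratic model
        z = x.copy()
        for _ in range(max_inner):
            gz = g + H @ (z - x)            # gradient of the smooth model part
            z = soft_threshold(z - step * gz, step * lam)
            # Assumed inexactness rule: stop the inner solve once the
            # model's own measure is a fraction eta of the outer measure.
            if opt_measure(z, g + H @ (z - x), lam) <= eta * outer_res:
                break
        x = z   # unit step; a globalizing line search could be added here
    return x
```

Shrinking eta tightens the inner solves and, in schemes of this kind, is the lever that trades inner iteration cost against the local convergence rate of the outer iteration.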
Sparsity Constrained Nonlinear Optimization: Optimality Conditions and Algorithms
This paper treats the problem of minimizing a general continuously
differentiable function subject to sparsity constraints. We present and analyze
several different optimality criteria which are based on the notions of
stationarity and coordinate-wise optimality. These conditions are then used to
derive three numerical algorithms aimed at finding points satisfying the
resulting optimality criteria: the iterative hard thresholding method and the
greedy and partial sparse-simplex methods. The first algorithm is essentially a
gradient projection method while the remaining two algorithms are of coordinate
descent type. The theoretical convergence of these methods and their relations
to the derived optimality conditions are studied. The algorithms and results
are illustrated by several numerical examples.
Comment: submitted to SIAM Optimization
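As a concrete instance of the first of these algorithms, here is a minimal NumPy sketch of iterative hard thresholding for min f(x) subject to ||x||_0 <= s, i.e. the gradient-projection iteration the abstract describes. The step-size choice and the function names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def hard_threshold(z, s):
    # Keep the s largest-magnitude entries of z, zero the rest:
    # the Euclidean projection onto the set {x : ||x||_0 <= s}.
    out = np.zeros_like(z)
    idx = np.argpartition(np.abs(z), -s)[-s:]
    out[idx] = z[idx]
    return out

def iht(grad_f, x0, s, step=1.0, tol=1e-8, max_iter=1000):
    # Iterative hard thresholding: the gradient-projection iteration
    #   x <- H_s(x - step * grad f(x)).
    # For L-smooth f, step < 1/L is a standard sufficient condition
    # for convergence to a point satisfying a stationarity condition.
    x = hard_threshold(x0, s)
    for _ in range(max_iter):
        x_new = hard_threshold(x - step * grad_f(x), s)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x
```

Note that the projection H_s is set-valued when magnitudes tie; this sketch breaks ties arbitrarily via argpartition, which is a common practical choice.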