Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth
optimization problems in composite form. The algorithms are based on a
reformulation of the original nonsmooth problem as the unconstrained
minimization of a continuously differentiable function, namely the
forward-backward envelope (FBE). The first algorithm is based on a standard
line search strategy, whereas the second one retains the global efficiency
estimates of the corresponding first-order methods while achieving fast
asymptotic convergence rates. Furthermore, they are computationally attractive
since each Newton iteration requires the approximate solution of a linear
system of usually small dimension.
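To make the FBE construction concrete, the following minimal numpy sketch evaluates it for a Lasso instance, f(x) = 0.5*||Ax - b||^2 and g(x) = lam*||x||_1; the step size gamma, the soft-thresholding prox, and the fixed-point sanity check are standard textbook choices rather than details taken from the paper.

```python
import numpy as np

def soft_threshold(y, t):
    """Prox of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def fbe(x, A, b, lam, gamma):
    """Forward-backward envelope of f(x) + g(x) with
    f(x) = 0.5*||A x - b||^2 and g(x) = lam*||x||_1:

        FBE_gamma(x) = f(x) - (gamma/2)*||grad f(x)||^2
                       + g_gamma(x - gamma*grad f(x)),

    where g_gamma is the Moreau envelope of g.
    """
    r = A @ x - b
    grad = A.T @ r
    y = x - gamma * grad                    # forward (gradient) step
    z = soft_threshold(y, gamma * lam)      # backward (proximal) step
    g_env = lam * np.abs(z).sum() + (z - y) @ (z - y) / (2.0 * gamma)
    return 0.5 * (r @ r) - 0.5 * gamma * (grad @ grad) + g_env

# Sanity check: at a fixed point of the forward-backward map, the FBE
# value coincides with the original composite objective.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 40)), rng.standard_normal(20)
lam = 0.1
gamma = 1.0 / np.linalg.norm(A, 2) ** 2     # gamma <= 1/L, L = ||A||_2^2
x = np.zeros(40)
for _ in range(2000):                        # plain forward-backward splitting
    x = soft_threshold(x - gamma * (A.T @ (A @ x - b)), gamma * lam)
obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
print(abs(fbe(x, A, b, lam, gamma) - obj))   # small: the two agree at fixed points
```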
A Semismooth Newton Stochastic Proximal Point Algorithm with Variance Reduction
We develop an implementable stochastic proximal point (SPP) method for a
class of weakly convex, composite optimization problems. The proposed
stochastic proximal point algorithm incorporates a variance reduction mechanism
and the resulting SPP updates are solved using an inexact semismooth Newton
framework. We establish detailed convergence results that take the inexactness
of the SPP steps into account and that are in accordance with existing
convergence guarantees of (proximal) stochastic variance-reduced gradient
methods. Numerical experiments show that the proposed algorithm competes
favorably with other state-of-the-art methods and achieves higher robustness
with respect to step size selection.
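The update pattern can be sketched for a least-squares finite sum, where each proximal subproblem is solved exactly by a Sherman-Morrison linear solve; the step size alpha, the epoch-based SVRG-style snapshots, and the quadratic setting are illustrative assumptions, and a genuinely nonsmooth weakly convex f_i would require the paper's inexact semismooth Newton subproblem solver instead.

```python
import numpy as np

def spp_vr(A, b, alpha=0.5, epochs=30, seed=0):
    """Stochastic proximal point with SVRG-style variance reduction for
    the least-squares finite sum f(x) = (1/n) * sum_i 0.5*(a_i^T x - b_i)^2.

    Each inner step solves the strongly convex subproblem

        x+ = argmin_x  f_i(x) + <v_i, x> + ||x - x_k||^2 / (2*alpha),

    with correction v_i = grad f(x_tilde) - grad f_i(x_tilde). For a
    quadratic f_i one Sherman-Morrison solve gives x+ exactly; a
    nonsmooth f_i would need an (inexact) semismooth Newton solver here.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        x_tilde = x.copy()
        full_grad = A.T @ (A @ x_tilde - b) / n          # snapshot gradient
        for _ in range(n):
            i = rng.integers(n)
            a, bi = A[i], b[i]
            v = full_grad - a * (a @ x_tilde - bi)       # VR correction
            # Optimality: (a a^T + I/alpha) x+ = bi*a - v + x/alpha
            rhs = bi * a - v + x / alpha
            x = alpha * (rhs - a * (alpha * (a @ rhs)) / (1.0 + alpha * (a @ a)))
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
b = A @ rng.standard_normal(10)                          # consistent data
x_hat = spp_vr(A, b)
print(np.linalg.norm(x_hat - np.linalg.solve(A.T @ A, A.T @ b)))  # ~0
```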
Global and Quadratic Convergence of Newton Hard-Thresholding Pursuit
Algorithms based on the hard thresholding principle have been well studied,
with sound theoretical guarantees, in compressed sensing and more general
sparsity-constrained optimization. It is widely observed in existing empirical
studies that when a restricted Newton step is used (as the debiasing step),
hard-thresholding algorithms tend to meet their halting conditions in
significantly fewer iterations and are very efficient. Hence, the resulting
Newton hard-thresholding algorithms call for stronger theoretical guarantees
than their simple hard-thresholding counterparts. This paper
provides a theoretical justification for the use of the restricted Newton step.
We build our theory and algorithm, Newton Hard-Thresholding Pursuit (NHTP), for
sparsity-constrained optimization. Our main result shows that NHTP is
quadratically convergent under the standard assumption of restricted strong
convexity and smoothness. We also establish its global convergence to a
stationary point under a weaker assumption. In the special case of
compressed sensing, NHTP effectively reduces to some of the existing
hard-thresholding algorithms with a Newton step. Consequently, our fast
convergence result justifies why those algorithms perform better than without
the Newton step. The efficiency of NHTP was demonstrated on both synthetic and
real data in compressed sensing and sparse logistic regression.
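The alternation described above, hard-thresholding to select a support followed by a restricted Newton (debiasing) step, can be sketched as follows for the compressed-sensing objective; because f is quadratic, the restricted Newton step is the exact least-squares solve on the support, while the step size eta and the stopping rule are illustrative choices rather than the paper's safeguarded scheme.

```python
import numpy as np

def htp_newton(A, b, s, eta=1.0, iters=50, tol=1e-10):
    """Hard-thresholding pursuit with a restricted Newton (debiasing) step
    for  min 0.5*||A x - b||^2  subject to  ||x||_0 <= s.

    Per iteration: (1) gradient step, keep the s largest entries in
    magnitude to choose the support; (2) restricted Newton step on that
    support, which for this quadratic f is the s x s least-squares solve.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        u = x - eta * A.T @ (A @ x - b)         # gradient step
        support = np.argsort(np.abs(u))[-s:]    # hard thresholding
        x_new = np.zeros_like(x)
        A_S = A[:, support]
        x_new[support] = np.linalg.solve(A_S.T @ A_S, A_S.T @ b)
        if np.linalg.norm(x_new - x) <= tol:    # support has settled
            return x_new
        x = x_new
    return x

# Recover a 5-sparse signal from 60 Gaussian measurements.
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, size=5, replace=False)] = rng.standard_normal(5)
b = A @ x_true
print(np.linalg.norm(htp_newton(A, b, s=5) - x_true))   # small on success
```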
Globally Convergent Coderivative-Based Generalized Newton Methods in Nonsmooth Optimization
This paper proposes and justifies two globally convergent Newton-type methods
to solve unconstrained and constrained problems of nonsmooth optimization by
using tools of variational analysis and generalized differentiation. Both
methods are coderivative-based and employ generalized Hessians (coderivatives
of subgradient mappings) associated with objective functions, which are either
of class $\mathcal{C}^{1,1}$, or are represented in the form of convex
composite optimization, where one of the terms may be extended-real-valued. The
proposed globally convergent algorithms are of two types. The first one extends
the damped Newton method and requires positive-definiteness of the generalized
Hessians for its well-posedness and efficient performance, while the other
algorithm is of the regularized Newton type, being well-defined when the
generalized Hessians are merely positive-semidefinite. The obtained convergence
rates for both methods are at least linear, but become superlinear under the
semismooth property of subgradient mappings. Problems of convex composite
optimization are investigated with and without the strong convexity assumption
on smooth parts of objective functions by implementing the machinery of
forward-backward envelopes. Numerical experiments are conducted for Lasso
problems and for box-constrained quadratic programs, providing performance
comparisons of the new algorithms with some other first-order and second-order
methods that are highly recognized in nonsmooth optimization.
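A minimal sketch of the second, regularized variant on a smooth test function whose Hessian is only positive semidefinite at the solution; the regularization weight mu_k = ||g_k|| and the Armijo parameters are illustrative assumptions, and the paper's methods act on generalized Hessians (coderivatives of subgradient mappings) rather than the classical Hessian used here.

```python
import numpy as np

def regularized_newton(f, grad, hess, x0, iters=100, tol=1e-10,
                       beta=0.5, sigma=1e-4):
    """Globally convergent regularized Newton method: solve
    (H_k + mu_k I) d = -g_k, then damp the step with Armijo backtracking.
    The regularization keeps the step well defined when H_k is only
    positive semidefinite.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:
            break
        H = hess(x)
        mu = gnorm                                # mu_k = ||g_k|| (illustrative)
        d = np.linalg.solve(H + mu * np.eye(x.size), -g)
        t = 1.0                                   # Armijo damping
        while f(x + t * d) > f(x) + sigma * t * (g @ d):
            t *= beta
        x = x + t * d
    return x

# Test problem whose Hessian is merely positive semidefinite at the
# solution x* = 0:  f(x) = ||x||^4 / 4.
f = lambda x: 0.25 * np.linalg.norm(x) ** 4
grad = lambda x: (x @ x) * x
hess = lambda x: (x @ x) * np.eye(x.size) + 2.0 * np.outer(x, x)
print(regularized_newton(f, grad, hess, np.array([2.0, -1.0])))  # -> ~[0, 0]
```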
Numerical Analysis of Algorithms for Infinitesimal Associated and Non-Associated Elasto-Plasticity
This thesis studies nonlinear solution algorithms for problems in
infinitesimal elastoplasticity and their numerical realization within
a parallel computing framework. New algorithms, such as active set and
augmented Lagrangian methods, are proposed and analyzed within a
semismooth Newton setting. The analysis is often carried out in
function space, which results in stable algorithms. Large-scale
computer experiments demonstrate the efficiency of the new algorithms.
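A minimal sketch of the active set idea in finite dimensions: the primal-dual active set method, which coincides with a semismooth Newton method applied to the max-based complementarity reformulation, here for a discretized obstacle-type quadratic program standing in for the elastoplastic contact conditions; the matrix Q, load f, obstacle psi, and parameter c are illustrative.

```python
import numpy as np

def primal_dual_active_set(Q, f, psi, c=1.0, iters=50):
    """Primal-dual active set method (a semismooth Newton method on the
    complementarity reformulation) for the obstacle-type QP
        min 0.5 x^T Q x - f^T x   s.t.   x <= psi.
    KKT system: Qx - f + lam = 0,  lam = max(0, lam + c*(x - psi)).
    """
    n = len(f)
    x, lam = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        active = lam + c * (x - psi) > 0          # predicted contact set
        inact = ~active
        x_new = psi.copy()                        # x = psi on the active set
        if inact.any():
            # Solve Q_II x_I = f_I - Q_IA psi_A on the inactive set
            x_new[inact] = np.linalg.solve(
                Q[np.ix_(inact, inact)],
                f[inact] - Q[np.ix_(inact, active)] @ psi[active])
        lam_new = np.zeros(n)
        lam_new[active] = (f - Q @ x_new)[active]  # multiplier on active set
        if np.array_equal(active, lam_new + c * (x_new - psi) > 0):
            return x_new, lam_new                  # active set settled: done
        x, lam = x_new, lam_new
    return x, lam

# 1D obstacle problem: discrete Laplacian, constant load, obstacle psi = 0.2.
n = 50
Q = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = 0.05 * np.ones(n)
x, lam = primal_dual_active_set(Q, f, psi=0.2 * np.ones(n))
print(x.max(), (lam > 0).sum())   # x capped at 0.2 where the obstacle binds
```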