This paper proposes and justifies two globally convergent Newton-type methods
to solve unconstrained and constrained problems of nonsmooth optimization by
using tools of variational analysis and generalized differentiation. Both
methods are coderivative-based and employ generalized Hessians (coderivatives
of subgradient mappings) associated with objective functions, which are either
of class C^{1,1}, or are represented in the form of convex
composite optimization, where one of the terms may be extended-real-valued. The
proposed globally convergent algorithms are of two types. The first one extends
the damped Newton method and requires positive-definiteness of the generalized
Hessians for its well-posedness and efficient performance, while the other
algorithm is of the regularized Newton type and is well-defined when the
generalized Hessians are merely positive-semidefinite. The obtained convergence
rates for both methods are at least linear, but become superlinear under the
semismooth* property of subgradient mappings. Problems of convex composite
optimization are investigated with and without the strong convexity assumption
on smooth parts of objective functions by implementing the machinery of
forward-backward envelopes. Numerical experiments are conducted for Lasso
problems and for box-constrained quadratic programs, providing performance
comparisons of the new algorithms and some other first-order and second-order
methods that are highly recognized in nonsmooth optimization.
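
To make the algorithmic ingredients concrete, the following is a minimal, self-contained Python sketch of a regularized semismooth-Newton-style iteration applied to the proximal-gradient residual of a Lasso problem. It is an illustration under assumptions of our own, not the paper's coderivative-based algorithm: the function name lasso_regularized_newton, the step size t, and the Tikhonov parameter mu are hypothetical choices, and the paper's methods additionally use coderivative-based generalized Hessians and line-search damping to secure global convergence.

```python
import numpy as np

# Illustrative sketch (not the paper's coderivative-based algorithm):
# a regularized Newton-type iteration for the Lasso problem
#     min_x 0.5*||A x - b||^2 + lam*||x||_1
# applied to the proximal-gradient (fixed-point) residual
#     F(x) = x - prox_{t*lam*||.||_1}(x - t*A^T(A x - b)).
# One element of the generalized Jacobian of F is built from the 0/1
# derivative pattern of soft-thresholding; a Tikhonov term mu*I keeps the
# Newton system solvable when that Jacobian is only positive-semidefinite.

def soft_threshold(z, tau):
    # Proximal mapping of tau*||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def lasso_regularized_newton(A, b, lam, t=None, mu=1e-8, tol=1e-10, max_iter=100):
    m, n = A.shape
    if t is None:
        t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L with L = ||A||_2^2
    x = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(max_iter):
        grad = AtA @ x - Atb
        z = x - t * grad
        F = x - soft_threshold(z, t * lam)     # fixed-point residual
        if np.linalg.norm(F) <= tol:
            break
        # One element of the generalized Jacobian of the prox: a 0/1 diagonal
        d = (np.abs(z) > t * lam).astype(float)
        J = np.eye(n) - np.diag(d) @ (np.eye(n) - t * AtA)
        # Regularized Newton step: (J + mu*I) p = -F; for simplicity this
        # sketch takes a full step instead of a globalizing line search.
        p = np.linalg.solve(J + mu * np.eye(n), -F)
        x = x + p
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 80))
    x_true = np.zeros(80)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    x_hat = lasso_regularized_newton(A, b, lam=0.1)
    print("residual norm:", np.linalg.norm(A @ x_hat - b))
```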