A derivative-free VU-algorithm for convex finite-max problems
The VU-algorithm is a superlinearly convergent method for
minimizing nonsmooth, convex functions. At each iteration, the algorithm works
with a certain V-space and its orthogonal U-space, such that the
nonsmoothness of the objective function is concentrated on its projection onto
the V-space, while on the U-space the projection is
smooth. This structure allows for an alternation between a Newton-like step
where the function is smooth, and a proximal-point step that is used to find
iterates with promising VU-decompositions. We establish a
derivative-free variant of the VU-algorithm for convex finite-max
objective functions. We show global convergence and provide numerical results
from a proof-of-concept implementation, which demonstrates the feasibility and
practical value of the approach. We also carry out some tests using nonconvex
functions and discuss the results.
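To make the VU-decomposition idea concrete, the sketch below (not the paper's implementation) estimates the V- and U-subspaces of a convex finite-max objective f(x) = max_i f_i(x) at a point, using only evaluations of the individual pieces f_i. The function names, forward-difference step, and tolerances are illustrative assumptions: V is spanned by differences of approximate gradients of the active pieces, and U is its orthogonal complement.

```python
# Minimal, derivative-free sketch of a VU-decomposition estimate for
# f(x) = max_i f_i(x).  Assumes access to the individual pieces f_i;
# all names and tolerances are illustrative, not from the paper.
import numpy as np

def forward_diff_grad(fi, x, h=1e-6):
    """Forward-difference gradient estimate of one smooth piece fi at x."""
    g = np.zeros_like(x, dtype=float)
    fx = fi(x)
    for j in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[j] = h
        g[j] = (fi(x + e) - fx) / h
    return g

def vu_decomposition(pieces, x, act_tol=1e-8, h=1e-6):
    """Estimate the V (nonsmooth) and U (smooth) subspaces of max(pieces) at x."""
    vals = np.array([fi(x) for fi in pieces])
    active = [i for i, v in enumerate(vals) if vals.max() - v <= act_tol]
    grads = [forward_diff_grad(pieces[i], x, h) for i in active]
    n = len(x)
    if len(grads) <= 1:
        # Only one active piece: f is smooth near x, so V is trivial.
        V = np.zeros((n, 0))
    else:
        # V is spanned by differences of gradients of the active pieces.
        D = np.column_stack([g - grads[0] for g in grads[1:]])
        Q, R = np.linalg.qr(D)
        rank = int(np.sum(np.abs(np.diag(R)) > 1e-10))
        V = Q[:, :rank]
    # U is the orthogonal complement of V in R^n.
    full_Q, _ = np.linalg.qr(np.hstack([V, np.eye(n)]))
    U = full_Q[:, V.shape[1]:n]
    return V, U, active

# Example: f(x) = max(x1 + x2**2, -x1 + x2**2) is nonsmooth along x1 = 0.
pieces = [lambda x: x[0] + x[1] ** 2, lambda x: -x[0] + x[1] ** 2]
V, U, active = vu_decomposition(pieces, np.array([0.0, 1.0]))
print("active pieces:", active)  # both pieces are active at (0, 1)
print("V basis:\n", V)           # roughly the x1 axis (nonsmooth direction)
print("U basis:\n", U)           # roughly the x2 axis (smooth direction)
```

In the algorithm described above, a Newton-like step would be taken along the estimated U-subspace, where the projected function is smooth, while the proximal-point step steers the iterates toward points with useful decompositions of this kind.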