    A derivative-free $\mathcal{VU}$-algorithm for convex finite-max problems

    The $\mathcal{VU}$-algorithm is a superlinearly convergent method for minimizing nonsmooth, convex functions. At each iteration, the algorithm works with a certain $\mathcal{V}$-space and its orthogonal $\mathcal{U}$-space, such that the nonsmoothness of the objective function is concentrated on its projection onto the $\mathcal{V}$-space, while on the $\mathcal{U}$-space the projection is smooth. This structure allows for an alternation between a Newton-like step where the function is smooth and a proximal-point step that is used to find iterates with promising $\mathcal{VU}$-decompositions. We establish a derivative-free variant of the $\mathcal{VU}$-algorithm for convex finite-max objective functions. We show global convergence and provide numerical results from a proof-of-concept implementation, which demonstrates the feasibility and practical value of the approach. We also carry out some tests using nonconvex functions and discuss the results.
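    To make the $\mathcal{VU}$-decomposition concrete, the following sketch (not the paper's algorithm; the function `vu_decomposition`, the tolerances, and the finite-difference step are illustrative assumptions) estimates $\mathcal{V}$- and $\mathcal{U}$-space bases for a finite-max function $f(x) = \max_i f_i(x)$ at a point, in a derivative-free way: gradients of the active pieces are approximated by forward differences, the $\mathcal{V}$-space is spanned by their pairwise differences, and the $\mathcal{U}$-space is the orthogonal complement.

    ```python
    # Hypothetical illustration of the VU-decomposition for a convex
    # finite-max function f(x) = max_i f_i(x); not the authors' method.
    import numpy as np

    def vu_decomposition(pieces, x, tol=1e-8, h=1e-6):
        """Estimate V- and U-space bases of f = max(pieces) at x.

        Gradients of the active smooth pieces are approximated by
        forward differences (derivative-free); V is spanned by their
        differences, U is the orthogonal complement of V.
        """
        vals = np.array([p(x) for p in pieces])
        fmax = vals.max()
        active = [i for i, v in enumerate(vals) if fmax - v <= tol]
        n = len(x)
        # derivative-free gradient estimates of the active pieces
        grads = []
        for i in active:
            g = np.array([(pieces[i](x + h * e) - pieces[i](x)) / h
                          for e in np.eye(n)])
            grads.append(g)
        # V-space: span of gradient differences among active pieces
        D = np.array([g - grads[0] for g in grads[1:]]).reshape(-1, n)
        if D.size == 0:
            V = np.zeros((n, 0))          # f is smooth at x: V is trivial
        else:
            _, s, Vt = np.linalg.svd(D, full_matrices=False)
            V = Vt[s > 1e-10].T
        # U-space: orthonormal complement of V in R^n
        Q, _ = np.linalg.qr(np.hstack([V, np.eye(n)]))
        U = Q[:, V.shape[1]:n]
        return V, U

    # Example: f(x) = max(x1 + x2^2, -x1 + x2^2) is nonsmooth on {x1 = 0},
    # so at x = (0, 1) the V-space is the x1-axis and the U-space the x2-axis.
    pieces = [lambda x: x[0] + x[1] ** 2, lambda x: -x[0] + x[1] ** 2]
    V, U = vu_decomposition(pieces, np.array([0.0, 1.0]))
    ```

    At the test point both pieces are active and their gradient difference points along the first coordinate, so `V` recovers the direction of nonsmoothness and `U` the direction along which $f$ is smooth, which is exactly the split the algorithm exploits for its Newton-like step.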