
    Zero-Convex Functions, Perturbation Resilience, and Subgradient Projections for Feasibility-Seeking Methods

    The convex feasibility problem (CFP) is at the core of the modeling of many problems in various areas of science. Subgradient projection methods are important tools for solving the CFP because they enable the use of subgradient calculations instead of orthogonal projections onto the individual sets of the problem. Working in a real Hilbert space, we show that the sequential subgradient projection method is perturbation resilient. By this we mean that, under appropriate conditions, the sequence generated by the method converges weakly, and sometimes also strongly, to a point in the intersection of the given subsets of the feasibility problem, despite certain perturbations allowed in each iterative step. Unlike previous works on solving the convex feasibility problem, the involved functions, which induce the feasibility problem's subsets, need not be convex. Instead, we allow them to belong to a wider and richer class of functions satisfying a weaker condition that we call "zero-convexity". This class, which is introduced and discussed here, holds promise for solving optimization problems in various areas, especially in non-smooth and non-convex optimization. The relevance of this study to approximate minimization and to the recent superiorization methodology for constrained optimization is explained. Comment: Mathematical Programming Series A, accepted for publication.
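
    To make the iteration underlying such feasibility-seeking methods concrete, the sketch below shows a standard cyclic subgradient projection step for a problem defined by inequality constraints f_i(x) <= 0. The function names, the relaxation parameter lam, and the stopping rule are illustrative assumptions; this is not the paper's exact perturbation-resilient (or zero-convex) scheme, only the classical iteration it builds on.

    # Minimal sketch (assumption): cyclic subgradient projection for the
    # feasibility problem "find x with f_i(x) <= 0 for all i", using
    # subgradient steps instead of exact orthogonal projections.
    import numpy as np

    def cyclic_subgradient_projection(x0, fs, subgrads, lam=1.0, max_sweeps=1000, tol=1e-8):
        """Sketch: fs[i](x) -> f_i(x); subgrads[i](x) -> a subgradient of f_i at x."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_sweeps):
            max_violation = 0.0
            for f, g in zip(fs, subgrads):
                viol = f(x)
                if viol > 0.0:                       # step only when this constraint is violated
                    s = g(x)
                    norm2 = float(np.dot(s, s))
                    if norm2 > 0.0:
                        # relaxed subgradient projection step toward {y : f(y) <= 0}
                        x = x - lam * viol / norm2 * s
                    max_violation = max(max_violation, viol)
            if max_violation <= tol:                 # all constraints approximately satisfied
                break
        return x

    # Usage: intersection of two half-spaces {x : a^T x - b <= 0}.
    a1, b1 = np.array([1.0, 2.0]), 1.0
    a2, b2 = np.array([-1.0, 1.0]), 0.5
    fs = [lambda x, a=a1, b=b1: float(a @ x - b), lambda x, a=a2, b=b2: float(a @ x - b)]
    gs = [lambda x, a=a1: a, lambda x, a=a2: a]
    x_feasible = cyclic_subgradient_projection(np.array([5.0, 5.0]), fs, gs)

    For affine constraints with lam = 1, each such step coincides with the exact orthogonal projection onto the corresponding half-space; the general method replaces exact projections with these cheaper subgradient steps.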

    Catalyst Acceleration for Gradient-Based Non-Convex Optimization

    We introduce a generic scheme to solve non-convex optimization problems using gradient-based algorithms originally designed for minimizing convex functions. Even though these methods may originally require convexity to operate, the proposed approach allows one to use them on weakly convex objectives, which covers a large class of non-convex functions typically appearing in machine learning and signal processing. In general, the scheme is guaranteed to produce a stationary point with a worst-case efficiency typical of first-order methods, and when the objective turns out to be convex, it automatically accelerates in the sense of Nesterov and achieves a near-optimal convergence rate in function values. These properties are achieved without assuming any knowledge about the convexity of the objective, by automatically adapting to the unknown weak-convexity constant. We conclude the paper by showing promising experimental results obtained by applying our approach to incremental algorithms such as SVRG and SAGA for sparse matrix factorization and for learning neural networks.
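
    As a rough illustration of how such a generic acceleration wrapper operates, the sketch below implements a Catalyst-style outer loop: a given first-order method approximately minimizes a quadratically regularized subproblem around a prox-center, and the centers are updated by Nesterov-like extrapolation. The inner_solver interface, the fixed parameter kappa, and the extrapolation schedule shown here are assumptions for illustration; the actual scheme also adapts automatically to the unknown weak-convexity constant, which is not modeled in this sketch.

    # Minimal sketch (assumption): a Catalyst-style outer loop wrapping a
    # first-order method.  Each outer step approximately minimizes the
    # regularized subproblem  f(x) + (kappa/2) * ||x - y||^2, then the
    # prox-center y is updated by Nesterov-like extrapolation.
    import numpy as np

    def catalyst_outer_loop(f_grad, inner_solver, x0, kappa=1.0, outer_iters=50):
        """f_grad(x) -> gradient of f at x; inner_solver(grad_fn, x_init) -> approximate minimizer."""
        x_prev = np.asarray(x0, dtype=float)
        y = x_prev.copy()
        alpha = 1.0
        for _ in range(outer_iters):
            # Gradient of the subproblem h(x) = f(x) + (kappa/2) * ||x - y||^2.
            def h_grad(x, y=y):
                return f_grad(x) + kappa * (x - y)
            x = inner_solver(h_grad, x_prev)          # warm-started approximate prox step
            # Extrapolation coefficients (convex-case schedule, for illustration).
            alpha_next = 0.5 * (np.sqrt(alpha**4 + 4 * alpha**2) - alpha**2)
            beta = alpha * (1.0 - alpha) / (alpha**2 + alpha_next)
            y = x + beta * (x - x_prev)
            x_prev, alpha = x, alpha_next
        return x_prev

    # Example inner solver: a few plain gradient steps (a stand-in for SVRG/SAGA).
    def gd_inner(grad_fn, x_init, steps=100, lr=0.05):
        x = x_init.copy()
        for _ in range(steps):
            x = x - lr * grad_fn(x)
        return x

    # Smooth toy objective f(x) = 0.5 * ||A x - b||^2.
    A, b = np.array([[3.0, 1.0], [1.0, 2.0]]), np.array([1.0, -1.0])
    x_hat = catalyst_outer_loop(lambda x: A.T @ (A @ x - b), gd_inner, np.zeros(2))

    In practice the inner solver would be an incremental method such as SVRG or SAGA warm-started at the previous iterate; the plain gradient-descent stand-in above only keeps the sketch self-contained.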