18 research outputs found

    Coordinate-Update Algorithms can Efficiently Detect Infeasible Optimization Problems

    Coordinate update/descent algorithms are widely used in large-scale optimization because of their low per-iteration cost and scalability, but their behavior on infeasible or misspecified problems has received far less study than that of full-update algorithms. For coordinate-update methods to be adopted widely enough to serve as engines of general-purpose solvers, their behavior on pathological problem instances must also be understood. In this work, we show that the normalized iterates of randomized coordinate-update fixed-point iterations (RC-FPI) converge to the infimal displacement vector and use this result to design an efficient infeasibility detection method. We then extend the analysis, via the Friedrichs angle, to the setup where the coordinates are defined by a non-orthonormal basis, and apply the machinery to decentralized optimization problems.
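    The convergence claim above can be illustrated with a minimal sketch (not the paper's general setting): for the nonexpansive translation map T(x) = x + v, the infimal displacement vector is v itself. An RC-FPI updates one uniformly chosen coordinate per step, so after k steps the expected progress is (k/n)v, and the normalized iterate rescaled by n recovers v.

    ```python
    import random

    def rc_fpi_normalized_iterate(v, iters, seed=0):
        """Illustrative RC-FPI on the translation map T(x) = x + v.

        Updates one uniformly chosen coordinate per step, then returns the
        normalized iterate n * x_k / k, which converges to the infimal
        displacement vector v (scaling by n accounts for the 1/n selection
        probability of each coordinate).
        """
        rng = random.Random(seed)
        n = len(v)
        x = [0.0] * n
        for _ in range(iters):
            i = rng.randrange(n)   # uniform coordinate selection
            x[i] += v[i]           # coordinate i of T(x) - x equals v[i]
        return [n * xi / iters for xi in x]

    v = [1.0, -2.0, 0.5]
    print(rc_fpi_normalized_iterate(v, 200000))  # close to v
    ```

    The returned vector can then drive an infeasibility test: a normalized iterate bounded away from zero certifies that T has no fixed point.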

    Accelerated Infeasibility Detection of Constrained Optimization and Fixed-Point Iterations

    As first-order optimization methods become the method of choice for solving large-scale optimization problems, optimization solvers based on first-order algorithms are being built. Such general-purpose solvers must robustly detect infeasible or misspecified problem instances, but the computational complexity of doing so with first-order methods has not yet been formally studied. In this work, we characterize the optimal accelerated rate of infeasibility detection. We show that the standard fixed-point iteration achieves O(1/k^2) and O(1/k) rates, respectively, on the normalized iterates and the fixed-point residual converging to the infimal displacement vector, while the accelerated fixed-point iteration achieves O(1/k^2) and Õ(1/k^2) rates. We then provide a matching complexity lower bound establishing that Θ(1/k^2) is indeed the optimal accelerated rate.
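    The normalized-iterate quantity analyzed above can be observed numerically. The sketch below (an assumed example, not from the paper) runs the standard fixed-point iteration on the Douglas–Rachford operator for two disjoint convex sets, the unit disk and the half-plane {x : x1 >= 2}; known results on inconsistent Douglas–Rachford splitting identify its infimal displacement vector with the gap vector between the sets, here (1, 0).

    ```python
    import math

    def proj_disk(x):
        """Projection onto the closed unit disk centered at the origin."""
        nrm = math.hypot(x[0], x[1])
        return x if nrm <= 1.0 else (x[0] / nrm, x[1] / nrm)

    def proj_halfplane(x):
        """Projection onto the half-plane {x : x1 >= 2}."""
        return (max(x[0], 2.0), x[1])

    def reflect(proj, x):
        """Reflection R = 2P - I associated with a projection P."""
        p = proj(x)
        return (2 * p[0] - x[0], 2 * p[1] - x[1])

    def dr_operator(x):
        """Douglas-Rachford operator T = (I + R_A R_B)/2, firmly nonexpansive."""
        y = reflect(proj_halfplane, x)
        z = reflect(proj_disk, y)
        return ((x[0] + z[0]) / 2, (x[1] + z[1]) / 2)

    x0 = (0.3, 0.7)
    x = x0
    K = 2000
    for _ in range(K):
        x = dr_operator(x)  # standard fixed-point iteration x_{k+1} = T x_k

    # Normalized iterate (x0 - x_K)/K approaches the gap vector (1, 0),
    # with the O(1/k) error the abstract describes for the standard FPI.
    d = ((x0[0] - x[0]) / K, (x0[1] - x[1]) / K)
    print(d)
    ```

    Since the sets are disjoint, the iteration has no fixed point and drifts; the nonzero limit of the normalized iterate is exactly the certificate of infeasibility, and the accelerated iteration in the abstract improves the rate at which that certificate is resolved.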