    Iteration Complexity of Randomized Primal-Dual Methods for Convex-Concave Saddle Point Problems

    In this paper we propose a class of randomized primal-dual methods to contend with large-scale saddle point problems defined by a convex-concave function $\mathcal{L}(\mathbf{x},y)\triangleq\sum_{i=1}^m f_i(x_i)+\Phi(\mathbf{x},y)-h(y)$. We analyze the convergence rate of the proposed method under the settings of mere convexity and strong convexity in the $\mathbf{x}$-variable. In particular, assuming $\nabla_y\Phi(\cdot,\cdot)$ is Lipschitz and $\nabla_{\mathbf{x}}\Phi(\cdot,y)$ is coordinate-wise Lipschitz for any fixed $y$, the ergodic sequence generated by the algorithm achieves a convergence rate of $\mathcal{O}(m/k)$ in a suitable error metric, where $m$ denotes the number of coordinates of the primal variable. Furthermore, assuming that $\mathcal{L}(\cdot,y)$ is uniformly strongly convex for any $y$ and that $\Phi(\cdot,y)$ is linear in $y$, the scheme displays a convergence rate of $\mathcal{O}(m/k^2)$. We implemented the proposed algorithmic framework to solve the kernel matrix learning problem and tested it against other state-of-the-art solvers.
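
    As context for the problem template above, the following is a minimal sketch, not the authors' exact scheme, of a generic randomized block-coordinate primal-dual iteration for $\min_{\mathbf{x}}\max_y \mathcal{L}(\mathbf{x},y)$: at each step one uniformly sampled primal block $x_i$ takes a proximal gradient step using the partial gradient of $\Phi$, followed by a proximal ascent step in $y$, and ergodic averages are maintained since the stated $\mathcal{O}(m/k)$ rate is for the ergodic sequence. The helpers grad_Phi_x, grad_Phi_y, prox_f, prox_h and the constant step sizes tau, sigma are hypothetical placeholders, not quantities specified in the abstract.

    import numpy as np

    def randomized_primal_dual(x0, y0, grad_Phi_x, grad_Phi_y, prox_f, prox_h,
                               tau, sigma, n_iters, rng=None):
        """Sketch of a randomized block-coordinate primal-dual method.

        x0      : list of m primal blocks x_i (NumPy arrays)
        y0      : dual variable (NumPy array)
        grad_Phi_x(x, y, i) : partial gradient of Phi w.r.t. block x_i (assumed)
        grad_Phi_y(x, y)    : gradient of Phi w.r.t. y (assumed)
        prox_f  : list of prox operators prox_f[i](v, tau) for each f_i (assumed)
        prox_h  : prox operator prox_h(v, sigma) for h (assumed)
        tau, sigma : constant primal/dual step sizes (illustrative choice)
        """
        rng = rng or np.random.default_rng(0)
        m = len(x0)
        x = [xi.copy() for xi in x0]
        y = y0.copy()
        x_avg = [xi.copy() for xi in x0]
        y_avg = y0.copy()
        for k in range(1, n_iters + 1):
            i = rng.integers(m)  # sample one primal block uniformly at random
            # proximal gradient step on the sampled block only
            x[i] = prox_f[i](x[i] - tau * grad_Phi_x(x, y, i), tau)
            # proximal ascent step on the dual variable
            y = prox_h(y + sigma * grad_Phi_y(x, y), sigma)
            # running uniform (ergodic) averages of the iterates
            for j in range(m):
                x_avg[j] += (x[j] - x_avg[j]) / k
            y_avg += (y - y_avg) / k
        return x_avg, y_avg

    Updating a single coordinate block per iteration is what makes each step cheap when $m$ is large; the $m$-fold slowdown this sampling induces is exactly the factor appearing in the $\mathcal{O}(m/k)$ and $\mathcal{O}(m/k^2)$ rates quoted above.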