3 research outputs found

    Primal-Dual Gradient Flow Algorithm for Distributed Support Vector Machines

    In this paper, a primal-dual gradient flow algorithm for distributed support vector machines (DSVM) is proposed. A network of computing nodes is considered, each carrying a subset of a horizontally partitioned large dataset. The nodes are modeled as dynamical systems with Arrow-Hurwicz-Uzawa gradient flow dynamics, derived from the Lagrangian function of the DSVM problem. It is first proved that the nodes are passive dynamical systems. Then, by employing Krasovskii-type candidate Lyapunov functions, it is proved that the computing nodes asymptotically converge to the optimal primal-dual solution.
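
    A minimal numerical sketch of Arrow-Hurwicz-Uzawa primal-dual gradient flow, applied here to a toy equality-constrained quadratic program rather than the paper's distributed SVM formulation; the problem data, forward-Euler discretization, and step size are illustrative assumptions:

    import numpy as np

    # Toy problem (illustrative; not the DSVM from the paper):
    #   minimize 0.5 * x'Qx + c'x   subject to  Ax = b
    # Lagrangian: L(x, lam) = 0.5 x'Qx + c'x + lam'(Ax - b)
    # Arrow-Hurwicz-Uzawa gradient flow:
    #   xdot   = -grad_x L   = -(Qx + c + A'lam)
    #   lamdot = +grad_lam L =  Ax - b
    rng = np.random.default_rng(0)
    n, m = 4, 2
    Q = np.diag(rng.uniform(1.0, 3.0, n))   # strongly convex objective
    c = rng.normal(size=n)
    A = rng.normal(size=(m, n))             # full row rank with probability 1
    b = rng.normal(size=m)

    x, lam = np.zeros(n), np.zeros(m)
    dt = 1e-2                               # forward-Euler step for the continuous flow
    for _ in range(20000):
        xdot = -(Q @ x + c + A.T @ lam)
        lamdot = A @ x - b
        x, lam = x + dt * xdot, lam + dt * lamdot

    # At the saddle point the KKT conditions hold:
    print("stationarity residual:", np.linalg.norm(Q @ x + c + A.T @ lam))
    print("feasibility residual:", np.linalg.norm(A @ x - b))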

    Exponential Stability of Primal-Dual Gradient Dynamics with Non-Strong Convexity

    This paper studies the exponential stability of primal-dual gradient dynamics (PDGD) for solving convex optimization problems in which the constraints take the form Ax + By = d and the objective is min f(x) + g(y), with f strongly convex and smooth but g only convex and smooth. We show that when g is a quadratic function, or when g and the matrix B together satisfy an inequality condition, the PDGD achieves global exponential stability provided that the matrix A has full row rank. These results indicate that the PDGD is locally exponentially stable with respect to any convex smooth g under a regularity condition. To prove exponential stability, two quadratic Lyapunov functions are designed. Lastly, numerical experiments complement the theoretical analysis.
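
    A minimal sketch of the PDGD described above, for one of the cases covered by the result (quadratic f and g, A of full row rank); the particular matrices, the forward-Euler discretization, and the step size are illustrative assumptions:

    import numpy as np

    # Problem: min f(x) + g(y)  s.t.  Ax + By = d,
    #   f(x) = 0.5 x'Px + p'x  (P > 0, strongly convex)
    #   g(y) = 0.5 y'Ry        (R >= 0, convex but not strongly convex)
    # PDGD: xdot = -(grad f(x) + A'lam), ydot = -(grad g(y) + B'lam),
    #       lamdot = Ax + By - d
    rng = np.random.default_rng(1)
    n, k, m = 3, 3, 2
    P = np.diag(rng.uniform(1.0, 2.0, n)); p = rng.normal(size=n)
    R = np.diag([1.0, 0.0, 0.5])            # quadratic g, only convex
    A = rng.normal(size=(m, n))             # full row rank with probability 1
    B = rng.normal(size=(m, k))
    d = rng.normal(size=m)

    x, y, lam = np.zeros(n), np.zeros(k), np.zeros(m)
    dt = 1e-2
    for _ in range(100000):
        xdot = -(P @ x + p + A.T @ lam)
        ydot = -(R @ y + B.T @ lam)
        lamdot = A @ x + B @ y - d
        x, y, lam = x + dt * xdot, y + dt * ydot, lam + dt * lamdot

    print("constraint residual:", np.linalg.norm(A @ x + B @ y - d))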

    Global exponential stability of primal-dual gradient flow dynamics based on the proximal augmented Lagrangian: A Lyapunov-based approach

    For a class of nonsmooth composite optimization problems with linear equality constraints, we utilize a Lyapunov-based approach to establish the global exponential stability of the primal-dual gradient flow dynamics based on the proximal augmented Lagrangian. The result holds when the differentiable part of the objective function is strongly convex with a Lipschitz continuous gradient; the non-differentiable part is proper, lower semi-continuous, and convex; and the matrix in the linear constraint has full row rank. Our quadratic Lyapunov function generalizes recent results from strongly convex problems with either affine equality or inequality constraints to a broader class of composite optimization problems with nonsmooth regularizers, and it provides a worst-case lower bound on the exponential decay rate. Finally, we use computational experiments to demonstrate that our convergence rate estimate is less conservative than existing alternatives.
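
    A minimal sketch of a primal-dual gradient flow on the proximal augmented Lagrangian, written here for the composite problem min f(x) + g(Tx) with an l1 regularizer and no additional equality constraint (the paper treats the equality-constrained case); the data, the choice T = I, and the forward-Euler discretization are illustrative assumptions:

    import numpy as np

    # Composite problem (illustrative): min f(x) + g(Tx),
    #   f(x) = 0.5 ||Hx - h||^2   (strongly convex when H has full column rank)
    #   g(z) = gamma * ||z||_1    (nonsmooth, proper, convex)
    # Proximal augmented Lagrangian (Moreau envelope M_{mu g}):
    #   L_mu(x; y) = f(x) + M_{mu g}(Tx + mu*y) - (mu/2)||y||^2
    # Primal-dual gradient flow, with v = Tx + mu*y:
    #   xdot = -(grad f(x) + T'(v - prox_{mu g}(v)) / mu)
    #   ydot = Tx - prox_{mu g}(v)

    def soft_threshold(v, t):
        # proximal operator of t*||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    rng = np.random.default_rng(2)
    n = 5
    H = rng.normal(size=(n, n)) + 2.0 * np.eye(n)   # full column rank (generically)
    h = rng.normal(size=n)
    T = np.eye(n)                                   # identity mapping, for simplicity
    gamma, mu, dt = 0.5, 0.5, 1e-2

    x, y = np.zeros(n), np.zeros(n)
    for _ in range(50000):
        v = T @ x + mu * y
        prox = soft_threshold(v, mu * gamma)
        xdot = -(H.T @ (H @ x - h) + T.T @ (v - prox) / mu)
        ydot = T @ x - prox
        x, y = x + dt * xdot, y + dt * ydot

    # At equilibrium, grad f(x) + T'y = 0 and y lies in the subdifferential of g at Tx.
    print("stationarity residual:", np.linalg.norm(H.T @ (H @ x - h) + T.T @ y))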