
    Robust distributed linear programming

    This paper presents a robust, distributed algorithm to solve general linear programs. The algorithm design builds on the characterization of the solutions of the linear program as saddle points of a modified Lagrangian function. We show that the resulting continuous-time saddle-point algorithm is provably correct but, in general, not distributed because of a global parameter associated with the nonsmooth exact penalty function employed to encode the inequality constraints of the linear program. This motivates the design of a discontinuous saddle-point dynamics that, while enjoying the same convergence guarantees, is fully distributed and scalable with the dimension of the solution vector. We also characterize the robustness against disturbances and link failures of the proposed dynamics. Specifically, we show that it is integral-input-to-state stable but not input-to-state stable. The latter fact is a consequence of a more general result, that we also establish, which states that no algorithmic solution for linear programming is input-to-state stable when uncertainty in the problem data affects the dynamics as a disturbance. Our results allow us to establish the resilience of the proposed distributed dynamics to disturbances of finite variation and recurrently disconnected communication among the agents. Simulations in an optimal control application illustrate the results.
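    The abstract's central object is a saddle-point dynamics built from a penalized Lagrangian. As a rough, centralized illustration only (not the paper's distributed algorithm), the sketch below runs a forward-Euler discretization of such a flow for a small LP in standard form min cᵀx s.t. Ax = b, x ≥ 0, with x ≥ 0 encoded through an exact penalty with parameter K; the function name and the values of K, step and iters are arbitrary choices for this sketch, and the discretization does not inherit the continuous-time convergence guarantees.

```python
import numpy as np

def saddle_point_lp(c, A, b, K=10.0, step=1e-3, iters=20000):
    """Forward-Euler sketch of a saddle-point flow for
        min c^T x  subject to  A x = b,  x >= 0,
    using the penalized Lagrangian
        L(x, z) = c^T x + z^T (A x - b) + K * sum(max(0, -x_i)).
    Illustrative only; not the distributed algorithm from the paper."""
    m, n = A.shape
    x = np.zeros(n)                      # primal variable
    z = np.zeros(m)                      # multiplier for A x = b
    for _ in range(iters):
        # subgradient of L w.r.t. x (penalty is active only where x_i < 0)
        grad_x = c + A.T @ z - K * (x < 0).astype(float)
        grad_z = A @ x - b               # gradient of L w.r.t. z
        x -= step * grad_x               # descend in the primal variable
        z += step * grad_z               # ascend in the dual variable
    return x, z

# Toy usage: min x1 + 2*x2  s.t.  x1 + x2 = 1,  x >= 0  (solution x = (1, 0))
x, z = saddle_point_lp(np.array([1.0, 2.0]),
                       np.array([[1.0, 1.0]]),
                       np.array([1.0]))
```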

    A Dynamical Approach to Convex Minimization Coupling Approximation with the Steepest Descent Method

    We study the asymptotic behavior of the solutions to evolution equations of the form 0 ∈ u̇(t) + ∂f(u(t), ε(t)), u(0) = u₀, where {f(·, ε) : ε > 0} is a family of strictly convex functions whose minimum is attained at a unique point x(ε). Assuming that x(ε) converges to a point x* as ε tends to 0, and depending on the behavior of the optimal trajectory x(ε), we derive sufficient conditions on the parametrization ε(t) which ensure that the solution u(t) of the evolution equation also converges to x* when t → +∞. The results are illustrated on three different penalty and viscosity-approximation methods for convex minimization.
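    To make the coupling between the approximation parameter and the steepest descent concrete, here is a minimal discrete-time sketch, assuming a Tikhonov-type approximation f(x, ε) = f₀(x) + (ε/2)‖x‖² (one of the classical viscosity schemes of this kind); the function names, the step size, and the particular schedule ε_k are illustrative choices, and the abstract's sufficient conditions on ε(t) are not verified here.

```python
import numpy as np

def coupled_descent(grad_f, x0, eps_schedule, step=1e-2, iters=10000):
    """Explicit-Euler sketch of the flow 0 in u'(t) + df(u(t), eps(t)):
    each iteration takes a gradient step on the current approximation
    f(., eps_k) while eps_k is slowly driven to 0.  Illustrative only."""
    x = np.array(x0, dtype=float)
    for k in range(iters):
        x -= step * grad_f(x, eps_schedule(k))   # steepest descent on f(., eps_k)
    return x

# Tikhonov/viscosity approximation: f(x, eps) = f0(x) + (eps/2)*||x||^2,
# whose unique minimizer x(eps) tends to the least-norm minimizer of f0
# (here 0, since f0 = dist(., (-inf, 1])^2 is minimized on all of (-inf, 1]).
grad_f0 = lambda x: np.where(x > 1.0, 2.0 * (x - 1.0), 0.0)
u_end = coupled_descent(lambda x, eps: grad_f0(x) + eps * x,
                        x0=[5.0],
                        eps_schedule=lambda k: 1.0 / np.sqrt(k + 1.0))
```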

    Inertial game dynamics and applications to constrained optimization

    Aiming to provide a new class of game dynamics with good long-term rationality properties, we derive a second-order inertial system that builds on the widely studied "heavy ball with friction" optimization method. By exploiting a well-known link between the replicator dynamics and the Shahshahani geometry on the space of mixed strategies, the dynamics are stated in a Riemannian geometric framework where trajectories are accelerated by the players' unilateral payoff gradients and they slow down near Nash equilibria. Surprisingly (and in stark contrast to another second-order variant of the replicator dynamics), the inertial replicator dynamics are not well-posed; on the other hand, it is possible to obtain a well-posed system by endowing the mixed strategy space with a different Hessian-Riemannian (HR) metric structure, and we characterize those HR geometries that do so. In the single-agent version of the dynamics (corresponding to constrained optimization over simplex-like objects), we show that regular maximum points of smooth functions attract all nearby solution orbits with low initial speed. More generally, we establish an inertial variant of the so-called "folk theorem" of evolutionary game theory and we show that strict equilibria are attracting in asymmetric (multi-population) games, provided of course that the dynamics are well-posed. A similar asymptotic stability result is obtained for evolutionarily stable strategies in symmetric (single-population) games.
    Comment: 30 pages, 4 figures; significantly revised paper structure and added new material on Euclidean embeddings and evolutionarily stable strategies.
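    The underlying second-order model is the classical "heavy ball with friction" system ẍ + γẋ + ∇f(x) = 0. As a point of reference only (the paper's contribution is the Riemannian/Shahshahani version on the simplex, which is not modeled here), a minimal Euclidean discretization might look as follows; the function name, the damping γ, the step size, and the quadratic test objective are all assumptions made for this sketch.

```python
import numpy as np

def heavy_ball(grad_f, x0, gamma=2.0, step=1e-2, iters=10000):
    """Semi-implicit Euler sketch of the 'heavy ball with friction' ODE
        x'' + gamma * x' + grad f(x) = 0,
    the second-order method the inertial game dynamics build on.
    Euclidean illustration only; no Hessian-Riemannian metric here."""
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)                      # velocity
    for _ in range(iters):
        v += step * (-gamma * v - grad_f(x))  # friction + gradient force
        x += step * v                         # move with the updated velocity
    return x

# Toy usage: f(x) = ||x - 1||^2 / 2, minimized at x = (1, 1)
x_min = heavy_ball(lambda x: x - 1.0, x0=[0.0, 0.0])
```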