
    Constrained Consensus

    We present distributed algorithms that can be used by multiple agents to align their estimates with a particular value over a network with time-varying connectivity. Our framework is general in that this value can represent a consensus value among multiple agents or an optimal solution of an optimization problem whose global objective function is a combination of local agent objective functions. Our main focus is on constrained problems where the estimate of each agent is restricted to lie in a different constraint set. To highlight the effects of constraints, we first consider a constrained consensus problem and present a distributed "projected consensus algorithm" in which agents combine their local averaging operation with projection onto their individual constraint sets. This algorithm can be viewed as a version of the alternating projection method with weights that vary over time and across agents. We establish convergence and convergence rate results for the projected consensus algorithm. We next study a constrained optimization problem of minimizing the sum of the agents' local objective functions subject to the intersection of their local constraint sets. We present a distributed "projected subgradient algorithm" in which each agent performs a local averaging operation, takes a subgradient step to minimize its own objective function, and projects onto its constraint set. We show that, with an appropriately selected stepsize rule, the agent estimates generated by this algorithm converge to the same optimal solution when the weights are constant and equal, and when the weights are time-varying but all agents have the same constraint set.
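    The two iterations summarized above lend themselves to a compact sketch. The Python fragment below is a minimal, illustrative rendering of one projected consensus step and one projected subgradient step; the box-shaped constraint sets, the variable names, and the stepsize are assumptions made here for concreteness rather than the paper's notation.

```python
# Minimal sketch of the two iterations summarized in the abstract above.
# Box constraint sets, variable names, and the stepsize are illustrative
# assumptions; the paper treats general closed convex constraint sets.
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]; stands in for the
    # projection onto an agent's convex constraint set X_i.
    return np.clip(x, lo, hi)

def projected_consensus_step(x, W, boxes):
    # x: (n_agents, dim) current estimates; W: (n_agents, n_agents)
    # weight matrix of the current (possibly time-varying) graph.
    v = W @ x                                   # local averaging with neighbors
    return np.vstack([project_box(v[i], *boxes[i]) for i in range(len(boxes))])

def projected_subgradient_step(x, W, boxes, subgrads, alpha):
    # Same averaging, followed by a subgradient step on the agent's own
    # objective and a projection onto its own constraint set.
    v = W @ x
    return np.vstack([
        project_box(v[i] - alpha * subgrads[i](v[i]), *boxes[i])
        for i in range(len(boxes))
    ])
```

    Iterating these steps with time-varying weight matrices is what the abstract's convergence results concern; the sketch only fixes the shape of a single update.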

    Dynamic Coalitional TU Games: Distributed Bargaining among Players' Neighbors

    We consider a sequence of transferable utility (TU) games where, at each time, the characteristic function is a random vector with realizations restricted to some set of values. The game differs from others in the literature on dynamic, stochastic, or interval-valued TU games in that it combines the dynamics of the game with an allocation protocol for players that dynamically interact with each other. The protocol is an iterative and decentralized algorithm that offers a paradigmatic mathematical description of negotiation and bargaining processes. The first part of the paper contributes the definition of a robust (coalitional) TU game and the development of a distributed bargaining protocol. We prove convergence with probability 1 of the bargaining process to a random allocation that lies in the core of the robust game, under mild conditions on the underlying communication graphs. The second part of the paper addresses the more general case where the robust game may have an empty core. In this case, with the dynamic game we associate a dynamic average game obtained by averaging the sequence of characteristic functions over time, and we consider an accordingly modified bargaining protocol. Assuming that the sequence of characteristic functions is ergodic and the core of the average game has a nonempty relative interior, we show that the modified bargaining protocol converges with probability 1 to a random allocation that lies in the core of the average game.
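    As a rough illustration of the kind of iterative bargaining the abstract describes, the sketch below lets each player average its neighbors' allocation proposals and then push the result toward efficiency and coalition-rationality constraints. The single sweep of halfspace corrections is only an illustrative stand-in for the protocol's projection step, and all names and data layouts are assumptions of this sketch, not the paper's formulation.

```python
# Illustrative sketch of one round of distributed bargaining over allocations.
# The cyclic halfspace correction below only approximates a projection onto
# core-type constraints; it is an assumption of this sketch, not the paper's
# exact protocol.
import numpy as np

def push_into_halfspace(x, members, bound):
    # Move x onto {y : sum(y[members]) >= bound} if the constraint is violated.
    shortfall = bound - x[members].sum()
    if shortfall > 0:
        x = x.copy()
        x[members] += shortfall / len(members)
    return x

def bargaining_round(proposals, W, coalition_values, grand_value):
    # proposals: (n, n) array, row i is player i's proposed allocation.
    # W: (n, n) averaging weights induced by the current communication graph.
    # coalition_values: list of (member_indices, value) pairs seen at this time.
    n = proposals.shape[0]
    mixed = W @ proposals                                    # agreement step with neighbors
    updated = []
    for i in range(n):
        x = mixed[i] + (grand_value - mixed[i].sum()) / n    # efficiency correction
        for members, value in coalition_values:              # coalition rationality
            x = push_into_halfspace(x, np.asarray(members), value)
        updated.append(x)
    return np.vstack(updated)
```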

    Nonasymptotic Convergence Rates for Cooperative Learning Over Time-Varying Directed Graphs

    We study the problem of distributed hypothesis testing with a network of agents, some of which repeatedly gain access to information about the correct hypothesis. The group objective is to agree globally on a joint hypothesis that best describes the data observed at all the nodes. We assume that the agents can interact with their neighbors over an unknown sequence of time-varying directed graphs. Following the pioneering work of Jadbabaie, Molavi, Sandroni, and Tahbaz-Salehi, we propose local learning dynamics that combine Bayesian updates at each node with a local aggregation rule for private agent signals. We show that these learning dynamics drive all agents to the set of hypotheses that best explain the data collected at all nodes, as long as the sequence of interconnection graphs is uniformly strongly connected. Our main result establishes a non-asymptotic, explicit, geometric convergence rate for the learning dynamics.
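    A compact way to picture such learning dynamics is the update below, in which each agent mixes its neighbors' beliefs in log-space and then applies a Bayesian update with the likelihood of its newest private signal. The log-belief representation and all names are choices made for this sketch, not the paper's exact notation.

```python
# Sketch of one step of cooperative (non-Bayesian) learning dynamics:
# aggregate neighbors' beliefs, then update with the local likelihood.
# Shapes and names are illustrative assumptions.
import numpy as np

def logsumexp_rows(x):
    # Numerically stable log(sum(exp(x))) along each row.
    m = x.max(axis=1, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=1, keepdims=True))

def learning_step(log_beliefs, A, likelihoods):
    # log_beliefs: (n_agents, n_hypotheses) log of the current beliefs
    # A:           (n_agents, n_agents) row-stochastic weights of the current
    #              (possibly time-varying, directed) interconnection graph
    # likelihoods: (n_agents, n_hypotheses) P(latest private signal | hypothesis)
    mixed = A @ log_beliefs                    # aggregation over neighbors
    updated = mixed + np.log(likelihoods)      # local Bayesian update
    return updated - logsumexp_rows(updated)   # renormalize each agent's belief
```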

    Abrasive Wear Resistance of the Iron- and WC-based Hardfaced Coatings Evaluated with Scratch Test Method

    Abrasive wear is one of the most common types of wear, which makes abrasive wear resistance very important in many industries. Hardfacing is considered a useful and economical way to improve the performance of components subjected to severe abrasive wear conditions, with a wide range of applicable filler materials. The abrasive wear resistance of three different hardfaced coatings (two iron-based and one WC-based), intended for the repair of the impact plates of a ventilation mill, was investigated and compared. Abrasive wear tests were carried out using a scratch tester under dry conditions. Three normal loads of 10, 50 and 100 N and a constant sliding speed of 4 mm/s were used. The scratch test was chosen as a relatively easy and quick test method. Wear mechanism analysis showed a significant influence of the hardfaced coatings' structure, which, along with hardness, determined the coatings' abrasive wear resistance.