
    Constrained Consensus

    We present distributed algorithms that can be used by multiple agents to align their estimates with a particular value over a network with time-varying connectivity. Our framework is general in that this value can represent a consensus value among multiple agents or an optimal solution of an optimization problem, where the global objective function is a combination of local agent objective functions. Our main focus is on constrained problems where the estimate of each agent is restricted to lie in a different constraint set. To highlight the effects of constraints, we first consider a constrained consensus problem and present a distributed "projected consensus algorithm" in which agents combine their local averaging operation with projection onto their individual constraint sets. This algorithm can be viewed as a version of an alternating projection method with weights that vary over time and across agents. We establish convergence and convergence rate results for the projected consensus algorithm. We next study a constrained optimization problem for optimizing the sum of local objective functions of the agents subject to the intersection of their local constraint sets. We present a distributed "projected subgradient algorithm" in which each agent performs a local averaging operation, takes a subgradient step to minimize its own objective function, and projects onto its constraint set. We show that, with an appropriately selected stepsize rule, the agent estimates generated by this algorithm converge to the same optimal solution for the cases when the weights are constant and equal, and when the weights are time-varying but all agents have the same constraint set.
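
    The projected subgradient iteration described here is concrete enough to sketch: each agent averages its neighbors' estimates, takes a subgradient step on its own objective, and projects the result onto its own constraint set. Below is a minimal illustration of that iteration, assuming three agents on a fixed complete graph, quadratic local objectives, and interval constraint sets; the weights, targets, and bounds are invented for the demo and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of the distributed projected subgradient iteration:
# average with neighbors, step along the local subgradient, project onto
# the agent's own constraint set. All problem data here is hypothetical.

num_agents = 3
weights = np.full((num_agents, num_agents), 1.0 / num_agents)  # doubly stochastic
targets = np.array([0.0, 2.0, 4.0])            # f_i(x) = 0.5 * (x - targets[i])^2
boxes = [(-1.0, 3.0), (0.5, 5.0), (1.0, 6.0)]  # X_i: per-agent interval constraints


def project(value, box):
    """Euclidean projection of a scalar onto the interval [lo, hi]."""
    lo, hi = box
    return min(max(value, lo), hi)


x = np.array([-1.0, 5.0, 6.0])  # initial estimates, one scalar per agent
for k in range(1, 2001):
    step = 1.0 / k                    # diminishing stepsize rule
    averaged = weights @ x            # local averaging with neighbors
    subgrad = averaged - targets      # subgradient of f_i at the averaged point
    x = np.array([project(v, b) for v, b in zip(averaged - step * subgrad, boxes)])

print(x)  # estimates approach a common point minimizing sum_i f_i over the
          # intersection of the boxes (here [1, 3], with minimizer x* = 2)
```

    With constant, equal weights, as in this toy run, the estimates settle near a single point of the intersection of the constraint sets, matching the convergence case stated in the abstract.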

    Fast Convergence Rates for Distributed Non-Bayesian Learning

    We consider the problem of distributed learning, where a network of agents collectively aims to agree on a hypothesis that best explains a set of distributed observations of conditionally independent random processes. We propose a distributed algorithm and establish consistency, as well as a non-asymptotic, explicit, and geometric convergence rate for the concentration of the beliefs around the set of optimal hypotheses. Additionally, if the agents interact over static networks, we provide an improved learning protocol with better scalability with respect to the number of nodes in the network.
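
    The abstract does not spell out the update rule, but distributed non-Bayesian learning schemes of this kind typically combine log-linear (geometric) averaging of neighbors' beliefs with a local Bayesian correction. The sketch below shows that standard update on a toy problem; the network, hypotheses, and likelihoods are assumptions for illustration, and the paper's exact protocol may differ.

```python
import numpy as np

# Hedged sketch of a standard distributed non-Bayesian learning update:
# geometric (log-linear) averaging of neighbors' beliefs followed by a local
# Bayesian correction. Network, hypotheses, and likelihoods are toy assumptions.

rng = np.random.default_rng(0)
num_agents, num_hypotheses = 4, 3
weights = np.full((num_agents, num_agents), 1.0 / num_agents)  # doubly stochastic

# Probability of observing a "1" under each hypothesis, identical across agents.
likelihood_one = np.tile(np.array([0.2, 0.5, 0.8]), (num_agents, 1))
true_hypothesis = 2  # observations are generated from the last hypothesis

beliefs = np.full((num_agents, num_hypotheses), 1.0 / num_hypotheses)
for t in range(300):
    obs = rng.random(num_agents) < likelihood_one[:, true_hypothesis]
    lik = np.where(obs[:, None], likelihood_one, 1.0 - likelihood_one)
    log_mix = weights @ np.log(beliefs)        # consensus step in log space
    beliefs = np.exp(log_mix) * lik            # local Bayesian update
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(beliefs.round(3))  # each row concentrates on the true hypothesis
```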

    Dynamic Coalitional TU Games: Distributed Bargaining among Players' Neighbors

    We consider a sequence of transferable utility (TU) games where, at each time, the characteristic function is a random vector with realizations restricted to some set of values. The game differs from others in the literature on dynamic, stochastic, or interval-valued TU games in that it combines the dynamics of the game with an allocation protocol for players that dynamically interact with each other. The protocol is an iterative and decentralized algorithm that offers a paradigmatic mathematical description of negotiation and bargaining processes. The first part of the paper contributes the definition of a robust (coalitional) TU game and the development of a distributed bargaining protocol. We prove convergence with probability 1 of the bargaining process to a random allocation that lies in the core of the robust game, under mild conditions on the underlying communication graphs. The second part of the paper addresses the more general case where the robust game may have an empty core. In this case, we associate with the dynamic game a dynamic average game, obtained by averaging the sequence of characteristic functions over time, and consider an accordingly modified bargaining protocol. Assuming that the sequence of characteristic functions is ergodic and the core of the average game has a nonempty relative interior, we show that the modified bargaining protocol converges with probability 1 to a random allocation that lies in the core of the average game.
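
    One way to picture such a bargaining protocol: each player keeps an estimate of the full allocation vector, averages it with its neighbors' estimates, and then pulls the result back toward the feasible allocations. The sketch below follows that reading on a toy three-player game with a complete communication graph and an explicit Euclidean projection onto the core; the game values, graph, and projection step are illustrative assumptions, not the paper's protocol.

```python
import numpy as np
from scipy.optimize import minimize

# Rough sketch of a consensus-style bargaining step: each player averages its
# allocation estimate with its neighbors' estimates and projects the result
# onto the core of a small TU game. The game, graph, and projection below are
# illustrative assumptions only.

players = (0, 1, 2)
v = {frozenset({0}): 1.0, frozenset({1}): 1.0, frozenset({2}): 1.0,
     frozenset({0, 1}): 3.0, frozenset({0, 2}): 3.0, frozenset({1, 2}): 3.0,
     frozenset(players): 6.0}


def project_onto_core(x):
    """Euclidean projection onto the core: efficiency plus coalitional rationality."""
    cons = [{"type": "eq", "fun": lambda y: y.sum() - v[frozenset(players)]}]
    for coal, val in v.items():
        if coal != frozenset(players):
            cons.append({"type": "ineq",
                         "fun": lambda y, c=coal, w=val: sum(y[i] for i in c) - w})
    return minimize(lambda y: np.sum((y - x) ** 2), x, constraints=cons).x


weights = np.full((3, 3), 1.0 / 3.0)       # doubly stochastic averaging weights
estimates = np.diag([6.0, 6.0, 6.0])       # row i: player i initially claims everything
for _ in range(50):
    estimates = weights @ estimates                        # bargain with neighbors
    estimates = np.array([project_onto_core(e) for e in estimates])

print(estimates.round(3))  # all rows agree on an allocation in the core
```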

    Abrasive Wear Resistance of the Iron- and WC-based Hardfaced Coatings Evaluated with Scratch Test Method

    Abrasive wear is one of the most common types of wear, which makes abrasive wear resistance very important in many industries. Hardfacing is considered a useful and economical way to improve the performance of components subjected to severe abrasive wear conditions, with a wide range of applicable filler materials. The abrasive wear resistance of three different hardfaced coatings (two iron-based and one WC-based), intended for repair of the impact plates of a ventilation mill, was investigated and compared. Abrasive wear tests were carried out using a scratch tester under dry conditions. Three normal loads of 10, 50, and 100 N and a constant sliding speed of 4 mm/s were used. The scratch test was chosen as a relatively easy and quick test method. Wear mechanism analysis showed a significant influence of the hardfaced coating structure, which, along with hardness, determined the coatings' abrasive wear resistance.

    Information Leakage Games

    We consider a game-theoretic setting to model the interplay between attacker and defender in the context of information flow, and to reason about their optimal strategies. In contrast with standard game theory, in our games the utility of a mixed strategy is a convex function of the distribution on the defender's pure actions, rather than the expected value of their utilities. Nevertheless, the important properties of game theory, notably the existence of a Nash equilibrium, still hold for our (zero-sum) leakage games, and we provide algorithms to compute the corresponding optimal strategies. As is typical in (simultaneous) game theory, the optimal strategy is usually mixed, i.e., probabilistic, for both the attacker and the defender. From the point of view of information flow, this was to be expected in the case of the defender, since it is well known that randomization at the level of the system design may help to reduce information leaks. Regarding the attacker, however, this appears to be the first work in the information-flow literature to prove formally that in certain cases the optimal attack strategy is necessarily probabilistic.
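
    To make the convex-utility point concrete, the sketch below computes a defender's optimal mixed strategy in a tiny zero-sum leakage game in which the attacker's payoff is the posterior Bayes vulnerability of the channel obtained by mixing the defender's pure channels, a quantity that is convex in the defender's distribution rather than linear. The prior, the two-action sets, and the channel matrices are toy assumptions, and the epigraph reformulation is one generic way to solve the min-max, not necessarily the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch: compute a defender's optimal mixed strategy when the
# attacker's payoff is the posterior Bayes vulnerability of the channel
# obtained by mixing the defender's pure channels -- a convex function of the
# defender's distribution. All channels and the prior are toy assumptions.

prior = np.array([0.5, 0.5])  # prior over two secret values

# channels[a][d][x, y] = P(observation y | secret x) when the attacker plays a
# and the defender plays d (toy matrices).
channels = np.array([
    [[[1.0, 0.0], [0.0, 1.0]],     # a=0, d=0: fully leaking channel
     [[0.5, 0.5], [0.5, 0.5]]],    # a=0, d=1: leak-free channel
    [[[0.5, 0.5], [0.5, 0.5]],     # a=1, d=0: leak-free channel
     [[0.9, 0.1], [0.2, 0.8]]],    # a=1, d=1: partially leaking channel
])


def vulnerability(a, delta):
    """Posterior Bayes vulnerability of the defender-mixed channel against action a."""
    mixed = np.tensordot(delta, channels[a], axes=1)   # sum_d delta[d] * C[a][d]
    return np.sum(np.max(prior[:, None] * mixed, axis=0))


# Epigraph form of the min-max: variables z = (delta_0, delta_1, t),
# minimize t subject to t >= vulnerability(a, delta) for every attacker action a.
cons = [{"type": "eq", "fun": lambda z: z[:2].sum() - 1.0}]
for a in range(channels.shape[0]):
    cons.append({"type": "ineq",
                 "fun": lambda z, a=a: z[2] - vulnerability(a, z[:2])})
res = minimize(lambda z: z[2], x0=np.array([0.5, 0.5, 1.0]),
               bounds=[(0.0, 1.0), (0.0, 1.0), (None, None)], constraints=cons)
print(res.x[:2], res.x[2])  # approximately optimal mixed defence and game value
```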

    Incremental proximal methods for large scale convex optimization

    We consider the minimization of a sum ∑_{i=1}^{m} f_i(x) consisting of a large number of convex component functions f_i. For this problem, incremental methods consisting of gradient or subgradient iterations applied to single components have proved very effective. We propose new incremental methods, consisting of proximal iterations applied to single components, as well as combinations of gradient, subgradient, and proximal iterations. We provide a convergence and rate of convergence analysis of a variety of such methods, including some that involve randomization in the selection of components. We also discuss applications in a few contexts, including signal processing and inference/machine learning.
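
    As a concrete instance, the sketch below runs a randomized incremental proximal method on a least-squares sum, where each component f_i(x) = 0.5 (a_i·x - b_i)^2 admits a closed-form proximal step. The data, stepsize rule, and iteration count are illustrative assumptions, not taken from the report.

```python
import numpy as np

# Minimal sketch of a randomized incremental proximal method for minimizing
# sum_i f_i(x) with f_i(x) = 0.5 * (a_i . x - b_i)^2, so each proximal step
# has a closed form. The least-squares data and stepsize rule are assumptions.

rng = np.random.default_rng(1)
m, n = 200, 5
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)


def prox_step(z, a_i, b_i, alpha):
    """Minimizer of alpha * 0.5 * (a_i . x - b_i)^2 + 0.5 * ||x - z||^2."""
    return z - alpha * a_i * (a_i @ z - b_i) / (1.0 + alpha * (a_i @ a_i))


x = np.zeros(n)
for k in range(1, 20001):
    i = rng.integers(m)       # incremental: pick a single component at random
    alpha = 10.0 / k          # diminishing stepsize
    x = prox_step(x, A[i], b[i], alpha)

print(np.linalg.norm(x - x_true))  # small: iterates approach the least-squares solution
```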