
    On the Convergence of Decentralized Gradient Descent

    Consider the consensus problem of minimizing $f(x)=\sum_{i=1}^n f_i(x)$, where each $f_i$ is known only to one individual agent $i$ out of a connected network of $n$ agents. All the agents shall collaboratively solve this problem and obtain the solution, with data exchanges restricted to neighboring agents. Such algorithms avoid the need for a fusion center, offer better network load balance, and improve data privacy. We study the decentralized gradient descent method, in which each agent $i$ updates its variable $x_{(i)}$, a local approximation to the unknown variable $x$, by combining the average of its neighbors' variables with the negative gradient step $-\alpha \nabla f_i(x_{(i)})$. The iteration is
    $$x_{(i)}(k+1) \gets \sum_{\text{neighbor } j \text{ of } i} w_{ij}\, x_{(j)}(k) - \alpha \nabla f_i(x_{(i)}(k)), \quad \text{for each agent } i,$$
    where the averaging coefficients form a symmetric doubly stochastic matrix $W=[w_{ij}] \in \mathbb{R}^{n \times n}$. We analyze the convergence of this iteration and derive its convergence rate, assuming that each $f_i$ is proper, closed, convex, and lower bounded, that $\nabla f_i$ is Lipschitz continuous with constant $L_{f_i}$, and that the stepsize $\alpha$ is fixed. Provided that $\alpha < O(1/L_h)$, where $L_h=\max_i\{L_{f_i}\}$, the objective error at the averaged solution, $f(\frac{1}{n}\sum_i x_{(i)}(k))-f^*$, decreases at a rate of $O(1/k)$ until it reaches $O(\alpha)$. If the $f_i$ are further (restricted) strongly convex, then both $\frac{1}{n}\sum_i x_{(i)}(k)$ and each $x_{(i)}(k)$ converge to the global minimizer $x^*$ at a linear rate until reaching an $O(\alpha)$-neighborhood of $x^*$. We also develop an iteration for decentralized basis pursuit and establish its linear convergence to an $O(\alpha)$-neighborhood of the true unknown sparse signal.
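
    As a concrete illustration of the iteration above, the following is a minimal Python sketch of decentralized gradient descent. The quadratic local objectives $f_i(x)=\frac{1}{2}(x-b_i)^2$, the ring network, the uniform mixing weights, and the stepsize value are all illustrative assumptions, not choices made in the paper.

```python
# Minimal decentralized gradient descent (DGD) sketch.
# Assumed setup: quadratic local objectives f_i(x) = 0.5*(x - b_i)^2 on a
# ring network of n agents; W is symmetric and doubly stochastic.
import numpy as np

n = 5                          # number of agents
b = np.linspace(-1.0, 1.0, n)  # local data; minimizer of sum_i f_i is mean(b)
x = np.zeros(n)                # x[i] is agent i's local copy of the variable

# Symmetric doubly stochastic mixing matrix for a ring: each agent takes a
# uniform average of itself and its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    for j in (i, (i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0

alpha = 0.1                    # fixed stepsize; here each L_{f_i} = 1

def grad(i, xi):
    """Gradient of the local objective f_i at xi."""
    return xi - b[i]

for k in range(300):
    # x_{(i)}(k+1) = sum_j w_ij * x_{(j)}(k) - alpha * grad f_i(x_{(i)}(k))
    x = W @ x - alpha * np.array([grad(i, x[i]) for i in range(n)])

# The averaged solution approaches the global minimizer (here mean(b) = 0),
# while each local copy stays within an O(alpha) neighborhood of it,
# as the abstract describes.
print(x.mean(), x)
```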

    "Multiproduct Duopoly with Vertical Differentiation"

    The paper investigates two-stage competition in a vertically differentiated industry, where each firm produces an arbitrary number of similar qualities and sells them to heterogeneous consumers. We show that, when unit costs of quality are increasing and quadratic, each firm has an incentive to provide an interval of qualities. This finding is in sharp contrast to the single-quality outcome that arises when market coverage is exogenously determined. We also show that allowing for an interval of qualities intensifies competition, lowers each firm's profit, and raises consumer surplus and social welfare in comparison to the single-quality duopoly.

    On the Linear Convergence of the ADMM in Decentralized Consensus Optimization

    In decentralized consensus optimization, a connected network of agents collaboratively minimizes the sum of their local objective functions over a common decision variable, with information exchange restricted to neighboring agents. To this end, one can first reformulate the problem and then apply the alternating direction method of multipliers (ADMM). The method alternates iterative computation at the individual agents with information exchange between neighbors. This approach has been observed to converge quickly and is deemed powerful. This paper establishes its linear convergence rate for the decentralized consensus optimization problem with strongly convex local objective functions. The theoretical convergence rate is given explicitly in terms of the network topology, the properties of the local objective functions, and the algorithm parameter. This result is not only a performance guarantee but also a guideline toward accelerating the ADMM convergence.
    Comment: 11 figures, IEEE Transactions on Signal Processing, 201
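
    To make the algorithmic structure concrete, here is a hedged Python sketch of one standard simplified form of decentralized consensus ADMM, in which each agent keeps a primal copy and a local dual variable and communicates only with its neighbors. The quadratic local objectives (which make the primal minimization closed-form), the ring network, and the penalty parameter are illustrative assumptions; the exact update form analyzed in the paper may differ.

```python
# Sketch of a common simplified decentralized consensus ADMM update.
# Each agent i holds a primal copy x[i] and a local dual variable lam[i];
# quadratic objectives f_i(x) = 0.5*(x - b_i)^2 give a closed-form x-update.
import numpy as np

n = 5
b = np.linspace(-1.0, 1.0, n)                # local data (illustrative)
neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]  # ring graph
deg = np.array([len(nb) for nb in neighbors], dtype=float)

c = 1.0                                      # ADMM penalty parameter (assumed)
x = np.zeros(n)                              # primal variables
lam = np.zeros(n)                            # local dual variables

for k in range(100):
    nb_sum = np.array([x[nb].sum() for nb in neighbors])
    # Primal update: solve, per agent i,
    #   grad f_i(x_i) + lam_i + 2*c*deg_i*x_i - c*(deg_i*x_i_old + nb_sum_i) = 0,
    # which is linear in x_i for f_i(x) = 0.5*(x - b_i)^2.
    x = (b - lam + c * (deg * x + nb_sum)) / (1.0 + 2.0 * c * deg)
    # Dual update: lam_i += c * (deg_i*x_i - sum of the new neighbor values).
    nb_sum = np.array([x[nb].sum() for nb in neighbors])
    lam += c * (deg * x - nb_sum)

# With strongly convex local objectives, the local copies agree and converge
# linearly to the consensus minimizer, here x* = mean(b).
print(x, b.mean())
```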
    • …