
    Optimal Scaling of a Gradient Method for Distributed Resource Allocation

    We consider a class of weighted gradient methods for distributed resource allocation over a network. Each node of the network is associated with a local variable and a convex cost function; the sum of the variables (resources) across the network is fixed. Starting with a feasible allocation, each node updates its local variable in proportion to the differences between the marginal costs of itself and its neighbors. We focus on how to choose the proportional weights on the edges (scaling factors for the gradient method) to make this distributed algorithm converge and on how to make the convergence as fast as possible. We give sufficient conditions on the edge weights for the algorithm to converge monotonically to the optimal solution; these conditions have the form of a linear matrix inequality. We give some simple, explicit methods to choose the weights that satisfy these conditions. We derive a guaranteed convergence rate for the algorithm and find the weights that minimize this rate by solving a semidefinite program. Finally, we extend the main results to problems with general equality constraints and problems with block-separable objective functions.
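The node update described above can be sketched in a few lines: each edge transfers resource in proportion to the difference of the endpoints' marginal costs, which preserves the total. The graph, the quadratic costs, and the uniform edge weight below are illustrative assumptions, not the paper's setup; the paper's LMI conditions and SDP-optimal weights are not reproduced here.

```python
import numpy as np

# Hypothetical instance: path graph on 4 nodes, quadratic costs f_i(x) = (x - b_i)^2.
b = np.array([1.0, 3.0, 2.0, 4.0])
n = len(b)
edges = [(0, 1), (1, 2), (2, 3)]
c = 8.0                          # total resource; sum of x stays fixed at c

grad = lambda x: 2.0 * (x - b)   # marginal cost at each node

x = np.full(n, c / n)            # feasible start: equal split
w = 0.2                          # uniform edge weight, chosen small enough to converge
for _ in range(500):
    g = grad(x)
    for i, j in edges:           # symmetric exchange across each edge preserves sum(x)
        d = w * (g[i] - g[j])
        x[i] -= d
        x[j] += d

# Closed-form optimum for quadratic costs: all marginal costs equal.
x_star = b + (c - b.sum()) / n
assert np.allclose(x, x_star, atol=1e-6)
assert abs(x.sum() - c) < 1e-9
```

For quadratic costs the iteration is linear, so convergence reduces to a spectral-radius condition on the weighted Laplacian, which is the simplest instance of the weight conditions the abstract refers to.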

    Minimizing Polarization and Disagreement in Social Networks

    The rise of social media and online social networks has been a disruptive force in society. Opinions are increasingly shaped by interactions on online social media, and social phenomena including disagreement and polarization are now tightly woven into everyday life. In this work we initiate the study of the following question: given $n$ agents, each with its own initial opinion that reflects its core value on a topic, and an opinion dynamics model, what is the structure of a social network that minimizes polarization and disagreement simultaneously? This question is central to recommender systems: should a recommender system prefer a link suggestion between two online users with similar mindsets in order to keep disagreement low, or between two users with different opinions in order to expose each to the other's viewpoint of the world, and decrease overall levels of polarization? Our contributions include a mathematical formalization of this question as an optimization problem and an exact, time-efficient algorithm. We also prove that there always exists a network with $O(n/\epsilon^2)$ edges that is a $(1+\epsilon)$ approximation to the optimum. For a fixed graph, we additionally show how to optimize our objective function over the agents' innate opinions in polynomial time. We perform an empirical study of our proposed methods on synthetic and real-world data that verifies their value as mining tools to better understand the trade-off between disagreement and polarization. We find that there is a lot of room to reduce both polarization and disagreement in real-world networks; for instance, on a Reddit network where users exchange comments on politics, our methods achieve a $\sim 60\,000$-fold reduction in polarization and disagreement. Comment: 19 pages (accepted, WWW 2018)
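A minimal sketch of the underlying quantities, assuming the Friedkin–Johnsen opinion dynamics commonly used in this line of work: innate opinions $s$ produce expressed opinions $z = (I+L)^{-1}s$, polarization is the variance of the expressed opinions, and disagreement is the Laplacian quadratic form over the edges. The toy graph and opinion vector are invented for illustration.

```python
import numpy as np

# Toy graph (adjacency matrix) and innate opinions s; both are hypothetical.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
s = np.array([0.9, 0.8, 0.1, -1.0])

# Friedkin-Johnsen equilibrium: expressed opinions z solve (I + L) z = s.
z = np.linalg.solve(np.eye(4) + L, s)

zc = z - z.mean()                        # mean-centered expressed opinions
polarization = zc @ zc                   # spread of expressed opinions around the mean
disagreement = z @ L @ z                 # sum over edges of (z_i - z_j)^2

# Identity used in this setting: polarization + disagreement equals
# s_c^T (I + L)^{-1} s_c, with s_c the mean-centered innate opinions.
sc = s - s.mean()
index = sc @ np.linalg.solve(np.eye(4) + L, sc)
assert np.isclose(polarization + disagreement, index)
```

The identity holds because $(I+L)^{-1}$ fixes the all-ones vector, so centering commutes with the dynamics; it is this combined index that the network-design problem in the abstract minimizes.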

    A Connectedness Constraint for Learning Sparse Graphs

    Graphs are naturally sparse objects that are used to study many problems involving networks, for example, distributed learning and graph signal processing. In some cases, the graph is not given, but must be learned from the problem and available data. Often it is desirable to learn sparse graphs. However, making a graph highly sparse can split the graph into several disconnected components, leading to several separate networks. The main difficulty is that connectedness is often treated as a combinatorial property, making it hard to enforce in, e.g., convex optimization problems. In this article, we show how connectedness of undirected graphs can be formulated as an analytical property and can be enforced as a convex constraint. We especially show how the constraint relates to the distributed consensus problem and graph Laplacian learning. Using simulated and real data, we perform experiments to learn sparse and connected graphs from data. Comment: 5 pages, presented at the European Signal Processing Conference (EUSIPCO) 201
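The analytical formulation of connectedness hinges on a standard spectral fact: an undirected graph is connected if and only if the second-smallest eigenvalue of its Laplacian (the algebraic connectivity $\lambda_2$) is strictly positive, and a lower bound on $\lambda_2$ can be written as a convex (semidefinite) constraint. A small check of that fact, on two invented graphs:

```python
import numpy as np

def algebraic_connectivity(A):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]

# Connected path graph on 3 nodes.
path = np.array([[0, 1, 0],
                 [1, 0, 1],
                 [0, 1, 0]], dtype=float)

# Same nodes with one edge removed: node 2 is isolated, graph disconnected.
split = np.array([[0, 1, 0],
                  [1, 0, 0],
                  [0, 0, 0]], dtype=float)

assert algebraic_connectivity(path) > 1e-9    # lambda_2 > 0 <=> connected
assert algebraic_connectivity(split) < 1e-9   # lambda_2 = 0 <=> disconnected
```

In a convex graph-learning program this becomes a constraint of the form $L + \tfrac{1}{n}\mathbf{1}\mathbf{1}^\top \succeq \epsilon I$, which keeps $\lambda_2 \geq \epsilon$ while the objective promotes sparsity; the exact constraint used in the paper may differ in detail.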

    Geometric Bounds on the Fastest Mixing Markov Chain

    In the Fastest Mixing Markov Chain problem, we are given a graph $G = (V, E)$ and desire the discrete-time Markov chain with smallest mixing time $\tau$ subject to having equilibrium distribution uniform on $V$ and non-zero transition probabilities only across edges of the graph. It is well known that the mixing time $\tau_{\mathsf{RW}}$ of the lazy random walk on $G$ is characterised by the edge conductance $\Phi$ of $G$ via Cheeger's inequality: $\Phi^{-1} \lesssim \tau_{\mathsf{RW}} \lesssim \Phi^{-2} \log |V|$. Analogously, we characterise the fastest mixing time $\tau^\star$ via a Cheeger-type inequality but for a different geometric quantity, namely the vertex conductance $\Psi$ of $G$: $\Psi^{-1} \lesssim \tau^\star \lesssim \Psi^{-2} (\log |V|)^2$. This characterisation forbids fast mixing for graphs with small vertex conductance. To bypass this fundamental barrier, we consider Markov chains on $G$ with equilibrium distribution which need not be uniform, but rather only $\varepsilon$-close to uniform in total variation. We show that it is always possible to construct such a chain with mixing time $\tau \lesssim \varepsilon^{-1} (\operatorname{diam} G)^2 \log |V|$. Finally, we discuss analogous questions for continuous-time and time-inhomogeneous chains. Comment: 31 pages
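The two quantities that Cheeger's inequality relates can be computed directly on a small example: the spectral gap of the lazy random walk (whose inverse controls mixing) and the edge conductance $\Phi$ (minimum over cuts of crossing edges divided by the smaller side's volume). The graph below is invented for illustration; this is not the paper's construction, just a sketch of the quantities it bounds.

```python
import numpy as np
from itertools import combinations

# Hypothetical 4-node graph (adjacency matrix).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
P = 0.5 * (np.eye(4) + A / d[:, None])   # lazy random walk: stay put w.p. 1/2

# The walk is reversible, so D^{1/2} P D^{-1/2} is symmetric with the same
# spectrum; its second-largest eigenvalue gives the relaxation time.
S = np.diag(np.sqrt(d)) @ P @ np.diag(1 / np.sqrt(d))
lam2 = np.sort(np.linalg.eigvalsh((S + S.T) / 2))[-2]
relaxation_time = 1.0 / (1.0 - lam2)

def edge_conductance(A, d):
    """Brute-force Phi: minimum over all cuts of cut edges / smaller volume."""
    n = len(d)
    best = np.inf
    for r in range(1, n):
        for subset in combinations(range(n), r):
            mask = np.zeros(n, dtype=bool)
            mask[list(subset)] = True
            cut = A[mask][:, ~mask].sum()
            best = min(best, cut / min(d[mask].sum(), d[~mask].sum()))
    return best

phi = edge_conductance(A, d)             # worst cut here: {0,1} vs {2,3} -> 0.5
assert abs(phi - 0.5) < 1e-9
assert lam2 < 1 and relaxation_time >= 1.0
```

Brute-force enumeration of cuts is exponential and only viable for toy graphs; the point of the abstract's results is that these geometric quantities, not any particular computation, dictate how fast any chain on $G$ can mix.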