
    Efficient Bayesian Learning in Social Networks with Gaussian Estimators

    We consider a group of Bayesian agents who try to estimate a state of the world θ through interaction on a social network. Each agent v initially receives a private measurement of θ: a number S_v picked from a Gaussian distribution with mean θ and standard deviation one. Then, in each discrete time iteration, each agent reveals its estimate of θ to its neighbors and, observing its neighbors' actions, updates its belief using Bayes' law. This process aggregates information efficiently, in the sense that all the agents converge to the belief they would have had with access to all the private measurements. We show that this process is computationally efficient, so that each agent's calculation can be easily carried out. We also show that on any graph the process converges after at most 2N·D steps, where N is the number of agents and D is the diameter of the network. Finally, we show that on trees and on distance-transitive graphs the process converges after D steps, and that it preserves privacy, so that agents learn very little about the private signals of most other agents, despite the efficient aggregation of information. Our results extend those in an unpublished manuscript of the first and last authors.
    Comment: Added coauthor. Added proofs for fast convergence on trees and distance-transitive graphs. Also, now analyzing a notion of privacy.
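    As a concrete illustration of the setup, a minimal sketch (not the paper's algorithm) of the first round of this process appears below. With a flat prior and unit-variance Gaussian signals, an agent's exact Bayesian estimate after seeing its neighbors' initial announcements is simply the average of the signals it has seen; the graph, the value of θ, and all numbers are hypothetical. Later rounds are the hard part the paper addresses, since agents must track which signals are already mixed into each neighbor's estimate.

```python
import random

theta = 1.5  # unknown state of the world (hypothetical value)

# hypothetical 5-agent path graph, adjacency list as a dict
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

# each agent v receives a private signal S_v ~ N(theta, 1)
signal = {v: random.gauss(theta, 1.0) for v in neighbors}

# round 0: each agent's estimate is its own signal, which it announces
estimate = dict(signal)

# round 1: having seen its neighbors' announcements (= their signals),
# the exact Bayesian posterior mean is the average of all signals seen
estimate = {
    v: (signal[v] + sum(signal[u] for u in neighbors[v]))
       / (1 + len(neighbors[v]))
    for v in neighbors
}
print(estimate)  # on this path graph, full aggregation takes about D rounds
```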

    Exponentially Fast Parameter Estimation in Networks Using Distributed Dual Averaging

    In this paper we present an optimization-based view of distributed parameter estimation and observational social learning in networks. Agents receive a sequence of random, independent and identically distributed (i.i.d.) signals, each of which individually may not be informative about the underlying true state, but which together are globally informative enough to make the true state identifiable. Using an optimization-based characterization of Bayesian learning as proximal stochastic gradient descent (with the Kullback-Leibler divergence from a prior as the proximal function), we show how to efficiently use a distributed, online variant of Nesterov's dual averaging method to solve the estimation problem using purely local information. When the true state is globally identifiable and the network is connected, we prove that agents eventually learn the true parameter under a randomized gossip scheme. We demonstrate that, with high probability, the convergence is exponentially fast, with a rate that depends on the KL divergence of observations under the true state from observations under the second-likeliest state. Furthermore, our work highlights the possibility of learning under continuous adaptation of the network, a consequence of employing a constant, unit stepsize in the algorithm.
    Comment: 6 pages. To appear in Conference on Decision and Control 201
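    Under the paper's assumptions (finite hypothesis set, KL proximal function, unit stepsize), the dual averaging update takes a simple log-linear form: each agent mixes neighbors' dual variables (cumulative log-likelihoods) with stochastic weights, adds the log-likelihood of its fresh signal, and exponentiates to form a belief. The two-agent sketch below is a hedged caricature with hypothetical likelihoods and a fixed doubly stochastic matrix W standing in for randomized gossip.

```python
import math
import random

# lik[i][k][s]: probability that agent i observes signal s when the state is k
lik = [
    [[0.5, 0.5], [0.5, 0.5]],  # agent 0: locally uninformative
    [[0.7, 0.3], [0.3, 0.7]],  # agent 1: informative, so jointly identifiable
]
W = [[0.5, 0.5], [0.5, 0.5]]   # doubly stochastic mixing weights
z = [[0.0, 0.0], [0.0, 0.0]]   # dual variables (cumulative log-likelihoods)
TRUE = 1                       # the true state

for t in range(500):
    # each agent draws an i.i.d. signal from its true-state distribution
    s = [0 if random.random() < lik[i][TRUE][0] else 1 for i in range(2)]
    # mix neighbors' duals, then add the fresh log-likelihood (unit stepsize)
    z = [[sum(W[i][j] * z[j][k] for j in range(2)) + math.log(lik[i][k][s[i]])
          for k in range(2)]
         for i in range(2)]

# beliefs are the normalized exponentials of the dual variables
for i, zi in enumerate(z):
    m = max(zi)
    w = [math.exp(v - m) for v in zi]
    print("agent", i, [round(v / sum(w), 4) for v in w])  # mass on the true state
```

    The rate claim in the abstract shows up here as the gap z_i[TRUE] - z_i[k] growing linearly in t, at a slope governed by the network-averaged KL divergence between the observation distributions.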

    Complexity of Bayesian Belief Exchange over a Network

    Many important real-world decision-making problems involve group interactions among individuals with purely informational externalities; such situations arise, for example, in jury deliberations, expert committees, and medical diagnosis. In this paper, we use the framework of iterated eliminations to model the decision problem, as well as the thinking process of a Bayesian agent, in a group decision/discussion scenario. We model the purely informational interactions of rational agents in a group, where they receive private information and act based upon that information while also observing other people's beliefs. As the Bayesian agent attempts to infer the true state of the world from her sequence of observations, which include her neighbors' beliefs as well as her own private signal, she recursively refines her belief about the signals that other players could have observed and the beliefs they would hold, under the assumption that the other players are also rational. We further analyze the computational complexity of Bayesian belief formation in groups and show that it is NP-hard. We also investigate the factors underlying this computational complexity and show how belief calculations simplify in special network structures or in cases with strong inherent symmetries. We finally give insights about the statistical efficiency (optimality) of the beliefs and its relation to computational efficiency.
    United States. Army Research Office (grant MURI W911NF-12-1-0509); National Science Foundation (U.S.), Computing and Communication Foundations (grant CCF-1665252); United States. Department of Defense (ONR grant N00014-17-1-2598); National Science Foundation (U.S.) (grant DMS-1737944)
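    To see where the hardness comes from, the toy sketch below (a brute-force caricature, not the paper's iterated-elimination framework) makes the enumeration explicit: to interpret others' actions, an exact Bayesian agent must weigh every profile of private signals the rest of the group could have observed, and the number of profiles grows as 2^(n-1) even for binary signals. All numbers are hypothetical.

```python
from itertools import product

q = 0.7  # P(signal = state); binary state, conditionally i.i.d. binary signals
n = 4    # group size; the agent enumerates 2**(n-1) signal profiles

def lik(signals, state):
    """Likelihood of a profile of binary signals under a binary state."""
    l = 1.0
    for s in signals:
        l *= q if s == state else 1 - q
    return l

def action(signal):
    """Myopic first-round announcement; with q > 0.5 it equals the signal."""
    return signal

my_signal, observed = 1, (1, 0, 1)  # hypothetical private signal and actions

# every profile of the others' signals consistent with their observed actions
consistent = [p for p in product([0, 1], repeat=n - 1)
              if tuple(action(s) for s in p) == observed]

# exact Bayesian posterior on state 1: sum likelihoods over consistent profiles
num = sum(lik((my_signal,) + p, 1) for p in consistent)
den = num + sum(lik((my_signal,) + p, 0) for p in consistent)
print(num / den)
```

    In this first round each action reveals the underlying signal outright, so the consistent set is small; in later rounds actions only partially reveal signals, the consistent set stays exponentially large, and this kind of enumeration is what drives the hardness the paper establishes.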

    Bayesian Quadratic Network Game Filters

    A repeated network game where agents have quadratic utilities that depend on information externalities -- an unknown underlying state -- as well as payoff externalities -- the actions of all other agents in the network -- is considered. Agents play Bayesian Nash equilibrium strategies with respect to their beliefs on the state of the world and the actions of all other nodes in the network. These beliefs are refined over subsequent stages based on the observed actions of neighboring peers. This paper introduces the Quadratic Network Game (QNG) filter that agents can run locally to update their beliefs, select corresponding optimal actions, and eventually learn a sufficient statistic of the network's state. The QNG filter is demonstrated on a Cournot market competition game and on a coordination game that implements navigation of an autonomous team.
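    The quadratic-utility structure is what makes such a filter possible: best responses are linear in expectations. As a hedged sketch (not the QNG filter itself), consider a two-firm Cournot market where firm i earns a_i(θ - a_i - a_j), so its best response is a_i = (E_i[θ] - E_i[a_j])/2. The toy below forms each firm's expectation of θ from a private Gaussian signal and iterates best responses as if those expectations were commonly known; the actual QNG filter instead propagates Gaussian beliefs over the state and others' actions as neighbors' plays are observed. All numbers are hypothetical.

```python
import random

theta = 10.0  # unknown demand intercept (hypothetical value)

# each firm's posterior mean of theta after one unit-variance private signal
mu = [random.gauss(theta, 1.0), random.gauss(theta, 1.0)]

# iterate the linear best responses a_i = (mu_i - a_j) / 2
a = [0.0, 0.0]
for _ in range(50):
    a = [(mu[0] - a[1]) / 2, (mu[1] - a[0]) / 2]

print(a)  # converges to a_i = (2*mu_i - mu_j) / 3, linear in the beliefs
```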

    Opinion Exchange Dynamics

    Get PDF
    We survey a range of models of opinion exchange. From the introduction: "The exchange of opinions between individuals is a fundamental social interaction... Moreover, many models in this field are an excellent playground for mathematicians, especially those working in probability, algorithms and combinatorics. The goal of this survey is to introduce such models to mathematicians, and especially to those working in discrete mathematics, information theory, optimization, probability and statistics."
    Comment: 62 pages. arXiv admin note: substantial text overlap with arXiv:1207.589