
    Hypothesis Testing in Feedforward Networks with Broadcast Failures

    Consider a countably infinite set of nodes, which sequentially make decisions between two given hypotheses. Each node takes a measurement of the underlying truth, observes the decisions from some immediate predecessors, and makes a decision between the given hypotheses. We consider two classes of broadcast failures: 1) each node broadcasts a decision to the other nodes, subject to random erasure in the form of a binary erasure channel; 2) each node broadcasts a randomly flipped decision to the other nodes in the form of a binary symmetric channel. We are interested in whether there exists a decision strategy consisting of a sequence of likelihood ratio tests such that the node decisions converge in probability to the underlying truth. In both cases, we show that if each node only learns from a bounded number of immediate predecessors, then there does not exist a decision strategy such that the decisions converge in probability to the underlying truth. However, in case 1, we show that if each node learns from an unboundedly growing number of predecessors, then the decisions converge in probability to the underlying truth, even when the erasure probabilities converge to 1. We also derive the convergence rate of the error probability. In case 2, we show that if each node learns from all of its previous predecessors, then the decisions converge in probability to the underlying truth when the flipping probabilities of the binary symmetric channels are bounded away from 1/2. In the case where the flipping probabilities converge to 1/2, we derive a necessary condition on the convergence rate of the flipping probabilities such that the decisions still converge to the underlying truth. We also explicitly characterize the relationship between the convergence rate of the error probability and the convergence rate of the flipping probabilities.
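    For intuition, the following is a minimal Monte Carlo sketch of the bounded-memory setting in case 1: each node combines a private measurement with the (possibly erased) decisions of a few immediate predecessors. The Gaussian observation model, the window size, and the simple vote-weight fusion rule are illustrative assumptions, not the paper's actual likelihood-ratio-test strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the paper):
# under H1 the measurement mean is +1; under H0 it is 0.
TRUE_HYPOTHESIS = 1          # underlying truth
MEAN_SHIFT = 1.0             # signal mean under H1
NUM_NODES = 2000             # number of sequential nodes simulated
ERASURE_PROB = 0.3           # binary erasure channel parameter
WINDOW = 5                   # number of immediate predecessors observed
VOTE_WEIGHT = 0.5            # weight given to each observed decision (assumption)

def log_likelihood_ratio(x):
    """log p(x | H1) / p(x | H0) for a unit-variance Gaussian mean shift."""
    return MEAN_SHIFT * x - MEAN_SHIFT ** 2 / 2.0

decisions = []
errors = 0
for n in range(NUM_NODES):
    x = rng.normal(MEAN_SHIFT * TRUE_HYPOTHESIS, 1.0)   # private measurement
    llr = log_likelihood_ratio(x)

    # Observe the last WINDOW decisions through a binary erasure channel:
    # each broadcast decision is lost independently with probability ERASURE_PROB.
    observed = [d for d in decisions[-WINDOW:] if rng.random() > ERASURE_PROB]

    # Crude fusion rule (illustration only): treat each surviving decision as a
    # fixed-weight vote added to the node's own log-likelihood ratio.
    score = llr + VOTE_WEIGHT * sum(1 if d == 1 else -1 for d in observed)

    decision = 1 if score > 0 else 0
    decisions.append(decision)
    errors += (decision != TRUE_HYPOTHESIS)

print(f"empirical error rate over {NUM_NODES} nodes: {errors / NUM_NODES:.3f}")
```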

    Opinion fluctuations and disagreement in social networks

    We study a tractable opinion dynamics model that generates long-run disagreements and persistent opinion fluctuations. Our model involves an inhomogeneous stochastic gossip process of continuous opinion dynamics in a society consisting of two types of agents: regular agents, who update their beliefs according to the information they receive from their social neighbors, and stubborn agents, who never update their opinions. When the society contains stubborn agents with different opinions, the belief dynamics never lead to a consensus (among the regular agents). Instead, beliefs in the society fail to converge almost surely: the belief profile keeps fluctuating in an ergodic fashion and converges in law to a non-degenerate random vector. The structure of the network and the location of the stubborn agents within it shape the opinion dynamics. The expected belief vector evolves according to an ordinary differential equation coinciding with the Kolmogorov backward equation of a continuous-time Markov chain with absorbing states corresponding to the stubborn agents. It converges to a harmonic vector, with every regular agent's value being the weighted average of its neighbors' values and with boundary conditions given by the stubborn agents' values. Expected cross-products of the agents' beliefs allow for a similar characterization in terms of coupled Markov chains on the network. We prove that, in large-scale societies which are highly fluid, meaning that the product of the mixing time of the Markov chain on the graph describing the social network and the relative size of the linkages to stubborn agents vanishes as the population size grows large, a condition of homogeneous influence emerges, whereby the stationary beliefs' marginal distributions of most of the regular agents have approximately equal first and second moments.
    Comment: 33 pages, accepted for publication in Mathematics of Operations Research
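    The following is a minimal discrete-time sketch of a pairwise gossip process with two stubborn agents holding opposing opinions; the ring topology, trust weight, and step count are assumptions chosen for illustration, and the paper's actual model is a continuous-time inhomogeneous gossip process. The point it illustrates is that regular agents' opinions keep fluctuating rather than reaching consensus.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small illustrative network (an assumption): a ring of 10 agents where
# agents 0 and 5 are stubborn with opposing opinions 0.0 and 1.0.
N = 10
stubborn = {0: 0.0, 5: 1.0}
opinions = rng.random(N)
for i, v in stubborn.items():
    opinions[i] = v

neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}

TRUST = 0.5          # weight placed on the contacted neighbor's opinion (assumption)
STEPS = 50_000

trajectory = []
for t in range(STEPS):
    i = rng.integers(N)                      # a uniformly random agent wakes up
    j = rng.choice(neighbors[i])             # ... and contacts a random neighbor
    if i not in stubborn:                    # regular agents average; stubborn agents never move
        opinions[i] = (1 - TRUST) * opinions[i] + TRUST * opinions[j]
    trajectory.append(opinions.copy())

traj = np.array(trajectory[STEPS // 2:])     # discard a burn-in period
print("long-run mean of each agent's opinion:", np.round(traj.mean(axis=0), 2))
print("long-run std  of each agent's opinion:", np.round(traj.std(axis=0), 2))
# Regular agents keep a strictly positive standard deviation: their beliefs
# fluctuate ergodically instead of settling on a consensus value.
```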

    Exponentially Fast Parameter Estimation in Networks Using Distributed Dual Averaging

    In this paper we present an optimization-based view of distributed parameter estimation and observational social learning in networks. Agents receive a sequence of random, independent and identically distributed (i.i.d.) signals, each of which individually may not be informative about the underlying true state, but which together are globally informative enough to make the true state identifiable. Using an optimization-based characterization of Bayesian learning as proximal stochastic gradient descent (with the Kullback-Leibler divergence from a prior as the proximal function), we show how to efficiently use a distributed, online variant of Nesterov's dual averaging method to solve the estimation problem with purely local information. When the true state is globally identifiable and the network is connected, we prove that agents eventually learn the true parameter using a randomized gossip scheme. We demonstrate that, with high probability, the convergence is exponentially fast, with a rate dependent on the KL divergence of observations under the true state from observations under the second most likely state. Furthermore, our work highlights the possibility of learning under continuous adaptation of the network, which is a consequence of employing a constant, unit step size in the algorithm.
    Comment: 6 pages, to appear in the Conference on Decision and Control 2013
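    The following is a minimal sketch of a dual-averaging-style update with a KL proximal term and unit step size: each agent mixes its accumulated log-likelihood scores with its neighbors' and adds the score of its latest private signal, and the resulting belief is a softmax of the accumulated scores. The candidate parameters, the Bernoulli observation model, and the fixed doubly stochastic mixing matrix (used here in place of the paper's randomized gossip scheme) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative setup (assumptions, not the paper's experiments): 4 agents on a
# ring estimate which of 3 candidate coin biases generated their observations.
THETAS = np.array([0.3, 0.5, 0.7])    # candidate parameters
TRUE_IDX = 2                          # index of the underlying true state
N_AGENTS, N_STATES, T = 4, len(THETAS), 300

# Doubly stochastic mixing matrix for a 4-cycle (lazy Metropolis-style weights).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

def log_likelihood(obs):
    """Log-likelihood of one Bernoulli observation under each candidate theta."""
    return obs * np.log(THETAS) + (1 - obs) * np.log(1 - THETAS)

# Dual variables: accumulated, network-mixed log-likelihood scores per agent.
z = np.zeros((N_AGENTS, N_STATES))
for t in range(T):
    obs = rng.random(N_AGENTS) < THETAS[TRUE_IDX]            # i.i.d. private signals
    z = W @ z + np.array([log_likelihood(o) for o in obs])   # mix, then add local score

# With a KL proximal term and unit step size, the primal belief is a softmax of z.
beliefs = np.exp(z - z.max(axis=1, keepdims=True))
beliefs /= beliefs.sum(axis=1, keepdims=True)
print("each agent's belief over candidate parameters:\n", np.round(beliefs, 3))
# Beliefs concentrate on THETAS[TRUE_IDX] = 0.7 at an exponential rate in T.
```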