
    On Non-Bayesian Social Learning

    We study a model of information aggregation and social learning recently proposed by Jadbabaie, Sandroni, and Tahbaz-Salehi, in which individual agents try to learn the correct state of the world by iteratively updating their beliefs using private observations and the beliefs of their neighbors. An individual agent's private signals may not be informative enough to reveal the unknown state. As a result, agents share their beliefs with others in their social neighborhood to learn from each other. At every time step each agent receives a private signal and computes a Bayesian posterior as an intermediate belief. The intermediate belief is then averaged with the beliefs of her neighbors to form the individual's belief at the next time step. We find a set of minimal sufficient conditions under which the agents learn the unknown state and reach consensus on their beliefs, without any assumption on the private signal structure. The key enabler is a result showing that, under this update, agents eventually forecast the indefinite future correctly.
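
    As a rough illustration of the update rule described in this abstract, the sketch below (our own Python rendering; the names `beliefs`, `likelihoods`, and the row-stochastic matrix `weights` are assumptions, not the paper's notation) has each agent form a Bayesian posterior from her private signal and then average it with her neighbors' current beliefs.

```python
import numpy as np

def social_learning_step(beliefs, signals, likelihoods, weights):
    """One round of the averaging-based update (hedged sketch).

    beliefs:     (n_agents, n_states) array, rows sum to 1
    signals:     length-n_agents array of observed signal indices
    likelihoods: (n_agents, n_states, n_signals) signal structures
    weights:     (n_agents, n_agents) row-stochastic mixing matrix;
                 weights[i, i] is the self-weight on the Bayesian part
    """
    n_agents, _ = beliefs.shape
    new_beliefs = np.zeros_like(beliefs)
    for i in range(n_agents):
        # Bayesian intermediate belief from the private signal
        post = beliefs[i] * likelihoods[i, :, signals[i]]
        post /= post.sum()
        # Average the intermediate belief with neighbors' current beliefs
        neighbor_part = sum(weights[i, j] * beliefs[j]
                            for j in range(n_agents) if j != i)
        new_beliefs[i] = weights[i, i] * post + neighbor_part
    return new_beliefs
```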

    Learning without Recall by Random Walks on Directed Graphs

    We consider a network of agents that aim to learn some unknown state of the world using private observations and exchange of beliefs. At each time, agents observe private signals generated according to the true unknown state. An agent might not be able to distinguish the true state based only on her private observations; this occurs when some other states are observationally equivalent to the true state from her perspective. To overcome this shortcoming, agents must communicate with each other to benefit from local observations. We propose a model in which, at each time, every agent selects one of her neighbors at random and refines her opinion using her private signal and the prior of that particular neighbor. The proposed rule can be thought of as that of a Bayesian agent who cannot recall the priors on which other agents base their inferences. This learning-without-recall approach preserves some aspects of Bayesian inference while remaining computationally tractable. By establishing a correspondence with a random walk on the network graph, we prove that under the described protocol agents learn the truth exponentially fast in the almost sure sense. The asymptotic rate is expressed as the sum of the relative entropies between the signal structures of the agents, weighted by the stationary distribution of the random walk.
    Comment: 6 pages, to appear in Conference on Decision and Control 201
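
    A minimal sketch of the sampling-based rule described above, under assumed names (`neighbors[i]` lists agent i's out-neighbors on the directed graph; `likelihoods` encodes each agent's signal structure); this is an illustration of the idea, not the paper's exact algorithm.

```python
import numpy as np

def without_recall_step(beliefs, signals, likelihoods, neighbors, rng):
    """One step of a learning-without-recall style update (hedged sketch).

    Each agent i picks one neighbor j at random from her out-neighborhood,
    treats j's current belief as her own prior, and performs a Bayesian
    update with her private signal.
    """
    n_agents, _ = beliefs.shape
    new_beliefs = np.empty_like(beliefs)
    for i in range(n_agents):
        j = rng.choice(neighbors[i])                       # random neighbor on the directed graph
        post = beliefs[j] * likelihoods[i, :, signals[i]]  # neighbor's prior x own likelihood
        new_beliefs[i] = post / post.sum()
    return new_beliefs
```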

    Adaptive social learning

    This paper investigates the learning foundations of economic models of social learning. We pursue the prevalent idea in economics that rational play is the outcome of a dynamic process of adaptation. Our learning approach makes it possible to clarify when and why the prevalent rational (equilibrium) view of social learning is likely to capture observed regularities in the field. In particular, it enables us to address the issue of individual and interactive knowledge. We argue that knowledge about the private belief distribution is unlikely to be shared in most social learning contexts. Absent this mutual knowledge, we show that the long-run outcome of the adaptive process favors non-Bayesian rational play.
    Keywords: social learning; informational herding; adaptation; analogies; non-Bayesian updating

    Exponentially Fast Parameter Estimation in Networks Using Distributed Dual Averaging

    In this paper we present an optimization-based view of distributed parameter estimation and observational social learning in networks. Agents receive a sequence of random, independent and identically distributed (i.i.d.) signals, each of which individually may not be informative about the underlying true state, but which together are globally informative enough to make the true state identifiable. Using an optimization-based characterization of Bayesian learning as proximal stochastic gradient descent (with the Kullback-Leibler divergence from a prior as the proximal function), we show how to efficiently use a distributed, online variant of Nesterov's dual averaging method to solve the estimation problem with purely local information. When the true state is globally identifiable and the network is connected, we prove that agents eventually learn the true parameter using a randomized gossip scheme. We demonstrate that, with high probability, the convergence is exponentially fast with a rate that depends on the KL divergence of observations under the true state from observations under the second likeliest state. Furthermore, our work highlights the possibility of learning under continuous adaptation of the network, which is a consequence of employing a constant, unit stepsize for the algorithm.
    Comment: 6 pages, to appear in Conference on Decision and Control 201
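
    For illustration only, here is a simplified rendering of a distributed dual-averaging belief update of the kind described above, with unit stepsize and a doubly stochastic mixing matrix `mix`; variable names and the exact bookkeeping are our assumptions rather than the paper's algorithm as stated. With a KL proximal term and unit stepsize, the primal recovery step reduces to a log-linear (softmax) combination of the prior and the accumulated log-likelihoods.

```python
import numpy as np

def dual_averaging_step(duals, signals, likelihoods, mix, prior):
    """One gossip-plus-local-update round (hedged sketch).

    duals: (n_agents, n_states) accumulated negative log-likelihoods
    mix:   (n_agents, n_agents) doubly stochastic gossip/mixing matrix
    prior: (n_states,) common prior over states
    """
    # Gossip step: average dual variables with neighbors
    duals = mix @ duals
    # Local step: accumulate the negative log-likelihood of the new signal
    for i in range(duals.shape[0]):
        duals[i] -= np.log(likelihoods[i, :, signals[i]])
    # Primal recovery: belief proportional to prior * exp(-dual)
    logits = np.log(prior) - duals
    beliefs = np.exp(logits - logits.max(axis=1, keepdims=True))
    beliefs /= beliefs.sum(axis=1, keepdims=True)
    return duals, beliefs
```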

    Applications of Repeated Games in Wireless Networks: A Survey

    A repeated game is an effective tool to model interactions and conflicts among players aiming to achieve their objectives on a long-term basis. Contrary to static noncooperative games, which model an interaction among players in only one period, in repeated games the interactions of players repeat for multiple periods; the players thus become aware of other players' past behaviors and their future benefits, and adapt their behavior accordingly. In wireless networks, conflicts among wireless nodes can lead to selfish behaviors, resulting in poor network performance and detrimental individual payoffs. In this paper, we survey the applications of repeated games in different wireless networks. The main goal is to demonstrate the use of repeated games to encourage wireless nodes to cooperate, thereby improving network performance and avoiding network disruption due to selfish behaviors. Furthermore, various problems in wireless networks and variations of repeated game models, together with the corresponding solutions, are discussed in this survey. Finally, we outline some open issues and future research directions.
    Comment: 32 pages, 15 figures, 5 tables, 168 references
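
    The cooperation-through-repetition idea can be illustrated with a generic repeated prisoner's dilemma (not tied to any particular wireless-network model in the survey); the payoff matrix and strategies below are illustrative assumptions.

```python
def repeated_game_payoffs(strategy_a, strategy_b, payoff, rounds):
    """Play a two-player stage game repeatedly; strategies map the
    opponent's previous action ('C'/'D', or None in the first round)
    to the next action. Returns cumulative payoffs (illustrative sketch)."""
    total_a = total_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        a, b = strategy_a(last_b), strategy_b(last_a)
        pa, pb = payoff[(a, b)]
        total_a, total_b = total_a + pa, total_b + pb
        last_a, last_b = a, b
    return total_a, total_b

# Example: tit-for-tat against an always-defecting node in a standard PD
pd = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
      ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
tit_for_tat = lambda prev: 'C' if prev in (None, 'C') else 'D'
always_defect = lambda prev: 'D'
print(repeated_game_payoffs(tit_for_tat, always_defect, pd, 10))
```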

    Beliefs in Decision-Making Cascades

    This work explores a social learning problem with agents having nonidentical noise variances and mismatched beliefs. We consider an N-agent binary hypothesis test in which each agent sequentially makes a decision based not only on a private observation, but also on preceding agents' decisions. In addition, the agents have their own beliefs instead of the true prior, and have nonidentical noise variances in the private signal. We focus on the Bayes risk of the last agent, where preceding agents are selfish. We first derive the optimal decision rule by recursive belief update and conclude, counterintuitively, that beliefs deviating from the true prior could be optimal in this setting. The effect of nonidentical noise levels in the two-agent case is also considered, and analytical properties of the optimal belief curves are given. Next, we consider a predecessor selection problem in which the subsequent agent of a certain belief chooses a predecessor from a set of candidates with varying beliefs. We characterize the decision region for choosing such a predecessor and argue that a subsequent agent with a belief deviating from the true prior often ends up selecting a suboptimal predecessor, indicating the need for a social planner. Lastly, we discuss an augmented intelligence design problem that uses a model of human behavior from cumulative prospect theory and investigate its near-optimality and suboptimality.
    Comment: final version, to appear in IEEE Transactions on Signal Processing
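
    A minimal two-agent sketch of this kind of cascade, under an assumed Gaussian observation model (mean +1 under H1, -1 under H0) with `belief1`, `belief2` denoting the agents' possibly mismatched subjective priors on H1; this is our illustration of the setting, not the paper's exact construction.

```python
import numpy as np
from scipy.stats import norm

def two_agent_cascade(y1, y2, sigma1, sigma2, belief1, belief2):
    """Two-agent binary cascade in Gaussian noise (hedged sketch)."""
    # Agent 1: likelihood-ratio test using her own (possibly wrong) prior
    llr1 = 2 * y1 / sigma1**2 + np.log(belief1 / (1 - belief1))
    d1 = int(llr1 > 0)                      # public decision of agent 1

    # Agent 2 knows agent 1's rule, so she can score the observed decision:
    # the test above is equivalent to thresholding y1 at t1
    t1 = -(sigma1**2 / 2) * np.log(belief1 / (1 - belief1))
    p_d1_h1 = 1 - norm.cdf(t1, loc=+1, scale=sigma1) if d1 else norm.cdf(t1, loc=+1, scale=sigma1)
    p_d1_h0 = 1 - norm.cdf(t1, loc=-1, scale=sigma1) if d1 else norm.cdf(t1, loc=-1, scale=sigma1)

    # Agent 2: combine her private signal, her belief, and the evidence in d1
    llr2 = (2 * y2 / sigma2**2
            + np.log(belief2 / (1 - belief2))
            + np.log(p_d1_h1 / p_d1_h0))
    d2 = int(llr2 > 0)
    return d1, d2
```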

    Social learning with local interactions

    We study a simple dynamic model of social learning with local informational externalities. There is a large population of agents, who repeatedly have to choose one of two reversible actions, each of which is optimal in one of two unknown states of the world. Each agent chooses rationally, on the basis of private information received through a symmetric binary signal about the state, as well as the observation of the actions chosen by their nearest neighbours. Actions can be updated at revision opportunities that agents receive in a random sequential order. Strategies are stationary, in that they depend neither on time nor on location. We show that: if agents receive equally informative signals and observe both neighbours, then the social learning process is not adequate and the process of actions converges exponentially fast to a configuration where some agents are permanently wrong; if agents are unequally informed, in that their signal is either fully informative or fully uninformative (both with positive probability), and observe one neighbour, then the social learning process is adequate and everybody eventually chooses the action that is correct given the state. Convergence, however, obtains very slowly, namely at rate √t. We relate these findings to the literature on social learning and discuss the efficiency of the information transmission mechanism under local interaction.
    Keywords: social learning; Bayesian learning; local informational externalities; path dependence; consensus; clustering; convergence rates
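
    The dynamics can be simulated with a short scaffold like the one below; the revision rule used here (follow your own signal unless both nearest neighbours agree on the other action) is an illustrative assumption, not the stationary equilibrium strategy derived in the paper.

```python
import numpy as np

def simulate_local_learning(n_agents, q, true_state, n_steps, rng):
    """Agents on a circle with binary actions and random sequential
    revision opportunities (hedged simulation sketch).

    q: accuracy of each agent's symmetric binary private signal.
    Returns the fraction of agents holding the correct action at the end.
    """
    # Private signals: correct with probability q
    signals = np.where(rng.random(n_agents) < q, true_state, 1 - true_state)
    actions = signals.copy()
    for _ in range(n_steps):
        i = rng.integers(n_agents)          # random sequential revision
        left = actions[(i - 1) % n_agents]
        right = actions[(i + 1) % n_agents]
        if left == right:
            actions[i] = left               # defer to unanimous neighbours
        else:
            actions[i] = signals[i]         # otherwise follow own signal
    return np.mean(actions == true_state)
```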