On Non-Bayesian Social Learning
We study a model of information aggregation and social learning recently
proposed by Jadbabaie, Sandroni, and Tahbaz-Salehi, in which individual agents
try to learn a correct state of the world by iteratively updating their beliefs
using private observations and beliefs of their neighbors. No individual
agent's private signal might be informative enough to reveal the unknown state.
As a result, agents share their beliefs with others in their social
neighborhood to learn from each other. At every time step each agent receives a
private signal, and computes a Bayesian posterior as an intermediate belief.
The intermediate belief is then averaged with the belief of neighbors to form
the individual's belief at next time step. We find a set of minimal sufficient
conditions under which the agents will learn the unknown state and reach
consensus on their beliefs without any assumption on the private signal
structure. The key enabler is a result that shows that using this update,
agents will eventually forecast the indefinite future correctly.
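The update described above can be sketched in code. The sketch below is illustrative, not the authors' implementation: all names, array shapes, and the assumption of a row-stochastic weight matrix are ours. Each agent forms a Bayesian posterior from its private signal and then linearly averages that posterior with its neighbors' current beliefs.

```python
import numpy as np

def update_beliefs(beliefs, signals, likelihoods, weights):
    """One step of the non-Bayesian social learning update (illustrative sketch).

    beliefs:     (n_agents, n_states) array; each row a probability vector.
    signals:     (n_agents,) array of observed signal indices.
    likelihoods: (n_agents, n_states, n_signals) array; agent i's signal
                 likelihoods under each state.
    weights:     (n_agents, n_agents) row-stochastic matrix; weights[i, j]
                 is the weight agent i places on agent j's belief.
    """
    n_agents, _ = beliefs.shape
    new_beliefs = np.empty_like(beliefs)
    for i in range(n_agents):
        # Intermediate belief: Bayesian posterior from agent i's private signal.
        post = beliefs[i] * likelihoods[i][:, signals[i]]
        post /= post.sum()
        # Average own posterior with neighbors' (prior-step) beliefs.
        new_beliefs[i] = weights[i, i] * post + sum(
            weights[i, j] * beliefs[j] for j in range(n_agents) if j != i
        )
    return new_beliefs
```

Note that an agent whose private signal is uninformative (uniform likelihoods) still moves toward the truth over repeated steps through the averaging term, which is the mechanism behind learning without any single agent's signal being revealing.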
Consistency of Feature Markov Processes
We are studying long term sequence prediction (forecasting). We approach this
by investigating criteria for choosing a compact useful state representation.
The state is supposed to summarize useful information from the history. We want
a method that is asymptotically consistent in the sense that it will provably
eventually choose only among alternatives that satisfy an optimality property
related to the criterion used. We extend our work to the case where there is
side information that one can take advantage of and, furthermore, we briefly
discuss the active setting where an agent takes actions to achieve desirable
outcomes.
Algorithmic Identification of Probabilities
The problem is to identify a probability associated with a set of natural
numbers, given an infinite data sequence of elements from the set. If the given
sequence is drawn i.i.d. and the probability mass function involved (the
target) belongs to a computably enumerable (c.e.) or co-computably enumerable
(co-c.e.) set of computable probability mass functions, then there is an
algorithm to almost surely identify the target in the limit. The technical tool
is the strong law of large numbers. If the set is finite and the elements of
the sequence are dependent while the sequence is typical in the sense of
Martin-L\"of for at least one measure belonging to a c.e. or co-c.e. set of
computable measures, then there is an algorithm to identify in the limit a
computable measure for which the sequence is typical (there may be more than
one such measure). The technical tool is the theory of Kolmogorov complexity.
We give the algorithms and consider the associated predictions.
Non-Bayesian Social Learning, Second Version
We develop a dynamic model of opinion formation in social networks. Relevant information is spread throughout the network in such a way that no agent has enough data to learn a payoff-relevant parameter. Individuals engage in communication with their neighbors in order to learn from their experiences. However, instead of incorporating the views of their neighbors in a fully Bayesian manner, agents use a simple updating rule which linearly combines their personal experience and the views of their neighbors (even though the neighbors' views may be quite inaccurate). This non-Bayesian learning rule is motivated by the formidable complexity required to fully implement Bayesian updating in networks. We show that, under mild assumptions, repeated interactions lead agents to successfully aggregate information and to learn the true underlying state of the world. This result holds in spite of the apparent naïveté of agents' updating rule, the agents' need for information from sources (i.e., other agents) whose existence they may not be aware of, the possibility that the most persuasive agents in the network are precisely those least informed and with the worst prior views, and the assumption that no agent can tell whether their own views or their neighbors' views are more accurate.

Keywords: social networks, learning, information aggregation
Successive Standardization of Rectangular Arrays
In this note we illustrate and develop further with mathematics and examples,
the work on successive standardization (or normalization) that is studied
earlier by the same authors in Olshen and Rajaratnam (2010) and Olshen and
Rajaratnam (2011). Thus, we deal with successive iterations applied to
rectangular arrays of numbers, where to avoid technical difficulties an array
has at least three rows and at least three columns. Without loss of generality,
an iteration
begins with operations on columns: first subtract the mean of each column; then
divide by its standard deviation. The iteration continues with the same two
operations done successively for rows. These four operations applied in
sequence complete one iteration; one then iterates repeatedly. In Olshen and
Rajaratnam (2010) it was argued that if arrays are
made up of real numbers, then the set for which convergence of these successive
iterations fails has Lebesgue measure 0. The limiting array has row and column
means 0, row and column standard deviations 1. A basic result on convergence
given in Olshen and Rajaratnam (2010) is true, though the argument in Olshen
and Rajaratnam (2010) is faulty. The result is stated in the form of a theorem
here, and the argument for the theorem is correct. Moreover, many graphics
given in Olshen and Rajaratnam (2010) suggest that, for all but a set of arrays
of Lebesgue measure 0, convergence is very rapid, eventually exponentially fast
in the number of iterations. Because we learned this set of
rules from Bradley Efron, we call it "Efron's algorithm". More importantly, the
rapidity of convergence is illustrated by numerical examples.
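The four-operation iteration described above is easy to state in code. The sketch below is our own illustration of the procedure (function name and iteration count are ours); following the abstract, columns are standardized first, then rows, and we use the population standard deviation, though the authors may use the sample version.

```python
import numpy as np

def successive_standardize(x, n_iter=100):
    """Alternately standardize columns then rows of a rectangular array
    (illustrative sketch; the array should have >= 3 rows and >= 3 columns).

    One iteration = four operations: subtract column means, divide by
    column standard deviations, then the same two operations on rows.
    """
    x = np.asarray(x, dtype=float).copy()
    for _ in range(n_iter):
        # Columns: subtract the mean of each column, divide by its std.
        x = (x - x.mean(axis=0)) / x.std(axis=0)
        # Rows: the same two operations, done for each row.
        x = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
    return x
```

Since the row step is performed last, each row of the output has mean 0 and standard deviation 1 exactly; the column means and standard deviations converge to 0 and 1 as the iteration proceeds, rapidly in practice for generic input arrays, consistent with the graphics cited above.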