Learning without Recall by Random Walks on Directed Graphs
We consider a network of agents that aim to learn some unknown state of the
world using private observations and exchange of beliefs. At each time, agents
observe private signals generated based on the true unknown state. Each agent
might not be able to distinguish the true state based only on her private
observations. This occurs when some other states are observationally equivalent
to the true state from the agent's perspective. To overcome this shortcoming,
agents must communicate with each other to benefit from local observations. We
propose a model where each agent selects one of her neighbors randomly at each
time. Then, she refines her opinion using her private signal and the prior of
that particular neighbor. The proposed rule can be thought of as a Bayesian
agent who cannot recall the priors based on which other agents make inferences.
This learning without recall approach preserves some aspects of the Bayesian
inference while being computationally tractable. By establishing a
correspondence with a random walk on the network graph, we prove that under the
described protocol, agents learn the truth exponentially fast in the almost
sure sense. The asymptotic rate is expressed as the sum of the relative
entropies between the signal structures of every agent weighted by the
stationary distribution of the random walk.

Comment: 6 pages, to appear in the Conference on Decision and Control 201
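The rate expression stated in words above can be made concrete with a small numerical sketch. Assuming the rate takes the form of a stationarily weighted sum of per-agent relative entropies between the signal likelihoods under the true state and a wrong state (all numbers below are illustrative, not taken from the paper):

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of a row-stochastic matrix P
    (left eigenvector for eigenvalue 1, normalized to sum to 1)."""
    vals, vecs = np.linalg.eig(P.T)
    k = np.argmin(np.abs(vals - 1.0))
    pi = np.real(vecs[:, k])
    return pi / pi.sum()

def kl(p, q):
    """Relative entropy D(p || q) for distributions with full support."""
    return float(np.sum(p * np.log(p / q)))

# Three agents on a directed cycle: at each time, each agent adopts
# the prior of its single out-neighbor.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
pi = stationary_distribution(P)  # uniform on the cycle

# Signal likelihoods l_i(signal | state): two states, binary signals
# (illustrative numbers, not from the paper); state 0 is the truth.
lik = [np.array([[0.7, 0.3], [0.4, 0.6]]),   # agent 0: informative
       np.array([[0.5, 0.5], [0.5, 0.5]]),   # agent 1: uninformative
       np.array([[0.9, 0.1], [0.2, 0.8]])]   # agent 2: very informative

# Asymptotic rate against the wrong state 1: relative entropies of
# every agent weighted by the stationary distribution.
rate = sum(pi[i] * kl(lik[i][0], lik[i][1]) for i in range(3))
print(round(rate, 4))  # → 0.4432
```

Note that the uninformative agent contributes zero to the rate; the network still learns because the random walk visits the informative agents with positive stationary weight.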
Common learning
Consider two agents who learn the value of an unknown parameter by observing a sequence of private signals. The signals are independent and identically distributed across time but not necessarily across agents. We show that when each agent's signal space is finite, the agents will commonly learn the value of the parameter, that is, that the true value of the parameter will become approximate common knowledge. The essential step in this argument is to express the expectation of one agent's signals, conditional on those of the other agent, in terms of a Markov chain. This allows us to invoke a contraction mapping principle ensuring that if one agent's signals are close to those expected under a particular value of the parameter, then that agent expects the other agent's signals to be even closer to those expected under the parameter value. In contrast, if the agents' observations come from a countably infinite signal space, then this contraction mapping property fails. We show by example that common learning can fail in this case.
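The contraction step can be illustrated numerically. In this minimal sketch (the joint signal distribution is illustrative, not from the paper), the conditional distribution of one agent's signal given the other's is treated as a Markov transition matrix, and its Dobrushin coefficient bounds how much total-variation distance survives a pass through that channel:

```python
import numpy as np

# Joint distribution of (agent 1's signal, agent 2's signal) under one
# parameter value; illustrative numbers with full support.
joint = np.array([[0.30, 0.10],
                  [0.15, 0.45]])

# Conditional channel P(agent 2's signal | agent 1's signal):
# a row-stochastic matrix, i.e. a Markov transition kernel.
M = joint / joint.sum(axis=1, keepdims=True)

# Dobrushin coefficient: worst-case contraction factor in total
# variation; strictly below 1 because the conditionals overlap.
delta = 0.5 * max(np.abs(M[i] - M[j]).sum()
                  for i in range(2) for j in range(2))

# Any two beliefs about agent 1's signal are shrunk by at least a
# factor delta (in total variation) when pushed through the channel.
p = np.array([1.0, 0.0])
q = np.array([0.0, 1.0])
before = 0.5 * np.abs(p - q).sum()
after = 0.5 * np.abs(p @ M - q @ M).sum()
print(round(delta, 3), round(after, 3))  # → 0.5 0.5
assert after <= delta * before + 1e-12
```

With a countably infinite signal space the coefficient can equal 1 (the conditionals need not overlap uniformly), which is the mechanism behind the failure of common learning in the example.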
Unifying an Introduction to Artificial Intelligence Course through Machine Learning Laboratory Experiences
This paper presents work on a collaborative project funded by the National Science Foundation that incorporates machine learning as a unifying theme to teach fundamental concepts typically covered in introductory Artificial Intelligence courses. The project involves the development of an adaptable framework for the presentation of core AI topics. This is accomplished through the development, implementation, and testing of a suite of adaptable, hands-on laboratory projects that can be closely integrated into the AI course. Through the design and implementation of learning systems that enhance commonly deployed applications, our model acknowledges that intelligent systems are best taught through their application to challenging problems. The goals of the project are to (1) enhance the student learning experience in the AI course, (2) increase student interest and motivation to learn AI by providing a framework for the presentation of the major AI topics that emphasizes the strong connection between AI and computer science and engineering, and (3) highlight the bridge that machine learning provides between AI technology and modern software engineering.
Learning without Recall: A Case for Log-Linear Learning
We analyze a model of learning and belief formation in networks in which
agents follow Bayes rule yet they do not recall their history of past
observations and cannot reason about how other agents' beliefs are formed. Instead, they make rational inferences about their observations, which include a sequence of independent and identically distributed private signals as well as the beliefs of their neighboring agents at each time. Fully rational agents
would successively apply Bayes rule to the entire history of observations. This
leads to forebodingly complex inferences due to lack of knowledge about the
global network structure that causes those observations. To address these
complexities, we consider a Learning without Recall model, which in addition to
providing a tractable framework for analyzing the behavior of rational agents
in social networks, can also provide a behavioral foundation for the variety of
non-Bayesian update rules in the literature. We present the implications of
various choices for time-varying priors of such agents and how this choice
affects learning and its rate.

Comment: in 5th IFAC Workshop on Distributed Estimation and Control in Networked Systems (NecSys 2015)
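The abstract does not spell out the update rule; a common log-linear scheme in this literature has each agent geometrically average its neighbors' beliefs and then tilt by the likelihood of its private signal. A minimal sketch with illustrative numbers (the weights, likelihoods, and signal pattern below are assumptions, not taken from the paper):

```python
import numpy as np

def log_linear_update(beliefs, weights, lik):
    # beliefs: (n_agents, n_states), rows are probability vectors
    # weights: (n_agents, n_agents), row-stochastic network weights
    # lik:     (n_agents, n_states), likelihood of each agent's
    #          current private signal under every candidate state
    # Geometric average of own and neighbors' beliefs, tilted by the
    # private-signal likelihood, then renormalized.
    log_b = weights @ np.log(beliefs) + np.log(lik)
    b = np.exp(log_b)
    return b / b.sum(axis=1, keepdims=True)

# Two agents, two states; state 0 is the truth. For simplicity each
# agent sees the same signal realization every period.
beliefs = np.array([[0.5, 0.5],
                    [0.5, 0.5]])
weights = np.array([[0.5, 0.5],
                    [0.5, 0.5]])
lik = np.array([[0.8, 0.2],    # agent 0's signal favors state 0
                [0.5, 0.5]])   # agent 1's signal is uninformative

for _ in range(10):
    beliefs = log_linear_update(beliefs, weights, lik)
print(np.round(beliefs, 3))  # both agents concentrate on state 0
```

Working in log-beliefs is what makes the rule tractable: the update is linear there, so an agent needs no memory of past observations and no model of how its neighbors' beliefs were formed.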
Common Learning
Consider two agents who learn the value of an unknown parameter by observing a sequence of private signals. The signals are independent and identically distributed across time but not necessarily across agents. We show that when each agent's signal space is finite, the agents will commonly learn its value, i.e., that the true value of the parameter will become approximate common knowledge. The argument rests on a contraction mapping property of each agent's expectation of the other agent's signals. In contrast, if the agents' observations come from a countably infinite signal space, then this contraction mapping property fails. We show by example that common learning can fail in this case.

Keywords: common learning, common belief, private signals, private beliefs