Learning without Recall: A Case for Log-Linear Learning
We analyze a model of learning and belief formation in networks in which
agents follow Bayes' rule yet neither recall their history of past
observations nor reason about how other agents' beliefs are formed. Instead,
they make rational inferences about their current observations, which comprise
a sequence of independent and identically distributed private signals as well
as the beliefs of their neighboring agents at each time step. Fully rational
agents would apply Bayes' rule successively to the entire history of
observations, which leads to forbiddingly complex inferences because agents
lack knowledge of the global network structure that generates those
observations. To address these
complexities, we consider a Learning without Recall model, which in addition to
providing a tractable framework for analyzing the behavior of rational agents
in social networks, can also provide a behavioral foundation for the variety of
non-Bayesian update rules in the literature. We examine the implications of
various choices of time-varying priors for such agents and how these choices
affect learning and its rate.

Comment: in 5th IFAC Workshop on Distributed Estimation and Control in
Networked Systems (NecSys 2015)
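In the memoryless regime the abstract describes, the rational update reduces to a log-linear rule: the agent geometrically averages its own and its neighbors' current beliefs, multiplies by the likelihood of its latest private signal, and renormalizes. The sketch below is illustrative only (the function name, the weight convention, and the inclusion of the agent's own belief are assumptions, not the paper's exact rule):

```python
import math

def log_linear_update(self_belief, neighbor_beliefs, signal_likelihoods, weights):
    """One memoryless Bayesian step over a finite state set: geometrically
    average the agent's own belief and its neighbors' current beliefs
    (weights should sum to 1), multiply by the likelihood of the latest
    private signal, and renormalize. Working in logs avoids underflow."""
    beliefs = [self_belief] + neighbor_beliefs  # same order as weights
    n_states = len(self_belief)
    log_post = []
    for k in range(n_states):
        lp = math.log(signal_likelihoods[k])
        for w, b in zip(weights, beliefs):
            lp += w * math.log(b[k])
        log_post.append(lp)
    m = max(log_post)  # stabilize before exponentiating
    unnorm = [math.exp(lp - m) for lp in log_post]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

For example, with a uniform prior, one neighbor, and a private signal that favors state 0, the updated belief shifts toward state 0.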
Reasoning about Independence in Probabilistic Models of Relational Data
We extend the theory of d-separation to cases in which data instances are not
independent and identically distributed. We show that applying the rules of
d-separation directly to the structure of probabilistic models of relational
data inaccurately infers conditional independence. We introduce relational
d-separation, a theory for deriving conditional independence facts from
relational models. We provide a new representation, the abstract ground graph,
that enables a sound, complete, and computationally efficient method for
answering d-separation queries about relational models, and we present
empirical results that demonstrate its effectiveness.

Comment: 61 pages, substantial revisions to formalisms, theory, and related
work
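Classical d-separation, which this work extends to relational models, can be decided via the standard ancestral-graph-plus-moralization construction. The function below is a pure-Python sketch of that classical procedure for ordinary DAGs (the dict-of-parents representation and the name `d_separated` are assumptions), not the paper's relational algorithm:

```python
def d_separated(dag, xs, ys, zs):
    """Check whether node sets xs and ys are d-separated given zs in a DAG.
    dag maps each node to the list of its parents. Method: restrict to the
    ancestral set of xs | ys | zs, moralize (connect co-parents, drop edge
    directions), delete zs, then test undirected reachability."""
    # 1. Ancestral set of all query nodes.
    relevant = set(xs) | set(ys) | set(zs)
    stack, anc = list(relevant), set()
    while stack:
        n = stack.pop()
        if n in anc:
            continue
        anc.add(n)
        stack.extend(dag.get(n, []))
    # 2. Moralize: parent-child edges plus edges between co-parents.
    adj = {n: set() for n in anc}
    for n in anc:
        ps = [p for p in dag.get(n, []) if p in anc]
        for p in ps:
            adj[n].add(p)
            adj[p].add(n)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j])
                adj[ps[j]].add(ps[i])
    # 3. Delete zs and test whether xs can reach ys.
    blocked = set(zs)
    stack = [x for x in xs if x not in blocked]
    seen = set()
    while stack:
        n = stack.pop()
        if n in seen or n in blocked:
            continue
        seen.add(n)
        if n in ys:
            return False
        stack.extend(adj[n] - blocked)
    return True
```

On the collider A -> C <- B, the function reports A and B as d-separated marginally but dependent once C is observed, which is exactly the pattern that naive application to relational structures gets wrong.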
Distributed Learning from Interactions in Social Networks
We consider a network scenario in which agents can evaluate each other
according to a score graph that models some interactions. The goal is to design
a distributed protocol, run by the agents, that allows them to learn their
unknown state among a finite set of possible values. We propose a Bayesian
framework in which scores and states are associated to probabilistic events
with unknown parameters and hyperparameters, respectively. We show that each
agent can learn its state by means of a local Bayesian classifier and a
(centralized) Maximum-Likelihood (ML) estimator of the parameter-hyperparameter
pair, which combines plain ML and Empirical Bayes approaches. By using tools from
graphical models, which allow us to gain insight on conditional dependencies of
scores and states, we provide a relaxed probabilistic model that ultimately
leads to a parameter-hyperparameter estimator amenable to distributed
computation. To highlight the appropriateness of the proposed relaxation, we
demonstrate the distributed estimators on a social interaction set-up for user
profiling.

Comment: This submission is a shorter work (for conference publication) of a
more comprehensive paper, already submitted as arXiv:1706.04081 (under review
for journal publication). In this short submission only one social set-up is
considered and only one of the relaxed estimators is proposed. Moreover, the
exhaustive analysis carried out in the longer manuscript is completely
missing in this version.
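The two-stage structure the abstract describes, a centralized ML fit of the score model followed by local Bayesian classification at each agent, can be illustrated with a toy Bernoulli score model. Everything below is an illustrative assumption (function names, binary scores, known-label training agents); the paper's actual estimator also involves hyperparameters and Empirical Bayes:

```python
import math

def ml_params(scored_agents):
    """Centralized ML step (toy): for each candidate state, estimate the
    Bernoulli parameter of the 0/1 scores received by agents whose state
    is known. scored_agents: list of (state, [0/1 scores]) pairs."""
    ones, tot = {}, {}
    for state, scores in scored_agents:
        ones[state] = ones.get(state, 0) + sum(scores)
        tot[state] = tot.get(state, 0) + len(scores)
    return {s: ones[s] / tot[s] for s in tot}

def classify(scores, p, prior=None):
    """Local Bayesian classifier (toy): each agent picks the state that
    maximizes the (log) posterior of its own score vector under the
    centrally estimated parameters p; prior defaults to uniform."""
    best, best_lp = None, -math.inf
    for s in sorted(p):
        lp = math.log(prior[s]) if prior else 0.0
        for x in scores:
            lp += math.log(p[s] if x == 1 else 1.0 - p[s])
        if lp > best_lp:
            best, best_lp = s, lp
    return best
```

An agent whose incoming scores are mostly positive is then classified into the state whose estimated score distribution makes that pattern most likely.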
Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches
In the past two decades, functional Magnetic Resonance Imaging has been used
to relate neuronal network activity to cognitive processing and behaviour.
Recently this approach has been augmented by algorithms that allow us to infer
causal links between component populations of neuronal networks. Multiple
inference procedures have been proposed to approach this research question but
so far, each method has limitations when it comes to establishing whole-brain
connectivity patterns. In this work, we discuss eight ways to infer causality
in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality,
Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and
Transfer Entropy. We conclude by formulating recommendations for future
directions in this area.
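Of the eight methods listed, Granger causality is the simplest to illustrate concretely: the past of one signal "Granger-causes" another if it improves prediction of that signal beyond its own past. The sketch below is a deliberately minimal lag-1, zero-mean, no-intercept version (all names and simplifications are assumptions, not the review's formulation):

```python
def granger_reduction(x, y):
    """Toy lag-1 Granger check: fit x_t on x_{t-1} alone (restricted),
    then on (x_{t-1}, y_{t-1}) (full), both by closed-form least squares,
    and return the two residual sums of squares. A large drop from the
    restricted to the full RSS suggests y Granger-causes x."""
    xt, xl, yl = x[1:], x[:-1], y[:-1]
    # Restricted model: x_t = a * x_{t-1}
    a = sum(p * q for p, q in zip(xt, xl)) / sum(q * q for q in xl)
    rss_r = sum((p - a * q) ** 2 for p, q in zip(xt, xl))
    # Full model: x_t = a * x_{t-1} + b * y_{t-1}  (2x2 normal equations)
    sxx = sum(q * q for q in xl)
    syy = sum(r * r for r in yl)
    sxy = sum(q * r for q, r in zip(xl, yl))
    sx = sum(p * q for p, q in zip(xt, xl))
    sy = sum(p * r for p, r in zip(xt, yl))
    det = sxx * syy - sxy * sxy
    a2 = (sx * syy - sy * sxy) / det
    b2 = (sy * sxx - sx * sxy) / det
    rss_f = sum((p - a2 * q - b2 * r) ** 2
                for p, q, r in zip(xt, xl, yl))
    return rss_r, rss_f
```

Practical fMRI pipelines use multivariate, multi-lag versions with significance testing; the point here is only the restricted-versus-full comparison at the core of the idea.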