Bayesian Causal Induction
Discovering causal relationships is a hard task, often hindered by the need
for intervention, and often requiring large amounts of data to resolve
statistical uncertainty. However, humans quickly arrive at useful causal
relationships. One possible reason is that humans extrapolate from past
experience to new, unseen situations: that is, they encode beliefs over causal
invariances, allowing for sound generalization from the observations they
obtain from directly acting in the world.
Here we outline a Bayesian model of causal induction where beliefs over
competing causal hypotheses are modeled using probability trees. Based on this
model, we illustrate why, in the general case, we need interventions plus
constraints on our causal hypotheses in order to extract causal information
from our experience.
Comment: 4 pages, 4 figures; 2011 NIPS Workshop on Philosophy and Machine Learning
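The point about interventions can be made concrete with a toy sketch (not the paper's probability-tree construction itself; the two hypotheses and their parameters below are invented for illustration). Two causal hypotheses, A → B and B → A, are built to share the same observational joint, so passive data cannot separate them; data gathered under the intervention do(A=1) can, and a simple Bayesian update concentrates on the true structure:

```python
import random

random.seed(0)

# Two causal hypotheses with the SAME observational joint over binary (A, B):
#   H1: A -> B   with P(A=1)=0.5, P(B=1|A=0)=0.2, P(B=1|A=1)=0.8
#   H2: B -> A   with P(B=1)=0.5, P(A=1|B=0)=0.2, P(A=1|B=1)=0.8
# Observation alone cannot tell them apart; the intervention do(A=1) can:
#   H1 predicts P(B=1 | do(A=1)) = 0.8   (B still depends on A)
#   H2 predicts P(B=1 | do(A=1)) = 0.5   (fixing A leaves B at its marginal)

def likelihood_b_given_do_a1(hyp, b):
    p_b1 = 0.8 if hyp == "H1" else 0.5
    return p_b1 if b == 1 else 1.0 - p_b1

# Simulate a world whose true structure is H1 and update a uniform prior.
posterior = {"H1": 0.5, "H2": 0.5}
for _ in range(50):
    b = 1 if random.random() < 0.8 else 0      # sample B under do(A=1) in H1
    for h in posterior:
        posterior[h] *= likelihood_b_given_do_a1(h, b)
    z = sum(posterior.values())
    posterior = {h: p / z for h, p in posterior.items()}

print(posterior)  # mass concentrates on H1
```

Note that the constraint doing the work here is the assumption that only these two hypotheses are possible; without some such restriction on the hypothesis class, interventional data alone would not suffice either, which is the abstract's point.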
Free Energy and the Generalized Optimality Equations for Sequential Decision Making
The free energy functional has recently been proposed as a variational
principle for bounded rational decision-making, since it instantiates a natural
trade-off between utility gains and information processing costs that can be
axiomatically derived. Here we apply the free energy principle to general
decision trees that include both adversarial and stochastic environments. We
derive generalized sequential optimality equations that not only include the
Bellman optimality equations as a limit case, but also lead to well-known
decision-rules such as Expectimax, Minimax and Expectiminimax. We show how
these decision-rules can be derived from a single free energy principle that
assigns a resource parameter to each node in the decision tree. These resource
parameters express a concrete computational cost that can be measured as the
number of samples needed from the distribution associated with each
node. The free energy principle therefore provides the normative basis for
generalized optimality equations that account for both adversarial and
stochastic environments.
Comment: 10 pages, 2 figures
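A minimal sketch of how a single resource parameter interpolates between these decision rules, assuming the standard free-energy aggregation F = (1/β) log Σᵢ pᵢ exp(β vᵢ) at a node with child values vᵢ and probabilities pᵢ (the specific numbers below are illustrative, not from the paper). Large positive β recovers max (an agent node in Expectimax), large negative β recovers min (an adversary node in Minimax), and β → 0 recovers the expectation (a chance node); applying this node-by-node with per-node β values yields Expectiminimax-style evaluations in the limit:

```python
import math

def free_energy(values, probs, beta):
    """Soft aggregation F = (1/beta) * log sum_i p_i * exp(beta * v_i).
    beta -> +inf recovers max, beta -> -inf recovers min,
    beta -> 0 recovers the expectation (handled explicitly)."""
    if beta == 0.0:
        return sum(p * v for p, v in zip(probs, values))
    m = max(beta * v for v in values)          # log-sum-exp stabilization
    s = sum(p * math.exp(beta * v - m) for p, v in zip(probs, values))
    return (m + math.log(s)) / beta

vals, probs = [1.0, 4.0, 2.0], [0.5, 0.25, 0.25]
print(round(free_energy(vals, probs, 100.0), 1))   # 4.0 : ~max (agent node)
print(round(free_energy(vals, probs, -100.0), 1))  # 1.0 : ~min (adversary node)
print(free_energy(vals, probs, 0.0))               # 2.0 : expectation (chance node)
```

The finite-β values sit strictly between the expectation and the extremes, which is how the resource parameter expresses a bounded amount of computation at that node.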
A Minimum Relative Entropy Principle for Learning and Acting
This paper proposes a method to construct an adaptive agent that is universal
with respect to a given class of experts, where each expert is an agent that
has been designed specifically for a particular environment. This adaptive
control problem is formalized as the problem of minimizing the relative entropy
of the adaptive agent from the expert that is most suitable for the unknown
environment. If the agent is a passive observer, then the optimal solution is
the well-known Bayesian predictor. However, if the agent is active, then its
past actions need to be treated as causal interventions on the I/O stream
rather than ordinary probabilistic conditioning. Here it is shown that the solution
to this new variational problem is given by a stochastic controller called the
Bayesian control rule, which implements adaptive behavior as a mixture of
experts. Furthermore, it is shown that under mild assumptions, the Bayesian
control rule converges to the control law of the most suitable expert.
Comment: 36 pages, 11 figures
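The mixture-of-experts behavior can be sketched in a two-armed bandit (an illustrative setup, not the paper's general formulation; the payoff probabilities and experts below are invented). Each "expert" is tailored to one environment; at every step the agent samples an environment from its posterior and acts with that environment's expert. Because the sampled action is an intervention, only the observed reward enters the Bayes update, and the posterior, hence the acting expert, converges to the one matching the true environment:

```python
import random

random.seed(1)

# Two environments, each with a matched expert:
#   env m: arm m pays reward 1 w.p. 0.7, the other arm w.p. 0.3
#   expert m: always pulls arm m (designed specifically for env m)
PAYOFF = {0: (0.7, 0.3), 1: (0.3, 0.7)}
TRUE_ENV = 1

def likelihood(env, arm, reward):
    p = PAYOFF[env][arm]
    return p if reward == 1 else 1.0 - p

posterior = [0.5, 0.5]
pulls = [0, 0]
for _ in range(200):
    # Sample an environment from the posterior, act with its expert.
    m = 0 if random.random() < posterior[0] else 1
    arm = m                                   # expert m's policy
    pulls[arm] += 1
    reward = 1 if random.random() < PAYOFF[TRUE_ENV][arm] else 0
    # The action is an intervention: only the reward updates the posterior.
    post = [posterior[e] * likelihood(e, arm, reward) for e in (0, 1)]
    z = sum(post)
    posterior = [p / z for p in post]

print(posterior, pulls)  # posterior concentrates on env 1; arm 1 dominates
```

In this bandit setting the sample-from-the-posterior-then-act scheme coincides with Thompson sampling, which gives a familiar reference point for the rule's behavior.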