For whom will the Bayesian agents vote?
Within an agent-based model where moral classifications are socially learned,
we ask if a population of agents behaves in a way that may be compared with
conservative or liberal positions in the real political spectrum. We assume
that agents first experience a formative period, in which they adjust their
learning style acting as supervised Bayesian adaptive learners. The formative
phase is followed by a period of social influence by reinforcement learning. By
comparing data generated by the agents with data from a sample of 15,000 Moral
Foundations questionnaires, we find the following. 1. The number of information
exchanges in the formative phase correlates positively with statistics
identifying liberals in the social influence phase. This is consistent with
recent evidence that connects the dopamine receptor D4-7R gene, political
orientation and early age social clique size. 2. The learning algorithms that
result from the formative phase vary in how they treat novel versus
corroborating information, with more conservative-like agents treating the two
more equally than liberal-like agents do. This is consistent with the correlation
between political affiliation and the Openness personality trait reported in
the literature. 3. Under the increase of a model parameter interpreted as an
external pressure, the statistics of liberal agents resemble more those of
conservative agents, consistent with reports on the consequences of external
threats on measures of conservatism. We also show that in the social influence
phase liberal-like agents readapt much faster than conservative-like agents
when subjected to changes in the relevant set of moral issues. This suggests a
verifiable dynamical criterion for attaching liberal or conservative labels to
groups.

Comment: 31 pages, 5 figures
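The two-phase learning scheme described above can be illustrated with a toy sketch. This is not the authors' actual model: the feature-vector representation of moral issues, the perceptron-style corrective rule, and all parameter values are illustrative assumptions; it only shows how a formative supervised phase with more information exchanges yields agents more closely aligned with their social teacher.

```python
import numpy as np

rng = np.random.default_rng(0)

def formative_phase(teacher_w, n_exchanges, dim=5, lr=0.5):
    """Toy supervised formative phase (illustrative, not the paper's model):
    the agent adjusts its weights toward the teacher's moral classifications
    of randomly drawn issues, one exchange at a time."""
    w = rng.normal(size=dim)
    for _ in range(n_exchanges):
        issue = rng.normal(size=dim)          # a moral issue as a feature vector
        label = np.sign(teacher_w @ issue)    # the teacher's classification
        if np.sign(w @ issue) != label:       # perceptron-style corrective update
            w += lr * label * issue
    return w / np.linalg.norm(w)

def agreement(w, teacher_w, n_test=2000, dim=5):
    """Fraction of fresh issues on which agent and teacher agree."""
    issues = rng.normal(size=(n_test, dim))
    return np.mean(np.sign(issues @ w) == np.sign(issues @ teacher_w))

teacher = rng.normal(size=5)
few = agreement(formative_phase(teacher, 10), teacher)    # short formative phase
many = agreement(formative_phase(teacher, 500), teacher)  # long formative phase
```

In this sketch the number of exchanges plays the role of the formative-phase parameter the abstract correlates with liberal-like statistics; the social-influence reinforcement phase is omitted for brevity.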
A Minimum Relative Entropy Principle for Learning and Acting
This paper proposes a method to construct an adaptive agent that is universal
with respect to a given class of experts, where each expert is an agent that
has been designed specifically for a particular environment. This adaptive
control problem is formalized as the problem of minimizing the relative entropy
of the adaptive agent from the expert that is most suitable for the unknown
environment. If the agent is a passive observer, then the optimal solution is
the well-known Bayesian predictor. However, if the agent is active, then its
past actions need to be treated as causal interventions on the I/O stream
rather than normal probability conditioning. Here it is shown that the solution
to this new variational problem is given by a stochastic controller called the
Bayesian control rule, which implements adaptive behavior as a mixture of
experts. Furthermore, it is shown that under mild assumptions, the Bayesian
control rule converges to the control law of the most suitable expert.

Comment: 36 pages, 11 figures
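A well-known special case of acting as a posterior mixture of experts is Thompson sampling for Bernoulli bandits: each "expert" is the policy that is optimal under one hypothesis about the arm parameters, and the agent acts by sampling a hypothesis from its posterior and following the corresponding expert. The sketch below assumes this bandit setting; the class names and parameters are illustrative, not from the paper.

```python
import random

class PosteriorMixtureAgent:
    """Thompson-sampling instance of acting as a mixture of experts:
    sample arm parameters from the Beta posterior, then act optimally
    under the sampled hypothesis."""
    def __init__(self, n_arms):
        self.alpha = [1] * n_arms   # Beta posterior: successes + 1
        self.beta = [1] * n_arms    # Beta posterior: failures + 1

    def act(self):
        # Draw one hypothesis per arm from the posterior, then follow
        # the expert that is optimal under that sampled hypothesis.
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return samples.index(max(samples))

    def observe(self, arm, reward):
        # Standard Bayesian update; the chosen action itself carries no
        # evidence (it is an intervention), only the observed reward does.
        if reward:
            self.alpha[arm] += 1
        else:
            self.beta[arm] += 1

random.seed(0)
true_p = [0.2, 0.8]                 # arm 1 is the "most suitable expert"
agent = PosteriorMixtureAgent(2)
picks = []
for _ in range(500):
    arm = agent.act()
    agent.observe(arm, random.random() < true_p[arm])
    picks.append(arm)
frac_best = sum(picks[-100:]) / 100  # how often the better arm is chosen late on
```

As the posterior concentrates, the sampled hypotheses increasingly favor the better arm, so the agent's behavior converges to the control law of the most suitable expert, mirroring the convergence result stated in the abstract.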