Action and behavior: a free-energy formulation
We have previously tried to explain perceptual inference and learning under a free-energy principle that pursues Helmholtz’s agenda to understand the brain in terms of energy minimization. It is fairly easy to show that making inferences about the causes of sensory data can be cast as the minimization of a free-energy bound on the likelihood of sensory inputs, given an internal model of how they were caused. In this article, we consider what would happen if the data themselves were sampled to minimize this bound. It transpires that the ensuing active sampling or inference is mandated by ergodic arguments based on the very existence of adaptive agents. Furthermore, it accounts for many aspects of motor behavior, from retinal stabilization to goal-seeking. In particular, it suggests that motor control can be understood as fulfilling prior expectations about proprioceptive sensations. This formulation can explain why adaptive behavior emerges in biological agents and suggests a simple alternative to optimal control theory. We illustrate these points using simulations of oculomotor control and then apply the same principles to cued and goal-directed movements. In short, the free-energy formulation may provide an alternative perspective on motor control that places it in an intimate relationship with perception.
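The core idea, that action descends the same free-energy gradient that perception does, can be sketched in a toy one-dimensional setting. The setup, names, and step size below are our illustrative assumptions, not the paper's code:

```python
import numpy as np

def sensation(a):
    """Hypothetical mapping from action to proprioceptive input."""
    return a  # in this toy world, action directly sets the sensed state

def free_energy(s, mu, precision=1.0):
    """Squared prediction error: a crude free-energy bound under a Gaussian model."""
    return 0.5 * precision * (s - mu) ** 2

mu = 1.0   # prior expectation (e.g. a desired eye position)
a = -0.5   # initial action
lr = 0.1   # gradient step size

for _ in range(200):
    s = sensation(a)
    # dF/da = dF/ds * ds/da = precision * (s - mu) * 1
    a -= lr * (s - mu)

print(free_energy(sensation(a), mu) < 1e-6)  # action has fulfilled the prior
```

Acting to cancel proprioceptive prediction error is what makes the desired state (the prior `mu`) behave like a motor command, with no separate cost function.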
Reinforcement learning or active inference?
This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free-energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming, namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.
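The mountain-car benchmark named above is easy to reproduce. The sketch below uses the standard dynamics (Sutton-and-Barto-style constants) and a reward-free energy-pumping heuristic as a stand-in for the paper's active-inference controller; the policy and parameters are our assumptions:

```python
import numpy as np

def step(pos, vel, force):
    """One step of the classic mountain-car dynamics."""
    vel = np.clip(vel + 0.001 * force - 0.0025 * np.cos(3 * pos), -0.07, 0.07)
    pos = np.clip(pos + vel, -1.2, 0.6)
    if pos == -1.2:
        vel = 0.0  # inelastic collision with the left wall
    return pos, vel

# Reward-free heuristic: always push in the direction of current velocity,
# pumping energy into the oscillation until the car crests the right hill.
pos, vel = -0.5, 0.0
reached = False
for t in range(1000):
    force = 1.0 if vel >= 0 else -1.0
    pos, vel = step(pos, vel, force)
    if pos >= 0.5:   # goal position in the classic formulation
        reached = True
        break
print(reached)
```

The point of the abstract is that such goal-reaching behaviour can fall out of free-energy minimization alone; the heuristic here merely shows the benchmark being solved without any reward signal.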
Spiking neurons with short-term synaptic plasticity form superior generative networks
Spiking networks that perform probabilistic inference have been proposed both
as models of cortical computation and as candidates for solving problems in
machine learning. However, the evidence for spike-based computation being in
any way superior to non-spiking alternatives remains scarce. We propose that
short-term plasticity can provide spiking networks with distinct computational
advantages compared to their classical counterparts. In this work, we use
networks of leaky integrate-and-fire neurons that are trained to perform both
discriminative and generative tasks in their forward and backward information
processing paths, respectively. During training, the energy landscape
associated with their dynamics becomes highly diverse, with deep attractor
basins separated by high barriers. Classical algorithms solve this problem by
employing various tempering techniques, which are both computationally
demanding and require global state updates. We demonstrate how similar results
can be achieved in spiking networks endowed with local short-term synaptic
plasticity. Additionally, we discuss how these networks can even outperform
tempering-based approaches when the training data is imbalanced. We thereby
show how biologically inspired, local, spike-triggered synaptic dynamics based
simply on a limited pool of synaptic resources can allow spiking networks to
outperform their non-spiking relatives.
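The short-term plasticity invoked here can be illustrated with a minimal Tsodyks-Markram-style depression model, in which each spike consumes a fraction of a limited resource pool. Parameter values below are illustrative assumptions:

```python
# Short-term synaptic depression: each presynaptic spike releases a fraction
# U of a limited resource pool x, which recovers with time constant tau_rec.
U, tau_rec, dt = 0.5, 100.0, 1.0   # release fraction, recovery (ms), step (ms)
x = 1.0                            # available synaptic resources
psc = []                           # postsynaptic current amplitudes

spike_times = set(range(0, 200, 20))  # a regular 50 Hz spike train
for t in range(200):
    if t in spike_times:
        psc.append(U * x)   # transmitted amplitude scales with resources
        x -= U * x          # the spike depletes the pool
    x += dt * (1.0 - x) / tau_rec  # exponential recovery toward the full pool

print(psc[0] > psc[-1])  # later spikes transmit less: depression
```

This spike-triggered, purely local modulation of synaptic efficacy is the kind of mechanism the abstract credits with mixing the network's dynamics between deep attractor basins.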
Spontaneous symmetry breaking in generative diffusion models
Generative diffusion models have recently emerged as a leading approach for
generating high-dimensional data. In this paper, we show that the dynamics of
these models exhibit a spontaneous symmetry breaking that divides the
generative dynamics into two distinct phases: 1) a linear steady-state dynamics
around a central fixed-point and 2) an attractor dynamics directed towards the
data manifold. These two "phases" are separated by the change in stability of
the central fixed-point, with the resulting window of instability being
responsible for the diversity of the generated samples. Using both theoretical
and empirical evidence, we show that an accurate simulation of the early
dynamics does not significantly contribute to the final generation, since early
fluctuations are reverted to the central fixed point. To leverage this insight,
we propose a Gaussian late initialization scheme, which significantly improves
model performance, achieving up to 3x FID improvements on fast samplers, while
also increasing sample diversity (e.g., racial composition of generated CelebA
images). Our work offers a new way to understand the generative dynamics of
diffusion models that has the potential to bring about higher performance and
less biased fast samplers.
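A toy rendition of the two-phase picture suggests how a Gaussian late start can stand in for the full reverse integration. For data concentrated at ±1 under an Ornstein-Uhlenbeck forward process the score is analytic, so no trained model is needed; the setup and numbers are our assumptions, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x, t):
    """Score of the marginal 0.5 N(m, s2) + 0.5 N(-m, s2), with m = exp(-t)."""
    m = np.exp(-t)
    s2 = 1.0 - np.exp(-2.0 * t)
    return (m * np.tanh(m * x / s2) - x) / s2

def sample(t_start, n=2000, steps=400, t_end=0.05):
    x = rng.standard_normal(n)               # Gaussian initialization at t_start
    ts = np.linspace(t_start, t_end, steps)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        # explicit Euler step of the probability-flow ODE, backward in time
        x = x + (t0 - t1) * (x + score(x, t0))
    return x

full = sample(t_start=5.0)   # reverse integration from a large T
late = sample(t_start=1.0)   # late start from the Gaussian approximation

# both runs should place nearly all samples away from 0, near +/-1
print(np.mean(np.abs(full) > 0.5), np.mean(np.abs(late) > 0.5))
```

Near the origin the drift is expansive (the unstable fixed point), so trajectories split toward ±1 regardless of where the integration begins, which is why discarding the early, contracting phase costs little.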
Online Discrimination of Nonlinear Dynamics with Switching Differential Equations
How can we recognise whether an observed person is walking or running? We consider a
dynamic environment where observations (e.g. the posture of a person) are
caused by different dynamic processes (walking or running) which are active one
at a time and which may transition from one to another at any time. For this
setup, switching dynamic models have been suggested previously, mostly for
linear and nonlinear dynamics in discrete time. Motivated by basic principles
of computations in the brain (dynamic, internal models) we suggest a model for
switching nonlinear differential equations. The switching process in the model
is implemented by a Hopfield network and we use parametric dynamic movement
primitives to represent arbitrary rhythmic motions. The model generates
observed dynamics by linearly interpolating the primitives weighted by the
switching variables and it is constructed such that standard filtering
algorithms can be applied. In two experiments with synthetic planar motion and
a human motion capture data set we show that inference with the unscented
Kalman filter can successfully discriminate several dynamic processes online.
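A minimal stand-in for the online discrimination described above: two known rhythmic processes generate noisy observations, and a recursive Bayesian posterior tracks which one is active. This is a toy sketch under our own assumptions, not the paper's Hopfield-network/unscented-Kalman-filter construction:

```python
import numpy as np

rng = np.random.default_rng(1)

dt, sigma = 0.05, 0.1
omegas = [1.0, 3.0]   # hypothetical "walking" vs "running" frequencies

def rotate(x, theta):
    """Exact rotation of a 2-D state by angle theta (a rhythmic process)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * x[0] - s * x[1], s * x[0] + c * x[1]])

# generate observations from the fast process (index 1)
x = np.array([1.0, 0.0])
obs = []
for _ in range(200):
    x = rotate(x, dt * omegas[1])
    obs.append(x + sigma * rng.standard_normal(2))

log_post = np.zeros(2)                       # uniform prior over the models
x_hat = [np.array([1.0, 0.0]) for _ in omegas]
for y in obs:
    for k, w in enumerate(omegas):
        pred = rotate(x_hat[k], dt * w)      # predict with model k
        err = y - pred
        log_post[k] -= 0.5 * err @ err / sigma**2  # Gaussian log-likelihood
        x_hat[k] = 0.8 * pred + 0.2 * y      # crude state correction

print(int(np.argmax(log_post)))  # index of the inferred process
```

The wrong model accumulates a persistent phase lag and hence larger prediction errors, so its evidence falls behind; the same logic, with an unscented Kalman filter handling the nonlinear state estimation, underlies the discrimination reported in the abstract.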