Goal-Directed Planning for Habituated Agents by Active Inference Using a Variational Recurrent Neural Network
It is crucial to ask how agents can achieve goals by generating action plans
using only partial models of the world acquired through habituated
sensory-motor experiences. Although many existing robotics studies use a
forward-model framework, such models generalize poorly to systems with many
degrees of freedom. The current study shows that the predictive coding (PC)
and active inference (AIF) frameworks, which employ a generative model, can
achieve better generalization by learning a prior distribution in a
low-dimensional latent state space that represents probabilistic structure
extracted from well-habituated sensory-motor trajectories. In our proposed
model, learning is
carried out by inferring the latent variables and synaptic weights that
maximize the evidence lower bound, while goal-directed planning is
accomplished by inferring the latent variables that maximize the estimated
lower bound. Our proposed model was evaluated with both simple and complex robotic
tasks in simulation, which demonstrated sufficient generalization in learning
with limited training data by setting an intermediate value for a
regularization coefficient. Furthermore, comparative simulation results show
that the proposed model outperforms a conventional forward model in
goal-directed planning, because the learned prior confines the search for
motor plans to the range of habituated trajectories.
Comment: 30 pages, 19 figures
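The planning-as-inference scheme described above can be sketched with a toy stand-in. Here a hypothetical linear decoder replaces the paper's variational RNN, and the KL term against a standard-normal prior reduces to an L2 penalty on a point estimate of the latent plan, with `beta` playing the role of the regularization coefficient; all names and the linear model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the generative model: sensory prediction x = W @ z.
# (The paper uses a variational RNN; a linear decoder is assumed here
# so that the prior-regularized planning objective stays visible.)
latent_dim, obs_dim = 2, 4
W = rng.normal(size=(obs_dim, latent_dim))

def plan(goal, beta):
    """Infer the latent plan z minimizing ||goal - W z||^2 + beta ||z||^2.

    The beta * ||z||^2 term stands in for the KL divergence to a
    standard-normal prior (for a point estimate of z); larger beta
    confines plans closer to the prior.  Solved in closed form as a
    ridge-regression problem.
    """
    A = W.T @ W + beta * np.eye(latent_dim)
    return np.linalg.solve(A, W.T @ goal)

goal = W @ np.array([1.0, -0.5])    # a goal the model can actually reach
z_weak = plan(goal, beta=0.01)      # weak prior: plan fits the goal closely
z_strong = plan(goal, beta=10.0)    # strong prior: plan shrinks toward zero
```

With an intermediate `beta`, the inferred plan trades goal accuracy against staying near the prior, mirroring the abstract's observation that an intermediate regularization coefficient gave the best generalization.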
Synchronization and Noise: A Mechanism for Regularization in Neural Systems
To learn and reason in the presence of uncertainty, the brain must be capable
of imposing some form of regularization. Here we suggest, through theoretical
and computational arguments, that the combination of noise with synchronization
provides a plausible mechanism for regularization in the nervous system. The
functional role of regularization is considered in a general context in which
coupled computational systems receive inputs corrupted by correlated noise.
Noise on the inputs is shown to impose regularization, and when synchronization
upstream induces time-varying correlations across noise variables, the degree
of regularization can be calibrated over time. The proposed mechanism is
explored first in the context of a simple associative learning problem, and
then in the context of a hierarchical sensory coding task. The resulting
qualitative behavior coincides with experimental data from visual cortex.
Comment: 32 pages, 7 figures. Under review
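The claim that input noise imposes regularization has a classical counterpart: least-squares fitting on noise-corrupted inputs is, in expectation, equivalent to ridge regression on the clean inputs with a penalty proportional to the noise variance. A minimal numerical check of that equivalence follows; the linear-regression setup is an illustrative assumption, not the paper's neural model.

```python
import numpy as np

rng = np.random.default_rng(1)

# A clean linear regression problem y = X @ w_true.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

# Ordinary least squares fit on many noise-corrupted copies of X.
sigma = 1.0      # standard deviation of the input noise
copies = 1000    # noisy replicas; more copies -> closer to the expectation
Xn = np.vstack([X + sigma * rng.normal(size=X.shape) for _ in range(copies)])
yn = np.tile(y, copies)
w_noisy = np.linalg.lstsq(Xn, yn, rcond=None)[0]

# Averaging over the noise, the objective becomes
#   E ||y - (X + E) w||^2 = ||y - X w||^2 + n * sigma^2 * ||w||^2,
# i.e. ridge regression on the clean inputs with lambda = n * sigma^2.
w_ridge = np.linalg.solve(X.T @ X + n * sigma**2 * np.eye(d), X.T @ y)
```

In this picture, the time-varying noise correlations that upstream synchronization induces would correspond to letting the noise variance (here the scalar `sigma`) vary over time, calibrating the effective penalty as the abstract describes.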