15,549 research outputs found
Linking Cellular Mechanisms to Behavior: Entorhinal Persistent Spiking and Membrane Potential Oscillations May Underlie Path Integration, Grid Cell Firing, and Episodic Memory
The entorhinal cortex plays an important role in spatial memory and episodic memory functions. These functions may result from cellular mechanisms for integration of the afferent input to entorhinal cortex. This article reviews physiological data on persistent spiking and membrane potential oscillations in entorhinal cortex, then presents models showing how both of these cellular mechanisms could contribute to properties observed during unit recording, including grid cell firing, and how they could underlie behavioural functions including path integration. The interaction of oscillations and persistent firing could contribute to encoding and retrieval of trajectories through space and time as a mechanism relevant to episodic memory.
Silvio O. Conte Center (NIMH MH71702, MH60450); National Institute of Mental Health Research (MH60013, MH61492); National Science Foundation (SLC SBE 0354378); National Institute of Drug Abuse (DA16454)
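The oscillatory-interference mechanism reviewed above has a well-known closed-form caricature: the product of plane waves whose directions are 60 degrees apart produces hexagonal, grid-cell-like firing fields. A minimal numerical sketch (the spacing and wave directions are illustrative assumptions of ours, not parameters from the article):

```python
import numpy as np

def grid_activity(pos, spacing=0.5):
    """Interference of three plane waves whose wave vectors are 60 degrees
    apart; their product peaks on a hexagonal lattice (illustrative spacing)."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)  # wave number giving the chosen spacing
    angles = np.deg2rad([0.0, 60.0, 120.0])
    waves = [np.cos(k * (np.cos(a) * pos[..., 0] + np.sin(a) * pos[..., 1]))
             for a in angles]
    return np.prod(waves, axis=0)  # in [-1, 1]; maxima form the grid fields

# evaluate the pattern over a 2 m x 2 m arena
xs = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(xs, xs)
rate = grid_activity(np.stack([X, Y], axis=-1))
```

The maxima of `rate` sit on a hexagonal lattice, the signature pattern of grid cell firing maps.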
Conformal Isometry of Lie Group Representation in Recurrent Network of Grid Cells
The activity of the grid cell population in the medial entorhinal cortex
(MEC) of the mammalian brain forms a vector representation of the self-position
of the animal. Recurrent neural networks have been proposed to explain the
properties of the grid cells by updating the neural activity vector based on
the velocity input of the animal. In doing so, the grid cell system effectively
performs path integration. In this paper, we investigate the algebraic,
geometric, and topological properties of grid cells using recurrent network
models. Algebraically, we study the Lie group and Lie algebra of the recurrent
transformation as a representation of self-motion. Geometrically, we study the
conformal isometry of the Lie group representation where the local displacement
of the activity vector in the neural space is proportional to the local
displacement of the agent in the 2D physical space. Topologically, the compact
abelian Lie group representation automatically leads to the torus topology
commonly assumed and observed in neuroscience. We then focus on a simple
non-linear recurrent model that underlies the continuous attractor neural
networks of grid cells. Our numerical experiments show that conformal isometry
leads to hexagonal periodic patterns in the grid cell responses and that our
model is capable of accurate path integration. Code is available at
\url{https://github.com/DehongXu/grid-cell-rnn}
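The conformal-isometry property described above can be checked numerically on a hand-constructed toroidal position code (a sketch under our own assumptions, not the paper's trained recurrent network): embed 2D position as phases along wave vectors spread evenly on a circle, and verify that the pulled-back metric J^T J is a scalar multiple of the identity.

```python
import numpy as np

def embed(x, B):
    """Toroidal position code: v(x) = (cos(Bx), sin(Bx)). With the rows of B
    spread evenly on a circle, the map is a conformal isometry of 2D space."""
    phases = B @ x
    return np.concatenate([np.cos(phases), np.sin(phases)])

# three unit wave vectors 120 degrees apart (our illustrative choice)
thetas = np.deg2rad([0.0, 120.0, 240.0])
B = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)

# finite-difference Jacobian of the embedding at an arbitrary position
x0 = np.array([0.3, -0.7])
eps = 1e-6
J = np.stack([(embed(x0 + eps * e, B) - embed(x0 - eps * e, B)) / (2 * eps)
              for e in np.eye(2)], axis=1)

# conformal isometry: the pulled-back metric J^T J is a scalar times I, so the
# local neural displacement is proportional to the local physical displacement
G = J.T @ J
```

For three unit wave vectors the scalar is 3/2, since the sum of the outer products b_k b_k^T over directions 120 degrees apart equals (3/2) I.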
Quantifying the Evolutionary Self Structuring of Embodied Cognitive Networks
We outline a possible theoretical framework for the quantitative modeling of
networked embodied cognitive systems. We notice that: 1) information
self-structuring through sensory-motor coordination does not occur
deterministically in a generic multivariable vector space R^n, but in SE(3),
the group of possible rigid motions of a body in space; 2) it happens in a
stochastic, open-ended environment. These observations may simplify, at the
price of a certain abstraction, the modeling and design of self-organization
processes based on the maximization of informational measures such as mutual
information. Furthermore, by providing closed-form or computationally lighter
algorithms, they may significantly reduce the computational burden of
implementation. We propose a modeling framework that aims to provide new tools
for designing networks of artificial self-organizing, embodied, intelligent
agents and for reverse engineering natural ones. At this point it remains
largely a theoretical conjecture, and whether the model will prove useful in
practice has yet to be experimentally verified.
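Since the framework rests on maximizing informational measures such as mutual information, a minimal sketch of that quantity for discrete sensorimotor variables may help fix ideas (the example distributions below are ours, purely illustrative):

```python
import numpy as np

def mutual_information(joint):
    """Mutual information in bits of a discrete joint distribution, e.g.
    between a sensory and a motor variable of an embodied agent."""
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal of the row variable
    py = joint.sum(axis=0, keepdims=True)   # marginal of the column variable
    nz = joint > 0                          # skip zero cells (0 log 0 = 0)
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px * py)[nz])))

coupled = np.array([[0.5, 0.0], [0.0, 0.5]])    # fully coupled pair: 1 bit
indep = np.array([[0.25, 0.25], [0.25, 0.25]])  # independent pair: 0 bits
```

A sensorimotor loop that deterministically couples the two variables carries one full bit; an uncoordinated loop carries none, which is what maximization-based self-organization schemes exploit.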
A hierarchical anti-Hebbian network model for the formation of spatial cells in three-dimensional space.
Three-dimensional (3D) spatial cells in the mammalian hippocampal formation are believed to support the existence of 3D cognitive maps. Modeling studies are crucial for understanding the neural principles governing the formation of these maps, yet to date very few have addressed this topic in 3D space. Here we present a hierarchical network model for the formation of 3D spatial cells using an anti-Hebbian network. Built on empirical data, the model accounts for the natural emergence of 3D place, border, and grid cells, as well as a previously undescribed spatial cell type, which we call plane cells. It further explains a plausible reason behind the anisotropic place- and grid-cell coding observed in rodents, and the potential discrepancy with the periodic coding predicted for 3D volumetric navigation. Lastly, it provides evidence for the importance of unsupervised learning rules in guiding the formation of higher-dimensional cognitive maps.
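The anti-Hebbian ingredient of such models can be illustrated with a Földiák-style decorrelation sketch (a toy version with a hypothetical learning rate and synthetic data, not the paper's full hierarchical network): lateral inhibitory weights grow with the off-diagonal output correlations, pushing the output units toward decorrelation.

```python
import numpy as np

rng = np.random.default_rng(0)

def anti_hebbian_decorrelate(X, eta=0.05, steps=500):
    """Foldiak-style anti-Hebbian lateral learning (toy version): lateral
    weights grow with the off-diagonal output correlations and inhibit the
    responses, driving the output units toward decorrelation."""
    n = X.shape[1]
    W = np.zeros((n, n))                      # lateral inhibitory weights
    for _ in range(steps):
        Y = X - X @ W.T                       # responses after lateral inhibition
        C = (Y.T @ Y) / len(Y)                # output correlation matrix
        W += eta * (C - np.diag(np.diag(C)))  # anti-Hebbian update (off-diagonal only)
    return X - X @ W.T

# strongly correlated 2D input; learning removes the correlation
X = rng.normal(size=(1000, 2))
X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]
Y = anti_hebbian_decorrelate(X)
```

The fixed point of the weight update is reached exactly when the output covariance is diagonal, which is the decorrelation property the unsupervised rule provides.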
Conformal Normalization in Recurrent Neural Network of Grid Cells
Grid cells in the entorhinal cortex of the mammalian brain exhibit striking
hexagon firing patterns in their response maps as the animal (e.g., a rat)
navigates in a 2D open environment. The responses of the population of grid
cells collectively form a vector in a high-dimensional neural activity space,
and this vector represents the self-position of the agent in the 2D physical
space. As the agent moves, the vector is transformed by a recurrent neural
network that takes the velocity of the agent as input. In this paper, we
propose a simple and general conformal normalization of the input velocity for
the recurrent neural network, so that the local displacement of the position
vector in the high-dimensional neural space is proportional to the local
displacement of the agent in the 2D physical space, regardless of the direction
of the input velocity. Our numerical experiments on the minimally simple linear
and non-linear recurrent networks show that conformal normalization leads to
the emergence of the hexagon grid patterns. Furthermore, we derive a new
theoretical understanding that connects conformal normalization to the
emergence of hexagon grid patterns in navigation tasks.
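The effect of conformal normalization can be seen in a deliberately anisotropic linear toy encoder (our own illustrative construction, not the paper's recurrent network): without normalization, the neural displacement per unit of physical motion depends on heading direction; rescaling the input velocity removes that dependence.

```python
import numpy as np

# hypothetical anisotropic encoding matrix: the neural displacement per unit
# of physical motion differs across heading directions
A = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

def neural_step(u, conformal=False):
    """Neural-space displacement for a unit heading u; with conformal
    normalization the input velocity is rescaled so the displacement
    magnitude is independent of direction."""
    u = u / np.linalg.norm(u)
    if conformal:
        u = u / np.linalg.norm(A @ u)  # conformal normalization of the input
    return A @ u

dirs = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
raw = [np.linalg.norm(neural_step(u)) for u in dirs]
cnorm = [np.linalg.norm(neural_step(u, conformal=True)) for u in dirs]
```

Without normalization the displacement magnitudes in `raw` differ across directions; with it, every direction produces the same unit-length neural step.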
Statistical Physics and Representations in Real and Artificial Neural Networks
This document presents the material of two lectures on statistical physics
and neural representations, delivered by one of us (R.M.) at the Fundamental
Problems in Statistical Physics XIV summer school in July 2017. In a first
part, we consider the neural representations of space (maps) in the
hippocampus. We introduce an extension of the Hopfield model, able to store
multiple spatial maps as continuous, finite-dimensional attractors. The phase
diagram and dynamical properties of the model are analyzed. We then show how
spatial representations can be dynamically decoded using an effective Ising
model capturing the correlation structure in the neural data, and compare
applications to data obtained from hippocampal multi-electrode recordings and
by (sub)sampling our attractor model. In a second part, we focus on the problem
of learning data representations in machine learning, in particular with
artificial neural networks. We start by introducing data representations
through some illustrations. We then analyze two important algorithms, Principal
Component Analysis and Restricted Boltzmann Machines, with tools from
statistical physics.
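The continuous-attractor extension discussed in the lectures builds on the classic binary Hopfield model, which can be sketched in a few lines (the pattern count and corruption level are illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def hopfield_recall(patterns, probe, steps=20):
    """Classic Hopfield network: Hebbian weights store binary patterns;
    repeated sign updates relax a noisy probe to the nearest attractor."""
    n = patterns.shape[1]
    W = (patterns.T @ patterns) / n  # Hebbian storage of all patterns
    np.fill_diagonal(W, 0.0)
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0              # break ties deterministically
    return s

n = 200
patterns = rng.choice([-1.0, 1.0], size=(3, n))
probe = patterns[0].copy()
flip = rng.choice(n, size=20, replace=False)  # corrupt 10% of the bits
probe[flip] *= -1
recalled = hopfield_recall(patterns, probe)
overlap = float(recalled @ patterns[0]) / n   # 1.0 means perfect recall
```

Well below the storage capacity, the corrupted probe is pulled back to the stored pattern; the lectures' extension replaces these point attractors with continuous manifolds representing spatial maps.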
A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection
A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and VIP can be used to guide steering. The model quantitatively simulates human psychophysical data about visually-guided steering, obstacle avoidance, and route selection.
Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (NSF SBE-0354378); Office of Naval Research (N00014-01-1-0624)
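The attractor/repeller interaction over heading direction can be caricatured by a first-order dynamical sketch (the gains and decay constant are hypothetical, not the model's fitted parameters): the goal direction attracts the heading linearly, while the obstacle repels it with a strength that decays exponentially with angular distance.

```python
import numpy as np

def heading_rate(phi, goal_dir, obstacle_dir, k_goal=2.0, k_obs=1.5, c=4.0):
    """First-order steering dynamics (hypothetical gains): the goal attracts
    the heading angle, the obstacle repels it with exponentially decaying
    strength in angular distance."""
    attract = -k_goal * (phi - goal_dir)
    repel = k_obs * (phi - obstacle_dir) * np.exp(-c * abs(phi - obstacle_dir))
    return attract + repel

# goal straight ahead (0 rad), obstacle slightly to the left (-0.1 rad):
# the heading settles slightly to the right, detouring around the obstacle
phi, dt = 0.0, 0.01
for _ in range(2000):
    phi += dt * heading_rate(phi, goal_dir=0.0, obstacle_dir=-0.1)
```

The equilibrium heading is a compromise between reaching the goal and clearing the obstacle, which is the qualitative behavior the full cortical model reproduces quantitatively.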