The Kinetic Basis of Morphogenesis
It has been shown recently (Shalygo, 2014) that stationary and dynamic
patterns can arise in the proposed one-component model of the analog
(continuous state) kinetic automaton, or kinon for short, defined as a
reflexive dynamical system with active transport. This paper presents
extensions of the model, which further increase its complexity and tunability,
and shows that the extended kinon model can produce spatio-temporal patterns
pertaining not only to pattern formation but also to morphogenesis in real
physical and biological systems. The possible applicability of the model to
morphogenetic engineering and swarm robotics is also discussed.
Comment: 8 pages. Submitted to the 13th European Conference on Artificial Life (ECAL-2015) on March 10, 2015. Accepted on April 28, 2015.
A roadmap to integrate astrocytes into Systems Neuroscience.
Systems neuroscience is still mainly a neuronal field, despite the plethora of evidence supporting the fact that astrocytes modulate local neural circuits, networks, and complex behaviors. In this article, we sought to identify which types of studies are necessary to establish whether astrocytes, beyond their well-documented homeostatic and metabolic functions, perform computations implementing mathematical algorithms that subserve coding and higher-brain functions. First, we reviewed Systems-like studies that include astrocytes in order to identify computational operations that these cells may perform, using Ca2+ transients as their encoding language. The analysis suggests that astrocytes may carry out canonical computations on a time scale of subseconds to seconds in sensory processing, neuromodulation, brain state, memory formation, fear, and complex homeostatic reflexes. Next, we propose a list of actions to gain insight into the outstanding question of which variables are encoded by such computations. The application of statistical analyses based on machine learning, such as dimensionality reduction and decoding in the context of complex behaviors, combined with connectomics of astrocyte-neuronal circuits, is, in our view, a fundamental undertaking. We also discuss technical and analytical approaches to study neuronal and astrocytic populations simultaneously, and the inclusion of astrocytes in advanced modeling of neural circuits, as well as in theories currently under exploration such as predictive coding and energy-efficient coding. Clarifying the relationship between astrocytic Ca2+ and brain coding may represent a leap forward toward novel approaches in the study of astrocytes in health and disease.
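The kind of analysis this abstract calls for can be illustrated with a toy sketch: dimensionality reduction (PCA via SVD) on synthetic astrocytic Ca2+ features, followed by a nearest-centroid decoder of a binary behavioral state. All data, sizes, and function names here are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def pca(X, k):
    """Project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def nearest_centroid_decode(Z_train, y_train, Z_test):
    """Assign each test point the label of the closest class centroid."""
    labels = np.unique(y_train)
    centroids = np.array([Z_train[y_train == c].mean(axis=0) for c in labels])
    d = ((Z_test[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return labels[d.argmin(axis=1)]

rng = np.random.default_rng(0)
# Synthetic "Ca2+ transient" features: 40 trials x 50 astrocyte ROIs,
# with two behavioral states separated along one random direction.
states = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 50)) + 3.0 * states[:, None] * rng.normal(size=50)

Z = pca(X, 2)                                   # dimensionality reduction
pred = nearest_centroid_decode(Z, states, Z)    # decoding
acc = (pred == states).mean()
```

With a state signal this strong, two principal components suffice to separate the conditions; real Ca2+ decoding would of course need held-out trials and cross-validation.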
Solving constraint-satisfaction problems with distributed neocortical-like neuronal networks
Finding actions that satisfy the constraints imposed by both external inputs
and internal representations is central to decision making. We demonstrate that
some important classes of constraint satisfaction problems (CSPs) can be solved
by networks composed of homogeneous cooperative-competitive modules that have
connectivity similar to motifs observed in the superficial layers of neocortex.
The winner-take-all modules are sparsely coupled by programming neurons that
embed the constraints onto the otherwise homogeneous modular computational
substrate. We show rules that embed any instance of the CSPs planar four-color
graph coloring, maximum independent set, and Sudoku on this substrate, and
provide mathematical proofs guaranteeing that these graph coloring problems
converge to a solution. The network is composed of non-saturating linear
threshold neurons. Their lack of right saturation allows the overall network to
explore the problem space driven through the unstable dynamics generated by
recurrent excitation. The direction of exploration is steered by the constraint
neurons. While many problems can be solved using only linear inhibitory
constraints, network performance on hard problems benefits significantly when
these negative constraints are implemented by non-linear multiplicative
inhibition. Overall, our results demonstrate the importance of instability
rather than stability in network computation, and also offer insight into the
computational role of dual inhibitory mechanisms in neural circuits.
Comment: Accepted manuscript, in press, Neural Computation (2018).
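The coupling scheme the abstract describes, one winner-take-all module per graph node with constraint neurons inhibiting matching colors at adjacent nodes, can be caricatured in a few lines. This is a discrete-time simplification assumed for illustration, not the paper's continuous linear-threshold dynamics: each module's winner is simply the color receiving the least inhibition from its neighbors.

```python
import numpy as np

def wta_color(adj, n_colors, sweeps=10):
    """Toy WTA relaxation for graph coloring.

    Each node hosts a winner-take-all module with one unit per color.
    "Constraint neurons" inhibit a unit in proportion to how strongly
    neighboring nodes currently express the same color; the module's
    winner is the least-inhibited unit (ties broken by lowest index).
    """
    n = len(adj)
    color = np.zeros(n, dtype=int)           # all modules start on color 0
    for _ in range(sweeps):
        for i in range(n):                    # asynchronous sweep over modules
            inhibition = np.zeros(n_colors)
            for j in range(n):
                if adj[i][j]:
                    inhibition[color[j]] += 1.0
            color[i] = int(inhibition.argmin())
    return color

# Triangle graph: properly colorable with 3 colors.
adj = [[0, 1, 1],
       [1, 0, 1],
       [1, 1, 0]]
coloring = wta_color(adj, 3)
```

On the triangle this settles in one sweep to a proper coloring; the paper's point is precisely that richer, unstable recurrent dynamics are needed once such greedy relaxation gets stuck on hard instances.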
Adaptive and Topological Deep Learning with applications to Neuroscience
Deep Learning and neuroscience have developed a two-way relationship, with each informing the other. Neural networks, the main tools at the heart of Deep Learning, were originally inspired by connectivity in the brain and have now proven to be critical to state-of-the-art computational neuroscience methods. This dissertation explores this relationship, first, by developing an adaptive sampling method for a neural network-based partial differential equation solver and then by developing a topological deep learning framework for neural spike decoding. We demonstrate that our adaptive scheme is convergent and more accurate than DGM -- as long as the residual mirrors the local error -- at the same number of training steps and using the same or fewer training points. We present a multitude of tests applied to selected PDEs, discussing the robustness of our scheme.
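The condition "as long as the residual mirrors the local error" points to the basic move behind such schemes: concentrate new training points where the residual is large. Below is a minimal numpy sketch of residual-proportional sampling on a 1-D grid, with a made-up residual profile standing in for the PDE residual of a trained network; the function names and parameters are assumptions for illustration.

```python
import numpy as np

def adaptive_sample(residual_fn, n, lo=0.0, hi=1.0, n_grid=1000, seed=0):
    """Draw n training points with probability proportional to |residual|."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(lo, hi, n_grid)
    r = np.abs(residual_fn(grid))
    p = r / r.sum()                       # residual-proportional density
    return rng.choice(grid, size=n, p=p)

# Stand-in residual: grows toward x = 1, so samples should pile up there.
pts = adaptive_sample(lambda x: x**2, 10_000)
```

For a residual proportional to x^2 on [0, 1] the sampled mean sits near 0.75 rather than 0.5, i.e. the point budget shifts toward the poorly resolved region, which is the mechanism the adaptive solver exploits.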
Next, we further illustrate the partnership between deep learning and neuroscience by decoding neural activity using a novel neural network architecture developed to exploit the underlying connectivity of the data by employing tools from Topological Data Analysis. Neurons encode information like external stimuli or allocentric location by generating firing patterns where specific ensembles of neurons fire simultaneously for one value. Understanding, representing, and decoding these neural structures require models that encompass higher order connectivity than traditional graph-based models may provide. Our framework combines unsupervised simplicial complex discovery with the power of deep learning via a new architecture we develop herein called a simplicial convolutional recurrent neural network (SCRNN). Simplicial complexes, topological spaces that use not only vertices and edges but also higher-dimensional objects, naturally generalize graphs and capture more than just pairwise relationships. The effectiveness and versatility of the SCRNN are demonstrated on head direction data to test its performance and then applied to grid cell datasets with the task of automatically predicting trajectories.
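The simplicial input such an architecture consumes can be sketched with a toy construction, assumed here for illustration and much simpler than the dissertation's unsupervised discovery procedure: any set of neurons active in the same time bin contributes a simplex, together with all of its faces.

```python
from itertools import combinations

def cofiring_complex(spikes, max_dim=2):
    """Build a simplicial complex from a binary spike matrix.

    spikes[t][n] == 1 means neuron n fired in time bin t.  Every group
    of neurons that co-fires within a bin contributes a simplex, plus
    all of its faces (subsets), up to dimension max_dim.
    """
    simplices = set()
    for row in spikes:
        active = tuple(n for n, s in enumerate(row) if s)
        for k in range(1, min(len(active), max_dim + 1) + 1):
            simplices.update(combinations(active, k))
    return simplices

# Three neurons co-fire in bin 0 (a 2-simplex); a pair co-fires in bin 1.
spikes = [[1, 1, 1, 0],
          [0, 0, 1, 1]]
cx = cofiring_complex(spikes)
```

The triangle (0, 1, 2) records a genuinely three-way co-firing event that a pairwise graph would flatten into three independent edges, which is the extra structure the SCRNN is built to exploit.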
Micro-, Meso- and Macro-Dynamics of the Brain
Neurosciences, Neurology, Psychiatry
Using Grid Cells for Navigation
Summary
Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this “vector navigation” relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search and biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation.
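The paper's actual algorithm works with the Fourier shift theorem on grid-cell population activity; as a far simpler combinatorial stand-in, a modular (residue-number-style) reading of grid phases illustrates why several grid scales jointly pin down a displacement much longer than any single period. The periods and helper names below are illustrative assumptions.

```python
from math import prod

def phases(x, periods):
    """Grid-code a 1-D position as its phase within each grid module."""
    return [x % p for p in periods]

def displacement(start, goal, periods):
    """Recover the start->goal vector from phase differences alone.

    The phase shift in each module gives the displacement modulo that
    module's period; scanning the combined range finds the unique
    displacement consistent with every module (non-negative, 1-D).
    """
    dphi = [(g - s) % p for s, g, p in zip(phases(start, periods),
                                           phases(goal, periods), periods)]
    for d in range(prod(periods)):
        if all(d % p == t for p, t in zip(periods, dphi)):
            return d
    return None

# Three coprime "grid scales" uniquely cover displacements up to 3*5*7 = 105.
d = displacement(start=12, goal=95, periods=[3, 5, 7])
```

No module alone can represent a shift of 83 (each wraps around many times), yet the combination recovers it exactly; the neural implementations in the paper replace this brute-force scan with efficient search.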
A Neural Signal Processor for Low-Latency Spike Inference
This thesis describes the development of a system that can assign identities to a population of single-units, in multi-electrode recordings, at single-spike resolution with low-latency. The system has two parts. The first is a Field-Programmable Gate Array (FPGA)-based Neural Signal Processor (NSP) that receives raw input and generates labelled spikes as output, a process referred to as real-time spike inference. The second is a piece of software (Spiketag) that runs on a PC, communicates with the NSP, and generates a spike-sorted model to guide the real-time spike inference. The NSP provides clocks and control signals to five 32-channel INTAN RHD2132 chips to manage the acquisition of 160 channels of raw neural data. In parallel, the NSP further filters, detects and extracts extracellular spike waveforms from the raw neural data recorded by tetrodes or silicon probes and assigns single-unit identity to each detected spike. A set of Python application programming interfaces (APIs) was developed in Spiketag to enable the communication between the NSP and the PC. These APIs allow the NSP to obtain a model from the PC, which holds parameters such as reference channels, spike detection thresholds, spike feature transformation matrix and vector quantized clusters generated by spike sorting a short recording session. Using the spike-sorted model, the NSP performs data acquisition and real-time spike inference simultaneously. Algorithmic modules were implemented in the FPGA and pipelined to compute during 40 ms acquisition intervals. At the output end of the FPGA NSP, the real-time assigned single-unit identity (spike-id) is packaged with the timestamp, the electrode group, and the spike features as a spike-id packet. Spike-id packets are asynchronously transmitted through a low-latency Peripheral Component Interconnect Express (PCIe) interface to the PC, producing the real-time spike trains. 
The real-time spike trains can be used for further processing, such as real-time decoding. Several types of ground-truth data, including intracellular/extracellular paired recordings, synthesized tetrode extracellular waveforms with ground-truth spike timing, and high-channel-count silicon probe recordings with ground-truth animal positions during navigation, were used to validate the low latency (1 ms) and high accuracy (as high as state-of-the-art offline sorting and decoding algorithms) of the NSP's real-time spike inference and the NSP-based real-time population decoding performance.
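The NSP's filter -> detect -> extract -> assign chain can be sketched offline in a few lines: threshold crossings mark spike troughs, a waveform window is cut out, and identity goes to the nearest template, a stand-in for the vector-quantized clusters produced by Spiketag's spike sorting. This is a hypothetical single-channel, unfiltered toy, nothing like the 160-channel FPGA pipeline itself.

```python
import numpy as np

def detect_and_assign(signal, templates, thresh=-4.0, pre=4, post=6):
    """Minimal spike pipeline: trough detection + nearest-template unit id.

    A spike is a local minimum below `thresh`; a (pre+post)-sample window
    around it is extracted and assigned to the closest template.  Returns
    (sample_index, unit_id) pairs, a toy analogue of spike-id packets.
    """
    events = []
    t = pre
    while t < len(signal) - post:
        if (signal[t] < thresh
                and signal[t] <= signal[t - 1]
                and signal[t] <= signal[t + 1]):
            w = signal[t - pre:t + post]                    # extract waveform
            unit = int(((templates - w) ** 2).sum(axis=1).argmin())
            events.append((t, unit))
            t += post                                       # refractory skip
        else:
            t += 1
    return events

# Two synthetic single-unit templates (trough at sample index 4).
t0 = np.array([0, -1, -3, -6, -8, -6, -3, -1, 0, 0], float)
t1 = np.array([0, -1, -2, -4, -6, -4, -2, -1, 0, 0], float)
signal = np.zeros(100)
signal[20:30] = t0            # unit 0 fires, trough at sample 24
signal[60:70] = t1            # unit 1 fires, trough at sample 64
events = detect_and_assign(signal, np.stack([t0, t1]))
```

The real system performs the equivalent steps on multi-channel features inside the FPGA so that each detected spike leaves the board already labelled.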
Is there an integrative center in the vertebrate brain-stem? A robotic evaluation of a model of the reticular formation viewed as an action selection device
Neurobehavioral data from intact, decerebrate, and neonatal rats suggest that the reticular formation provides
a brainstem substrate for action selection in the vertebrate central nervous system. In this article, Kilmer,
McCulloch and Blum’s (1969, 1997) landmark reticular formation model is described and re-evaluated, both in
simulation and, for the first time, as a mobile robot controller. Particular model configurations are found to
provide effective action selection mechanisms in a robot survival task using either simulated or physical robots.
The model’s competence is dependent on the organization of afferents from model sensory systems, and a genetic
algorithm search identified a class of afferent configurations which have long survival times. The results support
our proposal that the reticular formation evolved to provide effective arbitration between innate behaviors
and, with the forebrain basal ganglia, may constitute the integrative, 'centrencephalic' core of vertebrate brain
architecture. Additionally, the results demonstrate that the Kilmer et al. model provides an alternative form of
robot controller to those usually considered in the adaptive behavior literature.
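The selection mechanism at the heart of the Kilmer, McCulloch and Blum model can be caricatured as consensus formation among modules: each module holds a preference over behavioral modes, and preferences are mixed until a majority of modules agree. The sketch below is a loose toy in that spirit, with assumed coupling and majority parameters, not the original model's update rules.

```python
import numpy as np

def consensus_select(votes, coupling=0.5, majority=0.5, max_iter=50):
    """Toy consensus dynamics in the spirit of the reticular formation model.

    Each row of `votes` is one module's preference over behavioral modes.
    Modules repeatedly blend their preferences with the population mean
    until more than a `majority` fraction back the same winning mode.
    """
    v = votes.copy()
    for _ in range(max_iter):
        winners = v.argmax(axis=1)
        counts = np.bincount(winners, minlength=v.shape[1])
        if counts.max() > majority * len(v):
            return int(counts.argmax())            # selected action/mode
        v = (1 - coupling) * v + coupling * v.mean(axis=0)   # blend toward mean
    return None                                    # no consensus reached

rng = np.random.default_rng(1)
votes = rng.random((8, 3))     # 8 modules, 3 candidate behavioral modes
mode = consensus_select(votes)
```

Because blending is an affine contraction toward the (fixed) population mean, every module's preference converges to it, so a consensus mode is essentially always reached; what the robot experiments probe is how the afferent wiring that generates the initial votes determines whether the consensus is behaviorally sensible.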