Solving constraint-satisfaction problems with distributed neocortical-like neuronal networks
Finding actions that satisfy the constraints imposed by both external inputs
and internal representations is central to decision making. We demonstrate that
some important classes of constraint satisfaction problems (CSPs) can be solved
by networks composed of homogeneous cooperative-competitive modules that have
connectivity similar to motifs observed in the superficial layers of neocortex.
The winner-take-all modules are sparsely coupled by programming neurons that
embed the constraints onto the otherwise homogeneous modular computational
substrate. We show rules that embed any instance of the CSPs planar four-color
graph coloring, maximum independent set, and Sudoku on this substrate, and
provide mathematical proofs that guarantee convergence to a solution for these
graph coloring problems. The network is composed of non-saturating linear
threshold neurons. Their lack of right saturation allows the overall network to
explore the problem space, driven by the unstable dynamics generated by
recurrent excitation. The direction of exploration is steered by the constraint
neurons. While many problems can be solved using only linear inhibitory
constraints, network performance on hard problems benefits significantly when
these negative constraints are implemented by non-linear multiplicative
inhibition. Overall, our results demonstrate the importance of instability
rather than stability in network computation, and also offer insight into the
computational role of dual inhibitory mechanisms in neural circuits.
Comment: Accepted manuscript, in press, Neural Computation (2018)
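As a rough illustration of the architecture described in this abstract, the following minimal sketch four-colors a small planar graph with coupled WTA modules of linear-threshold units. All parameter values are assumptions, the programming (constraint) neurons are folded into direct inhibitory weights for brevity, and the self-excitation is kept below one so the toy stays numerically bounded, whereas the paper exploits unstable expansive dynamics.

    import numpy as np

    # Toy sketch: one WTA module of 4 linear-threshold units (one per color)
    # per graph node; same-color units of adjacent nodes inhibit each other,
    # implementing the "adjacent nodes differ in color" constraint.
    edges = [(0, 1), (1, 2), (2, 0), (2, 3)]          # a small planar graph
    n_nodes, n_colors = 4, 4
    N = n_nodes * n_colors

    # alpha < 1 keeps this toy bounded; the paper instead exploits
    # instability from recurrent excitation (alpha > 1).
    alpha, beta, gamma = 0.9, 2.0, 2.0
    W = np.zeros((N, N))
    for v in range(n_nodes):
        for c in range(n_colors):
            i = v * n_colors + c
            W[i, i] = alpha                           # recurrent self-excitation
            for c2 in range(n_colors):
                if c2 != c:
                    W[i, v * n_colors + c2] = -beta   # within-module competition
    for u, v in edges:
        for c in range(n_colors):
            W[u * n_colors + c, v * n_colors + c] = -gamma
            W[v * n_colors + c, u * n_colors + c] = -gamma

    rng = np.random.default_rng(0)
    x = rng.random(N)
    dt, b, sigma = 0.05, 0.5, 0.1
    for _ in range(5000):
        drive = -x + W @ x + b
        # rectification = linear-threshold neurons; noise drives exploration
        x = np.maximum(0.0, x + dt * drive + sigma * np.sqrt(dt) * rng.standard_normal(N))

    coloring = [int(np.argmax(x[v * n_colors:(v + 1) * n_colors])) for v in range(n_nodes)]
    print("coloring:", coloring, "valid:", all(coloring[u] != coloring[v] for u, v in edges))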
Training issues and learning algorithms for feedforward and recurrent neural networks
Ph.D. thesis (Doctor of Philosophy)
A reafferent and feed-forward model of song syntax generation in the Bengalese finch
Adult Bengalese finches generate a variable song that obeys a distinct and individual syntax. The syntax is gradually lost over a period of days after deafening and is recovered when hearing is restored. We present a spiking neuronal network model of song syntax generation and its loss, based on the assumption that the syntax is stored in reafferent connections from the auditory to the motor control area. Propagating synfire activity in HVC codes for individual syllables of the song, and priming signals from the auditory network reduce the competition between syllables so that only those transitions permitted by the syntax are allowed. Both imprinting of song syntax within HVC and the interaction of the reafferent signal with an efference copy of the motor command are sufficient to explain the gradual loss of syntax in the absence of auditory feedback. The model is also the first to reproduce experimental findings on the influence of altered auditory feedback on song syntax generation, and it predicts song- and species-specific low-frequency components in the local field potential (LFP). This study illustrates how sequential compositionality following a defined syntax can be realized in networks of spiking neurons.
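A toy sketch of the proposed mechanism (not the paper's spiking synfire model; the syllable set, syntax, and parameter values are invented for illustration): reafferent priming biases a winner-take-all choice among candidate next syllables, and removing it, as after deafening, makes transitions random.

    import numpy as np

    # Toy sketch: each syllable competes to be sung next; reafferent priming,
    # present only while the bird can hear itself, boosts exactly those
    # successors that the learned syntax allows.
    syllables = ["a", "b", "c", "d"]
    syntax = {"a": ["b", "c"], "b": ["c"], "c": ["d", "a"], "d": ["a"]}

    def sing(n_syll, hearing, prime=5.0, seed=1):
        rng = np.random.default_rng(seed)
        song, current = [], "a"
        for _ in range(n_syll):
            drive = rng.random(len(syllables))          # baseline competition
            if hearing:                                  # reafferent priming
                for i, s in enumerate(syllables):
                    if s in syntax[current]:
                        drive[i] += prime
            current = syllables[int(np.argmax(drive))]  # winner-take-all choice
            song.append(current)
        return "".join(song)

    print("intact:  ", sing(20, hearing=True))   # obeys the stored syntax
    print("deafened:", sing(20, hearing=False))  # transitions become random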
Stability conditions of Hopfield ring networks with discontinuous piecewise-affine activation functions
Ring networks, a particular form of Hopfield neural networks, can be used in computational neuroscience to model the activity of place cells or head-direction cells. The behaviour of these models is highly dependent on their recurrent synaptic connectivity matrix and on the individual neurons' activation function, which must be chosen appropriately to obtain physiologically meaningful conclusions. In this article, we propose simpler ways than those in the existing literature to tune this synaptic connectivity matrix so as to achieve stability in a ring attractor network with piecewise-affine activation functions, and we link these results to the possible stable states the network can converge to.
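For intuition, here is a minimal simulation sketch of such a ring network settling into a stable activity bump. The connectivity and all parameter values are assumptions, and the activation used is a continuous piecewise-affine clip function rather than the discontinuous activation functions analysed in the article.

    import numpy as np

    # Ring network with local excitation, global inhibition, and the
    # piecewise-affine activation phi(u) = min(max(u, 0), 1).
    n = 64
    theta = 2 * np.pi * np.arange(n) / n
    d = np.abs(theta[:, None] - theta[None, :])
    d = np.minimum(d, 2 * np.pi - d)                # distance along the ring
    W = 0.3 * (d < 0.4) - 10.0 / n                  # recurrent connectivity matrix

    phi = lambda u: np.clip(u, 0.0, 1.0)
    x = 0.1 * np.random.default_rng(0).standard_normal(n)
    tau, dt, b = 10.0, 0.5, 0.1
    for _ in range(3000):
        x += dt / tau * (-x + W @ phi(x) + b)       # rate dynamics

    print("active units:", int(np.sum(phi(x) > 0.5)))
    print("bump center (rad): %.2f" % theta[int(np.argmax(x))])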
Combinatorial geometry of neural codes, neural data analysis, and neural networks
This dissertation explores applications of discrete geometry in mathematical
neuroscience. We begin with convex neural codes, which model the activity of
hippocampal place cells and other neurons with convex receptive fields. In
Chapter 4, we introduce order-forcing, a tool for constraining convex
realizations of codes, and use it to construct new examples of non-convex codes
with no local obstructions. In Chapter 5, we relate oriented matroids to convex
neural codes, showing that a code has a realization with convex polytopes if and only if
it is the image of a representable oriented matroid under a neural code
morphism. We also show that determining whether a code is convex is at least as
difficult as determining whether an oriented matroid is representable, implying
that the problem of determining whether a code is convex is NP-hard. Next, we
turn to the underlying rank of a matrix, a problem motivated by determining
the dimensionality of (neural) data that has been corrupted by an unknown
monotone transformation. In Chapter 6,
we introduce two tools for computing underlying rank, the minimal nodes and the
Radon rank. We apply these to analyze calcium imaging data from a larval
zebrafish. In Chapter 7, we explore the underlying rank in more detail,
establish connections to oriented matroid theory, and show that computing
underlying rank is also NP-hard. Finally, we study the dynamics of
threshold-linear networks (TLNs), a simple model of the activity of neural
circuits. In Chapter 9, we describe the nullcline arrangement of a
threshold-linear network and show that a subset of its chambers forms an
attracting set. In Chapter 10, we focus on combinatorial threshold-linear
networks (CTLNs), which are TLNs defined from a directed graph. We prove that if the graph of a
CTLN is a directed acyclic graph, then all trajectories of the CTLN approach a
fixed point.
Comment: 193 pages, 69 figures
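To make the last result concrete, here is a minimal sketch that builds a CTLN from a small directed acyclic graph using the standard CTLN rule and integrates its threshold-linear dynamics until they settle at a fixed point; the parameter values are illustrative only.

    import numpy as np

    # CTLN rule: W_ii = 0, W_ij = -1 + eps if j -> i is an edge of the graph,
    # and W_ij = -1 - delta otherwise; dynamics dx/dt = -x + [Wx + theta]_+.
    edges = [(0, 1), (0, 2), (1, 3), (2, 3)]  # a DAG on 4 nodes, (j, i) meaning j -> i
    n, eps, delta, theta = 4, 0.25, 0.5, 1.0  # eps < delta / (delta + 1)

    W = -(1.0 + delta) * np.ones((n, n))
    for j, i in edges:
        W[i, j] = -1.0 + eps                  # weaker inhibition along edges
    np.fill_diagonal(W, 0.0)

    x = np.random.default_rng(2).random(n)
    dt = 0.01
    for _ in range(20000):
        x += dt * (-x + np.maximum(0.0, W @ x + theta))

    print("fixed point:", np.round(x, 3))
    print("residual:", float(np.max(np.abs(-x + np.maximum(0.0, W @ x + theta)))))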