Computation in Balanced Networks
In the cortex, neural activity is noisy, irregular and asynchronous, a consequence of dynamically balancing excitatory and inhibitory input to neurons. Despite this noisy balancing, the brain performs a vast array of difficult computations. This is mysterious, because noise and irregularity are usually associated with poor performance. We ask: how can the cortex compute against a noisy background? The observation of orientation tuning in the visual cortex suggests that structured connectivity is important. We propose a unifying model of cortical connectivity in which weak structured connectivity is embedded in strong random background connectivity. This connectivity can simultaneously produce orientation tuning and irregular, asynchronous dynamics. We find that structure can boost computational performance by amplifying orientation tuning. We then ask: why is cortical activity noisy? Surprisingly, we find that balanced network noise can also improve computational performance, by increasing the computational operating range of the cortex. The mechanism is simple: noise allows very large signals to become available for computation, despite the small operating range of individual neurons. However, this improvement comes at a price: for small signals, balanced network noise degrades performance. This exemplifies a performance-stability trade-off. As a corollary, we find that the contrast invariance of orientation-tuned cells in the visual cortex is a consequence of this computational stability. Finally, we ask: does noise co-variability impair computation? It is known that correlated variability can degrade the computational performance of a network, especially if many neurons are strongly co-variant. We find that correlations in balanced networks are weak, but not weak enough to be ignored in computation, because they affect decoding. Together, these results constitute an important link between neural computation and dynamics, opening the door to a reconciliation between conflicting theories of randomness and structure.
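To make the connectivity model concrete, here is a minimal rate-based sketch of weak structured (orientation-tuned) connectivity embedded in strong random background connectivity. It is an illustration only, not the thesis's spiking model: the network size, coupling strengths, tanh rate dynamics and all numerical values are assumptions chosen for brevity.

```python
# Minimal rate-based sketch: weak structured connectivity embedded in strong
# random background connectivity (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(0)
N = 1000
theta = np.linspace(0, np.pi, N, endpoint=False)   # preferred orientations

# Strong random background at O(1/sqrt(N)) weight scale (balanced-network
# scaling) plus weak orientation-tuned structure at O(1/N).
W = (1.0 / np.sqrt(N)) * rng.standard_normal((N, N))
W += (2.0 / N) * np.cos(2 * (theta[:, None] - theta[None, :]))

stim = np.pi / 3                                   # stimulus orientation
h = 1.0 + 0.2 * np.cos(2 * (theta - stim))         # weakly tuned input

# Integrate simple rate dynamics tau dr/dt = -r + tanh(W r + h).
r = np.zeros(N)
dt, tau = 0.1, 1.0
for _ in range(2000):
    r += (dt / tau) * (-r + np.tanh(W @ r + h))

# The structured component amplifies the tuned part of the population response.
tuned_amp = 2 * np.mean(r * np.cos(2 * (theta - stim)))
print(f"amplitude of tuned population response: {tuned_amp:.3f}")
```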
Methods for Building Network Models of Neural Circuits
Artificial recurrent neural networks (RNNs) are powerful models for understanding dynamic computation in neural circuits. As such, RNNs constructed to perform tasks analogous to typical behaviors studied in systems neuroscience are useful tools for understanding the biophysical mechanisms that mediate those behaviors. There has been significant progress in recent years in developing gradient-based learning methods to construct RNNs. However, the majority of this progress has been restricted to network models that transmit information through continuous state variables, since these methods require the input-output function of individual neuronal units to be differentiable. Biological neurons, by contrast, overwhelmingly transmit information by discrete action potentials. Spiking model neurons are not differentiable, so gradient-based methods for training neural networks cannot be applied to them.
This work focuses on the development of supervised learning methods for RNNs that do not require the computation of derivatives. Because the methods we develop do not rely on the differentiability of the neural units, we can use them to construct realistic RNNs of spiking model neurons that perform a variety of benchmark tasks, and also to build networks trained directly from experimental data. Surprisingly, spiking networks trained with these non-gradient methods do not require significantly more neural units to perform tasks than their continuous-variable counterparts. The crux of the method is a direct correspondence between the dynamical variables of more abstract continuous-variable RNNs and those of spiking network models. The relationship between these two commonly used model classes has historically been unclear; by resolving many of these issues, we offer a perspective on the appropriate use and interpretation of continuous-variable models as they relate to understanding network computation in biological neural circuits.
Although the main advantage of these methods is their ability to construct realistic spiking network models, they can equally well be applied to continuous-variable network models. As an example, we construct continuous-variable RNNs whose performance and computational cost are competitive with those of traditional derivative-based methods, and which outperform previous non-gradient-based network training approaches.
Collectively, this thesis presents efficient methods for constructing realistic neural network models that can be used to understand computation in biological neural networks, and it provides a unified perspective on how the dynamic quantities in these models relate to each other and to quantities that can be observed and extracted from experimental recordings of neurons.
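As a toy illustration of derivative-free learning with spiking units, the sketch below fits a linear readout of synaptically filtered spike trains from a fixed random leaky integrate-and-fire reservoir by ridge regression, so no derivative of the spike nonlinearity is ever taken. This is a simplified stand-in for the target-based methods the thesis develops, not the thesis's algorithm; the network size, time constants and the 2 Hz target are all illustrative assumptions.

```python
# Derivative-free (least-squares) readout training with spiking units.
import numpy as np

rng = np.random.default_rng(1)
N, T, dt = 400, 2000, 1e-3
tau_m, tau_s, v_th, v_reset = 20e-3, 50e-3, 1.0, 0.0
W = (1.5 / np.sqrt(N)) * rng.standard_normal((N, N))  # fixed recurrent weights

t = np.arange(T) * dt
target = np.sin(2 * np.pi * 2 * t)          # 2 Hz target output
u_in = rng.standard_normal(N)               # random input projection

v = np.zeros(N)
s = np.zeros(N)                             # synaptically filtered spikes
R = np.zeros((T, N))                        # regressors for the readout fit
for k in range(T):
    I = W @ s + 0.5 * u_in + 1.2            # recurrent + input + bias current
    v += (dt / tau_m) * (-v + I)
    spiked = v >= v_th
    v[spiked] = v_reset
    s += dt * (-s / tau_s) + spiked / tau_s  # unit-area synaptic filter
    R[k] = s

# Ridge regression: no gradient through the spiking nonlinearity is needed.
lam = 1e-2
w_out = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ target)
print("training MSE:", np.mean((R @ w_out - target) ** 2))
```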
Motif Statistics and Spike Correlations in Neuronal Networks
Motifs are patterns of subgraphs of complex networks. We studied the impact of such patterns of connectivity on the level of correlated, or synchronized, spiking activity among pairs of cells in a recurrent network model of integrate-and-fire neurons. For a range of network architectures, we find that the pairwise correlation coefficients, averaged across the network, can be closely approximated using only three statistics of network connectivity. These are the overall network connection probability and the frequencies of two second-order motifs: diverging motifs, in which one cell provides input to two others, and chain motifs, in which two cells are connected via a third intermediary cell. Specifically, the prevalence of diverging and chain motifs tends to increase correlation. Our method is based on linear response theory, which enables us to express spiking statistics using linear algebra, and a resumming technique, which extrapolates from second-order motifs to predict the overall effect of coupling on network correlation. Our motif-based results seek to isolate the effect of network architecture perturbatively from a known network state.
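The three connectivity statistics are simple to measure. Below is a hedged sketch of how the connection probability and the two second-order motif frequencies might be computed from a binary adjacency matrix; the Erdős-Rényi example graph and the exact normalization (finite-size diagonal terms are neglected) are illustrative assumptions, not the paper's precise definitions.

```python
# Empirical second-order motif statistics of a binary adjacency matrix A,
# with A[i, j] = 1 meaning j -> i, measured relative to chance level.
import numpy as np

rng = np.random.default_rng(2)
N, p0 = 500, 0.1
A = (rng.random((N, N)) < p0).astype(float)
np.fill_diagonal(A, 0)

p = A.sum() / (N * (N - 1))                  # connection probability
# Diverging motif: one cell provides input to two others (common input).
q_div = (A @ A.T).sum() / N**3 - p**2
# Chain motif: i <- k <- j, two cells connected through an intermediary.
q_ch = (A @ A).sum() / N**3 - p**2

print(f"p = {p:.4f}, q_div = {q_div:.2e}, q_ch = {q_ch:.2e}")
# For an Erdos-Renyi graph both q's are ~0; positive values tend to
# increase the network-averaged pairwise correlation.
```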
Cell assembly dynamics of sparsely-connected inhibitory networks: a simple model for the collective activity of striatal projection neurons
Striatal projection neurons form a sparsely connected inhibitory network, and this arrangement may be essential for the appropriate temporal organization of behavior. Here we show that a simplified, sparse inhibitory network of leaky integrate-and-fire neurons can reproduce some key features of striatal population activity, as observed in brain slices [Carrillo-Reid et al., J. Neurophysiology 99 (2008) 1435-1450]. In particular, we develop a new metric to determine the conditions under which sparse inhibitory networks form anti-correlated cell assemblies with time-varying activity of individual cells. We find that under these conditions the network displays an input-specific sequence of cell assembly switching that effectively discriminates similar inputs. Our results support the proposal [Ponzi and Wickens, PLoS Comput Biol 9 (2013) e1002954] that GABAergic connections between striatal projection neurons allow stimulus-selective, temporally extended sequential activation of cell assemblies. Furthermore, we help to show how altered intrastriatal GABAergic signaling may produce aberrant network-level information processing in disorders such as Parkinson's and Huntington's diseases.
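For intuition, here is a minimal sketch of a sparse, purely inhibitory leaky integrate-and-fire network of the general kind discussed above. All parameters are illustrative assumptions and do not reproduce the published model; slow, anti-correlated fluctuations in the binned rates are the qualitative signature of assembly-like dynamics.

```python
# Minimal sparse inhibitory LIF network (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(3)
N, p_conn = 200, 0.05                       # sparse inhibitory connectivity
g_inh, tau_m, tau_s = 4.0, 20e-3, 10e-3
v_th, v_reset, dt, T = 1.0, 0.0, 1e-3, 5000

C = (rng.random((N, N)) < p_conn).astype(float)
np.fill_diagonal(C, 0)

v = rng.random(N)
s = np.zeros(N)
spikes = np.zeros((T, N), dtype=bool)
drive = 1.5 + 0.1 * rng.standard_normal(N)  # heterogeneous excitatory drive
for k in range(T):
    I = drive - g_inh * (C @ s)             # external drive minus inhibition
    v += (dt / tau_m) * (-v + I)
    fired = v >= v_th
    v[fired] = v_reset
    s += dt * (-s / tau_s) + fired / tau_s
    spikes[k] = fired

rates = spikes.reshape(50, T // 50, N).mean(axis=1) / dt  # binned rates (Hz)
cc = np.corrcoef(rates.T)                   # pairwise rate correlations
print("mean rate (Hz):", spikes.mean() / dt)
print("mean pairwise rate correlation:",
      np.nanmean(cc[np.triu_indices(N, 1)]))
```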
Impact of network structure and cellular response on spike time correlations
Novel experimental techniques reveal the simultaneous activity of larger and larger numbers of neurons. As a result, there is increasing interest in the structure of cooperative, or correlated, activity in neural populations, and in the possible impact of such correlations on the neural code. A fundamental theoretical challenge is to understand how the architecture of network connectivity, along with the dynamical properties of single cells, shapes the magnitude and timescale of correlations. We provide a general approach to this problem by extending prior techniques based on linear response theory. We consider networks of general integrate-and-fire cells with arbitrary architecture and provide explicit expressions for the approximate cross-correlation between constituent cells. These correlations depend strongly on the operating point (input mean and variance) of the neurons, even when connectivity is fixed. Moreover, the approximations admit an expansion in powers of the matrices that describe the network architecture. This expansion can be readily interpreted in terms of paths between different cells. We apply our results to large excitatory-inhibitory networks, and demonstrate first how precise balance, or lack thereof, between the strengths and timescales of excitatory and inhibitory synapses is reflected in the overall correlation structure of the network. We then derive explicit expressions for the average correlation structure in randomly connected networks. These expressions help to identify the important factors that shape coordinated neural activity in such networks.
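At a single frequency, a linear-response prediction of this form gives the matrix of cross-spectra as $C = (I - K)^{-1} C_0 (I - K^{\top})^{-1}$, where $K_{ij}$ is the linear-response gain of cell $i$ to input from cell $j$ and $C_0$ is the diagonal matrix of unperturbed spectra; expanding the inverses in powers of $K$ organizes contributions by path length. The sketch below illustrates this truncated path expansion on a random matrix; the particular $K$, $C_0$ and sparsity are assumptions chosen so the expansion converges, not values from the paper.

```python
# Truncated path expansion of linear-response correlations at one frequency.
import numpy as np

rng = np.random.default_rng(4)
N = 100
K = 0.1 * (rng.random((N, N)) < 0.1) * rng.standard_normal((N, N))
C0 = np.diag(1.0 + 0.2 * rng.random(N))     # unperturbed (uncoupled) spectra

# Exact linear-response prediction: C = (I - K)^-1 C0 (I - K^T)^-1.
L = np.linalg.inv(np.eye(N) - K)
C_full = L @ C0 @ L.T

# Expansion truncated at paths of length n_max into each cell.
n_max = 3
powers = [np.linalg.matrix_power(K, n) for n in range(n_max + 1)]
C_trunc = sum(powers[n] @ C0 @ powers[m].T
              for n in range(n_max + 1) for m in range(n_max + 1))

err = np.linalg.norm(C_full - C_trunc) / np.linalg.norm(C_full)
print(f"relative error of length-{n_max} truncation: {err:.2e}")
```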
Sensory integration dynamics in a hierarchical network explains choice probabilities in cortical area MT
Neuronal variability in sensory cortex predicts perceptual decisions. This relationship, termed choice probability (CP), can arise from sensory variability biasing behaviour and from top-down signals reflecting behaviour. To investigate the interaction of these mechanisms during the decision-making process, we use a hierarchical network model composed of reciprocally connected sensory and integration circuits. Consistent with monkey behaviour in a fixed-duration motion discrimination task, the model integrates sensory evidence transiently, giving rise to a decaying bottom-up CP component. However, the dynamics of the hierarchical loop recruit a concurrently rising top-down component, resulting in sustained CP. We compute the CP time-course of neurons in the middle temporal area (MT) and find an early transient component and a separate late contribution reflecting decision build-up. The stability of individual CPs and the dynamics of noise correlations further support this decomposition. Our model provides a unified understanding of the circuit dynamics linking neural and behavioural variability.
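Choice probability itself is a standard ROC measure: the probability that a response drawn from preferred-choice trials exceeds one drawn from null-choice trials. The sketch below computes it on simulated spike counts; the Poisson rates and trial numbers are purely illustrative, not the paper's data.

```python
# Standard choice-probability (CP) computation as an ROC area.
import numpy as np

rng = np.random.default_rng(5)

def choice_probability(r_pref, r_null):
    """AUROC: probability that a random preferred-choice response exceeds
    a random null-choice response (ties count 0.5)."""
    diffs = r_pref[:, None] - r_null[None, :]
    return (diffs > 0).mean() + 0.5 * (diffs == 0).mean()

# Simulated spike counts, slightly higher when the (hypothetical) subject
# chooses the neuron's preferred direction.
r_pref = rng.poisson(22, size=300)
r_null = rng.poisson(20, size=300)
print(f"CP = {choice_probability(r_pref, r_null):.3f}")
```

Applying this estimator in sliding windows across the trial yields the CP time-course analysed in the study.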
The spatiotemporal dynamics of human focal seizures
Spontaneous human focal seizures can present with a plethora of behavioral manifestations that vary according to the affected cortical regions; however, several key features are consistently observed. During my doctoral studies, I applied both theoretical and experimental methods to study the mechanisms underpinning these consistent dynamics. I first analyzed human intracranial EEG recordings, describing statistical methods for measuring their electrophysiological signatures. I next proposed several neurophysiological hypotheses that could explain seizure dynamics and verified them in rodent seizure models. Finally, I developed a computational model that successfully explains how the complex spatiotemporal evolution of focal seizures emerges from simple neurophysiological principles.
In Chapter 1, the long-standing behavioral manifestations and the most up-to-date electrophysiology findings are reviewed. This section details the inspiration for the studies reported in the subsequent chapters.
In Chapter 2, I describe several statistical methods for estimating traveling wave velocities. I show that most ictal discharges can be described as traveling waves whose velocities carry rich information about the stage of seizure evolution. I compare the performance of the various statistical methods and propose a robust approach that improves the quality of each method's estimates.
In Chapter 3, I show how inhibition modulates seizure propagation patterns. Surround inhibition spatially restrains focal seizures and masks excitatory projections of ictal activity. When it is compromised, two patterns of seizure propagation emerge, depending on the position of the inhibition defect relative to the ictal focus. I show that two distant ictal foci can communicate via physiological connectivity without any chronic rewiring process, confirming the existence of long-range propagation pathways that could lead to epileptic network formation.
In Chapter 4, I show that thalamic inputs might be necessary for interictal epileptiform discharges (IEDs). The relative positions of IEDs and ictal foci indicate that surround inhibition, described in the previous chapter, can be exhausted by repetitive exposure to ictal projections.
In Chapter 5, I propose a neural network model that can both explain long-standing behavioral observations of seizures and account for the most up-to-date electrophysiological recordings of spontaneous human focal seizures. The model relies on few assumptions, all of which are demonstrated or supported in earlier chapters of this thesis. The model explains the phasic evolution of seizure dynamics, showing how the commonly observed patterns arise from simple neurophysiological principles, as well as seizure onset subtypes and traveling wave directions and speeds. It also predicts how spontaneous seizures might arise from synaptic plasticity. The chapter ends with a discussion of the model's implications and future work.
The thesis is organized so that each chapter can be read independently, with Chapter 5 summarizing the central theory spanning the whole study. Each chapter is also tightly linked to a clinically relevant question. In sum, the dissertation's goal is to provide an in-principle understanding of focal seizure dynamics. With the rapid advancement of clinical and experimental tools, I believe this work provides a roadmap for future therapies for epilepsy patients.
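As one concrete example of the statistical machinery in Chapter 2, a traveling wave's velocity can be estimated by regressing discharge arrival times on the 2D electrode coordinates: the fitted slope is a slowness vector whose normalized inverse gives the velocity. The sketch below uses simulated data with illustrative noise and array geometry, and is a generic multilinear-regression estimator rather than the dissertation's exact procedure.

```python
# Traveling wave velocity from a linear fit of arrival time vs. position.
import numpy as np

rng = np.random.default_rng(6)
xy = rng.uniform(0, 4, size=(96, 2))           # electrode positions (mm)
v_true = np.array([0.3, 0.1])                  # true velocity (mm/ms)
slow = v_true / (v_true @ v_true)              # slowness vector (ms/mm)
t = xy @ slow + 0.5 * rng.standard_normal(96)  # noisy arrival times (ms)

X = np.column_stack([np.ones(96), xy])         # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, t, rcond=None)
s_hat = beta[1:]                               # estimated slowness
v_hat = s_hat / (s_hat @ s_hat)                # estimated velocity (mm/ms)
print("estimated velocity (mm/ms):", np.round(v_hat, 3))
print("speed (mm/ms):", round(1 / np.linalg.norm(s_hat), 3))
```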
Oscillatory mechanisms for controlling information flow in neural circuits
Mammalian brains generate complex, dynamic structures of oscillatory activity, in which distributed regions transiently engage in coherent oscillation, often at specific stages of behavioural or cognitive tasks. Much is now known about the dynamics underlying local circuit synchronisation and about the phenomenology of where and when such activity occurs. While oscillations have been implicated in many high-level processes, for most such phenomena we cannot say with confidence precisely what they are doing at an algorithmic or implementational level. This thesis presents work towards understanding the dynamics and possible function of large-scale oscillatory network activity. We first address the question of how coherent oscillatory activity emerges between local networks by measuring phase response curves of an oscillating network in vitro. The network phase response curves provide mechanistic insight into inter-region synchronisation of local network oscillators. Highly simplified firing models are shown to reproduce the experimental data with remarkable accuracy. We then focus on one hypothesised computational function of network oscillations: flexibly controlling the gain of signal flow between anatomically connected networks. We investigate coding strategies and algorithmic operations that support flexible control of signal flow by oscillations, and their implementation by network dynamics. We identify two readout algorithms that selectively recover a population rate-coded signal with a specific oscillatory modulation while ignoring other, distracting inputs. By designing a spiking network model that implements one of these mechanisms, we demonstrate oscillatory control of signal flow in convergent pathways. We then investigate constraints on the structures of oscillatory activity that can be used to accurately and selectively control signal flow. Our results suggest that for inputs to be accurately distinguished from one another, their oscillatory modulations must be close to orthogonal. This has implications for interpreting in vivo oscillatory activity, and may be an organising principle for the spatio-temporal structure of brain oscillations.
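The orthogonality constraint can be demonstrated in a few lines: two rate-coded signals multiplexed onto one pathway by oscillatory modulations can be recovered selectively by a demodulating readout only when the modulations are close to orthogonal over a cycle. The sketch below is a signal-level caricature, not the thesis's spiking implementation; the waveforms, frequencies and phase offset are assumptions.

```python
# Selective demodulation of two oscillation-multiplexed rate signals.
import numpy as np

dt, T, f = 1e-3, 2.0, 30.0                   # 30 Hz modulation frequency
t = np.arange(0, T, dt)
sig_a = 1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)   # slow rate-coded signals
sig_b = 1.0 + 0.5 * np.cos(2 * np.pi * 0.8 * t)

phi = np.pi / 2                              # phase offset between modulations
mod_a = 1.0 + np.cos(2 * np.pi * f * t)      # orthogonal (in the AC sense)
mod_b = 1.0 + np.cos(2 * np.pi * f * t + phi)
mixed = sig_a * mod_a + sig_b * mod_b        # convergent pathway input

def demod(x, mod, win=int(1 / (f * dt))):
    """Multiply by the mean-subtracted modulation, smooth over one cycle."""
    y = x * (mod - mod.mean())
    kernel = np.ones(win) / win
    return np.convolve(y, kernel, mode="same")

rec_a = demod(mixed, mod_a)
# With phi = pi/2 the cross-talk term averages to ~0 over each cycle; as
# phi -> 0 the readouts mix and the signals can no longer be separated.
print(f"readout A vs signal A correlation: "
      f"{np.corrcoef(rec_a, sig_a)[0, 1]:.3f}")
```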