
    Investigating the storage capacity of a network with cell assemblies

    Cell assemblies are co-operating groups of neurons believed to exist in the brain. Their existence was proposed by the neuropsychologist D.O. Hebb, who also formulated a mechanism by which they could form, now known as Hebbian learning. Evidence for the existence of Hebbian learning and cell assemblies in the brain is accumulating as investigation tools improve. Researchers have also simulated cell assemblies as neural networks in computers. This thesis describes simulations of networks of cell assemblies. The feasibility of simulated cell assemblies that possess all the predicted properties of biological cell assemblies is established. Cell assemblies can be coupled together with weighted connections to form hierarchies in which a group of basic assemblies, termed primitives, are connected in such a way that they form a compound cell assembly. The component assemblies of these hierarchies can be ignited independently, i.e. activated by signals passed entirely within the network, but if a sufficient number of them are activated, they co-operate to ignite the remaining primitives in the compound assembly. Various experiments are described in which networks of simulated cell assemblies are subject to external activation, in which cells in those assemblies are stimulated artificially to a high level. These cells then fire, i.e. produce a spike of activity analogous to the spiking of biological neurons, and in this way pass their activity to other cells. Connections are established between cells within primitives and across different ones, by learning in some experiments and set artificially in others, and these connections allow activity to pass from one primitive to another. In this way, activating one or more primitives may cause others to ignite. Experiments are described in which spontaneous activation of cells aids recruitment of uncommitted cells to a neighbouring assembly. The strong relationship between cell assemblies and Hopfield nets is described. A network of simulated cells can support different numbers of assemblies depending on the complexity of those assemblies. Assemblies are classified in terms of how many primitives are present in each compound assembly and the minimum number needed to complete it: a 2-3 assembly contains 3 primitives, any 2 of which will complete it. A network of N cells can hold on the order of N 2-3 assemblies, and an architecture is proposed that contains O(N²) 3-4 assemblies. Experiments are described showing that the number of connections emanating from each cell must be scaled up linearly as the number of primitives in the network increases, in order to maintain the same mean number of connections between each pair of primitives. Restricting each cell to a maximum number of connections leads to severe loss of performance as the size of the network increases. It is shown that the architecture can be duplicated with Hopfield nets, but that there are severe restrictions on the carrying capacity of either a hierarchy of cell assemblies or a Hopfield net storing 3-4 patterns, and that the promise of N² patterns is largely illusory. When the number of connections from each cell is fixed as the number of primitives is increased, only O(N) cell assemblies can be stored.
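
    The 2-of-3 completion behaviour described above can be made concrete with a toy simulation. The sketch below is not the thesis's simulator: it assumes binary threshold cells and hand-set weights and threshold (N_PRIM, CELLS, W_IN, W_BW, and THETA are all illustrative), chosen so that any two ignited primitives, but no single one, complete the compound assembly.

```python
# Toy 2-3 compound assembly: 3 primitives of binary threshold cells.
# All sizes, weights, and the threshold are illustrative assumptions.
import numpy as np

N_PRIM, CELLS = 3, 20                 # 3 primitives, 20 cells each
W_IN, W_BW, THETA = 2.0, 1.0, 30.0    # within/between weights, firing threshold

n = N_PRIM * CELLS
W = np.full((n, n), W_BW)             # weaker connections between primitives
for p in range(N_PRIM):
    s = slice(p * CELLS, (p + 1) * CELLS)
    W[s, s] = W_IN                    # stronger connections within a primitive
np.fill_diagonal(W, 0.0)

def ignite(active, steps=5):
    """Externally activate the listed primitives, then update synchronously."""
    x = np.zeros(n)
    for _ in range(steps):
        for p in active:              # external activation of chosen primitives
            x[p * CELLS:(p + 1) * CELLS] = 1.0
        x = (W @ x > THETA).astype(float)
    return [x[p * CELLS:(p + 1) * CELLS].mean() for p in range(N_PRIM)]

print(ignite([0]))     # [1.0, 0.0, 0.0]: a single primitive cannot complete it
print(ignite([0, 1]))  # [1.0, 1.0, 1.0]: any 2 of the 3 ignite the third
```

    With a fourth primitive added and THETA raised to sit between 2 * CELLS * W_BW and 3 * CELLS * W_BW, the same construction gives a toy 3-4 assembly, where any 3 of the 4 primitives complete the compound.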

    Neurocognitive Informatics Manifesto.

    Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. In this position paper, examples of neurocognitive inspirations and promising directions in this area are given.

    Motor Control of Rapid Eye Movements in Larval Zebrafish

    Animals move the same body parts in diverse ways. How the central nervous system executes one action over related ones is poorly understood. To investigate this, I assessed the behavioural manifestation and neural control of saccadic eye rotations made by larval zebrafish, since these movements are simple and easy to investigate at a circuit level. I first classified the larva’s saccadic repertoire into 5 types, of which hunting-specific convergent saccades and exploratory conjugate saccades were the main types used to orient vision. Convergent and conjugate saccades shared a nasal eye rotation, whose kinematic differences and similarities suggested that the rotation was made by overlapping but distinct populations of neurons between saccade types. I investigated this further, using two-photon Ca2+ imaging and selective circuit interventions to identify a circuit from rhombomere 5/6 via abducens internuclear neurons to motoneurons that was crucial to nasal eye rotations. Motoneurons had distinct activity patterns for convergent and conjugate saccades that were consistent with my behavioural observations and were explained largely by motoneuron kinematic tuning preferences. Surprisingly, some motoneurons also modulated their activity according to saccade type, independent of movement kinematics. In contrast, pre-synaptic internuclear neuron activity profiles were almost entirely explained by movement kinematics; this was not true of neurons in rhombomere 5/6, which, like motoneurons, showed mixed saccade-type and kinematic encoding. Regions exerting descending control on this circuit from the optic tectum and anterior pretectal nucleus had few neurons tuned to saccade kinematics compared to neurons selective for convergent saccades. My results suggest a transformation from encoding action type to encoding movement kinematics at successive circuit levels. This transformation was neither monotonic nor complete, suggesting that control of even simple, highly comparable movements cannot be entirely described by a shared kinematic encoding scheme at the motor or premotor level.
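
    The kinematics-versus-type comparison above can be illustrated with a simple variance-explained analysis. The sketch below is hypothetical and uses synthetic data, not the thesis's recordings: each neuron's trial-by-trial response is regressed on saccade kinematics with and without a saccade-type regressor, and a type-modulated cell shows a jump in explained variance that a purely kinematic cell does not.

```python
# Hedged sketch: compare kinematic-only vs kinematic + saccade-type encoding
# models on two synthetic neurons. All data and tuning values are invented.
import numpy as np

rng = np.random.default_rng(0)
n_saccades = 200
amplitude = rng.uniform(5, 30, n_saccades)            # deg (assumed kinematics)
peak_vel = 3 * amplitude + rng.normal(0, 5, n_saccades)
is_convergent = rng.integers(0, 2, n_saccades)        # saccade type: 0 or 1

# Two synthetic neurons: one purely kinematic, one additionally type-modulated.
kinematic_cell = 0.5 * peak_vel + rng.normal(0, 3, n_saccades)
type_cell = 0.5 * peak_vel + 20 * is_convergent + rng.normal(0, 3, n_saccades)

def r_squared(X, y):
    """Ordinary least squares with an intercept; returns explained variance."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

for name, y in [("kinematic cell", kinematic_cell), ("type cell", type_cell)]:
    r2_kin = r_squared(np.column_stack([amplitude, peak_vel]), y)
    r2_full = r_squared(np.column_stack([amplitude, peak_vel, is_convergent]), y)
    print(f"{name}: R2 kinematics only = {r2_kin:.2f}, + saccade type = {r2_full:.2f}")
```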

    When do Bursts Matter in the Primary Motor Cortex? Investigating Changes in the Intermittencies of Beta Rhythms Associated With Movement States

    Brain activity exhibits significant temporal structure that is not well captured in the power spectrum. Recently, attention has shifted to characterising the properties of intermittencies in rhythmic neural activity (i.e. bursts), yet the mechanisms regulating them are unknown. Here, we present evidence from electrocorticography recordings made from the motor cortex to show that the statistics of bursts in beta-frequency (14-30 Hz) rhythms, such as their duration or amplitude, significantly aid the classification of motor states such as rest, movement preparation, execution, and imagery. These features reflect nonlinearities not detectable in the power spectrum, with states increasing in nonlinearity from movement execution to preparation to rest. Further, using a computational model of the cortical microcircuit constrained to account for burst features, we show that modulations of laminar-specific inhibitory interneurons are responsible for the temporal organization of activity. Finally, we show that the temporal characteristics of spontaneous activity can be used to infer the balance of cortical integration between incoming sensory information and endogenous activity. Critically, we contribute to the understanding of how transient brain rhythms may underwrite cortical processing, which in turn could inform novel approaches for brain-state classification and modulation with brain-computer interfaces.
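
    One common way to obtain burst statistics such as duration and amplitude is to threshold the beta-band amplitude envelope. The sketch below illustrates that generic approach on synthetic data; the sampling rate, band edges, 75th-percentile threshold, and minimum duration are all assumptions, and the authors' exact pipeline may differ.

```python
# Hedged sketch of envelope-threshold burst detection on a synthetic signal.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                        # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
ecog = rng.normal(0, 1, t.size)                  # stand-in for one ECoG channel
ecog[2000:2400] += 3 * np.sin(2 * np.pi * 20 * t[2000:2400])  # injected burst

# Beta-band (14-30 Hz) amplitude envelope via band-pass filter plus Hilbert.
b, a = butter(4, [14, 30], btype="bandpass", fs=fs)
envelope = np.abs(hilbert(filtfilt(b, a, ecog)))

thresh = np.percentile(envelope, 75)             # assumed burst threshold
above = np.concatenate(([False], envelope > thresh, [False]))
edges = np.diff(above.astype(int))
starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]

for s, e in zip(starts, ends):
    dur_ms = (e - s) / fs * 1000
    if dur_ms >= 100:                            # keep bursts of >= ~2 beta cycles
        print(f"burst at {s / fs:.2f} s: {dur_ms:.0f} ms, "
              f"mean envelope {envelope[s:e].mean():.2f}")
```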

    Spiking Neural Networks


    Simulation and Theory of Large-Scale Cortical Networks

    Cerebral cortex is composed of intricate networks of neurons. These neuronal networks are strongly interconnected: every neuron receives, on average, input from thousands or more presynaptic neurons. In fact, to support such a number of connections, a majority of the volume in the cortical gray matter is filled by axons and dendrites. Besides the networks, neurons themselves are also highly complex. They possess an elaborate spatial structure and support various types of active processes and nonlinearities. In the face of such complexity, it seems necessary to abstract away some of the details and to investigate simplified models. In this thesis, such simplified models of neuronal networks are examined on varying levels of abstraction. Neurons are modeled as point neurons, both rate-based and spike-based, and networks are modeled as block-structured random networks. Crucially, on this level of abstraction, the models are still amenable to analytical treatment using the framework of dynamical mean-field theory. The main focus of this thesis is to leverage the analytical tractability of random networks of point neurons in order to relate the network structure, and the neuron parameters, to the dynamics of the neurons: in physics parlance, to bridge across the scales from neurons to networks. More concretely, four different models are investigated: 1) fully connected feedforward networks and vanilla recurrent networks of rate neurons; 2) block-structured networks of rate neurons in continuous time; 3) block-structured networks of spiking neurons; and 4) a multi-scale, data-based network of spiking neurons. We consider the first class of models in the light of Bayesian supervised learning and compute their kernel in the infinite-size limit. In the second class of models, we connect dynamical mean-field theory with large-deviation theory, calculate beyond-mean-field fluctuations, and perform parameter inference. For the third class of models, we develop a theory for the autocorrelation time of the neurons. Lastly, we consolidate data across multiple modalities into a layer- and population-resolved model of human cortex and compare its activity with cortical recordings. In two detours from the investigation of these four network models, we examine the distribution of neuron densities in cerebral cortex and present a software toolbox for mean-field analyses of spiking networks.
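
    For the "vanilla" recurrent rate networks mentioned above, statistics such as the autocorrelation time can also be estimated directly by simulation and compared against mean-field predictions. The sketch below integrates a standard random rate network, dx/dt = -x + J tanh(x) with couplings drawn from N(0, g^2/N); all parameter values are illustrative, not those used in the thesis.

```python
# Hedged sketch: simulate a random recurrent rate network and estimate its
# autocorrelation time. Parameters (N, g, dt, durations) are illustrative.
import numpy as np

N, g, dt = 500, 1.5, 0.05                # g > 1: chaotic regime for tanh units
rng = np.random.default_rng(2)
J = rng.normal(0.0, g / np.sqrt(N), (N, N))   # couplings ~ N(0, g^2/N)

x = rng.normal(0.0, 1.0, N)
for _ in range(1000):                    # discard the initial transient
    x += dt * (-x + J @ np.tanh(x))      # Euler step of the rate dynamics

T = 4000
traj = np.empty((T, N))
for step in range(T):
    x += dt * (-x + J @ np.tanh(x))
    traj[step] = x

traj -= traj.mean(axis=0)                # fluctuations around the mean
lags = np.arange(200)
ac = np.array([(traj[: T - lag] * traj[lag:]).mean() for lag in lags])
ac /= ac[0]                              # normalized population autocorrelation
tau = lags[np.argmax(ac < np.exp(-1))] * dt   # crude 1/e decay-time estimate
print(f"estimated autocorrelation time: {tau:.2f} model time units")
```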

    A dendritic mechanism for decoding traveling waves: Principles and applications to motor cortex

    Traveling waves of neuronal oscillations have been observed in many cortical regions, including the motor and sensory cortex. Such waves are often modulated in a task-dependent fashion, although their precise functional role remains a matter of debate. Here we conjecture that the cortex can utilize the direction and wavelength of traveling waves to encode information. We present a novel neural mechanism by which such information may be decoded by the spatial arrangement of receptors within the dendritic receptor field. In particular, we show how the density distributions of excitatory and inhibitory receptors can combine to act as a spatial filter of wave patterns. The proposed dendritic mechanism ensures that the neuron selectively responds to specific wave patterns, thus constituting a neural basis of pattern decoding. We validate this proposal in the descending motor system, where we model the large receptor fields of the pyramidal tract neurons (the principal outputs of the motor cortex) decoding motor commands encoded in the direction of traveling wave patterns in motor cortex. We use an existing model of field oscillations in motor cortex to investigate how the topology of the pyramidal cell receptor field acts to tune the cells' responses to specific oscillatory wave patterns, even when those patterns are highly degraded. The model replicates key findings of the descending motor system during simple motor tasks, including variable interspike intervals and weak corticospinal coherence. By additionally showing how the nature of the wave patterns can be controlled by modulating the topology of local intra-cortical connections, we hence propose a novel integrated neuronal model of encoding and decoding motor commands.
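
    A one-dimensional toy version of this spatial-filter mechanism is sketched below: the net drive is the excitatory receptor density minus the inhibitory one, each integrated against a plane wave. A narrow excitatory and a broad inhibitory density yield band-pass wavelength selectivity, and offsetting the inhibitory density while delaying its input (both assumptions here, not the paper's fitted model) makes the response direction-selective.

```python
# Hedged sketch: receptor densities as a spatial filter of traveling waves.
# All densities, the delay, and the wave parameters are illustrative.
import numpy as np

x = np.linspace(-5, 5, 1001)                 # dendritic field coordinate (mm)
dx = x[1] - x[0]
rho_e = np.exp(-x**2 / (2 * 0.5**2))         # narrow excitatory density
rho_i = 0.9 * np.exp(-(x - 1.0)**2 / (2 * 1.5**2))  # broad, offset inhibitory
delay = 0.005                                # inhibitory lag in s (assumed)

def response_amplitude(wavelength, direction, freq=20.0):
    """Peak net drive to a plane wave cos(k*x - direction*2*pi*freq*t)."""
    k, w = 2 * np.pi / wavelength, 2 * np.pi * freq
    times = np.linspace(0.0, 1.0 / freq, 200)
    drive = [dx * np.sum(rho_e * np.cos(k * x - direction * w * t))
             - dx * np.sum(rho_i * np.cos(k * x - direction * w * (t - delay)))
             for t in times]
    return float(np.max(np.abs(drive)))

for wl in [1.0, 4.0, 10.0]:                  # wavelengths in mm
    fwd = response_amplitude(wl, +1)
    bwd = response_amplitude(wl, -1)
    print(f"wavelength {wl:4.1f} mm: forward {fwd:.3f}, backward {bwd:.3f}")
```

    Short wavelengths are strongly attenuated by both densities, while at intermediate wavelengths the forward and backward responses differ, giving the direction selectivity conjectured above.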
