
    Storage of phase-coded patterns via STDP in fully-connected and sparse network: a study of the network capacity

    We study the storage and retrieval of phase-coded patterns as stable dynamical attractors in recurrent neural networks, for both an analog and an integrate-and-fire spiking model. The synaptic strength is determined by a learning rule based on spike-timing-dependent plasticity, with an asymmetric time window depending on the relative timing between pre- and post-synaptic activity. We store multiple patterns and study the network capacity. For the analog model, we find that the network capacity scales linearly with the network size, and that both the capacity and the oscillation frequency of the retrieval state depend on the asymmetry of the learning time window. In addition to fully-connected networks, we study sparse networks, where each neuron is connected only to a small number z << N of other neurons. Connections can be short range, between neighboring neurons placed on a regular lattice, or long range, between randomly chosen pairs of neurons. We find that a small fraction of long range connections is able to amplify the capacity of the network. This implies that a small-world network topology is optimal, as a compromise between the cost of long range connections and the capacity increase. The storage and retrieval of multiple phase-coded patterns is also observed in the spiking integrate-and-fire model. We investigate the capacity of the fully-connected spiking network, together with the relation between the oscillation frequency of the retrieval state and the asymmetry of the learning window.
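    A minimal sketch of how a coupling matrix might be built from phase-coded patterns with an asymmetric STDP window, assuming an exponential kernel and illustrative time constants; the kernel shape, parameters, and normalisation below are placeholders, not the learning rule or values used in the paper.

    ```python
    import numpy as np

    def stdp_kernel(dt, a_plus=1.0, a_minus=0.5, tau_plus=10.0, tau_minus=20.0):
        """Asymmetric STDP window: potentiation when post follows pre (dt > 0),
        depression otherwise. Parameter values are illustrative."""
        return np.where(dt >= 0,
                        a_plus * np.exp(-dt / tau_plus),
                        -a_minus * np.exp(dt / tau_minus))

    def store_phase_patterns(phases, period=100.0):
        """Build an N x N coupling matrix from P phase-coded patterns.

        phases : (P, N) array of firing phases in [0, 2*pi) for each pattern.
        Each pattern contributes the STDP kernel evaluated at the pre-post
        timing difference implied by the phase lag (a common simplification,
        hypothetical here)."""
        P, N = phases.shape
        J = np.zeros((N, N))
        for mu in range(P):
            # timing of neuron j (pre) relative to neuron i (post)
            dphi = phases[mu][:, None] - phases[mu][None, :]
            dt = (dphi % (2 * np.pi)) / (2 * np.pi) * period
            # fold onto (-period/2, period/2] so the window asymmetry matters
            dt = np.where(dt > period / 2, dt - period, dt)
            J += stdp_kernel(dt) / N
        np.fill_diagonal(J, 0.0)
        return J

    # usage: store 3 random phase patterns in a network of 200 neurons
    rng = np.random.default_rng(0)
    J = store_phase_patterns(rng.uniform(0, 2 * np.pi, size=(3, 200)))
    ```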

    Bio-mimetic Spiking Neural Networks for unsupervised clustering of spatio-temporal data

    Spiking neural networks aspire to mimic the brain more closely than traditional artificial neural networks. They are characterised by a spike-like activation function inspired by the shape of an action potential in biological neurons. Spiking networks remain a niche area of research, perform worse than traditional artificial networks, and their real-world applications are limited. We hypothesised that neuroscience-inspired spiking neural networks with spike-timing-dependent plasticity demonstrate useful learning capabilities. Our objective was to identify features which play a vital role in information processing in the brain but are not commonly used in artificial networks, to implement them in spiking networks without copying the constraints that apply to living organisms, and to characterise their effect on data processing. The networks we created are not brain models; our approach can be labelled as artificial life. We performed a literature review and selected features such as local weight updates, neuronal sub-types, modularity, homeostasis and structural plasticity. We used the review as a guide for developing consecutive iterations of the network, and eventually a whole evolutionary developmental system. We analysed the model’s performance on clustering of spatio-temporal data. Our results show that combining evolution and unsupervised learning leads to faster convergence on optimal solutions and better stability of fit solutions than either approach alone. The choice of fitness definition affects the network’s performance on fitness-related and unrelated tasks. We found that neuron type-specific weight homeostasis can be used to stabilise the networks, thus enabling longer training. We also demonstrated that networks with a rudimentary architecture can evolve developmental rules which improve their fitness. This interdisciplinary work provides contributions to three fields: it proposes novel artificial intelligence approaches, tests the possible role of the selected biological phenomena in information processing in the brain, and explores the evolution of learning in an artificial life system.
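    One of the reported findings is that neuron type-specific weight homeostasis stabilises the networks. A minimal sketch of such a mechanism, assuming (hypothetically) multiplicative scaling of each neuron's incoming weights toward a per-type target sum; the target values, grouping, and update rate are illustrative, not the thesis's actual parameters.

    ```python
    import numpy as np

    def weight_homeostasis(W, neuron_types, target_sums, rate=0.1):
        """Multiplicatively scale each neuron's incoming weights toward a
        target total that depends on its type (e.g. excitatory vs inhibitory).

        W            : (N, N) weight matrix, W[i, j] = weight from j onto i.
        neuron_types : length-N array of type labels, e.g. 0 or 1.
        target_sums  : dict mapping type label -> desired sum of incoming weights.
        rate         : fraction of the correction applied per call (0..1)."""
        W = W.copy()
        for i, t in enumerate(neuron_types):
            total = W[i].sum()
            if total > 0:
                scale = target_sums[t] / total
                # move part of the way toward the target on each update
                W[i] *= (1.0 - rate) + rate * scale
        return W

    # usage: stabilise a random network with two neuron types (placeholder targets)
    rng = np.random.default_rng(1)
    W = rng.random((50, 50))
    types = rng.integers(0, 2, size=50)
    W = weight_homeostasis(W, types, target_sums={0: 10.0, 1: 5.0})
    ```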

    Simultaneous activation of multiple memory systems during learning : insights from electrophysiology and modeling

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references. Parallel cortico-basal ganglia loops are thought to give rise to a diverse set of limbic, associative and motor functions, but little is known about how these loops operate and how their neural activities evolve during learning. To address these issues, single-unit activity was recorded simultaneously in dorsolateral (sensorimotor) and dorsomedial (associative) regions of the striatum as rats learned two versions of a conditional T-maze task. The results demonstrate that contrasting patterns of activity developed in these regions during task performance, and evolved with different training-related dynamics. Oscillatory activity is thought to enable memory storage and replay, and may encourage the efficient transmission of information between brain regions. In a second set of experiments, local field potentials (LFPs) were recorded simultaneously from the dorsal striatum and the CA1 field of the hippocampus, as rats engaged in spontaneous and instructed behaviors in the T-maze. Two major findings are reported. First, striatal LFPs showed prominent theta-band rhythms that were strongly modulated during behavior. Second, striatal and hippocampal theta rhythms were modulated differently during T-maze performance, and in rats that successfully learned the task, became highly coherent during the choice period. To formalize the hypothesized contributions of the dorsolateral and dorsomedial striatum during T-maze learning, a computational model was developed. This model localizes a model-free reinforcement learning (RL) system to the sensorimotor cortico-basal ganglia loop and a model-based RL system to a network of structures including the associative cortico-basal ganglia loop and the hippocampus. Two models of dorsomedial striatal function were investigated, both of which can account for the patterns of activation observed during T-maze training. The two models make differing predictions regarding activation of the dorsomedial striatum following lesions of the model-free system, depending on whether it serves a direct role in action selection through participation in a model-based planning system or whether it participates in arbitrating between the model-free and model-based controllers. Combined, the work presented in this thesis shows that a large network of forebrain structures is engaged during procedural learning. The results suggest that coordination across regions may be required for successful learning and/or task performance, and that the different regions may contribute to behavioral performance by performing distinct RL computations. by Catherine Ann Thorn. Ph.D.
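    To make the distinction between the two hypothesized roles concrete, here is a minimal sketch of a generic reliability-weighted arbitration between a model-free and a model-based controller; the blending rule, the softmax policy, and all values below are illustrative assumptions, not the thesis's actual model of dorsomedial striatal function.

    ```python
    import numpy as np

    def arbitrate(q_mf, q_mb, reliability_mf, reliability_mb):
        """Blend model-free and model-based action values by their relative
        reliability (a generic scheme, hypothetical here).

        q_mf, q_mb    : arrays of action values from the two controllers.
        reliability_* : scalar estimates of each controller's recent accuracy,
                        e.g. derived from prediction-error magnitudes."""
        w = reliability_mb / (reliability_mb + reliability_mf + 1e-12)
        return w * q_mb + (1.0 - w) * q_mf

    def softmax_policy(q, beta=3.0):
        """Choose an action probabilistically from the blended values."""
        p = np.exp(beta * (q - q.max()))
        p /= p.sum()
        return np.random.choice(len(q), p=p)

    # usage: left/right choice at a T-maze junction (placeholder values)
    q = arbitrate(q_mf=np.array([0.2, 0.6]),
                  q_mb=np.array([0.7, 0.3]),
                  reliability_mf=0.8, reliability_mb=0.4)
    action = softmax_policy(q)
    ```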

    Form vs. Function: Theory and Models for Neuronal Substrates

    The quest for endowing form with function represents the fundamental motivation behind all neural network modeling. In this thesis, we discuss various functional neuronal architectures and their implementation in silico, both on conventional computer systems and on neuromorphic devices. Necessarily, such casting to a particular substrate will constrain their form, either by requiring a simplified description of neuronal dynamics and interactions or by imposing physical limitations on important characteristics such as network connectivity or parameter precision. While our main focus lies on the computational properties of the studied models, we augment our discussion with rigorous mathematical formalism. We start by investigating the behavior of point neurons under synaptic bombardment and provide analytical predictions of single-unit and ensemble statistics. These considerations later become useful when moving to the functional network level, where we study the effects of an imperfect physical substrate on the computational properties of several cortical networks. Finally, we return to the single neuron level to discuss a novel interpretation of spiking activity in the context of probabilistic inference through sampling. We provide analytical derivations for the translation of this "neural sampling" framework to networks of biologically plausible and hardware-compatible neurons, and later take this concept beyond the realm of brain science when we discuss applications in machine learning and analogies to solid-state systems.
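    The "neural sampling" idea mentioned above treats spikes as samples from a target distribution. A minimal sketch of spike-based sampling from a Boltzmann distribution, using abstract stochastic binary units that stay active for a fixed refractory window after a spike; the weights, biases, and refractory length are illustrative assumptions, not the thesis's hardware-oriented derivation.

    ```python
    import numpy as np

    def neural_sampling(W, b, steps=10000, tau=10, seed=0):
        """Treat spikes as samples: each unit fires with probability
        sigmoid(membrane potential) and then stays 'on' for tau steps, so the
        fraction of time spent on approximates the marginals of a Boltzmann
        distribution with weights W and biases b. Parameters are illustrative."""
        rng = np.random.default_rng(seed)
        N = len(b)
        refractory = np.zeros(N, dtype=int)   # remaining 'on' time per unit
        counts = np.zeros(N)
        for _ in range(steps):
            z = (refractory > 0).astype(float)   # current binary state
            k = rng.integers(N)                  # update one unit at a time
            if refractory[k] <= 0:
                u = W[k] @ z + b[k]              # membrane potential
                if rng.random() < 1.0 / (1.0 + np.exp(-u)):
                    refractory[k] = tau          # emit a spike: on for tau steps
            refractory = np.maximum(refractory - 1, 0)
            counts += (refractory > 0)
        return counts / steps                    # empirical on-probabilities

    # usage: two coupled units with illustrative weights and biases
    W = np.array([[0.0, 1.5], [1.5, 0.0]])
    b = np.array([-0.5, -0.5])
    print(neural_sampling(W, b))
    ```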