134 research outputs found

    Learning Spiking Neural Systems with the Event-Driven Forward-Forward Process

    We develop a novel credit assignment algorithm for information processing with spiking neurons that requires no feedback synapses. Specifically, we propose an event-driven generalization of the forward-forward and predictive forward-forward learning processes for a spiking neural system that iteratively processes sensory input over a stimulus window. The recurrent circuit computes each neuron's membrane potential in each layer as a function of local bottom-up, top-down, and lateral signals, facilitating a dynamic, layer-wise parallel form of neural computation. Unlike spiking neural coding, which relies on feedback synapses to adjust neural electrical activity, our model operates purely online and forward in time, offering a promising way to learn distributed representations of sensory data patterns with temporal spike signals. Notably, our experimental results on several pattern datasets demonstrate that the event-driven forward-forward (ED-FF) framework works well for training a dynamic recurrent spiking system capable of both classification and reconstruction.
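    The flavor of such a feedback-free, layer-local update can be sketched with a rate-based surrogate. This is a minimal illustration, assuming a leaky integrate-and-fire layer and a forward-forward-style "goodness" (sum of squared activities); the layer size, constants, and surrogate gradient are illustrative assumptions, not the authors' exact ED-FF formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_layer_rates(x, W, T=20, tau=0.9, v_th=1.0):
    """Run a leaky integrate-and-fire layer for T steps; return firing rates."""
    v = np.zeros(W.shape[0])
    spikes = np.zeros(W.shape[0])
    for _ in range(T):
        v = tau * v + W @ x          # leaky integration of bottom-up drive
        fired = v >= v_th
        spikes += fired
        v[fired] = 0.0               # reset after a spike
    return spikes / T

def goodness(rates):
    # Forward-forward "goodness": sum of squared activities.
    return np.sum(rates ** 2)

def ff_local_update(x, W, positive, lr=0.05):
    """Nudge weights so goodness rises for positive data, falls for negative."""
    rates = lif_layer_rates(x, W)
    sign = 1.0 if positive else -1.0
    # Gradient of sum(rates**2) w.r.t. W under a rate-based surrogate: 2*r*x^T
    W += lr * sign * 2.0 * np.outer(rates, x)
    return goodness(rates), W

W = rng.normal(scale=0.3, size=(8, 4))   # one layer, 4 inputs -> 8 neurons
x_pos = np.abs(rng.normal(size=4))       # a "positive" input pattern
g0, W = ff_local_update(x_pos, W, positive=True)
g1, W = ff_local_update(x_pos, W, positive=True)   # goodness non-decreasing
```

    The update is purely local (it uses only the layer's own input and firing rates), which is what removes the need for feedback synapses.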

    Spatial computing in structured spiking neural networks with a robotic embodiment

    One of the challenges of modern neuroscience is creating a "living computer" based on neural networks grown in vitro. Such an artificial device is supposed to perform neurocomputational tasks and interact with the environment when embodied in a robot. Recent studies have identified the most critical challenge: the search for a neural network architecture that implements associative learning. This work proposes a model with a modular architecture of spiking neural networks connected by unidirectional couplings. We show that the model enables training a neuro-robot according to Pavlovian conditioning. The robot's performance in obstacle avoidance depends on the ratio of the weights in the inter-network couplings. We show that, besides STDP, critical factors for successful learning are synaptic and neuronal competition. We use the recently discovered shortest-path rule to implement the synaptic competition; this method is ready for experimental testing. Strong inhibitory couplings implement the neuronal competition in the subnetwork responsible for the unconditioned response. Empirical testing of this approach requires a technique, not yet available, for growing neural networks with a given ratio of excitatory to inhibitory neurons. An alternative is to build a hybrid system with in vitro neural networks coupled through hardware memristive connections.
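    The role of spike timing in such Pavlovian conditioning can be sketched with a standard pairwise STDP rule: a conditioned-stimulus synapse that reliably fires just before the response neuron is potentiated, while reversed timing depresses. The amplitudes, time constant, and pairing protocol below are illustrative assumptions; the paper's modular network and shortest-path rule are not reproduced here.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau)    # pre before post: potentiation
    return -a_minus * np.exp(dt / tau)       # post before pre: depression

# Conditioning: on every pairing trial the conditioned stimulus (CS) fires
# 5 ms before the response neuron; a control synapse sees reversed timing.
w_paired = 0.2
w_unpaired = 0.2
for _ in range(50):
    w_paired += stdp_dw(5.0)     # CS spike precedes response spike
    w_unpaired += stdp_dw(-5.0)  # reversed timing: weight decays
```

    After repeated pairings the CS synapse dominates, which is the timing-based core of the associative learning the abstract describes.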

    Habituation based synaptic plasticity and organismic learning in a quantum perovskite

    A central characteristic of living beings is the ability to learn from and respond to their environment, leading to habit formation and decision making. This behavior, known as habituation, is universal among all forms of life with a central nervous system, and is also observed in single-cell organisms that do not possess a brain. Here, we report the discovery of habituation-based plasticity in a perovskite quantum system through dynamical modulation of electron localization. Microscopic mechanisms and pathways that enable this organismic collective charge-lattice interaction are elucidated by first-principles theory, synchrotron investigations, ab initio molecular dynamics simulations, and in situ environmental breathing studies. We implement a learning algorithm inspired by the conductance relaxation behavior of perovskites that naturally incorporates habituation, and demonstrate learning to forget, a key feature of animal and human brains. Incorporating this elementary skill in learning boosts the capability of neural computing in a sequential, dynamic environment. Funding: United States. Army Research Office (Grants W911NF-16-1-0289 and W911NF-16-1-0042); United States. Air Force Office of Scientific Research (Grant FA9550-16-1-0159).
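    The "learning to forget" idea can be illustrated with a minimal relaxation model, assuming an unreinforced synaptic weight decays exponentially toward a resting conductance, loosely mimicking the conductance relaxation described for the perovskite. The decay rate and baseline are illustrative assumptions, not fitted to the device data.

```python
def habituate(w, repeats, decay=0.05, w_rest=0.1):
    """Each unreinforced repetition relaxes the weight toward its resting value."""
    for _ in range(repeats):
        w += decay * (w_rest - w)   # exponential relaxation of conductance
    return w

w0 = 1.0                   # weight after initial learning
w_after = habituate(w0, repeats=30)   # repeated, inconsequential stimulation
```

    A network equipped with such relaxing weights gradually frees capacity taken up by stale associations, which is the benefit claimed for sequential, dynamic environments.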

    Spike Timing-Dependent Plasticity as the Origin of the Formation of Clustered Synaptic Efficacy Engrams

    Synapse location, dendritic active properties and synaptic plasticity are all known to play some role in shaping the different input streams impinging onto a neuron. It remains unclear, however, how the magnitude and spatial distribution of synaptic efficacies emerge from this interplay. Here, we investigate this interplay using a biophysically detailed neuron model of a reconstructed layer 2/3 pyramidal cell and spike timing-dependent plasticity (STDP). Specifically, we focus on the issue of how the efficacies of synapses contributed by different input streams are spatially represented in dendrites after STDP learning. We construct a simple feed-forward network where a detailed model neuron receives synaptic inputs independently from multiple yet equally sized groups of afferent fibers with correlated activity, mimicking the spike activity from different neuronal populations encoding, for example, different sensory modalities. Interestingly, following STDP learning, we observe that for all afferent groups, STDP leads to synaptic efficacies arranged into spatially segregated clusters effectively partitioning the dendritic tree. These segregated clusters possess a characteristic global organization in space, where they form a tessellation in which each group dominates mutually exclusive regions of the dendrite. Put simply, the dendritic imprint from different input streams left after STDP learning effectively forms what we term a "dendritic efficacy mosaic." Furthermore, we show how variations of the inputs and STDP rule affect such an organization. Our model suggests that STDP may be an important mechanism for creating a clustered plasticity engram, which shapes how different input streams are spatially represented in dendrites.
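    The competitive segregation at the heart of this result can be sketched with an additive pairwise STDP rule applied to two afferent groups: one whose spikes are reliably correlated with the postsynaptic spike, and one with uncorrelated timing. The time constants, learning rates, hard bounds, and timing distributions are illustrative assumptions, not parameters of the biophysically detailed model.

```python
import numpy as np

rng = np.random.default_rng(1)

def stdp(dt, a_plus=0.005, a_minus=0.00525, tau=20.0):
    """Weight change for dt = t_post - t_pre (ms); slightly depression-biased."""
    return a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)

w_a, w_b = 0.5, 0.5
for _ in range(1000):
    w_a += stdp(rng.uniform(1.0, 5.0))     # group A: reliably pre-before-post
    w_b += stdp(rng.uniform(-40.0, 40.0))  # group B: uncorrelated timing
    w_a = min(max(w_a, 0.0), 1.0)          # hard bounds on synaptic efficacy
    w_b = min(max(w_b, 0.0), 1.0)
```

    The correlated group saturates while the uncorrelated group drifts down under the depression bias; in the full model this competition plays out across dendritic locations, producing the efficacy mosaic.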

    Contributions to models of single neuron computation in striatum and cortex

    A deeper understanding is required of how a single neuron utilizes its nonlinear subcellular devices to generate complex neuronal dynamics. Two compartmental models, of cortex and striatum, are accurately formulated and firmly grounded in the experimental reality of electrophysiology to address two questions: how striatal projection neurons implement location-dependent dendritic integration to carry out association-based computation, and how cortical pyramidal neurons strategically exploit the type and location of synaptic contacts to enrich their computational capacities. Neuronal cells transform continuous signals into discrete time series of action potentials, thereby encoding percepts and internal states. Compartmental models of nerve cells in the cortex and striatum, grounded in electrophysiology, are formulated to address specific questions: i) How do striatal projection neurons implement location-dependent dendritic integration to realize association-based computations? ii) How do cortical cells exploit the type and location of synaptic contacts to optimize the computations they perform?

    Toward Reflective Spiking Neural Networks Exploiting Memristive Devices

    The design of modern convolutional artificial neural networks (ANNs) composed of formal neurons copies the architecture of the visual cortex: signals proceed through a hierarchy in which receptive fields become increasingly complex and coding increasingly sparse. Nowadays, ANNs outperform humans in controlled pattern recognition tasks yet remain far behind in cognition. In part, this is due to limited knowledge about the higher echelons of the brain hierarchy, where neurons actively generate predictions about what will happen next, i.e., information processing jumps from reflex to reflection. In this study, we forecast that spiking neural networks (SNNs) can achieve the next qualitative leap. Reflective SNNs may take advantage of their intrinsic dynamics and mimic complex, not reflex-based, brain actions. They also enable a significant reduction in energy consumption. However, training SNNs is a challenging problem that strongly limits their deployment. We then briefly overview new insights provided by the concept of a high-dimensional brain, which has been put forward to explain the potential power of single neurons in higher brain stations and deep SNN layers. Finally, we discuss the prospect of implementing neural networks in memristive systems. Such systems can densely pack on a chip 2D or 3D arrays of plastic synaptic contacts that directly process analog information, making memristive devices good candidates for in-memory and in-sensor computing. Memristive SNNs can thus diverge from the development path of ANNs and build their own niche: cognitive, or reflective, computations.

    Synaptic Dynamics and Learning: How Do the Biological Mechanisms of Plasticity Realize Efficient Learning Rules That Enable Neural Information Processing?

    Type of degree: Doctorate (course-based). Examination committee: (Chair) Visiting Professor Tomoki Fukai, Professor Akinao Nose, Professor Masato Okada, Associate Professor Tatsuhiro Hisatsune, and Lecturer Yasutoshi Makino, all of the University of Tokyo. University of Tokyo (東京大学)