
    Assessing Self-Repair on FPGAs with Biologically Realistic Astrocyte-Neuron Networks

    This paper presents a hardware-based implementation of a biologically faithful astrocyte-based self-repairing mechanism for Spiking Neural Networks. Spiking Astrocyte-Neuron Networks (SANNs) are a new computing paradigm that captures the key mechanisms by which the human brain performs repair. Implementing SANNs in hardware offers the potential to realize computing architectures that can self-repair. This paper demonstrates that SANNs in hardware are resilient to significant levels of faults. The key novelty of the paper resides in implementing an SANN on FPGAs using a fixed-point representation and demonstrating graceful performance degradation under different levels of injected faults via its self-repair capability. Fixed-point implementations of astrocytes, neurons, and tripartite synapses are presented and compared against previous hardware floating-point and Matlab software implementations of SANNs. All results are obtained from the SANN FPGA implementation and show how the reduced fixed-point representation can maintain the biologically realistic repair capability.
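The paper's FPGA design is not reproduced here, but its central idea of running spiking neurons in reduced fixed-point rather than floating-point arithmetic can be sketched in software. The snippet below simulates a leaky integrate-and-fire neuron in a Q4.12 fixed-point format; the format choice and all parameter values are illustrative assumptions, not taken from the paper:

```python
FRAC_BITS = 12
SCALE = 1 << FRAC_BITS           # Q4.12: 12 fractional bits

def to_fix(x):
    """Convert a float to fixed-point (integer) representation."""
    return int(round(x * SCALE))

def fix_mul(a, b):
    """Multiply two fixed-point numbers, rescaling the result."""
    return (a * b) >> FRAC_BITS

def lif_fixed(inputs, decay=0.9, v_thresh=1.0):
    """Fixed-point leaky integrate-and-fire: returns a 0/1 spike train."""
    d, vt = to_fix(decay), to_fix(v_thresh)
    v, spikes = 0, []
    for i in inputs:
        v = fix_mul(d, v) + to_fix(i)   # leak, then integrate input
        if v >= vt:                      # threshold crossing -> spike
            spikes.append(1)
            v = 0                        # reset membrane potential
        else:
            spikes.append(0)
    return spikes

print(lif_fixed([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

On hardware, the shift-and-add arithmetic above maps directly onto cheap integer DSP resources, which is what makes a reduced fixed-point representation attractive for large networks.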

    Robust Engineering of Dynamic Structures in Complex Networks

    Populations of nearly identical dynamical systems are ubiquitous in natural and engineered systems, in which each unit plays a crucial role in determining the functioning of the ensemble. Robust and optimal control of such large collections of dynamical units remains a grand challenge, especially when these units interact and form a complex network. Motivated by compelling practical problems in power systems, neural engineering, and quantum control, where individual units often have to work in tandem to achieve a desired dynamic behavior, e.g., maintaining synchronization of generators in a power grid or conveying information in a neuronal network, in this dissertation we focus on developing novel analytical tools and optimal control policies for large-scale ensembles and networks. To this end, we first formulate and solve an optimal tracking control problem for bilinear systems. We develop an iterative algorithm that synthesizes the optimal control input by solving a sequence of state-dependent differential equations that characterize the optimal solution. This iterative scheme is then extended to treat isolated populations or networked systems. We demonstrate the robustness and versatility of the iterative control algorithm through diverse applications from different fields, involving nuclear magnetic resonance (NMR) spectroscopy and imaging (MRI), electrochemistry, neuroscience, and neural engineering. For example, we design synchronization controls for optimal manipulation of spatiotemporal spike patterns in neuron ensembles. Such a task plays an important role in neural systems. Furthermore, we show that the formation of such spatiotemporal patterns is restricted when the network of neurons is only partially controllable. In neural circuitry, for instance, loss of controllability could imply loss of neural functions.
In addition, we employ phase reduction theory to support the development of novel control paradigms for cyclic deferrable loads, e.g., air conditioners, that are used to support grid stability through demand response (DR) programs. More importantly, we introduce novel theoretical tools for evaluating DR capacity and bandwidth. We also study pinning control of complex networks, where we establish a control-theoretic approach to identifying the most influential nodes in both undirected and directed complex networks. Such pinning strategies have extensive practical implications, e.g., identifying the most influential spreaders in epidemic and social networks, and lead to the discovery of degenerate networks, in which the most influential node relocates depending on the coupling strength. This phenomenon had not been discovered until our recent study.
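The dissertation's own node-ranking method is not detailed in the abstract, but a standard pinning-control score from the literature conveys the idea: for an undirected network with Laplacian L, pinning node i with gain c yields the augmented matrix L + c·e_i·e_iᵀ, and nodes whose pinning maximizes its smallest eigenvalue are the most influential. The gain value and the star-graph example below are illustrative assumptions:

```python
import numpy as np

def best_pinned_node(A, c=10.0):
    """Rank nodes of an undirected network (adjacency matrix A) by the
    smallest eigenvalue of the augmented Laplacian L + c * e_i e_i^T.
    A larger smallest eigenvalue means faster pinning synchronization."""
    L = np.diag(A.sum(axis=1)) - A
    scores = []
    for i in range(len(A)):
        M = L.copy()
        M[i, i] += c                             # pin node i with gain c
        scores.append(np.linalg.eigvalsh(M)[0])  # smallest eigenvalue
    return int(np.argmax(scores)), scores

# Star network: node 0 is the hub, nodes 1..4 are leaves.
A = np.zeros((5, 5))
A[0, 1:] = A[1:, 0] = 1.0
node, scores = best_pinned_node(A)
print(node)  # → 0: pinning the hub controls all leaves directly
```

For the star graph the hub wins, as intuition suggests; on degenerate networks of the kind the abstract mentions, the winner of this comparison can change as c varies.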

    Learning Spiking Neural Systems with the Event-Driven Forward-Forward Process

    We develop a novel credit assignment algorithm for information processing with spiking neurons without requiring feedback synapses. Specifically, we propose an event-driven generalization of the forward-forward and the predictive forward-forward learning processes for a spiking neural system that iteratively processes sensory input over a stimulus window. As a result, the recurrent circuit computes the membrane potential of each neuron in each layer as a function of local bottom-up, top-down, and lateral signals, facilitating a dynamic, layer-wise parallel form of neural computation. Unlike spiking neural coding, which relies on feedback synapses to adjust neural electrical activity, our model operates purely online and forward in time, offering a promising way to learn distributed representations of sensory data patterns with temporal spike signals. Notably, our experimental results on several pattern datasets demonstrate that the event-driven forward-forward (ED-FF) framework works well for training a dynamic recurrent spiking system capable of both classification and reconstruction.
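The event-driven, spiking variant is the paper's contribution; the underlying (non-spiking) forward-forward objective it generalizes can be sketched locally per layer. Each layer pushes a "goodness" score (summed squared activity) above a threshold for positive data and below it for negative data, with no backward pass through other layers. All dimensions, rates, and the toy one-hot inputs below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def goodness(h):
    return float((h ** 2).sum())          # FF "goodness": summed squared activity

def ff_step(W, x, positive, theta=2.0, lr=0.05):
    """One local forward-forward update for a single layer: raise goodness
    above theta on positive data, push it below theta on negative data."""
    h = np.maximum(W @ x, 0.0)                        # ReLU layer activity
    p = 1.0 / (1.0 + np.exp(theta - goodness(h)))     # P(sample is positive)
    sign = (1.0 - p) if positive else -p              # d log-likelihood / d goodness
    W += lr * sign * 2.0 * np.outer(h, x)             # dg/dW = 2 h x^T on active units
    return W

W = rng.uniform(0.1, 0.5, (8, 4))
x_pos, x_neg = np.eye(4)[0], np.eye(4)[1]  # toy positive / negative inputs
for _ in range(300):
    W = ff_step(W, x_pos, True)
    W = ff_step(W, x_neg, False)

g_pos = goodness(np.maximum(W @ x_pos, 0.0))
g_neg = goodness(np.maximum(W @ x_neg, 0.0))
print(g_pos > g_neg)  # → True: the layer separates positive from negative
```

The event-driven generalization replaces the static activity h with spike-driven membrane potentials accumulated over a stimulus window, but the locality of the update, which is what removes the need for feedback synapses, is the same.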

    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that the complex dynamics can be realized in various physical hardware implementations and biological devices. This yields greater flexibility and shorter computation time. Moreover, the neuronal responses triggered by the model's dynamics shed light on understanding brain mechanisms that also exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models, and then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms by RC. Finally, we offer new perspectives on RC development, including reservoir design, unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution. Comment: 51 pages, 19 figures, IEEE Access.
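The recipe the abstract describes, a fixed random recurrent network with only a trained linear readout, is concrete enough to sketch as an echo state network. Reservoir size, spectral radius, and the two-step-delay memory task below are illustrative choices, not drawn from the survey:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, washout = 100, 1000, 100

# Fixed random reservoir: these weights are never trained.
W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9

u = rng.uniform(-1, 1, T)                    # random input signal
y = np.roll(u, 2)                            # target: input delayed by 2 steps

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])   # reservoir state update
    states[t] = x

# Only the linear readout is trained, here by ridge regression
# on the states collected after an initial washout period.
X, Y = states[washout:], y[washout:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)
mse = float(np.mean((X @ W_out - Y) ** 2))
print(mse)  # small: the reservoir's memory recovers the delayed input
```

The same readout-only training is what makes physical reservoirs attractive: the fixed random dynamics can be any sufficiently rich physical substrate, and only the final linear map needs to be fit.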

    Dynamical model for the neural activity of singing Serinus canaria

    Vocal production in songbirds is a key topic regarding the motor control of a complex, learned behavior. Birdsong is the result of the interaction between the activity of an intricate set of neural nuclei specifically dedicated to song production and learning (known as the "song system"), the respiratory system, and the vocal organ. These systems interact and give rise to precise biomechanical motor gestures which result in song production. Telencephalic neural nuclei play a key role in the production of motor commands that drive the periphery, and while several attempts have been made to understand their coding strategy, difficulties arise when trying to understand neural activity in the frame of the song system as a whole. In this work, we report neural additive models embedded in an architecture compatible with the song system, providing a tool to reduce the dimensionality of the problem by considering the global activity of the units in each neural nucleus. This model is capable of generating outputs compatible with measurements of air sac pressure during song production in canaries (Serinus canaria). In this work, we show that the activity in a telencephalic nucleus required by the model to reproduce the observed respiratory gestures is compatible with electrophysiological recordings of single neuron activity in freely behaving animals.
    Authors: Cecilia Thomsett Herbert, Santiago Boari, Bernardo Gabriel Mindlin, and Ana Amador (Instituto de Física de Buenos Aires, CONICET and Universidad de Buenos Aires, Argentina).
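The additive-rate-model building block behind this kind of architecture can be sketched with a single unit: a leaky rate variable driven through a sigmoid, so that a square "telencephalic" command is transformed into a smooth pressure-like gesture. The time constant, sigmoid shape, and drive pattern below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def rate_unit(drive, tau=10.0, dt=1.0):
    """Additive rate unit: tau * dx/dt = -x + S(drive), with a sigmoidal
    activation S. Returns the unit's activity over time."""
    S = lambda v: 1.0 / (1.0 + np.exp(-4.0 * (v - 0.5)))
    x, out = 0.0, []
    for d in drive:
        x += dt / tau * (-x + S(d))   # forward-Euler integration
        out.append(x)
    return np.array(out)

# A square command pulse produces a smooth, delayed pressure-like gesture.
drive = np.zeros(200)
drive[50:120] = 1.0
p = rate_unit(drive)
print(p.argmax())  # the gesture peaks at the end of the command pulse
```

Summing a few such units, each standing in for the mean activity of one nucleus, is the dimensionality reduction the abstract describes: the model tracks global nucleus activity rather than individual neurons.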

    Exact Gradient Computation for Spiking Neural Networks Through Forward Propagation

    Spiking neural networks (SNNs) have recently emerged as alternatives to traditional neural networks, owing to energy efficiency benefits and the capacity to better capture biological neuronal mechanisms. However, the classic backpropagation algorithm for training traditional networks has been notoriously difficult to apply to SNNs due to the hard thresholding and discontinuities at spike times. Therefore, a large majority of prior work has assumed that exact gradients of SNNs w.r.t. their weights do not exist and has focused on approximation methods that produce surrogate gradients. In this paper, (1) by applying the implicit function theorem to SNNs at the discrete spike times, we prove that, albeit being non-differentiable in time, SNNs have well-defined gradients w.r.t. their weights, and (2) we propose a novel training algorithm, called forward propagation (FP), that computes exact gradients for SNNs. FP exploits the causality structure between the spikes and allows us to parallelize computation forward in time. It can be used with other algorithms that simulate the forward pass, and it also provides insight into why other related algorithms, such as Hebbian learning and recently proposed surrogate gradient methods, may perform well.
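The implicit-function-theorem step can be made concrete on a single neuron. Take a membrane potential with the closed form V(t) = w·I·(1 − e^(−t/τ)) and define the spike time t* by F(t*, w) = V(t*) − θ = 0; then dt*/dw = −(∂F/∂w)/(∂F/∂t*) even though V is non-differentiable across spikes. This toy neuron and its parameters are illustrative, not the paper's model; the exact gradient is checked against a finite difference:

```python
import math

tau, I, theta = 1.0, 1.0, 0.5

def spike_time(w):
    """First threshold crossing of V(t) = w*I*(1 - exp(-t/tau)),
    in closed form (assumes w*I > theta, so a spike occurs)."""
    return -tau * math.log(1.0 - theta / (w * I))

def dspike_dw(w):
    """Exact dt*/dw via the implicit function theorem applied to
    F(t*, w) = w*I*(1 - exp(-t*/tau)) - theta = 0."""
    t = spike_time(w)
    dF_dw = I * (1.0 - math.exp(-t / tau))     # partial F / partial w
    dF_dt = w * I * math.exp(-t / tau) / tau   # partial F / partial t*
    return -dF_dw / dF_dt

w, eps = 1.2, 1e-6
fd = (spike_time(w + eps) - spike_time(w - eps)) / (2 * eps)
print(dspike_dw(w), fd)  # the two values agree closely
```

The sign is negative, as expected: a larger weight drives the membrane to threshold sooner. Chaining such spike-time derivatives through the causal spike ordering is, in spirit, what lets an exact forward-in-time gradient computation exist.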