
    A Neural-Astrocytic Network Architecture: Astrocytic calcium waves modulate synchronous neuronal activity

    Understanding the role of astrocytes in brain computation is a nascent challenge that promises immense rewards in terms of new neurobiological knowledge that can be translated into artificial intelligence. In our ongoing effort to identify principles endowing the astrocyte with unique functions in brain computation, and to translate them into neural-astrocytic networks (NANs), we propose a biophysically realistic model of an astrocyte that preserves the experimentally observed spatial allocation of its distinct subcellular compartments. We show how our model may encode, and modulate, the extent of synchronous neural activity via calcium waves that propagate intracellularly across the astrocytic compartments. This relationship between neural activity and astrocytic calcium waves has long been speculated, but a mechanistic explanation is still lacking. Our model suggests an astrocytic "calcium cascade" mechanism for neuronal synchronization, which may empower NANs by imposing the kind of periodic neural modulation known to reduce coding errors. By expanding our notions of information processing in astrocytes, our work aims to solidify a computational role for non-neuronal cells and incorporate them into artificial networks.
    Comment: International Conference on Neuromorphic Systems (ICONS) 201
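
    The abstract above describes an astrocyte whose compartments carry an intracellular calcium wave, with the wave's spatial extent tracking, and modulating, synchronous neural activity. The following is a minimal sketch of that general idea, assuming a simple reaction-diffusion abstraction over a 1-D chain of compartments; all names, parameters and values (N_COMP, TAU_CA, D_CA, CA_THR, the drive signal) are illustrative assumptions and are not taken from the paper's biophysical model.

```python
# Minimal sketch: a calcium wave spreading across astrocytic compartments,
# driven by local neuronal activity. Parameters are assumptions, not the
# values of the published model.
import numpy as np

N_COMP = 10     # astrocytic compartments along one process (assumed)
DT = 1e-3       # Euler step, s
TAU_CA = 0.5    # Ca2+ decay time constant, s (assumed)
D_CA = 5.0      # inter-compartment coupling rate, 1/s (assumed)
CA_THR = 0.3    # Ca2+ level at which a compartment feeds back to neurons

def step_calcium(ca, drive):
    """One Euler step of the compartmental Ca2+ dynamics.

    ca:    array (N_COMP,) of current Ca2+ levels
    drive: array (N_COMP,) of local neuronal input (e.g. filtered spike rates)
    """
    padded = np.pad(ca, 1, mode="edge")              # zero-flux boundaries
    coupling = padded[:-2] + padded[2:] - 2.0 * ca   # discrete Laplacian
    dca = -ca / TAU_CA + D_CA * coupling + drive
    return ca + DT * dca

def wave_extent(ca):
    """Number of compartments recruited by the calcium wave.

    In this sketch, the extent is what encodes (and could be used to
    modulate) how widespread the synchronous neuronal activity is.
    """
    return int(np.sum(ca > CA_THR))

# toy usage: strong transient drive at one end launches a wave that spreads
ca = np.zeros(N_COMP)
max_extent = 0
for t in range(1000):
    drive = np.zeros(N_COMP)
    drive[0] = 4.0 if t < 500 else 0.0               # transient local input
    ca = step_calcium(ca, drive)
    max_extent = max(max_extent, wave_extent(ca))
print("max compartments recruited:", max_extent)
```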

    Astrocyte to spiking neuron communication using Networks-on-Chip ring topology


    Parallel computing for brain simulation

    [Abstract] Background: The human brain is the most complex system in the known universe and therefore one of the greatest mysteries. It provides human beings with extraordinary abilities, yet how and why most of these abilities arise is still not understood. Aims: For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have made it possible to create the first simulations with a number of neurons comparable to that of a human brain. Conclusion: This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital, analog and hybrid models. The review covers the current applications of these works as well as future trends. It focuses both on works that pursue advances in Neuroscience and on others that seek new discoveries in Computer Science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized, and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing.
    Funding: Galicia. Consellería de Cultura, Educación e Ordenación Universitaria; GRC2014/049. Galicia. Consellería de Cultura, Educación e Ordenación Universitaria; R2014/039. Instituto de Salud Carlos III; PI13/0028

    Assessing Self-Repair on FPGAs with Biologically Realistic Astrocyte-Neuron Networks

    This paper presents a hardware-based implementation of a biologically faithful astrocyte-based self-repairing mechanism for Spiking Neural Networks. Spiking Astrocyte-Neuron Networks (SANNs) are a new computing paradigm which captures the key mechanisms of how the human brain performs repairs. Using SANNs in hardware offers the potential for realizing computing architectures that can self-repair. This paper demonstrates that SANNs in hardware are resilient to significant levels of faults. The key novelty of the paper lies in implementing an SANN on FPGAs using a fixed-point representation and demonstrating graceful performance degradation under different levels of injected faults via its self-repair capability. A fixed-point implementation of astrocytes, neurons and tripartite synapses is presented and compared against previous hardware floating-point and Matlab software implementations of the SANN. All results are obtained from the SANN FPGA implementation and show how the reduced fixed-point representation can maintain the biologically realistic repair capability.
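
    The key point above is the move from floating-point to a reduced fixed-point representation while preserving the repair behaviour. The sketch below illustrates what a fixed-point state update looks like in principle, assuming a Q4.12 format and a simple leaky integrate-and-fire neuron; the word length, Q-format, neuron model and parameter values are illustrative assumptions, not the representation reported in the paper.

```python
# Minimal sketch of fixed-point (Q4.12) arithmetic for a neuron update,
# of the kind an FPGA design would use instead of floating point.
FRAC_BITS = 12
SCALE = 1 << FRAC_BITS                            # 4096 steps per unit
WORD_MIN, WORD_MAX = -(1 << 15), (1 << 15) - 1    # signed 16-bit range

def to_fixed(x: float) -> int:
    """Quantise a real value to the Q4.12 grid with saturation."""
    return max(WORD_MIN, min(WORD_MAX, int(round(x * SCALE))))

def fixed_mul(a: int, b: int) -> int:
    """Multiply two Q4.12 numbers; the raw product has 2*FRAC_BITS fractional
    bits, so shift right to return to Q4.12 (arithmetic shift, as in hardware)."""
    return (a * b) >> FRAC_BITS

# leaky integrate-and-fire update v <- v + dt/tau * (I - v), all in Q4.12
DT_OVER_TAU = to_fixed(0.05)                      # assumed time-constant ratio
V_THRESH = to_fixed(1.0)

def lif_step(v: int, i_syn: int) -> tuple[int, bool]:
    """One fixed-point membrane update; returns (new_v, spiked)."""
    v = v + fixed_mul(DT_OVER_TAU, i_syn - v)
    if v >= V_THRESH:
        return 0, True                            # reset on spike
    return v, False

# toy usage: constant suprathreshold input produces regular spiking
v, spikes = 0, 0
for _ in range(1000):
    v, fired = lif_step(v, to_fixed(1.5))
    spikes += fired
print("spikes in 1000 steps:", spikes)
```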

    Digital Implementation of Bio-Inspired Spiking Neuronal Networks

    Spiking Neural Networks, as the third generation of artificial neural networks, offer a promising solution for future computing, prosthesis, robotic and image processing applications. This thesis introduces digital designs and implementations of the building blocks of Spiking Neural Networks (SNNs), including neurons, learning rules, and small networks of neurons in the form of a Central Pattern Generator (CPG) which can be used as a module in the control system of a bio-inspired robot. The circuits have been developed using the Verilog hardware description language, simulated in ModelSim, and compiled and synthesised with Altera Quartus Prime software for FPGA devices.
    The astrocyte, one of the brain's cell types, controls synaptic activity between neurons by providing feedback to them. A novel digital hardware design is proposed for a neuron-synapse-astrocyte network based on the biological Adaptive Exponential (AdEx) neuron and the Postnov astrocyte cell model. The network can be used for the implementation of large-scale spiking neural networks. Synthesis of the designed circuits shows that the astrocyte circuit is able to imitate its biological counterpart and successfully regulate synaptic transmission. In addition, the synthesis results confirm that the proposed design uses less than 1% of the available resources of a Virtex II FPGA, saving up to 4.4% of FPGA resources in comparison to other designs.
    A learning rule is an essential part of every neural network, including SNNs. In an SNN, a special type of learning called Spike Timing Dependent Plasticity (STDP) is used to modify the connection strength between spiking neurons. Pair-based STDP (PSTDP) works on pairs of spikes, while triplet-based STDP (TSTDP) works on triplets of spikes to modify the synaptic weights. Low-cost, accurate, and configurable digital architectures are proposed for the PSTDP and TSTDP learning models. The proposed circuits have been compared with state-of-the-art methods such as Lookup Table (LUT) and Piecewise Linear (PWL) approximation, and can be employed in large-scale SNN implementations due to their compactness and configurability.
    Most neuron models in the literature describe the behaviour of a single neuron. Since there is a large number of neurons in the brain, a population-based model can help in better understanding brain function, implementing cognitive tasks and studying brain diseases. The Gaussian Wilson-Cowan model, one such population-based model, represents neuronal activity in the neocortex region of the brain. A digital model is proposed for the Gaussian Wilson-Cowan model and examined in terms of its dynamical and timing behaviour. The evaluation indicates that the proposed model is able to reproduce the dynamical behaviour of the original model.
    The digital architectures are implemented on an Altera FPGA board. Experimental results show that the proposed circuits take at most 2% of the resources of a Stratix Altera board. In addition, static timing analysis indicates that the circuits can operate at a maximum frequency of 244 MHz.
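
    Among the building blocks described above, the pair-based STDP (PSTDP) rule lends itself to a compact illustration. The sketch below is a plain software version of a trace-based PSTDP update, assuming exponential pre- and post-synaptic traces; the time constants, amplitudes and weight bounds are illustrative assumptions, and the thesis's digital architectures approximate such exponentials in hardware rather than computing them in floating point (e.g. via the LUT or PWL methods mentioned above).

```python
# Minimal sketch of trace-based pair-wise STDP (PSTDP). All constants are
# assumptions for illustration, not the values used in the thesis.
import math

TAU_PLUS, TAU_MINUS = 20.0, 20.0    # trace time constants, ms (assumed)
A_PLUS, A_MINUS = 0.01, 0.012       # potentiation / depression amplitudes
W_MIN, W_MAX = 0.0, 1.0             # hard weight bounds
DT = 1.0                            # simulation step, ms

class PairSTDPSynapse:
    def __init__(self, w=0.5):
        self.w = w
        self.x_pre = 0.0    # decays with TAU_PLUS, bumped on presynaptic spikes
        self.x_post = 0.0   # decays with TAU_MINUS, bumped on postsynaptic spikes

    def step(self, pre_spike: bool, post_spike: bool):
        # exponential decay of both traces
        self.x_pre *= math.exp(-DT / TAU_PLUS)
        self.x_post *= math.exp(-DT / TAU_MINUS)
        if pre_spike:
            # pre after post -> depression proportional to the post trace
            self.w = max(W_MIN, self.w - A_MINUS * self.x_post)
            self.x_pre += 1.0
        if post_spike:
            # post after pre -> potentiation proportional to the pre trace
            self.w = min(W_MAX, self.w + A_PLUS * self.x_pre)
            self.x_post += 1.0
        return self.w

# toy usage: a pre spike consistently 5 ms before a post spike -> potentiation
syn = PairSTDPSynapse()
for t in range(200):
    syn.step(pre_spike=(t % 20 == 0), post_spike=(t % 20 == 5))
print("weight after repeated pre-before-post pairing:", round(syn.w, 3))
```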

    On-chip communication for neuro-glia networks


    Deep Artificial Neural Networks and Neuromorphic Chips for Big Data Analysis: Pharmaceutical and Bioinformatics Applications

    [Abstract] Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by advances in Big Data, Deep Learning (DL) and drastically increased chip processing power, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. This work presents an overview of the main DNN architectures and their usefulness in Pharmacology and Bioinformatics. The featured applications are: drug design, virtual screening (VS), Quantitative Structure-Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need for neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons: DNNs and neuromorphic chips should also include glial cells, given the proven importance of astrocytes, a type of glial cell which contributes to information processing in the brain. Deep Artificial Neuron-Astrocyte Networks (DANANs) could overcome the difficulties in architecture design, learning process and scalability of current ML methods.
    Funding: Galicia. Consellería de Cultura, Educación e Ordenación Universitaria; GRC2014/049. Galicia. Consellería de Cultura, Educación e Ordenación Universitaria; R2014/039. Instituto de Salud Carlos III; PI13/0028