48 research outputs found

    An efficient automated parameter tuning framework for spiking neural networks

    As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EAs) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation showed a 65× speedup of the GPU implementation over the CPU implementation, or 0.35 h per generation for the GPU vs. 23.5 h per generation for the CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
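The abstract above describes a generational evolutionary algorithm searching over SNN parameter vectors. The sketch below shows the general shape of such a tuning loop; the bounds, target values, and fitness function are hypothetical stand-ins, since the real framework evaluates full GPU-accelerated SNN simulations rather than a closed-form error.

```python
import random

# Minimal sketch of a generational EA parameter-tuning loop, in the spirit
# of the framework described in the abstract. PARAM_BOUNDS, TARGET, and
# fitness() are hypothetical stand-ins for the real SNN evaluation.

PARAM_BOUNDS = [(0.0, 1.0)] * 4        # hypothetical open parameters
TARGET = [0.2, 0.5, 0.8, 0.3]          # hypothetical target response

def fitness(params):
    # Stand-in objective: negative squared error against a target response.
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def mutate(params, sigma=0.05):
    # Gaussian perturbation, clipped back into the allowed bounds.
    return [min(hi, max(lo, p + random.gauss(0, sigma)))
            for p, (lo, hi) in zip(params, PARAM_BOUNDS)]

def evolve(pop_size=20, generations=50, elite=5, seed=0):
    random.seed(seed)
    pop = [[random.uniform(lo, hi) for lo, hi in PARAM_BOUNDS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]                     # truncation selection
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - elite)]
    return max(pop, key=fitness)

best = evolve()
```

In the real framework, each fitness evaluation is an independent SNN simulation, which is why evaluating a whole population per generation parallelizes so well on a GPU.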

    A unified model of a biologically similar artificial neuron with reinforcement

    The article focuses on creating an algorithm for the functioning of a biologically similar artificial neuron as an information unit of an artificial neural network. The mechanisms by which biological neurons function (memory, impulse formation and transmission, multiplexing of information, etc.) are by now well studied by neurobiologists, but these findings see little use in the design of the computational apparatus of artificial neural networks. The object of the research is the set of structural and functional components of a biological neuron that enable it to function as an element of a biological neural network. The subject of the research is a model of a biologically similar artificial neuron with reinforcement as a functional unit of an artificial neural network. The study combines modern neuroscience findings, allowing the algorithm for the artificial neuron to account for the biological components of the neuron as an information unit: the spiking nature of impulse transmission, synaptic plasticity, the production and consumption of neurotransmitter molecules, and so on. The proposed algorithm closely approximates its biological counterpart and can be used to build biologically similar artificial neural networks.
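The article's full algorithm is not reproduced in the abstract, but the ingredients it names (spiking impulse transmission, synaptic plasticity, a depletable neurotransmitter pool, reinforcement) can be illustrated with a minimal leaky integrate-and-fire sketch. All constants below are hypothetical illustrations, not the article's values.

```python
# Minimal leaky integrate-and-fire neuron with a depletable neurotransmitter
# pool, illustrating the biological components the abstract names: spiking
# transmission, reward-driven plasticity, and transmitter production and
# consumption. All constants are hypothetical, not taken from the article.

class BioNeuron:
    def __init__(self, threshold=1.0, leak=0.9, pool=1.0):
        self.v = 0.0              # membrane potential
        self.threshold = threshold
        self.leak = leak          # per-step decay of the potential
        self.pool = pool          # available neurotransmitter (0..1)
        self.weight = 0.5         # single plastic synaptic weight

    def step(self, inp, reward=0.0):
        self.v = self.v * self.leak + self.weight * inp
        self.pool = min(1.0, self.pool + 0.02)    # transmitter production
        spiked = self.v >= self.threshold and self.pool >= 0.1
        if spiked:
            self.v = 0.0                          # reset after the spike
            self.pool -= 0.1                      # transmitter consumed
            # Reinforcement: reward strengthens the synapse that fired.
            self.weight = min(1.0, self.weight + 0.05 * reward)
        return spiked

n = BioNeuron()
spikes = sum(n.step(1.0, reward=1.0) for _ in range(20))
```

With constant input and positive reward, the neuron spikes repeatedly, its weight grows, and the transmitter pool is gradually drawn down faster than it is replenished.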

    Applications of Emerging Memory in Modern Computer Systems: Storage and Acceleration

    In recent years, heterogeneous architectures have emerged as a promising way to overcome the constraints of homogeneous multi-core architectures, such as supply voltage scaling, off-chip communication bandwidth, and application parallelism. Various forms of accelerators, e.g., GPUs and ASICs, have been extensively studied for their tradeoffs between computational efficiency and adaptivity. But with increasing capacity demands and continued technology scaling, accelerators also face cost-efficiency limitations due to their use of traditional memory technologies and architecture designs. Emerging memory has become a promising technology that inspires new designs by replacing traditional memory technologies in modern computer systems. In this dissertation, I first summarize my research on the application of spin-transfer torque random access memory (STT-RAM) in the GPU memory hierarchy: its simple cell structure and non-volatility enable a much smaller cell area than SRAM and almost zero standby power. I then introduce my research on memristors as the computation component in a neuromorphic computing accelerator, where the similarity between the programmable resistance states of memristors and the variable synaptic strengths of biological synapses simplifies the realization of neural network models. Finally, a dedicated interconnection network design for multicore neuromorphic computing systems is presented to reduce the prominent average latency and power consumption introduced by the NoC in a large-scale neuromorphic computing system.
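The memristor-as-synapse idea above rests on a simple physical computation: input voltages drive the rows of a crossbar, each cell's conductance acts as a synaptic weight (Ohm's law, I = G·V), and currents sum along columns (Kirchhoff's current law), yielding a vector-matrix product in one analog step. The sketch below models that computation numerically; the conductance and voltage values are hypothetical.

```python
# Numerical model of the analog vector-matrix multiply a memristor crossbar
# performs: per-cell currents I = G * V (Ohm's law) summed along each
# column (Kirchhoff's current law). The values below are hypothetical.

def crossbar_mvm(voltages, conductances):
    """voltages: row voltages (V); conductances: rows x cols matrix (S)."""
    cols = len(conductances[0])
    currents = [0.0] * cols
    for v, row in zip(voltages, conductances):
        for j, g in enumerate(row):
            currents[j] += g * v   # each cell contributes I = G * V
    return currents                # per-column output currents (A)

G = [[1e-3, 2e-3],
     [3e-3, 4e-3]]                 # 2x2 conductance matrix (siemens)
V = [0.5, 1.0]                     # input voltages (volts)
I = crossbar_mvm(V, G)             # ≈ [0.0035, 0.005] amperes
```

Because the multiply-accumulate happens in the analog domain across the whole array at once, the crossbar sidesteps the memory-bandwidth cost of fetching weights that a digital accelerator pays.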

    Robust learning algorithms for spiking and rate-based neural networks

    Inspired by the remarkable properties of the human brain, the fields of machine learning, computational neuroscience and neuromorphic engineering have achieved significant synergistic progress in the last decade. Powerful neural network models rooted in machine learning have been proposed as models for neuroscience and for applications in neuromorphic engineering. However, the aspect of robustness is often neglected in these models. Both biological and engineered substrates show diverse imperfections that deteriorate the performance of computational models or even prohibit their implementation. This thesis describes three projects aimed at implementing robust learning with local plasticity rules in neural networks. First, we demonstrate the advantages of neuromorphic computation in a pilot study on a prototype chip, quantifying the speed and energy consumption of the system compared to a software simulation and showing how on-chip learning contributes to the robustness of learning. Second, we present an implementation of spike-based Bayesian inference on accelerated neuromorphic hardware. The model copes, via learning, with the disruptive effects of the imperfect substrate and benefits from the acceleration. Finally, we present a robust model of deep reinforcement learning using local learning rules. It shows how backpropagation combined with neuromodulation could be implemented in a biologically plausible framework. The results contribute to the pursuit of robust and powerful learning networks for biological and neuromorphic substrates.
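The thesis's specific rules are not given in the abstract, but "local plasticity combined with neuromodulation" generally takes a three-factor form: the weight update depends only on locally available pre- and post-synaptic activity plus a global reward signal, with no backpropagated errors. The sketch below shows that general shape; the learning rate and activity values are illustrative assumptions.

```python
# Minimal sketch of a three-factor local plasticity rule of the general
# form the abstract alludes to: dw = lr * reward * pre * post, where pre
# and post are locally available activities and reward is a global
# neuromodulatory signal. All constants here are illustrative.

def three_factor_update(w, pre, post, reward, lr=0.1):
    eligibility = pre * post          # local Hebbian coincidence term
    return w + lr * reward * eligibility

w = 0.2
# Correlated activity under positive reward strengthens the synapse...
w = three_factor_update(w, pre=1.0, post=1.0, reward=+1.0)
# ...and the same activity under negative reward weakens it again.
w = three_factor_update(w, pre=1.0, post=1.0, reward=-1.0)
```

Because every term in the update is available at the synapse (or broadcast globally, like the reward), such rules remain implementable when the substrate is imperfect, which is the robustness property the thesis targets.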