14 research outputs found

    An efficient automated parameter tuning framework for spiking neural networks

    As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EAs) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4,104 neurons was tuned to give V1 simple-cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation showed a 65× speedup of the GPU implementation over the CPU implementation, or 0.35 h per generation for the GPU vs. 23.5 h per generation for the CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
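The tuning loop described above pairs an evolutionary algorithm with an expensive fitness evaluation (a full SNN simulation per candidate). A minimal sketch of that loop, with the SNN fitness replaced by a hypothetical stand-in objective (the target values, population size, and mutation scale here are illustrative, not the paper's):

```python
import random

def evaluate(params):
    """Hypothetical fitness stand-in: in the paper this would be a full
    GPU-accelerated SNN simulation scored against V1-like responses."""
    target = [0.5, 1.5, 3.0]  # illustrative "true" parameter values
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def tune(pop_size=20, n_params=3, generations=50, sigma=0.1, seed=0):
    """Elitist evolutionary loop: keep the best quarter of candidates,
    refill the population with Gaussian-mutated copies of them."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 4) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)   # best fitness first
        elite = pop[: pop_size // 4]
        pop = elite + [
            [g + rng.gauss(0, sigma) for g in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=evaluate)

best = tune()
```

Because each generation's evaluations are independent, they parallelize naturally across GPU workers, which is what yields the reported 65× speedup.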

    ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing

    Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programming skills. Here we put forward a newly developed simulation environment, ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at the GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License.

    Spiking Neural Network With Distributed Plasticity Reproduces Cerebellar Learning in Eye Blink Conditioning Paradigms

    In this study, we defined a realistic cerebellar model through the use of artificial spiking neural networks, testing it in computational simulations that reproduce associative motor tasks in multiple sessions of acquisition and extinction. Methods: Using evolutionary algorithms, we tuned the cerebellar microcircuit to find the near-optimal plasticity mechanism parameters that best reproduced human-like behavior in eye blink classical conditioning, one of the most extensively studied paradigms related to the cerebellum. We used two models: one with only cortical plasticity and another including two additional plasticity sites at the nuclear level. Results: First, both spiking cerebellar models were able to reproduce real human behavior well, in terms of both "timing" and "amplitude", expressing rapid acquisition, stable late acquisition, rapid extinction, and faster reacquisition of an associative motor task. Even though the model with only the cortical plasticity site showed good learning capabilities, the model with distributed plasticity produced faster and more stable acquisition of conditioned responses in the reacquisition phase. This behavior is explained by the effect of the nuclear plasticities, which have slow dynamics and can express memory consolidation and saving. Conclusions: We showed how the spiking dynamics of multiple interactive neural mechanisms implicitly drive multiple essential components of complex learning processes. Significance: This study presents a very advanced computational model, developed jointly by biomedical engineers, computer scientists, and neuroscientists. Given its realistic features, the proposed model can provide confirmations and suggestions about neurophysiological and pathological hypotheses and can be used in challenging clinical applications.

    A Novel GPU-Enabled Simulator for Large Scale Spiking Neural Networks, Journal of Telecommunications and Information Technology, 2016, nr 2

    The understanding of the structural and dynamic complexity of neural networks is greatly facilitated by computer simulations. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper a framework for modeling and parallel simulation of biologically inspired large-scale spiking neural networks on high-performance graphics processors is described. This tool is implemented in the OpenCL programming technology. It enables simulation studies with three neuron models: Integrate-and-Fire, Hodgkin-Huxley, and Izhikevich. The results of extensive simulations are provided to illustrate the operation and performance of the presented software framework. Particular attention is focused on the computational speed-up factor.
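Of the three supported neuron models, the Izhikevich model is the simplest to state: two coupled equations plus a reset rule. A forward-Euler sketch of one simulation timestep (the regular-spiking parameter set and the 1 ms step are standard textbook choices, not taken from this paper):

```python
def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One forward-Euler step of the Izhikevich model with the
    regular-spiking parameter set. v is the membrane potential (mV),
    u the recovery variable, I the input current."""
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:              # spike: reset v, bump the recovery variable
        return c, u + d, True
    return v, u, False

# Drive one neuron with a constant current for 1000 steps and count spikes.
v, u, spikes = -65.0, -13.0, 0
for _ in range(1000):
    v, u, fired = izhikevich_step(v, u, I=10.0)
    spikes += fired
```

On a GPU (e.g. via OpenCL, as in the paper), this per-neuron update is applied to every neuron in parallel each timestep, which is where the speed-up factor comes from.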

    A Multiple-Plasticity Spiking Neural Network Embedded in a Closed-Loop Control System to Model Cerebellar Pathologies

    The cerebellum plays a crucial role in sensorimotor control, and cerebellar disorders compromise adaptation and learning of motor responses. However, the link between alterations at the network level and cerebellar dysfunction is still unclear. In principle, this understanding would benefit from the development of an artificial system embedding the salient neuronal and plastic properties of the cerebellum and operating in closed loop. To this aim, we have exploited a realistic spiking computational model of the cerebellum to analyze the network correlates of cerebellar impairment. The model was modified to reproduce three different damages of the cerebellar cortex: (i) a loss of the main output neurons (Purkinje Cells), (ii) a lesion to the main cerebellar afferents (Mossy Fibers), and (iii) a damage to a major mechanism of synaptic plasticity (Long-Term Depression). The modified network models were challenged with an Eye-Blink Classical Conditioning test, a standard learning paradigm used to evaluate cerebellar impairment, and the outcome was compared to reference results obtained in human or animal experiments. In all cases, the model reproduced the partial and delayed conditioning typical of the pathologies, indicating that intact cerebellar cortex functionality is required to accelerate learning by transferring acquired information to the cerebellar nuclei. Interestingly, depending on the type of lesion, the redistribution of synaptic plasticity and response timing varied greatly, generating specific adaptation patterns. Thus, the present work not only extends the generalization capabilities of the cerebellar spiking model to pathological cases, but also predicts how changes at the neuronal level are distributed across the network, making it usable to infer cerebellar circuit alterations occurring in cerebellar pathologies.
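Damage type (i), the loss of output neurons, amounts to a structural edit of the network. As an illustrative sketch (not the authors' code), a Purkinje-cell loss can be mimicked by silencing the outgoing connections of a random fraction of units; the weight-matrix layout assumed here is hypothetical:

```python
import random

def lesion(weights, fraction, seed=0):
    """Silence a random fraction of output units by zeroing their
    outgoing rows. Assumed layout: weights[i][j] is the synaptic
    weight from neuron i to target j. Returns a new matrix; the
    original is left untouched so the intact model can be rerun."""
    rng = random.Random(seed)
    n = len(weights)
    dead = set(rng.sample(range(n), int(fraction * n)))
    return [[0.0] * len(row) if i in dead else list(row)
            for i, row in enumerate(weights)]
```

Running the same conditioning protocol on the intact and lesioned matrices is then a controlled comparison of learning curves, which is the experimental logic of the study.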

    A Digital Neuron Architecture for Flexible Spiking Neural Network Simulations

    Master's thesis, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University Graduate School, February 2019. Advisor: Jangwoo Kim. Spiking Neural Networks (SNNs) are widely used in neuroscience research as models for understanding how the nervous system works. An SNN mimics the nervous system of organisms, incorporating the concept of time into neurons and the inter-neuron interactions called spikes. The internal state of a neuron changes with respect to time and input spikes; when the internal state satisfies a certain condition, the neuron generates a spike. As the neurons that make up the nervous system exhibit different behaviors depending on their type, an SNN simulation framework should support these various neuron behaviors. Some frameworks rely on a general-purpose processor to support arbitrary neuron models, but they are inefficient in terms of execution time and energy consumption. To overcome this inefficiency, some frameworks propose a specialized accelerator. However, existing simulation accelerators can only support a limited number of neuron models due to the limitations of a model-driven design, which limits the simulation of target SNNs. Flexon, proposed in this paper, enables efficient SNN simulations by utilizing biologically common features shared by various neuron models. We analyzed the SNNs used in existing neuroscience research, identified a set of biologically common features, and designed a digital circuit based on them. Flexon's feature-driven design is able to support a greater number of neuron behaviors than existing model-driven designs. In addition, shared computational primitives are exploited to further reduce the chip area, and the hardware complexity can be increased to further improve flexibility. Flexon, spatially-folded Flexon, and HH-Flexon are flexible, efficient, and can easily be integrated into existing hardware. Prototyping results for Flexon and spatially-folded Flexon in TSMC 45 nm show that the energy efficiency of the 12-neuron Flexon is improved by 6,186× and 422× over a CPU and a GPU, respectively, with a small footprint of 9.26 mm². The performance improvement of the 72-neuron spatially-folded Flexon is 122.45× and 9.832× over the CPU and GPU, respectively, with a smaller footprint of 7.62 mm².
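The "biologically common features" idea can be illustrated in software with the leaky integrate-and-fire model: leak, integration, threshold test, and reset are exactly the kind of primitives a feature-driven circuit factors out of many published neuron models. A sketch (parameter values are conventional textbook choices, not Flexon's):

```python
def lif_step(v, I, tau=20.0, v_rest=-65.0, v_th=-50.0, v_reset=-65.0, dt=1.0):
    """One leaky integrate-and-fire update, composed of the shared
    primitives: leak toward v_rest, integrate the input current I,
    test against the threshold v_th, reset on a spike."""
    v += dt * (-(v - v_rest) + I) / tau
    if v >= v_th:
        return v_reset, True       # threshold crossed: emit a spike
    return v, False

# Drive one unit with a constant 20-unit current for 200 steps.
v, spikes = -65.0, 0
for _ in range(200):
    v, fired = lif_step(v, I=20.0)
    spikes += fired
```

In a feature-driven design, swapping the leak or integration rule changes the model family without changing the surrounding pipeline, which is what lets one circuit cover many neuron behaviors.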

    Xolotl: An Intuitive and Approachable Neuron and Network Simulator for Research and Teaching

    Conductance-based models of neurons are used extensively in computational neuroscience. Working with these models can be challenging due to their high dimensionality and large number of parameters. Here, we present a neuron and network simulator built on a novel automatic type system that binds object-oriented code written in C++ to objects in MATLAB. Our approach builds on the tradition of uniting the speed of languages like C++ with the ease of use and feature set of scientific programming languages like MATLAB. Xolotl allows for the creation and manipulation of hierarchical models with components that are named and searchable, permitting intuitive high-level programmatic control over all parts of the model. The simulator's architecture allows for the interactive manipulation of any parameter in any model, and for visualizing the effects of changing that parameter immediately. Xolotl is fully featured with hundreds of ion channel models from the electrophysiological literature, and can be extended to include arbitrary conductances, synapses, and mechanisms. Several core features, like bookmarking of parameters and automatic hashing of source code, facilitate reproducible and auditable research. Its ease of use and rich visualization capabilities make it an attractive option in teaching environments. Finally, Xolotl is written in a modular fashion, includes detailed tutorials and worked examples, and is freely available at https://github.com/sg-s/xolotl, enabling seamless integration into the workflows of other researchers.
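The "named and searchable hierarchy" design can be sketched in a few lines. The class below is a toy stand-in, not xolotl's actual MATLAB/C++ API: it only shows how dotted-path lookup makes every parameter of a nested model addressable (component names like `AB` and `NaV` are illustrative):

```python
class Component(dict):
    """Toy stand-in for a named, searchable model hierarchy:
    components nest inside components, and any parameter can be
    read or changed via a dotted path such as 'AB.NaV.gbar'."""
    def find(self, path):
        node = self
        for key in path.split("."):
            node = node[key]
        return node

    def set(self, path, value):
        head, _, leaf = path.rpartition(".")
        (self.find(head) if head else self)[leaf] = value

# A compartment 'AB' with two conductances, each with a gbar parameter.
model = Component(
    AB=Component(NaV=Component(gbar=1000.0), Leak=Component(gbar=0.1))
)
model.set("AB.NaV.gbar", 500.0)   # interactive-style parameter change
```

Addressing parameters by path is what makes "change any parameter and immediately see the effect" workflows possible: the UI and the solver share one canonical name for every quantity.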

    Neuromodulation Based Control of Autonomous Robots on a Cloud Computing Platform

    In recent years, the advancement of neurobiologically plausible models and computer networking has resulted in new ways of implementing control systems on robotic platforms. This work presents a control approach based on vertebrate neuromodulation and its implementation on autonomous robots in the open-source, open-access environment of the Robot Operating System (ROS). A spiking neural network (SNN) is used to model the neuromodulatory function for generating context-based behavioral responses of the robots to sensory input signals. The neural network incorporates three types of neurons: cholinergic and noradrenergic (ACh/NE) neurons for attention focusing and action selection, dopaminergic (DA) neurons for reward- and curiosity-seeking, and serotonergic (5-HT) neurons for risk-aversion behaviors. This model depicts neuron activity that is biologically realistic but computationally efficient enough to allow for large-scale simulation of thousands of neurons. The model is implemented using graphics processing units (GPUs) for parallel computing in real time within the ROS environment. The model is used to study the risk-taking, risk-aversive, and distracted behaviors of the neuromodulated robots in single- and multi-robot configurations. The entire process is implemented in a cloud computing environment using ROS, where the robots communicate wirelessly with the computing nodes through on-board laptops. Notably, unlike traditional neural networks, the neuromodulatory models do not need any pre-training. Instead, the robots learn from the sensory inputs and follow the behavioral facets of living organisms. The details of algorithm development, the experimental setup, and implementation results under different conditions, in both single- and multi-robot configurations, are presented along with a discussion on the scope of further work.
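The interplay of dopaminergic reward-seeking and serotonergic risk aversion can be caricatured as a weighted action-selection rule. This is a hypothetical sketch, not the authors' SNN: scalar "DA" and "5-HT" levels stand in for population activity, and a softmax turns the net drive into choice probabilities:

```python
import math
import random

def select_action(values, risks, da=1.0, serotonin=1.0, rng=random):
    """Hypothetical neuromodulated action selection: dopamine (da)
    scales the pull of expected reward, serotonin scales the penalty
    on risk; a softmax over the net drive gives choice probabilities,
    from which one action index is sampled."""
    drive = [da * v - serotonin * r for v, r in zip(values, risks)]
    m = max(drive)                       # subtract max for stability
    weights = [math.exp(d - m) for d in drive]
    total = sum(weights)
    r, acc = rng.random(), 0.0
    for i, w in enumerate(weights):
        acc += w / total
        if r <= acc:
            return i
    return len(weights) - 1
```

Raising the serotonin level shifts choices toward low-risk actions even when a riskier action promises more reward, which mirrors the risk-aversive behavioral mode described above.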