10,009 research outputs found

    ์—๋„ˆ์ง€ ํšจ์œจ์  ์ธ๊ณต์‹ ๊ฒฝ๋ง ์„ค๊ณ„

    Get PDF
    ํ•™์œ„๋…ผ๋ฌธ (๋ฐ•์‚ฌ)-- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ์ „๊ธฐยท์ •๋ณด๊ณตํ•™๋ถ€, 2019. 2. ์ตœ๊ธฐ์˜.์ตœ๊ทผ ์‹ฌ์ธต ํ•™์Šต์€ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜, ์Œ์„ฑ ์ธ์‹ ๋ฐ ๊ฐ•ํ™” ํ•™์Šต๊ณผ ๊ฐ™์€ ์˜์—ญ์—์„œ ๋†€๋ผ์šด ์„ฑ๊ณผ๋ฅผ ๊ฑฐ๋‘๊ณ  ์žˆ๋‹ค. ์ตœ์ฒจ๋‹จ ์‹ฌ์ธต ์ธ๊ณต์‹ ๊ฒฝ๋ง ์ค‘ ์ผ๋ถ€๋Š” ์ด๋ฏธ ์ธ๊ฐ„์˜ ๋Šฅ๋ ฅ์„ ๋„˜์–ด์„  ์„ฑ๋Šฅ์„ ๋ณด์—ฌ์ฃผ๊ณ  ์žˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ธ๊ณต์‹ ๊ฒฝ๋ง์€ ์—„์ฒญ๋‚œ ์ˆ˜์˜ ๊ณ ์ •๋ฐ€ ๊ณ„์‚ฐ๊ณผ ์ˆ˜๋ฐฑ๋งŒ๊ฐœ์˜ ๋งค๊ฐœ ๋ณ€์ˆ˜๋ฅผ ์ด์šฉํ•˜๊ธฐ ์œ„ํ•œ ๋นˆ๋ฒˆํ•œ ๋ฉ”๋ชจ๋ฆฌ ์•ก์„ธ์Šค๋ฅผ ์ˆ˜๋ฐ˜ํ•œ๋‹ค. ์ด๋Š” ์—„์ฒญ๋‚œ ์นฉ ๊ณต๊ฐ„๊ณผ ์—๋„ˆ์ง€ ์†Œ๋ชจ ๋ฌธ์ œ๋ฅผ ์•ผ๊ธฐํ•˜์—ฌ ์ž„๋ฒ ๋””๋“œ ์‹œ์Šคํ…œ์—์„œ ์ธ๊ณต์‹ ๊ฒฝ๋ง์ด ์‚ฌ์šฉ๋˜๋Š” ๊ฒƒ์„ ์ œํ•œํ•˜๊ฒŒ ๋œ๋‹ค. ์ด ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ์ธ๊ณต์‹ ๊ฒฝ๋ง์„ ๋†’์€ ์—๋„ˆ์ง€ ํšจ์œจ์„ฑ์„ ๊ฐ–๋„๋ก ์„ค๊ณ„ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•œ๋‹ค. ์ฒซ๋ฒˆ์งธ ํŒŒํŠธ์—์„œ๋Š” ๊ฐ€์ค‘ ์ŠคํŒŒ์ดํฌ๋ฅผ ์ด์šฉํ•˜์—ฌ ์งง์€ ์ถ”๋ก  ์‹œ๊ฐ„๊ณผ ์ ์€ ์—๋„ˆ์ง€ ์†Œ๋ชจ์˜ ์žฅ์ ์„ ๊ฐ–๋Š” ์ŠคํŒŒ์ดํ‚น ์ธ๊ณต์‹ ๊ฒฝ๋ง ์„ค๊ณ„ ๋ฐฉ๋ฒ•์„ ๋‹ค๋ฃฌ๋‹ค. ์ŠคํŒŒ์ดํ‚น ์ธ๊ณต์‹ ๊ฒฝ๋ง์€ ์ธ๊ณต์‹ ๊ฒฝ๋ง์˜ ๋†’์€ ์—๋„ˆ์ง€ ์†Œ๋น„ ๋ฌธ์ œ๋ฅผ ๊ทน๋ณตํ•˜๊ธฐ ์œ„ํ•œ ์œ ๋งํ•œ ๋Œ€์•ˆ ์ค‘ ํ•˜๋‚˜์ด๋‹ค. ๊ธฐ์กด ์—ฐ๊ตฌ์—์„œ ์‹ฌ์ธต ์ธ๊ณต์‹ ๊ฒฝ๋ง์„ ์ •ํ™•๋„ ์†์‹ค์—†์ด ์ŠคํŒŒ์ดํ‚น ์ธ๊ณต์‹ ๊ฒฝ๋ง์œผ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๋ฐฉ๋ฒ•์ด ๋ฐœํ‘œ๋˜์—ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๊ธฐ์กด์˜ ๋ฐฉ๋ฒ•๋“ค์€ rate coding์„ ์‚ฌ์šฉํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๊ธด ์ถ”๋ก  ์‹œ๊ฐ„์„ ๊ฐ–๊ฒŒ ๋˜๊ณ  ์ด๊ฒƒ์ด ๋งŽ์€ ์—๋„ˆ์ง€ ์†Œ๋ชจ๋ฅผ ์•ผ๊ธฐํ•˜๊ฒŒ ๋˜๋Š” ๋‹จ์ ์ด ์žˆ๋‹ค. ์ด ํŒŒํŠธ์—์„œ๋Š” ํŽ˜์ด์ฆˆ์— ๋”ฐ๋ผ ๋‹ค๋ฅธ ์ŠคํŒŒ์ดํฌ ๊ฐ€์ค‘์น˜๋ฅผ ๋ถ€์—ฌํ•˜๋Š” ๋ฐฉ๋ฒ•์œผ๋กœ ์ถ”๋ก  ์‹œ๊ฐ„์„ ํฌ๊ฒŒ ์ค„์ด๋Š” ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•œ๋‹ค. MNIST, SVHN, CIFAR-10, CIFAR-100 ๋ฐ์ดํ„ฐ์…‹์—์„œ์˜ ์‹คํ—˜ ๊ฒฐ๊ณผ๋Š” ์ œ์•ˆ๋œ ๋ฐฉ๋ฒ•์„ ์ด์šฉํ•œ ์ŠคํŒŒ์ดํ‚น ์ธ๊ณต์‹ ๊ฒฝ๋ง์ด ๊ธฐ์กด ๋ฐฉ๋ฒ•์— ๋น„ํ•ด ํฐ ํญ์œผ๋กœ ์ถ”๋ก  ์‹œ๊ฐ„๊ณผ ์ŠคํŒŒ์ดํฌ ๋ฐœ์ƒ ๋นˆ๋„๋ฅผ ์ค„์—ฌ์„œ ๋ณด๋‹ค ์—๋„ˆ์ง€ ํšจ์œจ์ ์œผ๋กœ ๋™์ž‘ํ•จ์„ ๋ณด์—ฌ์ค€๋‹ค. ๋‘๋ฒˆ์งธ ํŒŒํŠธ์—์„œ๋Š” ๊ณต์ • ๋ณ€์ด๊ฐ€ ์žˆ๋Š” ์ƒํ™ฉ์—์„œ ๋™์ž‘ํ•˜๋Š” ๊ณ ์—๋„ˆ์ง€ํšจ์œจ ์•„๋‚ ๋กœ๊ทธ ์ธ๊ณต์‹ ๊ฒฝ๋ง ์„ค๊ณ„ ๋ฐฉ๋ฒ•์„ ๋‹ค๋ฃจ๊ณ  ์žˆ๋‹ค. ์ธ๊ณต์‹ ๊ฒฝ๋ง์„ ์•„๋‚ ๋กœ๊ทธ ํšŒ๋กœ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ตฌํ˜„ํ•˜๋ฉด ๋†’์€ ๋ณ‘๋ ฌ์„ฑ๊ณผ ์—๋„ˆ์ง€ ํšจ์œจ์„ฑ์„ ์–ป์„ ์ˆ˜ ์žˆ๋Š” ์žฅ์ ์ด ์žˆ๋‹ค. ํ•˜์ง€๋งŒ, ์•„๋‚ ๋กœ๊ทธ ์‹œ์Šคํ…œ์€ ๋…ธ์ด์ฆˆ์— ์ทจ์•ฝํ•œ ์ค‘๋Œ€ํ•œ ๊ฒฐ์ ์„ ๊ฐ€์ง€๊ณ  ์žˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋…ธ์ด์ฆˆ ์ค‘ ํ•˜๋‚˜๋กœ ๊ณต์ • ๋ณ€์ด๋ฅผ ๋“ค ์ˆ˜ ์žˆ๋Š”๋ฐ, ์ด๋Š” ์•„๋‚ ๋กœ๊ทธ ํšŒ๋กœ์˜ ์ ์ • ๋™์ž‘ ์ง€์ ์„ ๋ณ€ํ™”์‹œ์ผœ ์‹ฌ๊ฐํ•œ ์„ฑ๋Šฅ ์ €ํ•˜ ๋˜๋Š” ์˜ค๋™์ž‘์„ ์œ ๋ฐœํ•˜๋Š” ์›์ธ์ด๋‹ค. ์ด ํŒŒํŠธ์—์„œ๋Š” ReRAM์— ๊ธฐ๋ฐ˜ํ•œ ๊ณ ์—๋„ˆ์ง€ ํšจ์œจ ์•„๋‚ ๋กœ๊ทธ ์ด์ง„ ์ธ๊ณต์‹ ๊ฒฝ๋ง์„ ๊ตฌํ˜„ํ•˜๊ณ , ๊ณต์ • ๋ณ€์ด ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ํ™œ์„ฑ๋„ ์ผ์น˜ ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•œ ๊ณต์ • ๋ณ€์ด ๋ณด์ƒ ๊ธฐ๋ฒ•์„ ์ œ์•ˆํ•œ๋‹ค. ์ œ์•ˆ๋œ ์ธ๊ณต์‹ ๊ฒฝ๋ง์€ 1T1R ๊ตฌ์กฐ์˜ ReRAM ๋ฐฐ์—ด๊ณผ ์ฐจ๋™์ฆํญ๊ธฐ๋ฅผ ์ด์šฉํ•œ ๋‰ด๋Ÿฐ์„ ์ด์šฉํ•˜์—ฌ ๊ณ ๋ฐ€๋„ ์ง‘์ ๊ณผ ๊ณ ์—๋„ˆ์ง€ ํšจ์œจ ๋™์ž‘์ด ๊ฐ€๋Šฅํ•˜๊ฒŒ ๊ตฌ์„ฑ๋˜์—ˆ๋‹ค. ๋˜ํ•œ, ์•„๋‚ ๋กœ๊ทธ ๋‰ด๋Ÿฐ ํšŒ๋กœ์˜ ๊ณต์ • ๋ณ€์ด ์ทจ์•ฝ์„ฑ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ์ด์ƒ์ ์ธ ๋‰ด๋Ÿฐ์˜ ํ™œ์„ฑ๋„์™€ ๋™์ผํ•œ ํ™œ์„ฑ๋„๋ฅผ ๊ฐ–๋„๋ก ๋‰ด๋Ÿฐ์˜ ๋ฐ”์ด์–ด์Šค๋ฅผ ์กฐ์ ˆํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์†Œ๊ฐœํ•œ๋‹ค. 
์ œ์•ˆ๋œ ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ 32nm ๊ณต์ •์—์„œ ๊ตฌํ˜„๋œ ์ธ๊ณต์‹ ๊ฒฝ๋ง์€ 3-sigma ์ง€์ ์—์„œ 50% ๋ฌธํ„ฑ ์ „์•• ๋ณ€์ด์™€ 15%์˜ ์ €ํ•ญ๊ฐ’ ๋ณ€์ด๊ฐ€ ์žˆ๋Š” ์ƒํ™ฉ์—์„œ๋„ MNIST์—์„œ 98.55%, CIFAR-10์—์„œ 89.63%์˜ ์ •ํ™•๋„๋ฅผ ๋‹ฌ์„ฑํ•˜์˜€์œผ๋ฉฐ, 970 TOPS/W์— ๋‹ฌํ•˜๋Š” ๋งค์šฐ ๋†’์€ ์—๋„ˆ์ง€ ํšจ์œจ์„ฑ์„ ๋‹ฌ์„ฑํ•˜์˜€๋‹ค.Recently, deep learning has shown astounding performances on specific tasks such as image classification, speech recognition, and reinforcement learning. Some of the state-of-the-art deep neural networks have already gone over humans ability. However, neural networks involve tremendous number of high precision computations and frequent off-chip memory accesses with millions of parameters. It incurs problems of large area and exploding energy consumption, which hinder neural networks from being exploited in embedded systems. To cope with the problem, techniques for designing energy efficient neural networks are proposed. The first part of this dissertation addresses the design of spiking neural networks with weighted spikes which has advantages of shorter inference latency and smaller energy consumption compared to the conventional spiking neural networks. Spiking neural networks are being regarded as one of the promising alternative techniques to overcome the high energy costs of artificial neural networks. It is supported by many researches showing that a deep convolutional neural network can be converted into a spiking neural network with near zero accuracy loss. However, the advantage on energy consumption of spiking neural networks comes at a cost of long classification latency due to the use of Poisson-distributed spike trains (rate coding), especially in deep networks. We propose to use weighted spikes, which can greatly reduce the latency by assigning a different weight to a spike depending on which time phase it belongs. Experimental results on MNIST, SVHN, CIFAR-10, and CIFAR-100 show that the proposed spiking neural networks with weighted spikes achieve significant reduction in classification latency and number of spikes, which leads to faster and more energy-efficient spiking neural networks than the conventional spiking neural networks with rate coding. We also show that one of the state-of-the-art networks the deep residual network can be converted into spiking neural network without accuracy loss. The second part of this dissertation focuses on the design of highly energy-efficient analog neural networks in the presence of variations. Analog hardware accelerators for deep neural networks have taken center stage in the aspect of high parallelism and energy efficiency. However, a critical weakness of the analog hardware systems is vulnerability to noise. One of the biggest noise sources is a process variation. It is a big obstacle to using analog circuits since the variation shifts various parameters of analog circuits from the correct operating points, which causes severe performance degradation or even malfunction. To achieve high energy efficiency with analog neural networks, we propose resistive random access memory (ReRAM) based analog implementation of binarized neural networks (BNNs) with a novel variation compensation technique through activation matching (VCAM). The proposed architecture consists of 1-transistor-1-resistor (1T1R) structured ReRAM synaptic arrays and differential amplifier based neurons, which leads to high-density integration and energy efficiency. 
    To cope with the vulnerability of analog neurons to process variation, the biases of all neurons are adjusted so that their average output activations match those of ideal neurons without variation. The technique effectively restores the classification accuracy degraded by the variation. Experimental results in a 32nm technology show that the proposed architecture achieves classification accuracies of 98.55% on MNIST and 89.63% on CIFAR-10 in the presence of 50% threshold voltage variation and 15% resistance variation at the 3-sigma point, and it achieves 970 TOPS/W energy efficiency with an MLP on MNIST.

    Contents:
    1 Introduction
      1.1 Deep Neural Networks with Weighted Spikes
      1.2 VCAM: Variation Compensation through Activation Matching for Analog Binarized Neural Networks
    2 Background
      2.1 Spiking neural network
      2.2 Spiking neuron model
      2.3 Rate coding in SNNs
      2.4 Binarized neural networks
      2.5 Resistive random access memory
    3 Related Work
      3.1 Training SNNs
      3.2 SNNs with various spike coding schemes
      3.3 BNN implementations
    4 Deep Neural Networks with Weighted Spikes
      4.1 SNN with weighted spikes
        4.1.1 Weighted spikes
        4.1.2 Spiking neuron model for weighted spikes
        4.1.3 Noise spike
        4.1.4 Approximation of the ReLU activation
        4.1.5 ANN-to-SNN conversion
      4.2 Optimization techniques
        4.2.1 Skipping initial input currents in the output layer
        4.2.2 The number of phases in a period
        4.2.3 Accuracy-energy trade-off by early decision
        4.2.4 Consideration on hardware implementation
      4.3 Experimental setup
      4.4 Results
        4.4.1 Comparison between SNN-RC and SNN-WS
        4.4.2 Trade-off by early decision
        4.4.3 Comparison with other algorithms
      4.5 Summary
    5 VCAM: Variation Compensation through Activation Matching for Analog Binarized Neural Networks
      5.1 Modification of Binarized Neural Network
        5.1.1 Binarized Neural Network
        5.1.2 Use of 0 and 1 Activations
        5.1.3 Removal of Batch Normalization Layer
      5.2 Hardware Architecture
        5.2.1 ReRAM Synaptic Array
        5.2.2 Neuron Circuit
        5.2.3 Issues with Neuron Circuit
      5.3 Variation Compensation
        5.3.1 Variation Modeling
        5.3.2 Impact of VT Variation
        5.3.3 Variation Compensation Techniques
      5.4 Experimental Results
        5.4.1 Experimental Setup
        5.4.2 Accuracy of the Modified BNN Algorithm
        5.4.3 Variation Compensation
        5.4.4 Performance Comparison
      5.5 Summary
    6 Conclusion
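
    The phase-dependent spike weighting described in the abstract above can be illustrated with a short sketch. The code below is a minimal, illustrative rendering of the general idea, not the dissertation's implementation: it assumes a spike emitted in phase p of a period carries weight 2^-(p+1), so an activation can be transmitted in a handful of deterministic phases rather than the long stochastic spike trains that rate coding typically needs. Function names and the number of phases are assumptions made for illustration.

    import numpy as np

    # Minimal sketch of phase-coded "weighted spikes": an activation in [0, 1)
    # is represented by at most one spike per phase, and a spike in phase p is
    # assumed to carry weight 2^-(p+1). Illustrative only.

    def encode_weighted_spikes(activation, num_phases=8):
        """Greedily emit spikes so the weighted spike sum approximates `activation`."""
        spikes = np.zeros(num_phases, dtype=np.int8)
        remainder = activation
        for p in range(num_phases):
            w = 2.0 ** -(p + 1)            # phase-dependent spike weight
            if remainder >= w:
                spikes[p] = 1
                remainder -= w
        return spikes

    def decode_weighted_spikes(spikes):
        """Reconstruct the approximate activation carried by a weighted spike train."""
        weights = 2.0 ** -(np.arange(len(spikes)) + 1)
        return float(np.dot(spikes, weights))

    x = 0.8125
    s = encode_weighted_spikes(x)
    print(s, decode_weighted_spikes(s))    # [1 1 0 1 0 0 0 0] 0.8125

    With 8 phases per period, a value that rate coding can only approximate over many stochastic timesteps is conveyed in 8 phases, which is the source of the latency and spike-count reductions the abstract reports.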

    Fast and Efficient Information Transmission with Burst Spikes in Deep Spiking Neural Networks

    Full text link
    Spiking neural networks (SNNs) are considered one of the most promising types of artificial neural network owing to their energy-efficient computing capability. Recently, conversion of a trained deep neural network to an SNN has improved the accuracy of deep SNNs. However, most previous studies have not achieved satisfactory results in terms of inference speed and energy efficiency. In this paper, we propose a fast and energy-efficient information transmission method that uses burst spikes and a hybrid neural coding scheme in deep SNNs. Our experimental results show that the proposed methods improve inference energy efficiency and shorten latency.
    Comment: Accepted to DAC 201
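
    A common way to picture burst transmission is an integrate-and-fire neuron that may emit several spikes in a single timestep when its membrane potential far exceeds the threshold, so large values are conveyed in fewer timesteps. The sketch below illustrates that general idea only; the threshold, the burst limit, and the function name are assumptions and do not reproduce the paper's exact coding scheme.

    # Integrate-and-fire neuron that may emit a burst of up to `max_burst` spikes
    # per timestep. Illustrative parameters, not the paper's formulation.
    def if_neuron_with_bursts(input_current, threshold=1.0, max_burst=4):
        v = 0.0
        spike_counts = []
        for i in input_current:
            v += i
            n_spikes = min(int(v // threshold), max_burst) if v >= threshold else 0
            v -= n_spikes * threshold       # subtract the charge carried by the burst
            spike_counts.append(n_spikes)
        return spike_counts

    print(if_neuron_with_bursts([3.2, 0.1, 2.5, 0.0]))   # [3, 0, 2, 0]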

    Guiding CTC Posterior Spike Timings for Improved Posterior Fusion and Knowledge Distillation

    Full text link
    Conventional automatic speech recognition (ASR) systems trained from frame-level alignments can easily leverage posterior fusion to improve ASR accuracy and build a better single model with knowledge distillation. End-to-end ASR systems trained using the Connectionist Temporal Classification (CTC) loss do not require frame-level alignment and hence simplify model training. However, sparse and arbitrary posterior spike timings from CTC models pose a new set of challenges in posterior fusion from multiple models and in knowledge distillation between CTC models. We propose a method to train a CTC model so that its spike timings are guided to align with those of a pre-trained guiding CTC model. As a result, all models that share the same guiding model have aligned spike timings. We show the advantage of our method in various scenarios, including posterior fusion of CTC models and knowledge distillation between CTC models with different architectures. With the 300-hour Switchboard training data, a single word-level CTC model distilled from multiple models improved the word error rates from 14.9%/24.1% to 13.7%/23.1% on the Hub5 2000 Switchboard/CallHome test sets without using any data augmentation, language model, or complex decoder.
    Comment: Accepted to Interspeech 201
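
    One plausible way to implement the guiding idea is to add, on top of the standard CTC loss, a frame-level divergence term that pulls the student's posteriors toward those of the frozen guiding model on the frames where the guiding model emits a non-blank spike. The PyTorch sketch below shows that construction under stated assumptions; the weighting factor, the spike-frame mask, and all names are illustrative and may differ from the paper's exact recipe.

    import torch
    import torch.nn.functional as F

    BLANK = 0  # assumed blank index

    def guided_ctc_loss(student_logits, guide_logits, targets,
                        input_lengths, target_lengths, guide_weight=0.5):
        # student_logits, guide_logits: (T, N, C) frame-level logits
        log_probs = F.log_softmax(student_logits, dim=-1)
        ctc = F.ctc_loss(log_probs, targets, input_lengths, target_lengths,
                         blank=BLANK, zero_infinity=True)

        with torch.no_grad():
            guide_probs = F.softmax(guide_logits, dim=-1)        # (T, N, C)
            spike_mask = (guide_probs.argmax(dim=-1) != BLANK)   # frames where the guide spikes

        # KL divergence between student and guide posteriors, on spike frames only.
        kl = F.kl_div(log_probs, guide_probs, reduction='none').sum(dim=-1)  # (T, N)
        guide_term = (kl * spike_mask).sum() / spike_mask.sum().clamp(min=1)

        return ctc + guide_weight * guide_term

    Models trained against the same guiding model then share spike timings, which is what makes frame-level posterior fusion and distillation between them straightforward.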

    Supervised Learning in Spiking Neural Networks with Phase-Change Memory Synapses

    Full text link
    Spiking neural networks (SNNs) are artificial computational models inspired by the brain's ability to naturally encode and process information in the time domain. The added temporal dimension is believed to make them more computationally efficient than conventional artificial neural networks, though their full computational capabilities are yet to be explored. Recently, computational memory architectures based on non-volatile memory crossbar arrays have shown great promise for implementing parallel computations in artificial and spiking neural networks. In this work, we experimentally demonstrate, for the first time, the feasibility of realizing high-performance event-driven in-situ supervised learning systems using nanoscale and stochastic phase-change synapses. Our SNN is trained to recognize audio signals of alphabets encoded using spikes in the time domain and to generate spike trains at precise time instances that represent the pixel intensities of the corresponding images. Moreover, with a statistical model capturing the experimental behavior of the devices, we investigate architectural and system-level solutions for improving the training and inference performance of our computational-memory-based system. Combining the computational potential of supervised SNNs with the parallel compute power of computational memory, this work paves the way for a next generation of efficient brain-inspired systems.
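
    Supervised learning toward precise target spike times is often expressed with a spike-timing-error rule: the weight update is the difference between the desired and actual output spike trains, gated by a low-pass trace of the presynaptic spikes (a ReSuMe / Widrow-Hoff style rule). The sketch below shows such a generic rule only; it is not the in-situ update applied to the phase-change synapses in the paper, and all parameter names are assumptions.

    import numpy as np

    def update_weights(w, pre_spikes, post_spikes, target_spikes, lr=0.01, tau=20.0):
        # pre_spikes: (T, n_pre) binary; post_spikes, target_spikes: (T, n_post) binary
        T, n_pre = pre_spikes.shape
        trace = np.zeros(n_pre)
        for t in range(T):
            trace = trace * np.exp(-1.0 / tau) + pre_spikes[t]   # presynaptic trace
            err = target_spikes[t] - post_spikes[t]              # spike-timing error
            w += lr * np.outer(trace, err)                       # (n_pre, n_post) update
        return w

    On a memristive or phase-change crossbar, the sign and magnitude of such an update would be mapped to device programming pulses, which is where the stochastic device behavior modeled in the paper enters.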

    Efficient Computation in Adaptive Artificial Spiking Neural Networks

    Get PDF
    Artificial Neural Networks (ANNs) are bio-inspired models of neural computation that have proven highly effective. Still, ANNs lack a natural notion of time, and neural units in ANNs exchange analog values in a frame-based manner, a computationally and energetically inefficient form of communication. This contrasts sharply with biological neurons, which communicate sparingly and efficiently using binary spikes. While artificial Spiking Neural Networks (SNNs) can be constructed by replacing the units of an ANN with spiking neurons, their current performance is far from that of deep ANNs on hard benchmarks, and these SNNs use much higher firing rates than their biological counterparts, limiting their efficiency. Here we show how spiking neurons that employ an efficient form of neural coding can be used to construct SNNs that match high-performance ANNs and exceed the state of the art in SNNs on important benchmarks, while requiring much lower average firing rates. For this, we use spike-time coding based on the firing-rate-limiting adaptation phenomenon observed in biological spiking neurons. This phenomenon can be captured in adapting spiking neuron models, for which we derive the effective transfer function. Neural units in ANNs trained with this transfer function can be substituted directly with adaptive spiking neurons, and the resulting Adaptive SNNs (AdSNNs) can carry out inference in deep neural networks using up to an order of magnitude fewer spikes than previous SNNs. Adaptive spike-time coding additionally allows for dynamic control of neural coding precision: we show how a simple model of arousal in AdSNNs further halves the average required firing rate, and this notion naturally extends to other forms of attention. AdSNNs thus hold promise as a novel and efficient model for neural computation that naturally fits temporally continuous and asynchronous applications.
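
    The adaptation mechanism referred to above can be pictured as a spiking neuron whose threshold is raised by every emitted spike and decays back over time, so its firing rate saturates for sustained input. The code below is a minimal illustrative model with assumed constants; it is not the specific adapting-spiking-neuron model or transfer function derived in the paper.

    import numpy as np

    def adaptive_neuron(input_current, v_th0=1.0, adapt_jump=0.5, tau_adapt=50.0):
        v, adapt = 0.0, 0.0
        spikes = []
        for i in input_current:
            adapt *= np.exp(-1.0 / tau_adapt)   # adaptation decays over time
            v += i
            threshold = v_th0 + adapt           # effective, spike-dependent threshold
            if v >= threshold:
                spikes.append(1)
                v -= threshold                  # soft reset by the current threshold
                adapt += adapt_jump             # each spike raises the threshold
            else:
                spikes.append(0)
        return spikes

    # Sustained input yields an initial burst that adapts to a lower steady rate.
    print(sum(adaptive_neuron([0.6] * 200)))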
    • โ€ฆ
    corecore