3 research outputs found

    Path Delay Measurement with Correction for Temperature and Voltage Variations

    In-field path delay measurement is useful not only for detecting delay-related faults but also for predicting aging-induced delay faults. To use the measurement results for fault detection and prediction, the measured delay must be corrected, because circuit delay varies in the field with environmental conditions such as temperature and voltage. This paper proposes a BIST-based path delay measurement method that eliminates the influence of environmental variations. An on-chip sensor measures temperature and voltage during delay measurement. Using the sensor readings and pre-computed temperature and voltage sensitivities of the circuit delay, the measured delay is corrected to the value that would be obtained at a fixed temperature and voltage. Evaluation of a test chip in 65nm CMOS technology implementing the proposed method shows that the error in measured delays caused by environmental variations is reduced from 2419 ps to 211 ps over the range of 30 to 80 °C and 1.05 to 1.35 V. The paper also discusses the applicability and feasibility of the proposed method for degradation detection.
    International Test Conference in Asia (ITC-Asia 2020), September 23-25, 2020, Taipei City, Taiwan (held on-site and online).
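    As a rough illustration of the correction step, one can picture a first-order model: the measured delay is shifted by the pre-computed sensitivities times the deviation of the measured temperature and voltage from a reference point. The sketch below is a minimal example of that idea; the linear model, the constants, and the function name are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of delay correction for temperature/voltage drift.
# The first-order (linear) model and all constants are illustrative
# assumptions, not values from the paper.

T_REF_C = 25.0            # reference temperature (assumed)
V_REF = 1.20              # reference supply voltage (assumed)
SENS_T_PS_PER_C = 4.0     # pre-computed delay sensitivity to temperature (assumed)
SENS_V_PS_PER_V = -900.0  # pre-computed delay sensitivity to voltage (assumed)

def correct_delay(measured_ps: float, temp_c: float, volt: float) -> float:
    """Map a delay measured at (temp_c, volt) to the delay that would be
    observed at the fixed reference point (T_REF_C, V_REF)."""
    return (measured_ps
            - SENS_T_PS_PER_C * (temp_c - T_REF_C)
            - SENS_V_PS_PER_V * (volt - V_REF))

# Example: a path measured as 1500 ps at 70 degC and 1.10 V
print(correct_delay(1500.0, 70.0, 1.10))  # -> 1230.0 ps at 25 degC, 1.20 V
```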

    Energy-Efficient Artificial Neural Network Design

    Thesis (Ph.D.) -- Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, February 2019. Advisor: Kiyoung Choi.
    Recently, deep learning has shown astounding performance on tasks such as image classification, speech recognition, and reinforcement learning. Some state-of-the-art deep neural networks have already surpassed human ability. However, neural networks involve a tremendous number of high-precision computations and frequent off-chip memory accesses for millions of parameters, which incurs large chip area and high energy consumption and hinders the use of neural networks in embedded systems. To cope with this problem, this dissertation proposes techniques for designing energy-efficient neural networks. The first part addresses the design of spiking neural networks (SNNs) with weighted spikes, which offer shorter inference latency and lower energy consumption than conventional SNNs. SNNs are regarded as one of the promising alternatives for overcoming the high energy cost of artificial neural networks, supported by many studies showing that a deep convolutional neural network can be converted into an SNN with near-zero accuracy loss. However, the energy advantage of SNNs comes at the cost of long classification latency due to the use of Poisson-distributed spike trains (rate coding), especially in deep networks. We propose weighted spikes, which greatly reduce the latency by assigning a different weight to a spike depending on the time phase to which it belongs. Experimental results on MNIST, SVHN, CIFAR-10, and CIFAR-100 show that the proposed SNNs with weighted spikes achieve significant reductions in classification latency and number of spikes, yielding faster and more energy-efficient operation than conventional rate-coded SNNs. We also show that a state-of-the-art network, the deep residual network, can be converted into an SNN without accuracy loss.
    The second part of the dissertation focuses on the design of highly energy-efficient analog neural networks in the presence of variations. Analog hardware accelerators for deep neural networks have taken center stage owing to their high parallelism and energy efficiency, but a critical weakness of analog hardware is its vulnerability to noise. One of the biggest noise sources is process variation: it shifts various parameters of analog circuits away from their correct operating points, causing severe performance degradation or even malfunction. To achieve high energy efficiency with analog neural networks, we propose a resistive random access memory (ReRAM) based analog implementation of binarized neural networks (BNNs) with a novel variation compensation technique through activation matching (VCAM). The proposed architecture consists of 1-transistor-1-resistor (1T1R) ReRAM synaptic arrays and differential-amplifier-based neurons, which enables high-density integration and energy-efficient operation. To cope with the vulnerability of analog neurons to process variation, the biases of all neurons are adjusted in the direction that matches the average output activation of ideal, variation-free neurons; this technique effectively restores the classification accuracy degraded by the variation. Experimental results on 32nm technology show that the proposed architecture achieves 98.55% accuracy on MNIST and 89.63% on CIFAR-10 in the presence of 50% threshold-voltage variation and 15% resistance variation at the 3-sigma point, and reaches 970 TOPS/W energy efficiency with an MLP on MNIST.
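    The activation-matching step lends itself to a simple calibration loop: nudge each neuron's bias until its average output activation over a calibration set matches that of an ideal neuron. The sketch below illustrates that loop; the binary threshold-neuron model, the variation offset, the step size, and the synthetic data are all illustrative assumptions rather than the dissertation's circuit-level procedure.

```python
import numpy as np

# Sketch of variation compensation through activation matching (the VCAM
# idea): tune a neuron's bias until its mean output activation over a
# calibration set matches that of an ideal, variation-free neuron.

rng = np.random.default_rng(0)
x = rng.normal(size=10000)   # calibration pre-activations (assumed data)
offset = 0.35                # uncontrolled shift from process variation (assumed)

def neuron(z, bias, off=0.0):
    """Binary (0/1) threshold neuron; `off` models the process variation."""
    return (z + bias + off > 0).astype(float)

target = neuron(x, bias=0.0).mean()   # ideal neuron's mean activation

bias = 0.0
for _ in range(100):                  # feedback loop: match mean activations
    bias -= 0.5 * (neuron(x, bias, offset).mean() - target)

print(target, neuron(x, bias, offset).mean())  # means now nearly identical
```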
Contents:
1 Introduction
  1.1 Deep Neural Networks with Weighted Spikes
  1.2 VCAM: Variation Compensation through Activation Matching for Analog Binarized Neural Networks
2 Background
  2.1 Spiking neural network
  2.2 Spiking neuron model
  2.3 Rate coding in SNNs
  2.4 Binarized neural networks
  2.5 Resistive random access memory
3 Related Work
  3.1 Training SNNs
  3.2 SNNs with various spike coding schemes
  3.3 BNN implementations
4 Deep Neural Networks with Weighted Spikes
  4.1 SNN with weighted spikes
    4.1.1 Weighted spikes
    4.1.2 Spiking neuron model for weighted spikes
    4.1.3 Noise spike
    4.1.4 Approximation of the ReLU activation
    4.1.5 ANN-to-SNN conversion
  4.2 Optimization techniques
    4.2.1 Skipping initial input currents in the output layer
    4.2.2 The number of phases in a period
    4.2.3 Accuracy-energy trade-off by early decision
    4.2.4 Consideration on hardware implementation
  4.3 Experimental setup
  4.4 Results
    4.4.1 Comparison between SNN-RC and SNN-WS
    4.4.2 Trade-off by early decision
    4.4.3 Comparison with other algorithms
  4.5 Summary
5 VCAM: Variation Compensation through Activation Matching for Analog Binarized Neural Networks
  5.1 Modification of Binarized Neural Network
    5.1.1 Binarized Neural Network
    5.1.2 Use of 0 and 1 Activations
    5.1.3 Removal of Batch Normalization Layer
  5.2 Hardware Architecture
    5.2.1 ReRAM Synaptic Array
    5.2.2 Neuron Circuit
    5.2.3 Issues with Neuron Circuit
  5.3 Variation Compensation
    5.3.1 Variation Modeling
    5.3.2 Impact of VT Variation
    5.3.3 Variation Compensation Techniques
  5.4 Experimental Results
    5.4.1 Experimental Setup
    5.4.2 Accuracy of the Modified BNN Algorithm
    5.4.3 Variation Compensation
    5.4.4 Performance Comparison
  5.5 Summary
6 Conclusion

    Performance Monitoring System for Low-Power Design

    Thesis (Ph.D.) -- Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, August 2018. Advisor: Kiyoung Choi.
    As semiconductor process technology continues to scale down, circuit performance variation due to manufacturing and operating conditions grows ever more severe. Such variation is very hard to predict, so designers add extra design margin, which increases chip area and power consumption. The ideal remedy is to measure the performance variation in real time and supply an appropriate voltage through a feedback loop. The critical weakness of this technique, however, is the mismatch in performance correlation between the monitoring circuit and the target block; a large mismatch can negate the benefit of the feedback loop. This dissertation proposes a novel delay monitoring system with multiple generic monitors that covers a wide range of operating voltages and achieves better performance correlation with the target block than existing monitoring methods. Applied to a 14nm FinFET processor core, it reduces the monitoring error by up to 91%, shrinking the design margin and thereby lowering power consumption and cost. The dissertation also addresses compensation for chip aging. Conventionally, reliability degradation due to aging is covered by design margin, but this is inefficient and requires accurately predicting aging effects at design time. For low-power design, aging-induced performance degradation should instead be measured in real time and compensated appropriately. This dissertation therefore proposes a new design methodology, built on a real-time performance monitoring system, that compensates aging-induced performance degradation through accuracy-lowering approximate computation, without a reliability design margin or a voltage increase; the goal is to eliminate the design margin and reduce chip power consumption. Evaluated at the component level and at the microarchitecture (system) level, the method shows a substantial improvement in the mean squared error at the component level and compensates aging-induced performance degradation without significant quality loss at the system level. Overall, it reduces power consumption by 19.8% at the cost of a 0.4% area increase.
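    To make the monitor-combination idea concrete: with several generic monitors on chip, the target block's delay can be estimated as a weighted sum of the monitor readings, with the weights fit offline against characterization data (the contents below list such a "Weighted Summation Scheme: Software Approach"). The sketch below is a minimal illustration under the assumption of a least-squares fit on synthetic data; the data, the model, and the fitting choice are assumptions, not the dissertation's method.

```python
import numpy as np

# Sketch: estimate a target critical-path delay as a weighted sum of
# generic monitor readings. The least-squares fit and the synthetic
# characterization data are illustrative assumptions.

rng = np.random.default_rng(1)
monitors = rng.normal(1.0, 0.1, size=(500, 3))         # 3 monitor delays per sample
true_w = np.array([0.5, 1.2, 0.3])                     # unknown ground truth
target = monitors @ true_w + rng.normal(0, 0.01, 500)  # characterized path delay

# Fit the weights offline (characterization phase)...
w, *_ = np.linalg.lstsq(monitors, target, rcond=None)

# ...then combine run-time monitor readings into one delay estimate.
reading = np.array([0.95, 1.08, 1.02])
print(reading @ w)   # estimated target-path delay
```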
์ด ๋ฐฉ๋ฒ•์„ ํ†ตํ•ด 0.4%์˜ ๋ฉด์  ์ฆ๊ฐ€๋กœ ์ „๋ ฅ ์†Œ๋น„๋ฅผ 19.8 % ๊ฐ์†Œ์‹œ์ผฐ๋‹ค.Contents Abstract i Contents iii List of Figures vii List of Tables ix Part I Delay Monitoring System with Multiple Generic Monitors for Wide Voltage Range Operation 1 Chapter 1 Introduction 3 Chapter 2 Background and Related Work 7 2.1 Open-loop DVFS Scheme 8 2.2 Closed-loop DVFS Scheme 8 2.3 Related Work on Monitoring Circuits 9 Chapter 3 Proposed Circuit and Scheme 13 3.1 Conventional Approach with a Generic Monitor 13 3.2 Proposed Monitoring Circuit 15 3.3 Adaptive Chain Selection Scheme: Hardware Approach 18 3.4 Weighted Summation Scheme: Software Approach 22 3.5 Operating Scenario 24 Chapter 4 Design Methodology of Proposed System 27 Chapter 5 Experimental Result 31 5.1 Experimental Setup 31 5.2 Accuracy Results on Critical Paths 32 5.3 Accuracy Results on a Representative Critical Path 38 5.4 Area Overhead and Accuracy Comparison 43 Chapter 6 Conclusion 45 Part II Aging Gracefully with Approximation 47 Chapter 7 Introduction 49 Chapter 8 Motivational Case Study and Related Work 53 8.1 Motivational Case Study 53 8.2 Related Work 55 Chapter 9 Proposed System 59 9.1 Overview of the Proposed System 59 9.2 Proposed Adder 60 9.3 Monitoring Circuit 63 9.4 Aging Compensation Scheme 65 Chapter 10 Design Methodology of Proposed System 67 Chapter 11 Experimental Result 71 11.1 Experimental Setup 71 11.2 RTL Component Level 73 11.3 Microarchitecture Level 76 Chapter 12 Conclusion 81 Bibliography 83 ๊ตญ๋ฌธ์ดˆ๋ก 91 Docto