
    An Extended CMOS ISFET Model Incorporating the Physical Design Geometry and the Effects on Performance and Offset Variation

    This paper presents an extended model for the CMOS-based ion-sensitive field-effect transistor (ISFET), incorporating design parameters associated with the physical geometry of the device. This can, for the first time, provide a good match between calculated and measured characteristics by taking into account the effects of nonidealities such as threshold voltage variation and sensor noise. The model is evaluated using a number of devices with varying design parameters (chemical sensing area and MOSFET dimensions) fabricated in a commercially available 0.35-µm CMOS technology. Threshold voltage, subthreshold slope, chemical sensitivity, drift, and noise were measured and compared with the simulated results. The first- and second-order effects are analyzed in detail, and it is shown that the sensors' performance was in agreement with the proposed model.
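    The first-order chemical response underlying such a model can be illustrated with a short numerical sketch. This is a generic Nernstian threshold-shift formulation, not the paper's fitted geometry-dependent model; all parameter values (vt0, alpha, k, pH_pzc) are hypothetical.

```python
import numpy as np

# Illustrative first-order ISFET transfer sketch (not the paper's model):
# the chemically induced threshold shift is commonly written as
#   V_T(pH) = V_T0 + alpha * (kT/q) * ln(10) * (pH - pH_pzc)
# where alpha in (0, 1] captures sub-Nernstian sensitivity.

KT_Q = 0.0259                 # thermal voltage at 300 K [V]
NERNST = KT_Q * np.log(10)    # ~59.2 mV/pH ideal limit

def isfet_vth(pH, vt0=0.7, alpha=0.9, pH_pzc=7.0):
    """Threshold voltage [V] vs. solution pH (hypothetical parameters)."""
    return vt0 + alpha * NERNST * (pH - pH_pzc)

def drain_current_sat(vgs, pH, k=2e-4):
    """Square-law saturation current; k lumps mobility and W/L (illustrative)."""
    vov = vgs - isfet_vth(pH)
    return k * np.maximum(vov, 0.0) ** 2

for pH in (4.0, 7.0, 10.0):
    print(f"pH={pH:4.1f}  V_T={isfet_vth(pH):.3f} V  "
          f"I_D={drain_current_sat(1.2, pH)*1e6:.1f} uA")
```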

    SAR Image-Based Ship Detection via an Automatic Training Data Extraction Algorithm and Machine Learning

    Thesis (Master's) -- Seoul National University Graduate School: College of Natural Sciences, School of Earth and Environmental Sciences, February 2021. Advisor: 김덕진.

    Detection and surveillance of vessels are regarded as a crucial application of SAR for their contribution to the preservation of marine resources and the assurance of maritime safety. The introduction of machine learning to vessel detection significantly enhanced detection performance and efficiency, but a substantial majority of studies focused on modifying the object-detector algorithm. As fundamental improvement of detection performance is nearly impossible without accurate training data of vessels, this study used AIS information, which contains real-time information on each vessel's movement, to propose a robust algorithm that acquires training data of vessels in an automated manner. As AIS information is obtained irregularly and discretely, the exact target interpolation time for each vessel was precisely determined, followed by application of a Kalman filter to mitigate the measurement error of the AIS sensor. In addition, since the velocity of each vessel leaves an imprint in the SAR image known as the Doppler frequency shift, the shift was calibrated by restoring the elliptical satellite orbit from the satellite state vector and estimating the distance between the satellite and the target vessel. From the calibrated position of the AIS sensor inside the corresponding SAR image, training data were obtained directly, accounting for the placement of the AIS sensor within each vessel. For fishing boats, a separate information system, VPASS, was processed with the identical procedure to retrieve training data. Training data obtained via the automated procurement algorithm were evaluated with a conventional object detector using three detection metrics: precision, recall, and F1 score. All three metrics for the proposed training data acquisition significantly exceeded those for manual acquisition. The major difference between the two training datasets appeared in inshore regions and in the vicinity of strongly scattering vessels, where land artifacts, ships, and the ghost signals derived from them are indiscernible by visual inspection. This study additionally introduced the possibility of resolving the unclassified usage of each vessel by comparing AIS information with accurate vessel detection results.
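    The two numerical steps at the core of this abstract, Kalman filtering of irregular AIS fixes and first-order Doppler azimuth-shift correction, can be sketched as follows. This is a minimal illustration assuming a constant-velocity motion model and the flat-geometry shift formula dx = R * v_r / V_s; the noise levels, orbit speed, and track values are hypothetical, not the thesis's.

```python
import numpy as np

def kalman_cv(track, t_sar, q=1e-4, r=25.0):
    """Filter irregular AIS fixes (t, x, y in metres) with a constant-velocity
    model, then predict the vessel state at the SAR acquisition time t_sar.
    q and r are illustrative process/measurement noise levels."""
    t0, x0, y0 = track[0]
    s = np.array([x0, y0, 0.0, 0.0])           # state [x, y, vx, vy]
    P = np.eye(4) * 1e3
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    R = np.eye(2) * r
    t_prev = t0
    for t, x, y in track[1:]:
        dt = t - t_prev
        F = np.eye(4); F[0, 2] = F[1, 3] = dt
        Q = np.eye(4) * q * dt
        s, P = F @ s, F @ P @ F.T + Q          # predict to next fix
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        s = s + K @ (np.array([x, y]) - H @ s) # update with the new fix
        P = (np.eye(4) - K @ H) @ P
        t_prev = t
    dt = t_sar - t_prev                        # final predict to SAR time
    F = np.eye(4); F[0, 2] = F[1, 3] = dt
    return F @ s

def azimuth_shift(slant_range, v_radial, v_sat=7600.0):
    """First-order Doppler displacement of a moving target in azimuth:
    dx = R * v_r / V_s (flat-geometry approximation)."""
    return slant_range * v_radial / v_sat

state = kalman_cv([(0, 0, 0), (60, 300, 10), (130, 640, 30)], t_sar=150)
print("position at SAR time:", state[:2])
print("azimuth shift:", azimuth_shift(850e3, v_radial=4.0), "m")
```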

    Synaptic Array Architecture Based on NAND Flash Cell Strings

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, August 2021. Advisor: 이종호.

    Neuromorphic computing using synaptic devices has been proposed to efficiently process vector-matrix multiplication (VMM), a significant task in deep neural networks (DNNs). Until now, resistive RAM (RRAM) has mainly been used as the synaptic device for neuromorphic computing. However, a number of limitations still prevent RRAM from implementing a large-scale synaptic array, owing to device nonidealities such as variation and endurance and to the difficulty of monolithic integration of RRAM with CMOS peripheral circuits. Because of these problems, SRAM cells, a mature silicon memory, have been proposed as synaptic devices. However, SRAM occupies a large area (~150 F² per bitcell), and on-chip SRAM capacity (a few MB) is insufficient to accommodate a large number of parameters. In this dissertation, synaptic architectures based on NAND flash cell strings are proposed for both off-chip and on-chip learning. A novel synaptic architecture based on NAND cell strings is proposed as a high-density synapse capable of XNOR operation for binary neural networks (BNNs) with off-chip learning. By changing the threshold voltages of NAND flash cells and the input voltages in a complementary fashion, the XNOR operation is successfully demonstrated. The large on/off current ratio (~7×10⁵) of NAND flash cells can implement high-density and highly reliable BNNs without error-correction codes. A novel synaptic architecture based on NAND flash memory is also proposed for highly robust, high-density quantized neural networks (QNNs) with 4-bit weights. Quantization-aware training can minimize the degradation of inference accuracy compared to post-training quantization, and the proposed operation scheme can implement QNNs with higher inference accuracy than BNNs. On-chip learning can significantly reduce the time and energy consumed during training, compensate for the weight variation of synaptic devices, and adapt to a changing environment in real time, so on-chip learning that exploits the high density of the NAND flash structure is of great significance. However, the conventional on-chip learning method used for RRAM arrays cannot be applied when NAND flash cells serve as synaptic devices, because of the cell-string structure of NAND flash memory. In this work, a novel synaptic array architecture enabling forward propagation (FP) and backward propagation (BP) in NAND flash memory is proposed for on-chip learning. In the proposed architecture, positive and negative synaptic weights are separated into different arrays so that the weights can be transposed correctly. In addition, the source lines (SL) are separated, unlike in conventional NAND flash memory, to enable both FP and BP. By applying the input and the error input to the bit lines (BL) and string-select lines (SSL) of the NAND cell array, respectively, accurate vector-matrix multiplication is performed in both FP and BP while eliminating the effect of pass cells. The proposed on-chip learning system is much more robust to weight variation than the off-chip learning system. Finally, the superiority of the proposed on-chip learning architecture is verified by circuit simulation of a neural network.
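    A minimal numerical sketch of the complementary XNOR scheme described above: each binary weight occupies two cells programmed to opposite threshold states, the binary input selects which of the two lines is biased for reading, and bit-line current summation yields the signed popcount. The current values and array shapes are illustrative, not measured device data.

```python
import numpy as np

I_ON, I_OFF = 1.0e-6, 1.4e-12   # on/off cell currents, ~7e5 ratio as cited

def xnor_current(w, x):
    """One synapse = two NAND cells programmed complementarily.
    w, x in {+1, -1}; current I_ON flows only when the read voltage is
    applied to the cell holding the low-V_T state, i.e. when x == w."""
    cell_pos_is_lvt = (w == +1)     # which cell of the pair is low-V_T
    read_pos_line = (x == +1)       # which line carries the read bias
    return I_ON if cell_pos_is_lvt == read_pos_line else I_OFF

def bnn_vmm(W, x):
    """Bit-line current summation, then map back to the signed XNOR popcount."""
    I = np.array([[xnor_current(w, xi) for w, xi in zip(row, x)]
                  for row in W]).sum(axis=1)
    matches = np.round(I / I_ON).astype(int)
    return 2 * matches - len(x)     # (#matches) - (#mismatches)

W = np.array([[+1, -1, +1, +1], [-1, -1, +1, -1]])
x = np.array([+1, -1, -1, +1])
print(bnn_vmm(W, x))                # equals W @ x for binary values: [ 2 -2]
```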

    06. Bare Passive Agent Hierarchy


    German Passives and English Benefactives: The Need for Non-canonical Accusative Case

    In both English benefactive constructions (John baked Mary a cake) and German kriegen/bekommen-passives (Er kriegte einen Stift geschenkt 'He got a pen gifted'), the theme argument is accusative-marked but has no way of getting structural accusative case. In English benefactive constructions, this is because the beneficiary argument intervenes between the voice head and the theme; in German kriegen/bekommen-passives, it is because there is no active voice head. This paper proposes that, in both languages, the applicative head introducing the beneficiary/recipient (more generally, the affectee argument) comes with an extra case feature that can license case on the theme argument. In English, this non-canonical accusative case feature comes with the regular applicative head introducing the beneficiary argument. In contrast, in German, it comes with a defective applicative head which introduces the recipient but is unable to assign to it the inherent dative case that normally comes with the Affectee theta-role. The paper offers a unified analysis of English and German double object constructions, and also of German werden ('be') and kriegen/bekommen ('get') passives.

    Station Readiness Test for the Earth Resources Technology Satellite (ERTS) Mission

    The purpose of this SRT is to establish testing procedures which will verify that ERTS supporting stations can effectively support the ERTS mission. This SRT is applicable to all supporting stations for the ERTS-A and ERTS-B missions.

    Synapse-Mimicking Devices Based on 3D and Flash Memory Architectures for Neuromorphic Computing

    Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, August 2019. Advisor: 이종호.

    The conventional Von Neumann architecture performs computation on a CPU connected to a memory device (DRAM) via buses. As a result, power and speed are the biggest limitations of today's high-capacity computing. Neuromorphic computing, which imitates the computation of the brain, is highly desirable because it can operate memory and logic in parallel. Neuromorphic computing is accomplished by modifying the connection strengths of neurons and synapses. Synapses have a memory function similar to that used in conventional computing. For successful computation, a synaptic device should have the following characteristics: high scalability, low power, high speed, high reliability, and non-volatility. Unfortunately, no standardized device satisfies all of these conditions. Although various memristor-based devices have been reported as synaptic devices, they are still being studied for reliability and variation issues. NOR flash is much more mature, but it is less scalable because of the large area of one memory cell (~10 F²). In addition, high-density NAND flash memory makes it difficult to implement the current summation required for neuromorphic computing, because the cells are connected in series in a string. In this dissertation, I propose a new synaptic device that can effectively perform neuromorphic computing in a 3-D stacked structure. The new 3-D synaptic device, a stackable AND-type Rounded Dual Channel (RDC) flash memory architecture, operates at low power by using the FN program/erase method, achieves high speed through parallel read operations, and achieves high density through multi-layer stacking. Key fabrication steps are explained, and the successful operation of the device in the 3-D stacked structure is verified by device simulation. In addition, devices were fabricated by stacking three layers, and their operation was confirmed. The 3-D stacked AND architecture and device structure proposed in this work is expected to be a promising candidate for highly integrated synaptic devices.
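    The architectural point in this abstract, that an AND-type array sums cell currents in parallel on each bit line while a NAND string cannot, can be shown with a small sketch. The conductance and voltage values are illustrative only.

```python
import numpy as np

# In an AND-type array every cell ties directly between an input line and a
# bit line, so each bit-line current is a parallel sum I_j = sum_i V_i * G_ij
# (Kirchhoff's current law), which is exactly a vector-matrix product.
# In a NAND string the same cells sit in series, and the string current is
# set by the combined series conductance rather than a sum of products.

def and_array_vmm(V_in, G):
    """Parallel read: bit-line currents = V_in @ G."""
    return V_in @ G

def nand_string_current(V_read, g_cells):
    """Series string: conductances combine as 1 / sum(1/g); no VMM results."""
    return V_read / np.sum(1.0 / g_cells)

G = np.array([[1e-6, 2e-6], [3e-6, 1e-6], [2e-6, 4e-6]])  # illustrative S
V = np.array([0.2, 0.0, 0.1])                             # input voltages [V]
print("AND-array bit-line currents:", and_array_vmm(V, G))
print("one series NAND string:", nand_string_current(0.2, G[:, 0]))
```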