627 research outputs found

    ๊ธฐ๊ธฐ ์ƒ์—์„œ์˜ ์‹ฌ์ธต ์‹ ๊ฒฝ๋ง ๊ฐœ์ธํ™” ๋ฐฉ๋ฒ•

    Master's thesis -- Seoul National University Graduate School, College of Engineering, Department of Computer Science and Engineering, February 2019. Egger, Bernhard.
    Several deep neural network (DNN) architectures are suitable for embedded inference, but little work has focused on training neural networks on-device. User customization of DNNs is desirable because collecting a training set representative of real-world scenarios is difficult, and inter-user variation limits the accuracy achievable by a general model. This thesis proposes a DNN architecture that allows low-power, on-device user customization. The approach is applied to handwritten character recognition of both the Latin and the Korean alphabets. Experiments show a 3.5-fold reduction in prediction error after user customization for both alphabets compared with a DNN trained only on general data. The architecture is additionally evaluated on a number of embedded processors, demonstrating its practical applicability.
    Contents: 1 Introduction · 2 Motivation · 3 Background (3.1 Deep Neural Networks: 3.1.1 Inference, 3.1.2 Training; 3.2 Convolutional Neural Networks; 3.3 On-Device Acceleration: 3.3.1 Hardware Accelerators, 3.3.2 Software Optimization) · 4 Methodology (4.1 Initialization; 4.2 On-Device Training) · 5 Implementation (5.1 Pre-processing; 5.2 Latin Handwritten Character Recognition: 5.2.1 Dataset and BIE Selection, 5.2.2 AE Design; 5.3 Korean Handwritten Character Recognition: 5.3.1 Dataset and BIE Selection, 5.3.2 AE Design) · 6 On-Device Acceleration (6.1 Architecture Optimizations; 6.2 Compiler Optimizations) · 7 Experimental Setup · 8 Evaluation (8.1 Latin Handwritten Character Recognition; 8.2 Korean Handwritten Character Recognition; 8.3 On-Device Acceleration) · 9 Related Work · 10 Conclusion · Bibliography · Abstract (in Korean) · Acknowledgements
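    The customization scheme described above amounts to adapting only a small part of a pretrained network on the device. A minimal PyTorch sketch of that general idea (not the thesis's actual architecture; class and variable names such as FeatureExtractor, UserHead, and user_loader are illustrative assumptions):

```python
# Hedged sketch of on-device user customization: freeze a general feature
# extractor trained off-device, fine-tune only a small per-user head.
# All names here are illustrative, not taken from the thesis.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):          # trained off-device on general data
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
    def forward(self, x):
        return self.conv(x).flatten(1)

class UserHead(nn.Module):                  # small head adapted on-device
    def __init__(self, in_features, num_classes):
        super().__init__()
        self.fc = nn.Linear(in_features, num_classes)
    def forward(self, z):
        return self.fc(z)

def personalize(extractor, head, user_loader, epochs=5, lr=1e-3):
    extractor.eval()                                    # keep general weights fixed
    for p in extractor.parameters():
        p.requires_grad = False
    opt = torch.optim.SGD(head.parameters(), lr=lr)     # only the head is trained
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in user_loader:                        # few samples provided by the user
            with torch.no_grad():
                z = extractor(x)
            loss = loss_fn(head(z), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```

    Freezing the general feature extractor and training only the small head keeps on-device compute and memory low, which is the property the abstract emphasizes.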

    Brain MRI study for glioma segmentation using convolutional neural networks and original post-processing techniques with low computational demand

    Gliomas are brain tumors composed of different, highly heterogeneous histological subregions. Image analysis techniques that identify relevant tumor substructures have high potential for improving patient diagnosis, treatment, and prognosis. However, because of the high heterogeneity of gliomas, segmentation is currently a major challenge in medical image analysis. In the present work, the database of the Brain Tumor Segmentation (BraTS) Challenge 2018, composed of multimodal MRI scans of gliomas, was studied. A segmentation methodology based on the design and application of convolutional neural networks (CNNs) combined with original post-processing techniques with low computational demand was proposed. The post-processing techniques were chiefly responsible for the quality of the resulting segmentations. The segmented regions were the whole tumor, the tumor core, and the enhancing tumor core, with average Dice coefficients of 0.8934, 0.8376, and 0.8113, respectively. These results reach the state of the art in glioma segmentation as set by the winners of the challenge. Comment: 34 pages, 12 tables, 23 figures
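    The Dice coefficient reported above is the standard overlap measure, Dice = 2|A ∩ B| / (|A| + |B|). A brief NumPy/SciPy sketch of that metric together with one typical low-cost post-processing step (removing small connected components); the threshold and the post-processing choice are generic illustrations, not the paper's specific techniques:

```python
# Hedged sketch: Dice score for binary segmentation masks plus one cheap
# post-processing step (dropping small connected components). Generic
# illustration only -- not the paper's exact post-processing pipeline.
import numpy as np
from scipy import ndimage

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

def drop_small_components(mask: np.ndarray, min_voxels: int = 500) -> np.ndarray:
    """Keep only connected components with at least `min_voxels` voxels."""
    labels, n_components = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n_components + 1))
    keep_ids = np.where(sizes >= min_voxels)[0] + 1   # component labels start at 1
    return np.isin(labels, keep_ids)
```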

    Distinguishing artefacts: evaluating the saturation point of convolutional neural networks

    Prior work has shown that Convolutional Neural Networks (CNNs) trained on surrogate Computer Aided Design (CAD) models are able to detect and classify real-world artefacts from photographs. The applications support twinning of digital and physical assets in design, including rapid extraction of part geometry from model repositories, information search & retrieval, and identifying components in the field for maintenance, repair, and recording. The performance of CNNs in classification tasks has been shown to depend on training data set size and number of classes. Where prior works have used relatively small surrogate model data sets (<100 models), the question remains as to the ability of a CNN to differentiate between models in increasingly large model repositories. This paper presents a method for generating synthetic image data sets from online CAD model repositories, and further investigates the capacity of an off-the-shelf CNN architecture trained on synthetic data to classify models as class size increases. 1,000 CAD models were curated and processed to generate large-scale surrogate data sets, featuring model coverage at angular steps of 10°, 30°, 60°, and 120°. The findings demonstrate the capability of computer vision algorithms to classify artefacts in model repositories of up to 200 classes; beyond this point the CNN's performance deteriorates significantly, limiting its present ability for automated twinning of physical to digital artefacts. However, a match is more often found in the top-5 results, showing potential for information search and retrieval on large repositories of surrogate models. Comment: 6 pages, 5 figures, 2 tables; conference; Design Engineering, CNN, Digital Twin
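    A short sketch of how the top-1 / top-5 scoring behind these findings could be computed as the number of classes grows; array shapes and names are assumptions for illustration, not the paper's code:

```python
# Hedged sketch: top-1 / top-5 accuracy over predicted class scores, as one
# might use to study how performance falls off with increasing class count.
import numpy as np

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int) -> float:
    """scores: (n_samples, n_classes) CNN outputs; labels: (n_samples,) true class ids."""
    top_k = np.argsort(scores, axis=1)[:, -k:]          # indices of the k highest scores
    hits = (top_k == labels[:, None]).any(axis=1)
    return float(hits.mean())

# Example usage (hypothetical evaluations at increasing class counts):
# for n_classes, (scores, labels) in results_by_class_count.items():
#     print(n_classes, top_k_accuracy(scores, labels, 1), top_k_accuracy(scores, labels, 5))
```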