30 research outputs found
A Method for On-Device Personalization of Deep Neural Networks
Thesis (M.S.) -- Graduate School of Seoul National University : College of Engineering, Department of Computer Science and Engineering, 2019. 2. Egger, Bernhard.
There exist several deep neural network (DNN) architectures suitable for embedded inference; however, little work has focused on training neural networks on-device.
User customization of DNNs is desirable due to the difficulty of collecting a training set representative of real-world scenarios.
Additionally, inter-user variation means that a general model has a limitation on its achievable accuracy.
In this thesis, a DNN architecture that allows for low power on-device user customization is proposed.
This approach is applied to handwritten character recognition of both the Latin and the Korean alphabets.
Experiments show a 3.5-fold reduction of the prediction error after user customization for both alphabets compared to a DNN trained with general data.
This architecture is additionally evaluated on a number of embedded processors, demonstrating its practical applicability. (A brief illustrative sketch of such an on-device customization loop follows the table of contents below.)
Abstract i
Contents iii
List of Figures vii
List of Tables ix
Chapter 1 Introduction 1
Chapter 2 Motivation 4
Chapter 3 Background 6
3.1 Deep Neural Networks 6
3.1.1 Inference 6
3.1.2 Training 7
3.2 Convolutional Neural Networks 8
3.3 On-Device Acceleration 9
3.3.1 Hardware Accelerators 9
3.3.2 Software Optimization 10
Chapter 4 Methodology 12
4.1 Initialization 13
4.2 On-Device Training 14
Chapter 5 Implementation 16
5.1 Pre-processing 16
5.2 Latin Handwritten Character Recognition 17
5.2.1 Dataset and BIE Selection 17
5.2.2 AE Design 17
5.3 Korean Handwritten Character Recognition 21
5.3.1 Dataset and BIE Selection 21
5.3.2 AE Design 21
Chapter 6 On-Device Acceleration 26
6.1 Architecture Optimizations 27
6.2 Compiler Optimizations 29
Chapter 7 Experimental Setup 30
Chapter 8 Evaluation 33
8.1 Latin Handwritten Character Recognition 33
8.2 Korean Handwritten Character Recognition 38
8.3 On-Device Acceleration 40
Chapter 9 Related Work 44
Chapter 10 Conclusion 47
Bibliography 47
Abstract (in Korean) 55
Acknowledgements 56
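The abstract above does not detail the thesis's actual architecture (its BIE and AE components are only named in the table of contents), so the following PyTorch sketch is only a generic, hypothetical illustration of low-power on-device customization: a small pre-trained feature extractor is frozen and only a lightweight classification head is trained on a handful of user samples. The class name, layer sizes, and the random stand-in data are all assumptions, not the thesis's design.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pre-trained general model: a small CNN feature
# extractor (kept frozen on-device) followed by a lightweight, user-trainable head.
class CharacterNet(nn.Module):
    def __init__(self, num_classes: int = 26):
        super().__init__()
        self.features = nn.Sequential(                       # "general" part
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 7 * 7, num_classes)       # user-customizable part

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = CharacterNet()
for p in model.features.parameters():   # freeze the general features;
    p.requires_grad = False             # only the small head is trained on-device

# Placeholder user data: 64 handwritten 28x28 samples with labels.
user_images = torch.randn(64, 1, 28, 28)
user_labels = torch.randint(0, 26, (64,))

optimizer = torch.optim.SGD(model.head.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):                  # a few cheap passes over the user's samples
    optimizer.zero_grad()
    loss = loss_fn(model(user_images), user_labels)
    loss.backward()
    optimizer.step()
```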
On the Ability of a CNN to Realize Image-to-Image Language Conversion
The purpose of this paper is to reveal the ability of Convolutional Neural Networks (CNNs) on the novel task of image-to-image language conversion. We propose a new network that tackles this task by converting images of Korean Hangul characters directly into images of their phonetic Latin-character equivalents. The conversion rules between Hangul and the phonetic symbols are not explicitly provided. The results show that the proposed network can perform image-to-image language conversion, and that it grasps the structural features of Hangul even from limited training data. In addition, the paper introduces a new network for use when the input and output have significantly different features.
Comment: Published at ICDAR 201
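The abstract does not specify the proposed network's layers, so the following is a minimal, assumed sketch of the general idea: a convolutional encoder-decoder trained with a pixel-wise loss to map an image of a Hangul character to an image of its Latin transliteration. The Glyph2Glyph name, layer sizes, 64x64 resolution, and random placeholder batches are illustrative only.

```python
import torch
import torch.nn as nn

# A deliberately simple convolutional encoder-decoder that maps one glyph
# image to another; sizes and layers are illustrative assumptions only.
class Glyph2Glyph(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Placeholder batch: Hangul glyph images as input, Latin glyph images as target.
hangul = torch.rand(8, 1, 64, 64)
latin_target = torch.rand(8, 1, 64, 64)

model = Glyph2Glyph()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.binary_cross_entropy(model(hangul), latin_target)
loss.backward()
optimizer.step()
```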
A fine-grained approach to scene text script identification
This paper focuses on the problem of script identification in unconstrained
scenarios. Script identification is an important prerequisite to recognition,
and an indispensable condition for automatic text understanding systems
designed for multi-language environments. Although widely studied for document
images and handwritten documents, it remains an almost unexplored territory for
scene text images.
We detail a novel method for script identification in natural images that
combines convolutional features and the Naive-Bayes Nearest Neighbor
classifier. The proposed framework efficiently exploits the discriminative
power of small stroke-parts, in a fine-grained classification framework.
In addition, we propose a new public benchmark dataset for the evaluation of
joint text detection and script identification in natural scenes. Experiments
conducted on this new dataset demonstrate that the proposed method yields
state-of-the-art results, while generalizing well to different datasets and a
variable number of scripts. The evidence provided shows that multi-lingual
scene text recognition in the wild is a viable proposition. The source code of
the proposed method is made available online.
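As a concrete illustration of the Naive-Bayes Nearest Neighbor decision rule the method builds on, the NumPy sketch below classifies an image by summing, over its local descriptors, the squared distance to each descriptor's nearest neighbor within every class, and picking the class with the smallest total. The random 64-dimensional "stroke-part" descriptors are placeholders; the paper's actual descriptors are convolutional features.

```python
import numpy as np

def nbnn_classify(query_descriptors, class_descriptors):
    """Naive-Bayes Nearest Neighbor decision rule.

    query_descriptors: (n, d) array of local descriptors from one image.
    class_descriptors: dict mapping class label -> (m_c, d) array of all
                       training descriptors of that class.
    Returns the label minimizing the summed squared image-to-class distance.
    """
    best_label, best_cost = None, np.inf
    for label, descs in class_descriptors.items():
        # Squared Euclidean distance from every query descriptor to every
        # training descriptor of this class, then keep the nearest one.
        d2 = ((query_descriptors[:, None, :] - descs[None, :, :]) ** 2).sum(-1)
        cost = d2.min(axis=1).sum()
        if cost < best_cost:
            best_label, best_cost = label, cost
    return best_label

# Placeholder data: random 64-D "stroke-part" descriptors for two scripts.
rng = np.random.default_rng(0)
train = {"Latin": rng.normal(size=(200, 64)), "Hangul": rng.normal(size=(200, 64))}
query = rng.normal(size=(30, 64))
print(nbnn_classify(query, train))
```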
Segmentation-Free Korean Handwriting Recognition Using Neural Network Training
The idea of segmentation-free handwriting recognition was introduced with the rise of deep learning. The technique is designed to recognize any script's characters or symbols as long as a suitable set of training images exists. A VGG-16 convolutional neural network is used as the backbone of a Faster R-CNN character-spotting network. Through manual tagging, the location, size, and class of each recognizable symbol are provided to train the network. This approach has previously been tested on text written in the Bangla script, where it achieved over 90% accuracy overall. For Bangla, the network is trained and tested on the Boise State Bangla Handwriting dataset. For Korean, the network is trained on the PE_92 handwritten Korean character image database and shows promising results.
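The abstract does not give the training configuration, but the character-spotting setup it describes (Faster R-CNN with a VGG-16 backbone) can be assembled with torchvision roughly as sketched below. The anchor sizes, the number of character classes, and the dummy image and box annotations are assumptions for illustration only.

```python
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

# VGG-16 convolutional layers as the detection backbone (512 output channels);
# pretrained weights could be loaded here instead of weights=None.
backbone = torchvision.models.vgg16(weights=None).features
backbone.out_channels = 512

# Anchor sizes/ratios and the number of character classes are placeholders.
anchor_generator = AnchorGenerator(sizes=((16, 32, 64, 128),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))
roi_pooler = MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)

model = FasterRCNN(backbone,
                   num_classes=2350 + 1,   # e.g. Korean syllable classes + background
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)

# One dummy training step: an image plus manually tagged boxes and labels.
image = torch.rand(3, 256, 256)
target = {"boxes": torch.tensor([[30.0, 40.0, 90.0, 100.0]]),
          "labels": torch.tensor([1])}
losses = model([image], [target])   # training mode returns RPN / box-head losses
total_loss = sum(losses.values())
total_loss.backward()
```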
Transliteration of Hiragana and Katakana Handwritten Characters Using CNN-SVM
Hiragana and katakana handwritten characters are commonly used when writing words in Japanese, a language used by native speakers as well as by learners around the world. Hiragana and katakana are difficult to learn because many characters resemble one another. In this study, hiragana and basic katakana, together with dakuten, handakuten, and youon characters, were collected from respondents through a questionnaire. A plain CNN is compared with a combined CNN-SVM approach, both designed to identify each of the prepared characters. Character images are preprocessed by resizing, grayscaling, binarization, dilation, and erosion; the preprocessed images are fed to the CNN, which serves as a feature extractor, while the SVM performs character recognition. The best results were obtained with the following parameters: a 69×69 image size, a patience value of 3, a val_loss monitoring callback, the Nadam optimizer, a learning rate of 0.001, 30 epochs, and an RBF kernel for the SVM. A system using only the CNN achieves 87.82% accuracy, while the combination of CNN and SVM achieves 88.21%.
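As a rough sketch of the CNN+SVM pipeline described above, the code below uses a tiny placeholder CNN purely as a feature extractor and fits an RBF-kernel SVM on the resulting features. The network layers, the 46-class label set, and the random image data are assumptions, not the study's actual setup.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

# A tiny CNN used purely as a feature extractor; the architecture is a
# placeholder, not the network from the study.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
)

def extract_features(images: torch.Tensor) -> np.ndarray:
    """Run preprocessed character images through the CNN and return features."""
    with torch.no_grad():
        return cnn(images).numpy()

# Placeholder data: 200 grayscale 69x69 character images over 46 kana classes.
images = torch.rand(200, 1, 69, 69)
labels = np.random.randint(0, 46, size=200)

features = extract_features(images)

# RBF-kernel SVM on top of the CNN features, mirroring the CNN+SVM pipeline.
svm = SVC(kernel="rbf", C=1.0, gamma="scale")
svm.fit(features[:150], labels[:150])
print("held-out accuracy:", svm.score(features[150:], labels[150:]))
```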