44 research outputs found
SymbolDesign: A User-centered Method to Design Pen-based Interfaces and Extend the Functionality of Pointer Input Devices
A method called "SymbolDesign" is proposed for designing user-centered interfaces for pen-based input devices. It can also extend the functionality of pointer input devices such as the traditional computer mouse or the Camera Mouse, a camera-based computer interface. Users create their own interfaces by choosing single-stroke movement patterns that are convenient to draw with the selected input device and by mapping them to a desired set of commands. A pattern could be the trace of a moving finger detected with the Camera Mouse or a symbol drawn with an optical pen. The core of the SymbolDesign system is a dynamically created classifier, in the current implementation an artificial neural network whose architecture automatically adjusts to the complexity of the classification task. In experiments, subjects used the SymbolDesign method to design and test the interfaces they created, for example, to browse the web. The experiments demonstrated good recognition accuracy and responsiveness of the user interfaces. The method provides an easily designed and easily used computer input mechanism for people without physical limitations and, with some modifications, has the potential to become a computer access tool for people with severe paralysis. (National Science Foundation IIS-0093367, IIS-0308213, IIS-0329009, EIA-0202067)
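The stroke-pattern-to-command mapping described above can be sketched minimally. The paper's classifier is a dynamically sized neural network; as a simple stand-in, this sketch resamples each stroke to a fixed-length, scale-normalized feature vector and matches it against the nearest user-defined template. All names and the matching rule are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def resample(stroke, n=32):
    """Resample a stroke (sequence of (x, y) points) to n points evenly
    spaced along its arc length, then shift and scale it into the unit
    square so position and size do not affect matching."""
    pts = np.asarray(stroke, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, cum[-1], n)
    out = np.stack([np.interp(t, cum, pts[:, 0]),
                    np.interp(t, cum, pts[:, 1])], axis=1)
    out -= out.min(axis=0)                     # translate to origin
    out /= max(out.max(), 1e-9)                # uniform scale to unit size
    return out.ravel()

class StrokeClassifier:
    """Nearest-template classifier over user-defined stroke patterns."""
    def __init__(self):
        self.templates = {}                    # command name -> feature vector
    def add_pattern(self, command, stroke):
        self.templates[command] = resample(stroke)
    def classify(self, stroke):
        f = resample(stroke)
        return min(self.templates,
                   key=lambda c: np.linalg.norm(self.templates[c] - f))
```

A user would register, say, a horizontal stroke for "back" and a vertical stroke for "scroll", and each new stroke is mapped to the command whose template it most resembles.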
Sparse Convolutional Neural Network for Handwriting Recognition
Thesis (Master's) -- Seoul National University, Graduate School, Department of Computer Science and Engineering, College of Engineering, August 2017. Advisor: Byoung-Tak Zhang.

Demand for automated character recognition is growing rapidly across industries such as automated mail sorting, license-plate recognition, and electronic note-taking. Accordingly, methods based on convolutional neural networks (CNNs), which show outstanding performance in image recognition, are being applied to handwriting recognition. Most of these studies use deep CNN architectures to achieve high recognition rates. Handwriting recognition, however, is mainly used on resource-constrained devices such as smartphones and tablet PCs, so the memory footprint and computation speed of the model must also be considered. This thesis therefore applies an Inception-module-based convolutional neural network to Korean (Hangul) handwriting recognition in order to reduce the number of trainable parameters effectively. In addition, to lower generalization error and achieve a higher recognition rate, the DropFilter technique is used to train the network to have sparse properties. The Inception module is the core building block of GoogLeNet, which drew wide attention by achieving the best recognition rate at the ImageNet Large Scale Visual Recognition Challenge 2014 while using 12 times fewer parameters than previous models; DropFilter is a variant of dropout, a widely used regularization technique, adapted to CNNs. In the experiments, the effect of DropFilter on CNNs was first validated on the Canadian Institute for Advanced Research (CIFAR)-10 dataset, consisting of 60,000 natural images in 10 classes, by comparing recognition rates against a model using dropout. This validation confirmed that, when applied to a CNN, DropFilter lowers generalization error better than dropout. It was also observed that the effect of DropFilter differs across hidden layers, and additional experiments were performed on this point. DropFilter was then applied to a CNN built on Inception modules and evaluated on Hangul handwriting recognition. The dataset consists of 260,000 handwritten Hangul characters in 520 classes. The proposed model, an Inception-module-based CNN with DropFilter, achieved a 3.279% higher recognition rate than a conventional LeNet-style CNN while using 3 times fewer trainable parameters.

Table of contents:
I. Introduction
1. Necessity and purpose of the research
2. Research questions
II. Related Work
1. Convolutional neural networks
1.1. Definition of the convolution operation
1.2. Convolutional neural networks
2. Hangul handwriting recognition using convolutional neural networks
3. Convolutional network architectures
3.1. The Residual Network architecture
3.2. The GoogLeNet architecture
4. Regularization of neural networks
4.1. Regularization in multilayer perceptrons
4.2. Regularization in convolutional neural networks
III. Proposed Model
1. Dropout in convolutional networks
2. DropFilter in convolutional networks
3. Inception module with DropFilter
IV. Experiments and Analysis of Handwriting-Recognition Results
1. Data specification
2. Analysis of the effect of DropFilter
3. Handwriting-recognition results and analysis
4. Further discussion
V. Conclusion
References
English abstract
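The DropFilter regularizer in this thesis is a dropout variant adapted to CNNs; as the name suggests, it removes entire convolutional filters (feature maps) rather than individual activations during training, which encourages the sparse filter responses the abstract mentions. The thesis's exact formulation is not reproduced here; this is a minimal sketch under that assumption, using inverted-dropout rescaling:

```python
import numpy as np

def drop_filter(feature_maps, p=0.3, rng=None, training=True):
    """DropFilter-style regularization (sketch): during training, zero out
    whole feature maps with probability p, one Bernoulli decision per
    channel, instead of per-activation as in standard dropout. Input shape
    is assumed (batch, channels, height, width). Surviving maps are
    rescaled by 1/(1-p) so expected activations match test time."""
    if not training:
        return feature_maps
    rng = np.random.default_rng() if rng is None else rng
    n_channels = feature_maps.shape[1]
    keep = rng.random(n_channels) >= p            # one decision per filter
    mask = keep.astype(feature_maps.dtype).reshape(1, -1, 1, 1)
    return feature_maps * mask / (1.0 - p)
```

Dropping channels as a unit (rather than scattered pixels) matters for CNNs because neighboring activations within one feature map are strongly correlated, so per-pixel dropout regularizes them only weakly.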
A character-recognition system for Hangeul
This work presents a rule-based character-recognition system for the Korean script, Hangeul. An input raster image representing one Korean character (Hangeul syllable) is thinned down to a skeleton, and the individual lines are extracted. The lines, along with information on how they are interconnected, are translated into a set of hierarchical graphs, which can be easily traversed and compared with a set of reference structures represented in the same way. Hangeul consists of consonant and vowel graphemes, which are combined into blocks representing syllables. Each reference structure describes one possible variant of such a grapheme. The reference structures that best match the structures found in the input are combined to form a full Hangeul syllable. Testing all of the 11,172 possible characters, each rendered as a 200-pixel-squared raster image using the gothic font AppleGothic Regular, yielded a recognition accuracy of 80.6 percent. No separation logic exists to handle characters whose graphemes are overlapping or conjoined; with such characters removed from the set, reducing the total number of characters to 9,352, an accuracy of 96.3 percent was reached. Hand-written characters were also recognised, to a certain degree. The work shows that it is possible to create a workable character-recognition system with reasonably simple means.
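The figure of 11,172 possible characters follows from how Hangeul composes syllable blocks: 19 initial consonants × 21 vowels × 28 finals (including "no final"). Unicode encodes these precomposed syllables algorithmically from U+AC00, so a syllable can be split back into its graphemes by pure arithmetic; this sketch (independent of the thesis's image-based pipeline) illustrates the combination scheme:

```python
# 19 initials x 21 medial vowels x 28 finals (incl. "none") = 11,172 blocks.
LEAD  = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
VOWEL = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
TAIL  = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def decompose(syllable):
    """Split one precomposed Hangeul syllable into (initial, vowel, final)
    using the standard Unicode arithmetic (syllable blocks start at U+AC00).
    Returns compatibility jamo, which is enough for illustration."""
    code = ord(syllable) - 0xAC00
    assert 0 <= code < 19 * 21 * 28, "not a Hangeul syllable block"
    lead, rem = divmod(code, 21 * 28)
    vowel, tail = divmod(rem, 28)
    return LEAD[lead], VOWEL[vowel], TAIL[tail]
```

For example, 한 decomposes into ㅎ, ㅏ, ㄴ, and 가 into ㄱ, ㅏ, and no final.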
Word based off-line handwritten Arabic classification and recognition. Design of automatic recognition system for large vocabulary offline handwritten Arabic words using machine learning approaches.
The design of a machine that reads unconstrained words remains an unsolved problem. For example, automatic interpretation of handwritten documents by a computer is still under research. Most systems attempt to segment words into letters and read words one character at a time. However, segmenting handwritten words is very difficult, so to avoid it, words are treated as a whole. This research investigates a number of features computed from whole words for the recognition of handwritten words in particular. Arabic text classification and recognition is a complicated process compared to Latin and Chinese text recognition, owing to the cursive nature of Arabic script.
The work presented in this thesis proposes word-based recognition of handwritten Arabic script. The work is divided into three main stages that together form a recognition system. The first stage is pre-processing, which applies efficient methods essential for automatic recognition of handwritten documents. In this stage, techniques for detecting the baseline and segmenting words in handwritten Arabic text are presented. Connected components are then extracted, and the distances between components are analyzed; the statistical distribution of these distances is used to determine an optimal threshold for word segmentation. The second stage is feature extraction, which uses the normalized images to extract the features essential for recognizing them. Various feature-extraction methods are implemented and examined. The third and final stage is classification. Several classifiers are used, such as the k-nearest-neighbour classifier (k-NN), neural network classifiers (NN), hidden Markov models (HMMs), and dynamic Bayesian networks (DBNs). To test this concept, the particular pattern-recognition problem studied is the classification of 32,492 words using
the IFN/ENIT database. The results were promising and very encouraging in terms of improved baseline detection and word segmentation for further recognition. Moreover, several feature subsets were examined, and a best recognition performance of 81.5% was achieved.
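The word-segmentation step described above (deriving a distance threshold from the statistical distribution of inter-component gaps) can be sketched as follows. The thesis does not state its exact estimator; this sketch separates small intra-word gaps from large inter-word gaps with an Otsu-style within-class-variance split, and all function names are illustrative:

```python
import numpy as np

def gap_threshold(gaps):
    """Pick a threshold separating small intra-word gaps from large
    inter-word gaps by minimizing total within-class variance over all
    split points of the sorted 1-D gap distribution (Otsu-style)."""
    g = np.sort(np.asarray(gaps, dtype=float))
    best_t, best_score = g[0], np.inf
    for i in range(1, len(g)):
        left, right = g[:i], g[i:]
        score = left.var() * len(left) + right.var() * len(right)
        if score < best_score:
            best_t, best_score = (left[-1] + right[0]) / 2.0, score
    return best_t

def segment_words(x_positions, widths, threshold):
    """Group connected components (left x-coordinates and widths, sorted
    left to right) into words: a gap below the threshold keeps the next
    component in the current word, a larger gap starts a new word."""
    words, current = [], [0]
    for i in range(1, len(x_positions)):
        gap = x_positions[i] - (x_positions[i - 1] + widths[i - 1])
        if gap <= threshold:
            current.append(i)
        else:
            words.append(current)
            current = [i]
    words.append(current)
    return words
```

In a bimodal gap distribution, the split lands in the valley between the two modes, which is exactly the boundary between within-word and between-word spacing.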
Multi-domain sketch understanding
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. By Christine J. Alvarado. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 121-128).