
    Online Japanese Character Recognition Using Trajectory-Based Normalization and Direction Feature Extraction

    This paper describes an online Japanese character recognition system that uses advanced techniques of pattern normalization and direction feature extraction. The normalization of point coordinates and the decomposition of direction elements are performed directly on the online trajectory and are therefore computationally efficient. We compare one-dimensional and pseudo two-dimensional (pseudo 2D) normalization methods, as well as direction features extracted from the original pattern and from the normalized pattern. In experiments on the TUAT HANDS databases, the pseudo 2D normalization methods yielded superior performance, while direction features from the original pattern and from the normalized pattern made little difference.
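
    As a rough illustration of the kind of trajectory-based processing this abstract describes, the sketch below (not the authors' implementation; the box size, grid size and length weighting are assumptions) normalizes the coordinates of an online trajectory linearly and decomposes consecutive pen movements into eight direction planes pooled over a spatial grid.

```python
import numpy as np

def normalize_trajectory(points, size=64):
    """Linearly normalize online trajectory coordinates into a size x size box.
    points: (N, 2) array of (x, y) pen coordinates of one character."""
    points = np.asarray(points, dtype=float)
    mins, maxs = points.min(axis=0), points.max(axis=0)
    span = np.maximum(maxs - mins, 1e-6)            # avoid division by zero
    return (points - mins) / span * (size - 1)      # coordinates in [0, size-1]

def direction_features(points, size=64, grid=8):
    """Quantize consecutive pen movements into 8 direction planes and pool them
    over a grid x grid partition of the normalized plane (a generic direction
    feature; the paper's exact decomposition may differ)."""
    pts = normalize_trajectory(points, size)
    feat = np.zeros((grid, grid, 8))
    for p, q in zip(pts[:-1], pts[1:]):
        dx, dy = q - p
        if dx == 0 and dy == 0:
            continue
        angle = np.arctan2(dy, dx) % (2 * np.pi)
        d = int(angle / (np.pi / 4)) % 8             # one of 8 directions
        cx, cy = ((p + q) / 2 / size * grid).astype(int)
        feat[min(cy, grid - 1), min(cx, grid - 1), d] += np.hypot(dx, dy)
    return feat.ravel()                              # 8 x 8 x 8 = 512 values

# toy trajectory of three points
print(direction_features([(0, 0), (10, 3), (20, 20)]).shape)   # (512,)
```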

    Recognition of handwritten Chinese characters by combining regularization, Fisher's discriminant and distorted sample generation

    Proceedings of the 10th International Conference on Document Analysis and Recognition, 2009, p. 1026–1030. The problem of offline handwritten Chinese character recognition has been extensively studied by many researchers and very high recognition rates have been reported. In this paper, we propose to further boost the recognition rate by incorporating a distortion model that artificially generates a huge number of virtual training samples from existing ones. We achieve a record-high recognition rate of 99.46% on the ETL-9B database. Traditionally, when the dimension of the feature vector is high and the number of training samples is insufficient, the remedies are to (i) regularize the class covariance matrices in the discriminant functions, (ii) employ Fisher's dimension reduction technique to reduce the feature dimension, and (iii) generate a huge number of virtual training samples from existing ones. The second contribution of this paper is an investigation of the relative effectiveness of these three methods for boosting the recognition rate. © 2009 IEEE.
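
    The three remedies can be sketched briefly as follows (illustrative only; the shrinkage parameter, the synthetic data and the noise-based distortion stand in for the paper's actual distortion model and settings): covariance regularization, Fisher/LDA dimension reduction via scikit-learn, and virtual sample generation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def regularized_covariance(X_c, gamma=0.2):
    """Shrink one class's covariance matrix toward a scaled identity, a common
    way to regularize discriminant functions when training data are scarce
    (gamma is an illustrative value, not the paper's)."""
    cov = np.cov(X_c, rowvar=False)
    dim = cov.shape[0]
    return (1 - gamma) * cov + gamma * (np.trace(cov) / dim) * np.eye(dim)

def distort(samples, rng, sigma=0.05):
    """Create virtual training samples by small random perturbations; a crude
    stand-in for the paper's character distortion model."""
    return samples + rng.normal(scale=sigma, size=samples.shape)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                    # 200 synthetic 50-dim features
y = rng.integers(0, 5, size=200)                  # 5 synthetic classes
X_aug = np.vstack([X, distort(X, rng)])           # virtual samples double the set
y_aug = np.concatenate([y, y])

lda = LinearDiscriminantAnalysis(n_components=4)  # Fisher reduction to C - 1 dims
Z = lda.fit_transform(X_aug, y_aug)
print(Z.shape, regularized_covariance(X_aug[y_aug == 0]).shape)
```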

    Obtaining n best alternatives for classifying Unicode symbols

    The Unicode character set has grown in recent years to more than 100,000 characters. We developed a classifier that can predict the n most probable solutions for a given handwritten character from a smaller Unicode subset. Even with this size reduction, we still face a classification problem with a large number of classes (5,488 in total) and no training samples. Before tackling this problem, we performed experiments on the UJI PEN dataset. In these experiments we used two data generation techniques: distortions and variational autoencoders as generative models. We tried feature extraction methods with both offline and online data. The generated data and extracted features were tested on several neural network models, such as convolutional networks and LSTMs.
    Vieco Pérez, J. (2017). Obtención de las n mejores alternativas para clasificación de símbolos unicode. http://hdl.handle.net/10251/86238
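
    Of the two generation techniques mentioned, the variational autoencoder route can be sketched roughly as follows (a minimal, untrained PyTorch model with an assumed 32×32 input size and layer widths, not the architecture used in the thesis). After training, synthetic characters are obtained by decoding random latent vectors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharVAE(nn.Module):
    """Minimal VAE over flattened 32x32 character images, illustrating the
    generative route to synthetic training samples; layer sizes are guesses,
    not the architecture used in the thesis."""
    def __init__(self, dim=32 * 32, hidden=256, latent=16):
        super().__init__()
        self.enc = nn.Linear(dim, hidden)
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec1 = nn.Linear(latent, hidden)
        self.dec2 = nn.Linear(hidden, dim)

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decode(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    """Reconstruction term plus KL divergence to the standard normal prior."""
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

# after training, new character images come from decoding random latent codes
model = CharVAE()
with torch.no_grad():
    samples = model.decode(torch.randn(8, 16))
print(samples.shape)   # torch.Size([8, 1024])
```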

    Histograms of Points, Orientations, and Dynamics of Orientations Features for Hindi Online Handwritten Character Recognition

    A set of features independent of variations in character stroke direction and order is proposed for online handwritten character recognition. A method is developed that spatially maps features such as the coordinates of points, the orientations of strokes at points, and the dynamics of those orientations as a function of the points' coordinate values, and computes histograms of these features over different regions of the spatial map. Features used in other studies to train classifiers for character recognition, such as spatio-temporal features, the discrete Fourier transform, the discrete cosine transform, the discrete wavelet transform, spatial features, and histograms of oriented gradients, are considered for comparison. The classifier chosen for comparing classification performance when trained with the different features is the support vector machine (SVM). The character datasets used for training and testing consist of online handwritten samples of 96 different Hindi characters, with 12,832 samples in the training set and 2,821 in the testing set. The SVM classifier trained with the proposed features achieves the highest classification accuracy, 92.9%, compared with SVM classifiers trained with the other features and tested on the same testing dataset. The proposed features therefore have better character-discriminative capability than the other features considered for comparison.
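
    A simplified stand-in for the proposed spatial histogram features, followed by the SVM classifier the study uses, might look like the sketch below (grid size, bin count and the toy strokes are assumptions; the point-coordinate and dynamics-of-orientation histograms are omitted for brevity).

```python
import numpy as np
from sklearn.svm import SVC

def orientation_histograms(strokes, grid=4, bins=8):
    """Histogram stroke orientations over a grid x grid partition of the
    normalized character box; only one of the paper's feature families is
    illustrated here."""
    pts = np.vstack([np.asarray(s, dtype=float) for s in strokes])
    pts = (pts - pts.min(0)) / np.maximum(pts.max(0) - pts.min(0), 1e-6)
    hist = np.zeros((grid, grid, bins))
    start = 0
    for s in strokes:
        seg = pts[start:start + len(s)]
        start += len(s)
        for p, q in zip(seg[:-1], seg[1:]):
            dx, dy = q - p
            if dx == 0 and dy == 0:
                continue
            b = int((np.arctan2(dy, dx) % (2 * np.pi)) / (2 * np.pi) * bins) % bins
            cx, cy = np.minimum(((p + q) / 2 * grid).astype(int), grid - 1)
            hist[cy, cx, b] += 1
    return hist.ravel()

# two toy "characters", each made of two strokes, then an SVM as in the study
X = np.array([orientation_histograms([[(0, 0), (5, 5)], [(5, 0), (0, 5)]]),
              orientation_histograms([[(0, 0), (10, 0)], [(0, 5), (10, 5)]])])
clf = SVC(kernel="rbf").fit(X, [0, 1])
print(clf.predict(X))
```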

    Chinese calligraphy: character style recognition based on full-page document

    Calligraphy plays a very important role in the history of China. From ancient times to the present, the beauty of calligraphy has been passed down, and its different styles and structures have made it an embodiment of beauty in the art of writing. However, the recognition of calligraphy styles and fonts has long been a blank area in computing, and the structural complexity of different calligraphy brings many challenges to computer recognition techniques. In this research, I review the main recognition techniques and the popular machine learning algorithms applied in this field over more than 20 years, and propose and explore the feasibility of a new method for recognizing Chinese calligraphy styles.
    In our survey of research papers from the past 20 years, most results concern content recognition of modern Chinese characters. We first analyze the development of Chinese characters and basic Chinese character theory. In reviewing current recognition of Chinese characters (both online and offline handwriting), we focus on the various algorithms and their results, how the experimental data are used, and how the test datasets are constructed. Research on image-processing methods for Chinese calligraphy works is very limited, the data collected for calligraphy testing is also very limited, and the test datasets used by different recognition techniques differ widely. Nevertheless, this work has far-reaching significance for inheriting and carrying forward traditional Chinese culture, and it is necessary to develop and promote the recognition of Chinese characters by computational techniques. In current applications, font recognition of Chinese calligraphy can help library administrators classify copybooks, avoiding calligraphy font identification that is difficult to perform manually from subjective experience alone.
    Over the past 10 years, some techniques for recognizing individual Chinese calligraphy fonts have been proposed. Most involve pre-processing the calligraphy characters, extracting stroke primitives and style features, and finally classifying the work with machine learning. Such approaches are demanding for complex Chinese characters: the cost of splitting and recognizing characters is high, many complex fonts are difficult to segment accurately, and as a result the recognition rate is low, or the accuracy is high only for specific characters while the overall font recognition accuracy remains low. Chinese calligraphy clearly has research value, and many recognition papers analyze it at the level of individual characters and strokes. We instead propose a new method for font recognition that operates on whole-page documents. The study proceeds in three steps: first, apply the Fourier transform to Chinese calligraphy images and analyze the results; second, train a CNN on different datasets and evaluate the results; finally, make some improvements to the CNN structure.
    The experimental results of the thesis show that the proposed full-page document recognition method achieves high accuracy with the support of CNN techniques and can effectively distinguish five different styles of Chinese calligraphy. Compared with traditional analysis methods, our results show that the full-page document approach is feasible and avoids the cumbersome problem of font segmentation, making it more efficient and more accurate.
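
    For the CNN step, a minimal full-page style classifier might look like the following sketch (the layer sizes, input resolution and global pooling are placeholders, not the improved architecture described in the thesis).

```python
import torch
import torch.nn as nn

class PageStyleCNN(nn.Module):
    """Small CNN that maps a full-page grayscale calligraphy scan directly to
    one of five style classes, with no character segmentation."""
    def __init__(self, n_styles=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),       # pool the whole page into one vector
        )
        self.classifier = nn.Linear(64, n_styles)

    def forward(self, page):               # page: (batch, 1, H, W)
        return self.classifier(self.features(page).flatten(1))

model = PageStyleCNN()
logits = model(torch.randn(2, 1, 256, 256))   # two dummy page scans
print(logits.shape)                            # torch.Size([2, 5])
```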

    Offline printed Arabic character recognition

    Optical Character Recognition (OCR) shows great potential for rapid data entry but has had limited success when applied to the Arabic language. The usual OCR problems are compounded by the right-to-left direction of Arabic and by the largely connected script. This research investigates current approaches to the Arabic character recognition problem and develops a new approach. The main work involves a Haar-Cascade Classifier (HCC) approach, adapted for the first time to Arabic character recognition. This technique eliminates the problematic steps in the pre-processing and recognition phases, in addition to the character segmentation stage. A classifier was produced for each of the 61 Arabic glyphs that exist after the removal of diacritical marks. These 61 classifiers were trained and tested on an average of about 2,000 images each. A Multi-Modal Arabic Corpus (MMAC) has also been developed to support this work. MMAC makes innovative use of the new concept of connected segments of Arabic words (PAWs), with and without diacritical marks. These new tokens are significant for linguistic as well as OCR research and applications, and are applied here in the post-processing phase. A complete Arabic OCR application has been developed to process the scanned images and extract a list of detected words. It consists of the HCC to extract glyphs, systems for parsing and correcting these glyphs, and the MMAC to apply linguistic constraints. The HCC achieves a recognition rate of 87% for Arabic glyphs. MMAC is based on 6 million words, is published on the web, and has been applied and validated in both research and commercial use.
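
    The detection stage of an HCC-based pipeline can be sketched with OpenCV as follows (the cascade file paths, glyph names and detectMultiScale parameters are hypothetical placeholders; the thesis trains one cascade per glyph and adds parsing, correction and MMAC-based post-processing afterwards).

```python
import cv2

# One trained cascade per glyph; these XML paths and glyph names are
# hypothetical placeholders (the thesis trains 61 such classifiers).
GLYPH_CASCADES = {
    "alef": "cascades/alef.xml",
    "beh": "cascades/beh.xml",
}

def detect_glyphs(image_path):
    """Run every glyph cascade over a scanned page and collect labelled boxes."""
    page = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if page is None:
        raise FileNotFoundError(image_path)
    hits = []
    for glyph, xml_path in GLYPH_CASCADES.items():
        cascade = cv2.CascadeClassifier(xml_path)
        boxes = cascade.detectMultiScale(page, scaleFactor=1.1, minNeighbors=4)
        hits.extend((glyph, tuple(box)) for box in boxes)
    # downstream, detections would be ordered right-to-left and passed to the
    # parsing/correction stage and the MMAC-based post-processing
    return hits

if __name__ == "__main__":
    print(detect_glyphs("scanned_page.png"))
```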

    Automated Assessment of the Aftermath of Typhoons Using Social Media Texts

    Disasters are among the major threats to economies and human societies, causing substantial losses of human lives, property and infrastructure. There have been persistent endeavors to understand, prevent and reduce such disasters, and the popularization of social media offers new opportunities to enhance disaster management through a crowd-sourcing approach. However, social media data are also characterized by undue brevity, intense noise, and informal language. The existing literature has not fully addressed these disadvantages, or has done so only with vast manual effort. The major focus of this research is on constructing a holistic framework for exploiting social media data in typhoon damage assessment. The scope covers data collection, relevance classification, location extraction and damage assessment, with assorted approaches used to overcome the disadvantages of social media data. Moreover, semi-supervised or unsupervised approaches are prioritized in forming the framework to minimize manual intervention. In data collection, a query expansion strategy is adopted to improve the recall of typhoon-relevant information retrieval, and multiple filtering strategies are developed to screen keywords and maintain relevance to the search topics as the keywords are updated. A classifier based on a convolutional neural network is presented for relevance classification, with hashtags and word clusters as extra input channels to augment the information. For location extraction, a model is constructed by integrating Bidirectional Long Short-Term Memory and Conditional Random Fields, with feature noise correction layers and label smoothing used to handle the noisy training data. Finally, a multi-instance multi-label classifier identifies damage relations in four categories, and the damage categories of a message are combined with its damage description score to obtain a damage severity score for the message. A case study is conducted to verify the effectiveness of the framework. The outcomes indicate that the approaches and models developed in this study significantly improve the classification of social media texts, especially under semi-supervised or unsupervised learning. Moreover, the damage assessment results from social media data are remarkably consistent with official statistics, which demonstrates the practicality of the proposed damage scoring scheme.
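
    The relevance classification step can be illustrated with a single-channel version of such a CNN text classifier (vocabulary size, embedding dimension and filter widths are assumptions; the hashtag and word-cluster channels described above are omitted).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelevanceCNN(nn.Module):
    """Single-channel CNN text classifier over word embeddings for message
    relevance classification; all sizes here are assumptions."""
    def __init__(self, vocab_size=20000, emb_dim=100, n_filters=64,
                 widths=(2, 3, 4), n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, w) for w in widths])
        self.fc = nn.Linear(n_filters * len(widths), n_classes)

    def forward(self, token_ids):                   # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)     # (batch, emb_dim, seq_len)
        pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))    # relevant vs. not relevant

model = RelevanceCNN()
dummy = torch.randint(0, 20000, (8, 30))            # 8 messages of 30 tokens
print(model(dummy).shape)                            # torch.Size([8, 2])
```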

    Affective Computing

    This book provides an overview of state-of-the-art research in affective computing. It presents new ideas, original results and practical experiences in this increasingly important research field. The book consists of 23 chapters organized into four sections. Since one of the most important means of human communication is facial expression, the first section (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section (Chapters 17 to 22) presents applications related to affective computing.