
    Character Recognition

    Character recognition is one of the most widely used pattern recognition technologies in practical applications. This book presents recent advances relevant to character recognition, from technical topics such as image processing, feature extraction, and classification to new applications including human-computer interfaces. The goal of this book is to provide a reference source for academic research and for professionals working in the character recognition field.

    Handwritten Devanagari numeral recognition

    Optical character recognition (OCR) plays a vital role in the modern world. OCR can help solve many complex problems and thus make human work easier: the system takes a scanned digital image or handwritten text as input, and the output can be used, for example, for sorting mail in postal departments and in other offices. Much work has been done on the English alphabet, but Indian scripts are now an active area of interest for researchers. Devanagari is one such Indian script; research into recognizing its alphabet is ongoing, while far less attention has been given to its numerals. Here an attempt was made to recognize Devanagari numerals. Feature extraction is the most important part of any OCR system, since the more informative the extracted features, the higher the accuracy. Two methods were used for feature extraction. The first was moment-based: among the many moment families, the Tchebichef moment was preferred because of its better image representation capability. The second was based on contour curvature; the contour is an important boundary feature used for finding similarity between shapes. After feature extraction, the extracted features were classified with an Artificial Neural Network (ANN), chosen among the many available classifiers because it is easy to handle, less error prone, and more accurate than the alternatives considered. Classification was performed with each of the two extracted features individually, and finally the features were cascaded to increase the accuracy.
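The cascading step described above can be sketched as follows: the two feature vectors are concatenated into one descriptor and then classified. As a hedged illustration, a nearest-centroid rule stands in for the ANN, and the feature values and class centroids are made-up placeholders, not actual Tchebichef or contour-curvature features.

```python
import numpy as np

def cascade(moment_feats, contour_feats):
    """Concatenate the two feature vectors into a single descriptor."""
    return np.concatenate([moment_feats, contour_feats])

def nearest_centroid(descriptor, centroids):
    """Return the label of the closest class centroid (stand-in for the ANN)."""
    dists = {label: np.linalg.norm(descriptor - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)

# Toy example: two numeral classes with illustrative centroids.
centroids = {
    "0": np.array([0.1, 0.2, 0.9, 0.8]),
    "1": np.array([0.9, 0.8, 0.1, 0.2]),
}
sample = cascade(np.array([0.12, 0.18]), np.array([0.85, 0.80]))
print(nearest_centroid(sample, centroids))  # closest to the "0" centroid
```

The point of the cascade is simply that the concatenated descriptor carries both shape cues at once, so the classifier can separate classes that either feature alone confuses.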

    Off-line Arabic Handwriting Recognition System Using Fast Wavelet Transform

    In this research, an off-line handwriting recognition system for the Arabic alphabet is introduced. The system contains three main stages: preprocessing, segmentation, and recognition. In the preprocessing stage, the Radon transform was used to design algorithms for page, line, and word skew correction as well as for word slant correction. In the segmentation stage, a Hough transform approach was used for line extraction. For line-to-word and word-to-character segmentation, a statistical method using a mathematical representation of the line and word binary images was used. Unlike most current handwriting recognition systems, our system simulates the human mechanism for image recognition, in which images are encoded and saved in memory in groups according to their similarity to each other. Characters are decomposed into coefficient vectors using the fast wavelet transform; the vectors that represent a character in its different possible shapes are then saved as groups, with one representative for each group. Recognition is achieved by comparing the vector of the character to be recognized with the group representatives. Experiments showed that the proposed system achieves a recognition accuracy of 90.26%. The system needs at most 3.41 seconds to recognize a single character in a text of 15 lines, where each line has 10 words on average.
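The encode-and-match scheme can be sketched with a one-level Haar transform as a minimal stand-in for the fast wavelet transform used in the system; the character vectors and group representatives below are illustrative, not real Arabic glyph data.

```python
import numpy as np

def haar_step(v):
    """One level of the 1-D orthonormal Haar fast wavelet transform."""
    v = np.asarray(v, dtype=float)
    s = (v[0::2] + v[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (v[0::2] - v[1::2]) / np.sqrt(2)   # detail coefficients
    return np.concatenate([s, d])

def recognize(vector, group_reps):
    """Match a character's coefficient vector against one representative per group."""
    coeffs = haar_step(vector)
    return min(group_reps, key=lambda g: np.linalg.norm(coeffs - group_reps[g]))

# Hypothetical groups, each stored as the wavelet coefficients of a representative.
group_reps = {
    "alif": haar_step([1, 1, 1, 1]),
    "ba":   haar_step([1, 0, 1, 0]),
}
print(recognize([1, 1, 1, 0.9], group_reps))  # nearest representative: "alif"
```

Storing one representative per shape group keeps the comparison step linear in the number of groups rather than in the number of training samples, which is consistent with the reported sub-4-second recognition time.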

    Recognition of Characters from Streaming Videos


    Advanced document data extraction techniques to improve supply chain performance

    In this thesis, a novel machine learning technique to extract text-based information from scanned images has been developed. This information extraction is performed in the context of scanned invoices and bills used in financial transactions. These financial transactions contain a considerable amount of data that must be extracted, refined, and stored digitally before it can be used for analysis. Converting this data into a digital format is often a time-consuming process. Automation and data optimisation show promise as methods for reducing the time required and the cost of Supply Chain Management (SCM) processes, especially Supplier Invoice Management (SIM), Financial Supply Chain Management (FSCM), and Supply Chain procurement processes. This thesis uses a cross-disciplinary approach involving Computer Science and Operational Management to explore the benefit of automated invoice data extraction in business and its impact on SCM. The study adopts a multimethod approach based on empirical research, surveys, and interviews performed on selected companies. The expert system developed in this thesis focuses on two distinct areas of research: Text/Object Detection and Text Extraction. For Text/Object Detection, the Faster R-CNN model was analysed. While this model yields outstanding results in terms of object detection, it is limited by poor performance when image quality is low. The Generative Adversarial Network (GAN) model is proposed in response to this limitation. The GAN model is a generator network that is implemented with the help of the Faster R-CNN model and a discriminator that relies on PatchGAN. The output of the GAN model is text data with bounding boxes.
For text extraction from the bounding boxes, a novel data extraction framework was designed, consisting of various processes: XML processing when an existing OCR engine is used, bounding-box pre-processing, text clean-up, OCR error correction, spell check, type check, pattern-based matching, and finally a learning mechanism for automating future data extraction. All fields the system can extract successfully are provided in key-value format. The efficiency of the proposed system was validated using existing datasets such as SROIE and VATI. Real-time data was validated using invoices collected by two companies that provide invoice automation services in various countries. Currently, these scanned invoices are sent to an OCR system such as OmniPage, Tesseract, or ABBYY FRE to extract text blocks, and a rule-based engine is later used to extract relevant data. While the system's methodology is robust, the companies surveyed were not satisfied with its accuracy and thus sought out new, optimized solutions. To confirm the results, the engines were used to return XML-based files with text and metadata identified. The output XML data was then fed into this new system for information extraction. This system uses the existing OCR engine and a novel, self-adaptive, learning-based OCR engine. The new engine is based on the GAN model for better text identification. Experiments were conducted on various invoice formats to further test and refine its extraction capabilities. For cost optimisation and the analysis of spend classification, additional data were provided by another company in London that holds expertise in reducing its clients' procurement costs. This data was fed into our system to obtain a deeper level of spend classification and categorisation.
This helped the company reduce its reliance on human effort and allowed for greater efficiency compared with performing similar tasks manually using Excel sheets and Business Intelligence (BI) tools. The intention behind the development of this novel methodology was twofold: first, to develop and test a novel solution that does not depend on any specific OCR technology; and second, to increase the information extraction accuracy over that of existing methodologies. Finally, the thesis evaluates the real-world need for the system and the impact it would have on SCM. The newly developed method is generic and can extract text from any given invoice, making it a valuable tool for optimizing SCM. In addition, the system uses a template-matching approach to ensure the quality of the extracted information.
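The pattern-based matching step of a pipeline like the one above can be sketched as follows: regular expressions pull labelled fields out of raw OCR text and emit them in key-value format. The field names and patterns here are illustrative placeholders, not the thesis's actual rules.

```python
import re

# Illustrative patterns for common invoice fields (not the real rule set).
PATTERNS = {
    "invoice_number": re.compile(r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*(\S+)", re.I),
    "total":          re.compile(r"Total\s*[:\-]?\s*\$?([\d,]+\.\d{2})", re.I),
    "date":           re.compile(r"Date\s*[:\-]?\s*(\d{2}/\d{2}/\d{4})", re.I),
}

def extract_fields(ocr_text):
    """Return whichever fields matched, in key-value format."""
    fields = {}
    for key, pattern in PATTERNS.items():
        m = pattern.search(ocr_text)
        if m:
            fields[key] = m.group(1)
    return fields

text = "Invoice No: INV-0042\nDate: 01/02/2023\nTotal: $1,234.56"
print(extract_fields(text))
```

In the full framework this step would sit after OCR error correction and spell check, so the patterns see cleaned-up text; the learning mechanism then remembers which pattern matched which region for future invoices of the same layout.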

    Online Handwritten Chinese/Japanese Character Recognition

    Rapid Resource Transfer for Multilingual Natural Language Processing

    Until recently the focus of the Natural Language Processing (NLP) community has been on a handful of mostly European languages. However, the rapid changes taking place in the economic and political climate of the world precipitate a similar change in the relative importance given to various languages. The importance of rapidly acquiring NLP resources and computational capabilities in new languages is widely accepted. Statistical NLP models have a distinct advantage over rule-based methods in achieving this goal, since they require far less manual labor. However, statistical methods require two fundamental resources for training: (1) online corpora and (2) manual annotations. Creating these two resources can be as difficult as porting rule-based methods. This thesis demonstrates the feasibility of acquiring both corpora and annotations by exploiting existing resources for well-studied languages: basic resources for new languages can be acquired rapidly and cost-effectively by utilizing existing resources cross-lingually. Currently, the most viable method of obtaining online corpora is converting existing printed text into electronic form using Optical Character Recognition (OCR). Unfortunately, a language that lacks online corpora most likely lacks OCR as well. We tackle this problem by taking an existing OCR system that was designed for a specific language and applying it to a language with a similar script. We present a generative OCR model that allows us to post-process output from a non-native OCR system to achieve accuracy close to, or better than, a native one. Furthermore, we show that the performance of a native or trained OCR system can be improved by the same method. Next, we demonstrate cross-utilization of annotations on treebanks. We present an algorithm that projects dependency trees across parallel corpora.
We also show that a reasonable-quality treebank can be generated by combining projection with a small amount of language-specific post-processing. The projected treebank allows us to train a parser that performs comparably to a parser trained on manually generated data.
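The generative post-processing idea above can be sketched as a noisy-channel model: each candidate word is scored by P(word) x P(observed | word), where the channel term captures confusions a non-native OCR system tends to make. The lexicon, priors, and confusion probabilities below are made-up placeholders, not the thesis's actual model.

```python
# Illustrative lexicon with unigram priors P(word), and one known OCR confusion.
LEXICON = {"corn": 0.9, "com": 0.1}
CONFUSIONS = {("rn", "m"): 0.2}   # P(observed piece | true piece)

def channel_prob(observed, word):
    """Crude channel model: exact match, or one known confusion, else near-zero."""
    if observed == word:
        return 0.7
    for (true_piece, seen_piece), p in CONFUSIONS.items():
        if word.replace(true_piece, seen_piece) == observed:
            return p
    return 1e-6

def correct(observed):
    """Return the lexicon word maximizing P(word) * P(observed | word)."""
    return max(LEXICON, key=lambda w: LEXICON[w] * channel_prob(observed, w))

print(correct("com"))  # prints "corn": the "rn" -> "m" confusion is undone
```

The same scoring works whether the channel models a non-native OCR system applied to a similar script or the residual errors of a native one, which is why the thesis can report gains in both settings.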

    Melhorando a precisão do reconhecimento de texto usando técnicas baseadas em sintaxe

    Advisors: Guido Costa Souza de Araújo, Marcio Machado Pereira. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.
Abstract: Due to the large amount of visual information available today, text detection and recognition in natural scene images have begun to receive increasing importance. The goal of this task is to locate the regions of the image where there is text and recognize it. Such tasks are typically divided into two parts: text detection and text recognition. Although the techniques for solving this problem have improved in recent years, the excessive usage of hardware resources and the corresponding high computational costs have considerably impacted the execution of such tasks on highly constrained embedded systems (e.g., cellphones and smart TVs). Text detection and recognition methods that run on such systems do exist, but they do not perform well compared to state-of-the-art solutions on other computing platforms. And although various post-correction methods currently improve results on scanned historical documents, little effort has gone into applying them to results on scene images. In this work, we explored a set of post-correction methods and proposed new heuristics to improve the results on scene images, using the Tesseract text recognition software as a prototyping base.
We performed an analysis of the main error-correction methods available in the literature and found that the best combination included the substitution, last-character elimination, and composition methods. In addition, the results showed a further improvement when we introduced a new heuristic based on the frequency with which the candidate results appear in databases of magazines, newspapers, fiction texts, the web, etc. To locate errors and avoid overcorrection, different restrictions obtained from Tesseract's training data were considered; we selected as the best restriction the confidence of the best result obtained by Tesseract. The experiments were carried out on seven databases used in competitions in the area, covering both text recognition challenges and combined text detection and recognition challenges. On all databases, for both training and test data, the results of Tesseract with the proposed post-correction method improved considerably compared to the results obtained with Tesseract alone.
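The substitution-plus-frequency heuristic can be sketched as follows: generate candidate corrections by applying common OCR character confusions, then rank them by corpus frequency. The confusion pairs and the frequency table below are illustrative placeholders, not the dissertation's actual data.

```python
# Hypothetical OCR confusion pairs (wrong -> right) and a toy frequency table
# standing in for counts from magazines, newspapers, fiction, web, etc.
CONFUSIONS = [("0", "O"), ("1", "l"), ("5", "S"), ("rn", "m")]
FREQUENCIES = {"HOUSE": 920, "SOUP": 310}

def candidates(word):
    """The OCR output plus all single-substitution variants of it."""
    out = {word}
    for wrong, right in CONFUSIONS:
        if wrong in word:
            out.add(word.replace(wrong, right, 1))
    return out

def post_correct(word):
    """Pick the candidate with the highest corpus frequency."""
    return max(candidates(word), key=lambda w: FREQUENCIES.get(w, 0))

print(post_correct("H0USE"))  # prints "HOUSE": the 0 -> O substitution wins
```

A confidence-based restriction like the one described above would simply gate this step: if Tesseract's best result already comes with high confidence, the word is left untouched to avoid overcorrection.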

    Distance maps between Japanese kanji characters based on hierarchical optimal transport

    We introduce a general framework for assigning distances between kanji based on their dissimilarity. What we mean by this term may depend on the concrete application. The only assumption we make is that the dissimilarity between two kanji is adequately expressed as a weighted mean of penalties obtained from matching nested structures of components in an optimal way. For the cost of matching, we suggest a number of modules that can be freely combined or replaced with other modules, including the relative unbalanced ink transport between registered components, the distance between the transformations required for registration, and the difference in prespecified labels. We give a concrete example of a kanji distance function obtained in this way as a proof of concept. Based on this function, we produce 2D kanji maps by multidimensional scaling and a table of 100 randomly selected Jōyō kanji with their 16 nearest neighbors. Our kanji distance functions can be used to help Japanese learners from non-CJK backgrounds acquire kanji literacy. In addition, they may assist editors of kanji dictionaries in presenting their materials and may serve in text processing and optical character recognition systems for assessing the likelihood of errors. Comment: 24 pages, 5 figures.
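The weighted-mean-of-matching-penalties idea can be sketched at one level of the hierarchy: given a penalty matrix between the components of two kanji, find the assignment with the minimum weighted mean cost. Brute force over permutations is fine for the handful of components a kanji has; the penalty values and weights below are made up for illustration.

```python
from itertools import permutations

def optimal_matching_cost(penalties, weights=None):
    """Minimum weighted mean of penalties over all component assignments."""
    n = len(penalties)
    weights = weights or [1.0] * n
    total_weight = sum(weights)
    best = float("inf")
    for perm in permutations(range(n)):
        # Match component i of the first kanji to component perm[i] of the second.
        cost = sum(w * penalties[i][j] for i, (j, w) in enumerate(zip(perm, weights)))
        best = min(best, cost / total_weight)
    return best

# Two kanji with two components each; penalties[i][j] is the cost of matching
# component i of the first kanji to component j of the second.
penalties = [[0.1, 0.9],
             [0.8, 0.2]]
print(optimal_matching_cost(penalties))  # the 0->0, 1->1 matching wins
```

In the paper's hierarchical setting, each entry of such a penalty matrix would itself be produced by the pluggable cost modules (ink transport, registration distance, label mismatch) applied to nested sub-components.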