
    Large vocabulary recognition for online Turkish handwriting with sublexical units

    We present a system for large vocabulary recognition of online Turkish handwriting, using hidden Markov models. While using a traditional approach for the recognizer, we have identified and developed solutions for the main problems specific to Turkish handwriting recognition. First, since large amounts of Turkish handwriting samples are not available, the system is trained and optimized using the large UNIPEN dataset of English handwriting, before being extended to Turkish using a small Turkish dataset. The delayed strokes, which pose a significant source of variation in writing order due to the large number of diacritical marks in Turkish, are removed during preprocessing. Finally, as a solution to the high out-of-vocabulary rates encountered when using a fixed-size lexicon in general purpose recognition, a lexicon is constructed from sublexical units (stems and endings) learned from a large Turkish corpus. A statistical bigram language model learned from the same corpus is also applied during the decoding process. The system obtains a 91.7% word recognition rate when tested on a small Turkish handwritten word dataset using a medium-sized lexicon (1,950 words) corresponding to the vocabulary of the test set, and 63.8% using a large, general purpose lexicon (130,000 words). However, with the proposed stem+ending lexicon (12,500 words) and a bigram language model with lattice expansion, a 67.9% word recognition accuracy is obtained, surpassing the results obtained with the general purpose lexicon while using a much smaller one.
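
    As an informal illustration of the stem+ending idea described above, the following sketch scores candidate stem+ending splits of a word with a bigram model over sub-lexical units. The stems, endings and probabilities are made-up placeholders for illustration, not the paper's data or decoding procedure.

```python
import math

# Hypothetical sub-lexical units; a real lexicon would be learned from a large corpus.
stems = {"git", "gel", "yap"}
endings = {"ti", "di", "iyor", ""}

# Hypothetical bigram log-probabilities P(ending | stem), illustrative values only.
bigram_logprob = {
    ("git", "ti"): math.log(0.4),
    ("gel", "di"): math.log(0.5),
    ("yap", "iyor"): math.log(0.3),
}

def best_stem_ending(word, default=math.log(1e-6)):
    """Return the highest-scoring stem+ending split of `word`, or None."""
    best = None
    for i in range(1, len(word) + 1):
        stem, ending = word[:i], word[i:]
        if stem in stems and ending in endings:
            score = bigram_logprob.get((stem, ending), default)
            if best is None or score > best[1]:
                best = ((stem, ending), score)
    return best

print(best_stem_ending("gitti"))   # (('git', 'ti'), log 0.4)
```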

    Using Proximity and Tag Weights for Focused Retrieval in Structured Documents

    Focused information retrieval is concerned with the retrieval of small units of information. In this context, the structure of the documents as well as the proximity among query terms have been found useful for improving retrieval effectiveness. In this article, we propose an approach combining the proximity of the terms and the tags which mark these terms. Our approach is based on a Fetch and Browse method in which the fetch step is performed with BM25 and the browse step with a structure-enhanced proximity model. In this way, the ranking of a document depends not only upon the existence of the query terms within the document but also upon the tags which mark these terms. Thus, a document tends to be highly relevant when query terms are close together and are emphasized by tags. The evaluation of this model on a large XML structured collection provided by the INEX 2010 XML IR evaluation campaign shows that the use of term proximity and structure improves the retrieval effectiveness of BM25 in the context of focused information retrieval.
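
    The sketch below illustrates the general flavour of the Fetch and Browse combination described above: a BM25 document score (fetch) is complemented by a proximity score in which term occurrences are weighted by the tags that mark them (browse). The tag weights, the distance kernel and the combination rule are illustrative assumptions, not the model evaluated in the article.

```python
# Illustrative tag weights: occurrences inside emphasising tags count more.
TAG_WEIGHTS = {"title": 2.0, "b": 1.5, "p": 1.0}

def proximity_score(positions_a, positions_b, weights_a, weights_b):
    """Sum a distance-decayed contribution over all pairs of term occurrences."""
    score = 0.0
    for pa, wa in zip(positions_a, weights_a):
        for pb, wb in zip(positions_b, weights_b):
            dist = abs(pa - pb)
            if 0 < dist <= 20:          # only nearby pairs contribute
                score += wa * wb / dist
    return score

def rescore(bm25_score, occurrences, alpha=0.5):
    """Combine the fetch-step BM25 score with a browse-step proximity score.

    `occurrences` maps each query term to a list of (position, tag) pairs.
    """
    terms = list(occurrences)
    prox = 0.0
    for i in range(len(terms)):
        for j in range(i + 1, len(terms)):
            pos_i = [p for p, _ in occurrences[terms[i]]]
            pos_j = [p for p, _ in occurrences[terms[j]]]
            w_i = [TAG_WEIGHTS.get(t, 1.0) for _, t in occurrences[terms[i]]]
            w_j = [TAG_WEIGHTS.get(t, 1.0) for _, t in occurrences[terms[j]]]
            prox += proximity_score(pos_i, pos_j, w_i, w_j)
    return bm25_score + alpha * prox

print(rescore(4.2, {"xml": [(3, "title")], "retrieval": [(5, "p")]}))  # 4.7
```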

    Document image processing using irregular pyramid structure

    Ph.D. thesis (Doctor of Philosophy).

    A large vocabulary online handwriting recognition system for Turkish

    Handwriting recognition in general, and online handwriting recognition in particular, has been an active research area for several decades. Most of this research has focused on English and, more recently, on other scripts such as Arabic and Chinese. There is a lack of research on recognition of Turkish text, and this work primarily fills that gap with a state-of-the-art recognizer for the first time. It contains design and implementation details of a complete system for the recognition of Turkish isolated words. Based on hidden Markov models, the system comprises pre-processing, feature extraction, optical modeling and language modeling modules. It considers the recognition of unconstrained handwriting with a limited vocabulary size first and then evolves into a large vocabulary system. Turkish script has many similarities with other Latin scripts, like English, which makes it possible to adapt strategies that work for them. However, there are some other issues particular to Turkish that should be considered separately. Two of the challenging issues in recognition of Turkish text are delayed strokes, which introduce an extra source of variation in the sequence order of the handwritten input, and the high Out-of-Vocabulary (OOV) rate of Turkish when words are used as vocabulary units in the decoding process. This work examines the problems and alternative solutions in depth and proposes solutions suited to Turkish script in particular. For delayed stroke handling, a clear definition of delayed strokes is first developed, and using that definition several alternative handling methods are evaluated extensively on the UNIPEN and Turkish datasets. The best results are obtained by removing all delayed strokes, with recognition accuracy increases of up to 2.13 and 2.03 percentage points over the respective English and Turkish baselines. The overall system performance is assessed as 86.1% with a 1,000-word lexicon and 83.0% with a 3,500-word lexicon on the UNIPEN dataset, and 91.7% on the Turkish dataset. Alternative decoding vocabularies are designed with grammatical sub-lexical units in order to solve the problem of the high OOV rate. Additionally, statistical bi-gram and tri-gram language models are applied during the decoding process. The best performance, 67.9%, is obtained with the large stem-ending vocabulary expanded with a bi-gram model on the Turkish dataset. This result is superior to the accuracy of the word-based vocabulary (63.8%) with the same coverage of 95% on the BOUN Web Corpus.
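
    As a rough illustration of delayed stroke removal, the sketch below drops small strokes written after, but horizontally over, earlier strokes (a typical pattern for diacritical dots). This heuristic is only an illustrative approximation; it is not the definition of delayed strokes developed in the thesis.

```python
def is_delayed(stroke, earlier_strokes, max_points=8):
    """Guess whether an online stroke is a delayed diacritical mark (illustrative heuristic)."""
    if len(stroke) > max_points:
        return False                       # too large to be a dot or accent
    xs = [x for x, _ in stroke]
    for prev in earlier_strokes:
        px = [x for x, _ in prev]
        if min(px) <= min(xs) and max(xs) <= max(px):
            return True                    # lies horizontally over an earlier stroke
    return False

def remove_delayed(strokes):
    """Keep only strokes not judged to be delayed, preserving writing order."""
    kept = []
    for stroke in strokes:
        if not is_delayed(stroke, kept):
            kept.append(stroke)
    return kept

# Two long body strokes followed by a two-point "dot" above the first one.
word = [[(0, 0), (5, 1), (10, 0)], [(12, 0), (20, 1)], [(2, 4), (3, 4)]]
print(len(remove_delayed(word)))   # 2: the dot is treated as delayed and dropped
```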

    Methoden der lexikalischen Nachkorrektur OCR-erfasster Dokumente

    Machine reading, i.e. the conversion of printed documents via a pixel representation into symbol sequences, is carried out almost error-free for many document classes by today's commercially available OCR engines. Nevertheless, for most OCR applications the maxim holds: the fewer errors, the better. For example, a misrecognized name in a business letter can cause unnecessary costs through misrouting in an automated incoming-mail distribution system. Lexical post-correction helps to detect remaining OCR errors, to correct them, or to eliminate them through interactive correction. Besides being realized as a downstream, external component, lexical post-correction can also be integrated directly into an OCR engine. My contribution to lexical post-correction is organized into ten theses. Thesis T1: For the post-correction of OCR-read specialist texts, lexica extracted from thematically related web documents can be used profitably. Thesis T2: The vocabulary of a specialist text is insufficiently covered by large standard lexica. Through text extraction from thematically related web documents, lexica with a higher coverage rate can be obtained. Moreover, the frequency information from these web documents reflects that of the specialist text better than frequency information from standard corpora. Thesis T3: Automated queries to search engines provide suitable access to the relevant web documents of a subject area. Thesis T4: A fine-grained error classification allows the two main error sources of web-based post-correction to be located: false friends, i.e. errors that remain undetected because they are lexical, and unfortunate corrections towards orthographic or inflectional variants. Thesis T5: False friends are markedly reduced by combining several OCR engines. Thesis T6: Simple heuristics prevent the post-correction component from making unfortunate variant substitutions. Thesis T7: By unifying them into scores, diverse OCR post-correction aids such as word distance measures, frequency information and context information can be combined and used for candidate selection and threshold determination. Thesis T8: OCR post-correction is a multidimensional parameter optimization problem, involving for example the selection of scores, their combination and weighting, threshold determination, and lexicon selection. A graphical interface is well suited for investigating these parameters and tuning them on training data. Thesis T9: The software for optimizing the post-correction parameters for a single OCR engine can be reused for combining several OCR engines by again unifying the individual engine results into scores. Thesis T10: A word-to-word alignment, as required for ground-truth creation and for combining OCR engines, can be realized efficiently by generalizing the Levenshtein distance to the word level.
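
    Thesis T10 above describes a word-to-word alignment obtained by generalizing the Levenshtein distance from characters to word tokens. The sketch below shows one straightforward way such an alignment can be computed; the example sentences are invented and the implementation is not taken from the dissertation.

```python
def word_align(seq_a, seq_b):
    """Return an edit-distance-optimal alignment of two word sequences."""
    n, m = len(seq_a), len(seq_b)
    # dp[i][j] = minimal number of word insertions/deletions/substitutions
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if seq_a[i - 1] == seq_b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete a word from seq_a
                           dp[i][j - 1] + 1,        # insert a word from seq_b
                           dp[i - 1][j - 1] + cost) # keep or substitute
    # Backtrace to recover the aligned word pairs.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (seq_a[i - 1] != seq_b[j - 1]):
            pairs.append((seq_a[i - 1], seq_b[j - 1])); i -= 1; j -= 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            pairs.append((seq_a[i - 1], None)); i -= 1
        else:
            pairs.append((None, seq_b[j - 1])); j -= 1
    return list(reversed(pairs))

print(word_align("der Bnef wurde gesendet".split(),
                 "der Brief wurde gesendet".split()))
```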

    Table recognition in mathematical documents

    While a number of techniques have been developed for table recognition in ordinary text documents, these techniques are often ineffective when dealing with tables in mathematical documents, as tables containing mathematical structures can differ quite significantly from ordinary text tables. In fact, it is even difficult to clearly distinguish table recognition in mathematics from layout analysis of mathematical formulas. Likewise, it is not straightforward to adapt general layout analysis techniques to mathematical formulas. However, a reliable understanding of formula layout is often a necessary prerequisite to further semantic interpretation of the represented formulae. In this thesis, we present the necessary preprocessing steps towards a table recognition technique that specialises in tables in mathematical documents. It is based on our novel, robust line recognition technique for mathematical expressions, which is fully independent of understanding the content or specialist fonts of expressions. We also present a graph representation for complex mathematical table structures. A set of rewriting rules applied to the graph allows for reliable re-composition of cells in order to identify several valid table interpretations. We demonstrate the effectiveness of our technique by applying it to a set of mathematical tables from a standard textbook that have been manually ground-truthed.
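
    To give a concrete flavour of re-composing cells with rewriting rules, the sketch below applies a single rule (merge two horizontally adjacent boxes separated by a small gap) until a fixed point is reached. The rule and the geometry thresholds are illustrative stand-ins for the rule set described in the thesis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cell:
    x0: float
    y0: float
    x1: float
    y1: float   # bounding box corners

def horizontally_adjacent(a, b, max_gap=5.0):
    same_row = not (a.y1 < b.y0 or b.y1 < a.y0)   # vertical overlap
    gap = b.x0 - a.x1
    return same_row and 0 <= gap <= max_gap

def merge(a, b):
    return Cell(min(a.x0, b.x0), min(a.y0, b.y0), max(a.x1, b.x1), max(a.y1, b.y1))

def recompose(cells):
    """Apply the merge rule until no pair of cells matches it."""
    cells = list(cells)
    changed = True
    while changed:
        changed = False
        for i, a in enumerate(cells):
            for j, b in enumerate(cells):
                if i != j and horizontally_adjacent(a, b):
                    cells[i] = merge(a, b)
                    del cells[j]
                    changed = True
                    break
            if changed:
                break
    return cells

boxes = [Cell(0, 0, 10, 5), Cell(12, 0, 30, 5), Cell(0, 10, 30, 15)]
print(len(recompose(boxes)))   # 2: the first two boxes are merged into one cell
```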

    Enhancing manufacturing operations with synthetic data: a systematic framework for data generation, accuracy, and utility

    Addressing the challenges of data scarcity and privacy, synthetic data generation offers an innovative solution that advances manufacturing assembly operations and data analytics. Serving as a viable alternative, it enables manufacturers to leverage a broader and more diverse range of machine learning models by incorporating the creation of artificial data points for training and evaluation. Current methods lack a generalizable framework for researchers to follow when addressing these issues. The development of synthetic data sets, however, can make up for missing samples and enable researchers to understand existing issues within the manufacturing process and create data-driven tools for reducing manufacturing costs. This paper systematically reviews both discrete and continuous manufacturing process data types together with their applicable synthetic generation techniques. The proposed framework entails four main stages: data collection, pre-processing, synthetic data generation, and evaluation. To validate the framework’s efficacy, a case study leveraging synthetic data enabled an exploration of complex defect classification challenges in the packaging process. The results show enhanced prediction accuracy and provide a detailed comparative analysis of various synthetic data strategies. This paper concludes by highlighting our framework’s transformative potential for researchers, educators, and practitioners and provides scalable guidance to solve the data challenges in the current manufacturing sector.
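
    The four-stage framework (data collection, pre-processing, synthetic data generation, and evaluation) can be pictured as a simple pipeline. The sketch below uses Gaussian jitter around existing defect samples as a stand-in generation technique and a trivial threshold model for evaluation; the data, the generator and the metric are illustrative assumptions, not the case study from the paper.

```python
import random
import statistics

def collect():
    # Stage 1: in practice, sensor or inspection data from the assembly line (toy values here).
    return [(4.9, 0), (5.1, 0), (5.0, 0), (7.8, 1), (8.1, 1)]  # (measurement, defect label)

def preprocess(rows):
    # Stage 2: e.g. drop out-of-range readings.
    return [(x, y) for x, y in rows if 0 < x < 100]

def generate(rows, n_per_sample=10, sigma=0.2):
    # Stage 3: oversample the scarce defect class with small Gaussian jitter.
    defects = [(x, y) for x, y in rows if y == 1]
    synthetic = [(random.gauss(x, sigma), y) for x, y in defects for _ in range(n_per_sample)]
    return rows + synthetic

def evaluate(rows):
    # Stage 4: a trivial threshold "model" scored on the augmented data.
    threshold = statistics.mean(x for x, _ in rows)
    correct = sum((x > threshold) == bool(y) for x, y in rows)
    return correct / len(rows)

data = generate(preprocess(collect()))
print(f"{len(data)} samples, accuracy {evaluate(data):.2f}")
```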

    Historical document analysis based on word matching

    Ankara: Department of Computer Engineering and the Institute of Engineering and Science, Bilkent University, 2011. Thesis (Master's), Bilkent University, 2011. Includes bibliographical references (leaves 67-76). Historical documents constitute a heritage which should be preserved, and providing an automatic retrieval and indexing scheme for these archives would be beneficial for researchers from several disciplines and countries. Unfortunately, applying ordinary Optical Character Recognition (OCR) techniques to these documents is nearly impossible, since the documents are degraded and deformed. Recently, word matching methods have been proposed for accessing these documents. In this thesis, two historical document analysis problems, word segmentation in historical documents and Islamic pattern matching in kufic images, are tackled based on word matching. In the first task, a cross-document word matching approach is proposed to segment historical documents into words. A version of a document in which word segmentation is easy is used as the source data set, and another version in a different writing style, which is more difficult to segment into words, is used as the target data set. The source data set is segmented into words by a simple method, and the extracted words are used as queries to be spotted in the target data set. Experiments on an Ottoman data set show that cross-document word matching is a promising method for segmenting historical documents into words. In the second task, lines are first extracted and sub-patterns are automatically detected in the images. Then sub-patterns are matched, based on a line representation, in two ways: by their chain code representation and by their shape contexts. Promising results are obtained for finding the instances of a query pattern and for fully automatic detection of repeating patterns on a square kufic image collection. Arifoğlu, Damla. M.S.
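
    As an illustration of chain-code-based matching of sub-patterns, the sketch below encodes a 4-connected path as a direction string and compares two patterns with a normalised edit distance. This is a simplified stand-in for the chain code and shape context representations used in the thesis.

```python
# Map a unit grid move to one of four direction symbols.
DIRS = {(1, 0): "R", (-1, 0): "L", (0, 1): "U", (0, -1): "D"}

def chain_code(points):
    """Encode a 4-connected path of grid points as a direction string."""
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        code.append(DIRS[(x1 - x0, y1 - y0)])
    return "".join(code)

def edit_distance(a, b):
    """Standard Levenshtein distance with a rolling one-row table."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[len(b)]

def similarity(points_a, points_b):
    ca, cb = chain_code(points_a), chain_code(points_b)
    return 1 - edit_distance(ca, cb) / max(len(ca), len(cb), 1)

query = [(0, 0), (1, 0), (2, 0), (2, 1)]          # R, R, U
candidate = [(5, 5), (6, 5), (7, 5), (7, 6)]      # same shape, translated
print(similarity(query, candidate))                # 1.0
```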