274 research outputs found

    Direct Tensor Voting in Line Segmentation of Handwritten Documents

    In the vast archives and libraries of the world, countless historical documents are tucked away, often difficult to access. Digitization has made these invaluable records easier to view, but digitizing them is not enough: the real challenge lies in making them searchable and computer-readable. Many of these documents are handwritten, which means they must undergo handwriting recognition, and the first step in this process is to divide the document into lines. This article introduces a solution to this problem using tensor voting. The algorithm starts by conducting voting on the binary image itself; then, using the local maxima found in the resulting tensor field, the lines of text are tracked and labeled. The algorithm's performance was tested on the dataset provided by the organizers of the ICDAR 2009 competition and evaluated using the criteria from that contest.
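    The two stages the abstract names, voting on the binary image and then tracking local maxima of the resulting tensor field, can be illustrated with a minimal sketch. The ball-voting formulation, the Gaussian kernel scale sigma, and the saliency measure below are standard textbook choices assumed here, not the authors' exact implementation.

```python
# A minimal sketch of ball tensor voting on the foreground pixels of a
# binary document image. All parameter choices are illustrative.
import numpy as np

def ball_vote(points, sigma=10.0):
    """Accumulate a 2x2 tensor at each point from ball votes cast by all others."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    tensors = np.zeros((n, 2, 2))
    for i in range(n):
        diff = pts - pts[i]                  # vectors from voter i to receivers
        r2 = (diff ** 2).sum(axis=1)
        w = np.exp(-r2 / sigma ** 2)         # Gaussian decay with distance
        for j in range(n):
            if j == i or r2[j] < 1e-9 or w[j] < 1e-6:
                continue                     # skip self-votes and coincident points
            d = diff[j] / np.sqrt(r2[j])     # unit direction voter -> receiver
            # ball vote: penalize the radial direction, reward the tangent
            tensors[j] += w[j] * (np.eye(2) - np.outer(d, d))
    return tensors

def curve_saliency(tensors):
    """lambda1 - lambda2 of each tensor; high values indicate curve points."""
    eigvals = np.linalg.eigvalsh(tensors)    # eigenvalues in ascending order
    return eigvals[:, 1] - eigvals[:, 0]
```

    Pixels with high curve saliency lie along text lines; tracing the local maxima of this saliency map column by column yields the line labels the abstract describes.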

    Optical Character Recognition of Amharic Documents

    Around 2,500 languages are spoken in Africa, and some of them have their own indigenous scripts. Accordingly, there is a bulk of printed documents available in libraries, information centers, museums, and offices. Digitizing these documents makes it possible to harness available information technologies for local information needs and development. This paper presents an Optical Character Recognition (OCR) system for converting digitized documents in local languages. An extensive literature survey reveals that this is the first attempt to report the challenges of recognizing indigenous African scripts, along with a possible solution for the Amharic script. Research in the recognition of African indigenous scripts faces major challenges due to (i) the large number of characters used in the writing and (ii) the existence of a large set of visually similar characters. In this paper, we propose a novel feature extraction scheme using principal component and linear discriminant analysis, followed by a decision directed acyclic graph based support vector machine classifier. Recognition results are presented on real-life degraded documents such as books, magazines, and newspapers to demonstrate the performance of the recognizer.
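    A minimal sketch of the described feature pipeline, assuming scikit-learn: PCA followed by LDA for feature extraction, then a pairwise SVM. Note that scikit-learn's SVC uses one-vs-one voting, whereas the paper's decision directed acyclic graph (DDAG) reuses the same pairwise classifiers but evaluates them along a graph, so this is only an approximation of the classifier.

```python
# Hedged sketch of the PCA + LDA feature extraction with a pairwise SVM.
# The component counts and kernel are illustrative assumptions.
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

recognizer = Pipeline([
    ("pca", PCA(n_components=100)),              # drop noisy dimensions first
    ("lda", LinearDiscriminantAnalysis()),       # maximize between-class scatter
    ("svm", SVC(kernel="rbf", decision_function_shape="ovo")),  # pairwise SVMs
])

# Usage, with X as flattened character images and y as class labels:
# recognizer.fit(X_train, y_train)
# predictions = recognizer.predict(X_test)
```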

    Text Recognition in Multimedia Documents: A Study of two Neural-based OCRs Using and Avoiding Character Segmentation

    Text embedded in multimedia documents carries important semantic information that helps in accessing the content automatically. This paper proposes two neural-based OCRs that handle the text recognition problem in different ways. The first approach segments a text image into individual characters before recognizing them, while the second avoids the segmentation step by integrating a multi-scale scanning scheme that jointly localizes and recognizes characters at each position and scale. Linguistic knowledge is also incorporated into the proposed schemes to remove errors due to recognition confusions. Both OCR systems are applied to caption texts embedded in videos and in natural scene images and provide outstanding results, showing that the proposed approaches outperform state-of-the-art methods.
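    The segmentation-free idea of the second OCR can be sketched as a multi-scale sliding-window scan in which a character classifier scores every position, so localization and recognition happen jointly. The window size, stride, scales, confidence threshold, and the classifier char_net below are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a multi-scale scanning scheme for joint character localization
# and recognition. `char_net` is an assumed callable returning per-class
# probabilities for a fixed-size patch.
import numpy as np
import cv2

def multiscale_scan(image, char_net, win=32, stride=4, scales=(0.75, 1.0, 1.5)):
    detections = []
    for s in scales:
        resized = cv2.resize(image, None, fx=s, fy=s)
        for x in range(0, resized.shape[1] - win + 1, stride):
            # assumes a single caption text row roughly `win` pixels high
            patch = resized[:win, x:x + win]
            probs = char_net(patch)              # per-class probabilities
            c = int(np.argmax(probs))
            if probs[c] > 0.5:                   # keep confident responses
                detections.append((x / s, c, probs[c]))  # x mapped to original scale
    return detections
```

    Overlapping detections from different positions and scales would then be merged, and a language model can reweight confusable characters, which is where the incorporated linguistic knowledge enters.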

    Accuracy Affecting Factors for Optical Handwritten Character Recognition

    Optical character recognition (OCR) refers to a technique that converts images of typed, handwritten, or printed text into machine-encoded text, enabling the automatic processing of paper records such as passports, invoices, medical forms, and receipts. Pattern recognition, artificial intelligence, and computer vision are all research fields that enable OCR. Applying OCR to handwritten text could greatly benefit many emerging information systems by ensuring a smooth transition from paper format to the digital world. Nowadays, OCR has evolved into a multi-step process: segmentation, pre-processing, feature extraction, classification, post-processing, and application-specific optimization. This thesis proposes techniques to improve the overall accuracy of OCR systems by showing the effects of pre-processing, feature extraction, and morphological processing. It also compares the accuracies of different well-known and commonly used classifiers in the field. Using the proposed techniques, an accuracy of over 98% was achieved. A dataset of handwritten Japanese Hiragana characters with considerable variability was also collected as part of this thesis.
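    A hedged sketch of the workflow the thesis describes: preprocessed glyphs go through feature extraction, and several common classifiers are compared on the result. HOG features and this particular trio of classifiers are illustrative stand-ins, not necessarily the thesis's exact choices.

```python
# Sketch of an OCR accuracy comparison: extract features from preprocessed
# character images, then cross-validate several common classifiers.
from skimage.feature import hog
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

def extract_features(images):
    # images: iterable of preprocessed (binarized, size-normalized) glyphs
    return [hog(img, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            for img in images]

classifiers = {
    "kNN": KNeighborsClassifier(n_neighbors=3),
    "SVM": SVC(kernel="rbf"),
    "RandomForest": RandomForestClassifier(n_estimators=200),
}

# Usage, with `hiragana_images` and `labels` as the collected dataset:
# X, y = extract_features(hiragana_images), labels
# for name, clf in classifiers.items():
#     print(name, cross_val_score(clf, X, y, cv=5).mean())
```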

    EFFICIENT IMAGE COMPRESSION AND DECOMPRESSION ALGORITHMS FOR OCR SYSTEMS

    This paper presents efficient new image compression and decompression methods for document images, intended for use in the pre-processing stage of an OCR system designed for the needs of the “Nikola Tesla Museum” in Belgrade. The proposed compression methods exploit the Run-Length Encoding (RLE) algorithm and an algorithm based on document character contour extraction, while an iterative scanline fill algorithm is used for decompression. The methods are compared with the JBIG2 and JPEG2000 image compression standards. Segmentation accuracy results on ground-truth documents are reported to evaluate the proposed methods. The results show that the proposed methods outperform JBIG2 in time complexity, providing up to 25 times lower processing time at the expense of a worse compression ratio, and outperform the JPEG2000 standard with up to a 4-fold improvement in compression ratio. Finally, the time complexity results show that the presented methods are fast enough for a real-time character segmentation system.
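    Run-length encoding, the first of the two proposed compression methods, is simple enough to sketch per scanline. The (value, run_length) representation below is a generic formulation; the abstract does not specify the exact byte layout used by the museum's system.

```python
# Minimal run-length encoding and decoding of one binary scanline.
import numpy as np

def rle_encode_row(row):
    """Encode one binary scanline as (value, run_length) pairs."""
    runs, start = [], 0
    for i in range(1, len(row) + 1):
        if i == len(row) or row[i] != row[start]:
            runs.append((int(row[start]), i - start))
            start = i
    return runs

def rle_decode_row(runs):
    """Expand (value, run_length) pairs back into the original scanline."""
    return np.concatenate([np.full(n, v, dtype=np.uint8) for v, n in runs])

row = np.array([0, 0, 0, 1, 1, 0, 1, 1, 1, 1], dtype=np.uint8)
assert (rle_decode_row(rle_encode_row(row)) == row).all()
```

    Binary document pages, which are dominated by long white runs, compress well under this scheme, and both directions are a single linear pass, which is consistent with the reported speed advantage over JBIG2.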

    Document image analysis and recognition: a survey

    This paper analyzes the problems of document image recognition and the existing solutions. Document recognition algorithms have been studied for a long time, yet the topic remains relevant and research continues, as evidenced by the large number of associated publications and reviews. However, most of these works and reviews are devoted to individual recognition tasks. This review considers the entire set of methods, approaches, and algorithms necessary for document recognition. A preliminary systematization allowed us to distinguish groups of methods for extracting information from documents of different types: single-page and multi-page, with printed and handwritten content, with a fixed template or a flexible structure, and digitized in different ways: by scanning, photographing, or video recording. We consider methods of document recognition and analysis applied to a wide range of tasks: identification and verification of identity, due diligence, machine learning algorithms, questionnaires, and audits. The groups of methods necessary for recognizing a single page image are examined: classical computer vision algorithms (keypoints, local feature descriptors, Fast Hough Transforms, image binarization) and modern neural network models for document boundary detection, document classification, and document structure analysis (text block and table localization), together with the extraction and recognition of details and the post-processing of recognition results. The review also describes publicly available experimental data packages for training and testing recognition algorithms, as well as methods for optimizing the performance of document image analysis and recognition. The reported study was funded by RFBR, project number 20-17-50177. The authors thank Sc. D. Vladimir L. Arlazarov (FRC CSC RAS), Pavel Bezmaternykh (FRC CSC RAS), Elena Limonova (FRC CSC RAS), Ph. D. Dmitry Polevoy (FRC CSC RAS), Daniil Tropin (LLC “Smart Engines Service”), Yuliya Chernysheva (LLC “Smart Engines Service”), and Yuliya Shemyakina (LLC “Smart Engines Service”) for valuable comments and suggestions.
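    Among the classical building blocks the survey lists, image binarization is the easiest to illustrate. Otsu's global threshold, shown here with OpenCV, is a common baseline; the surveyed systems may well use adaptive or learned binarization instead, and the file name is illustrative.

```python
# Baseline document image binarization with Otsu's global threshold.
import cv2

gray = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method picks the threshold that minimizes intra-class variance,
# separating ink from paper without a hand-tuned cutoff.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```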

    Predicting semantic labels of text regions in heterogeneous document images

    KONVENS 2019: 15th Conference on Natural Language Processing, Erlangen, Germany, October 9-11, 2019