7 research outputs found

    Kannada Character Recognition System: A Review

    Full text link
    Intensive research has been done on optical character recognition (OCR), and a large number of articles have been published on the topic during the last few decades. Many commercial OCR systems are now available in the market, but most of them work for Roman, Chinese, Japanese, and Arabic characters. There is no sufficient body of work on Indian-language character recognition, especially for the Kannada script, one of the 12 major scripts of India. This paper presents a review of existing work on printed Kannada script and its results. The characteristics of the Kannada script and the Kannada Character Recognition (KCR) system are discussed in detail. Finally, fusion at the classifier level is proposed to increase the recognition accuracy. Comment: 12 pages, 8 figures
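The classifier-level fusion the review proposes can be sketched, under simple assumptions, as a majority vote over the labels emitted by several character classifiers. The function name and the tie-breaking rule here are illustrative, not taken from the paper:

```python
# Hypothetical sketch of fusion at the classifier level: combine the
# predictions of several character classifiers by majority vote, and
# fall back to the single most confident classifier on ties.
from collections import Counter

def fuse_predictions(predictions, confidences):
    """predictions: list of labels, one per classifier.
    confidences: matching list of confidence scores in [0, 1]."""
    counts = Counter(predictions)
    top_label, top_count = counts.most_common(1)[0]
    # If exactly one label holds the top count, the majority wins.
    if sum(1 for c in counts.values() if c == top_count) == 1:
        return top_label
    # On a tie, trust the single most confident classifier.
    best = max(range(len(predictions)), key=lambda i: confidences[i])
    return predictions[best]

# Three classifiers disagree 2-vs-1: the majority label wins.
print(fuse_predictions(["ka", "ka", "kha"], [0.6, 0.7, 0.9]))  # ka
```

More elaborate fusion schemes (weighted voting, Borda counts, probability averaging) follow the same pattern of combining per-classifier outputs into one decision.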

    Finding Similarities between Structured Documents as a Crucial Stage for Generic Structured Document Classifier

    Get PDF
    One of the open problems in classifying structured documents is the definition of a similarity measure that is applicable in real situations, where query documents are allowed to differ from the database templates. Test sets may contain rotated [1], noise-corrupted [2], or manually edited forms and documents produced under different schemes, which makes direct comparison a crucial issue [3]. Another problem is that a huge number of forms may be written in different languages; in Malaysia, for example, forms may be written in Malay, Chinese, English, and other languages. In that case text recognition (such as OCR) cannot be applied to classify the requested documents, even though OCR is generally considered easier and more accurate than layout detection. Keywords: Feature Extraction, Document Processing, Document Classification

    Joint Layout Analysis, Character Detection and Recognition for Historical Document Digitization

    Full text link
    In this paper, we propose an end-to-end trainable framework for restoring the content of historical documents in the correct reading order. In this framework, two branches, a character branch and a layout branch, are added behind the feature extraction network. The character branch localizes individual characters in a document image and recognizes them simultaneously; a post-processing method then groups them into text lines. The layout branch, based on a fully convolutional network, outputs a binary mask. We then use the Hough transform for line detection on the binary mask and combine the character results with the layout information to restore the document content. The two branches can be trained in parallel and are easy to train. Furthermore, we propose a re-scoring mechanism to reduce recognition error. Experimental results on the extended Chinese historical document dataset MTHv2 demonstrate the effectiveness of the proposed framework. Comment: 6 pages, 6 figures
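The Hough-transform line-detection step on a binary mask can be illustrated with a minimal NumPy voting scheme. This is unrelated to the authors' implementation; the mask size and angle resolution are arbitrary:

```python
import numpy as np

def hough_lines(mask, n_theta=180):
    """Return the (rho, theta_degrees) bin with the most votes
    for the foreground pixels of a binary mask."""
    ys, xs = np.nonzero(mask)
    thetas = np.deg2rad(np.arange(n_theta))          # 0..179 degrees
    diag = int(np.ceil(np.hypot(*mask.shape)))       # max possible |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in zip(xs, ys):
        # Each foreground pixel votes for every line passing through it.
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
    return rho_idx - diag, np.rad2deg(thetas[theta_idx])

mask = np.zeros((50, 50), dtype=bool)
mask[20, :] = True                 # horizontal line at y = 20
rho, theta = hough_lines(mask)
print(rho, theta)                  # 20 90.0  (rho = y, theta = 90 degrees)
```

A horizontal line satisfies x·cos 90° + y·sin 90° = y, so all 50 pixels vote into the single bin (rho = 20, theta = 90°), which wins the accumulator.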

    Automatic Extraction of Attributes from Printed Indian Cheque Images by Template Matching Technique

    Get PDF
    The Reserve Bank of India (RBI) has introduced the Cheque Truncation System (CTS) for Indian banks in order to reduce the time required for the physical movement of cheques between clearance departments. However, other processes, including database entry and verification, are still carried out manually. The proposal here is to eliminate this manual intervention by extracting the attributes from the input cheque image and updating the database automatically, which would significantly reduce the time spent entering data into the database. Automatic database updating also contributes to secure data retrieval through a querying system for verification of attributes by the concerned banks. In this paper, a novel approach to extract printed attributes from Indian bank cheque images based on their template structures is proposed. The template structure is determined by extracting the MICR code from the input cheque image. The important attribute regions are then segmented, and the printed data is recognized. Extensive experiments demonstrate the efficacy of the proposed method.
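Template matching of the kind used to locate printed attribute regions is commonly implemented as normalized cross-correlation between a template and every window of the image. The following is a small, illustrative NumPy sketch, not the paper's actual method:

```python
import numpy as np

def match_template(image, template):
    """Return (row, col) of the best match by normalized cross-correlation."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum() * (t ** 2).sum())
            # Flat (zero-variance) windows cannot match; score them 0.
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

img = np.zeros((12, 12))
img[4:7, 5:8] = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
tmpl = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=float)
print(match_template(img, tmpl))   # (4, 5)
```

Production systems would use an optimized routine (e.g. OpenCV's `cv2.matchTemplate`) rather than this nested loop, but the correlation criterion is the same.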

    Skew Correction For Mushaf Al-Quran: A Review

    Get PDF
    Skew correction has been studied extensively in recent years. However, these studies address Arabic scripts far less than other languages, and different scripts of the Arabic language are in use. The Mushaf Al-Quran is the book of Allah (swt) and is used by many people around the world, so skew correction of its pages needs to be studied carefully. During the scanning of the pages of the Mushaf Al-Quran, and due to some other factors, skewed images are produced, which affects the sanctity of the Mushaf Al-Quran. A major difficulty is detecting the skew and correcting it within the page. Therefore, this paper surveys the most widely used skew correction techniques for different scripts as cited in the literature. The findings can serve as a basis for researchers interested in image processing, image analysis, and computer vision.
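Once a skew angle has been detected, correction amounts to rotating the page image back by that angle. A minimal nearest-neighbour rotation by inverse mapping might look as follows; this is an illustrative sketch, not drawn from the review:

```python
import numpy as np

def rotate_image(img, angle_deg):
    """Rotate img about its centre by angle_deg (nearest-neighbour)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    a = np.deg2rad(angle_deg)
    out = np.zeros_like(img)
    rows, cols = np.indices(img.shape)
    # Inverse mapping: for each destination pixel, sample the source pixel
    # obtained by rotating its coordinates the opposite way.
    src_r = np.cos(a) * (rows - cy) + np.sin(a) * (cols - cx) + cy
    src_c = -np.sin(a) * (rows - cy) + np.cos(a) * (cols - cx) + cx
    src_r = np.round(src_r).astype(int)
    src_c = np.round(src_c).astype(int)
    ok = (src_r >= 0) & (src_r < h) & (src_c >= 0) & (src_c < w)
    out[rows[ok], cols[ok]] = img[src_r[ok], src_c[ok]]
    return out

img = np.zeros((5, 5), dtype=int)
img[2, :] = 1                      # horizontal stripe through the centre
rot = rotate_image(img, 90)        # becomes a vertical stripe
print(rot[:, 2])                   # [1 1 1 1 1]
```

Real systems would use bilinear or bicubic interpolation (e.g. `cv2.warpAffine`) to avoid the aliasing that nearest-neighbour sampling introduces.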

    Skew Detection in Document Images

    Get PDF
    The digitization of documents contributes to the preservation of information by preventing its loss through the physical degradation of paper. Today, automatic document image recognition systems are used to convert the information contained in images into editable text, quickly and without human involvement, making that information searchable, for example by keywords. Skew is a frequent problem in such systems and is generally introduced during digitization, when the paper is placed at a non-zero angle with respect to the scanner axis. In handwritten documents, skew can also arise during the writing of the document itself, especially when the writer has no ruled line as a guide. Skew correction is essential for the good performance of automatic recognition systems. This work addresses the problem of skew detection in printed and handwritten documents, reviewing the main skew detection methods published in the literature to date. The principal techniques are presented by category, and the advantages and limitations of each method are discussed.
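One of the classic families such a review covers is the projection-profile method: rotate the foreground pixels by each candidate angle and keep the angle whose row histogram is most sharply peaked (text lines collapse into narrow peaks when the rotation cancels the skew). A hedged NumPy sketch, with an arbitrary angle range and synthetic test data:

```python
import numpy as np

def detect_skew(mask, angles=np.arange(-10, 10.5, 0.5)):
    """Estimate skew (degrees) of a binary text mask by maximising the
    peakedness (sum of squares) of the horizontal projection profile."""
    ys, xs = np.nonzero(mask)
    best_angle, best_score = 0.0, -1.0
    for a in angles:
        t = np.deg2rad(a)
        # Row coordinate of each foreground pixel after rotation by -a.
        rows = np.round(ys * np.cos(t) - xs * np.sin(t)).astype(int)
        profile = np.bincount(rows - rows.min()).astype(np.int64)
        score = float((profile ** 2).sum())  # peaked profiles score high
        if score > best_score:
            best_angle, best_score = float(a), score
    return best_angle

# Synthetic page: three text lines skewed by about 5 degrees.
mask = np.zeros((100, 100), dtype=bool)
xs = np.arange(100)
for base in (20, 50, 80):
    ys = np.round(base + xs * np.tan(np.deg2rad(5))).astype(int)
    mask[ys, xs] = True
print(detect_skew(mask))
```

At the true skew the three lines fall into a few histogram bins each, so the sum-of-squares criterion peaks there; pixel rounding limits the achievable resolution to roughly the bin size.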

    Vision Based Extraction of Nutrition Information from Skewed Nutrition Labels

    Get PDF
    An important component of a healthy diet is the comprehension and retention of nutritional information and an understanding of how different food items and nutritional constituents affect our bodies. In the U.S. and many other countries, nutritional information is conveyed to consumers primarily through the nutrition labels (NLs) found on all packaged food products. However, it can be challenging to make use of all the information in these NLs, even for health-conscious consumers, who may be unfamiliar with nutritional terms or find it difficult to integrate nutritional data collection into their daily activities for lack of time, motivation, or training. It is therefore desirable to automate this data collection and interpretation process with computer-vision algorithms that extract nutritional information from NLs, as this improves the user's ability to engage in continuous nutritional data collection and analysis. To make nutritional data collection more manageable and enjoyable for users, we present a Proactive NUTrition Management System (PNUTS). PNUTS seeks to shift current research and clinical practice in nutrition management toward persuasion, automated nutritional information processing, and context-sensitive nutrition decision support. PNUTS consists of two modules. The first is a barcode scanning module that runs on smartphones and is capable of vision-based localization of one-dimensional (1D) Universal Product Code (UPC) and International Article Number (EAN) barcodes with relaxed pitch, roll, and yaw camera-alignment constraints. The algorithm localizes barcodes in images by computing Dominant Orientations of Gradients (DOGs) of image segments and grouping smaller segments with similar DOGs into larger connected components. Connected components that pass given morphological criteria are marked as potential barcodes. The algorithm is implemented in a distributed, cloud-based system.
The system's front end is a smartphone application that runs on Android smartphones with Android 4.2 or higher. The system's back end is deployed on a five-node Linux cluster where the images are processed. The algorithm was evaluated on a corpus of 7,545 images extracted from 506 videos of bags, bottles, boxes, and cans in a supermarket. The DOG algorithm was coupled to our in-place scanner for 1D UPC and EAN barcodes. The scanner receives from the DOG algorithm the rectangular planar dimensions of a connected component and the component's dominant gradient orientation angle, referred to as the skew angle. The scanner draws several scan lines at that skew angle within the component to recognize the barcode in place, without any rotations. The scanner coupled to the localizer was tested on the same corpus of 7,545 images. Laboratory experiments indicate that the system can localize and scan barcodes of any orientation in the yaw plane, of up to 73.28 degrees in the pitch plane, and of up to 55.5 degrees in the roll plane. The videos have been made public so that interested research communities can replicate our findings or use them in their own research. The front-end Android application is available for free download on Google Play under the title NutriGlass. This module is also coupled to a comprehensive NL database from which nutritional information can be retrieved on demand; the database currently contains more than 230,000 products. The second module of PNUTS is an algorithm that determines the text skew angle of an NL image without constraining the angle's magnitude. The horizontal, vertical, and diagonal matrices of the two-dimensional (2D) Haar wavelet transform are used to identify 2D points with significant intensity changes. The set of points is bounded with a minimum-area rectangle whose rotation angle is the text's skew.
The algorithm's performance is compared with that of five text-skew detection algorithms on 1,001 U.S. nutrition label images and 2,200 single- and multi-column document images in multiple languages. To ensure the reproducibility of the reported results, the source code of the algorithm and the image data have been made publicly available. Once the skew angle is estimated correctly, optical character recognition (OCR) techniques can be used to extract the nutrition information.
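The final step, recovering a skew angle from a cloud of significant 2D points, can be approximated more simply than with the paper's minimum-area rectangle by taking the principal axis of the point cloud. This PCA stand-in is illustrative only and is not the authors' algorithm:

```python
import numpy as np

def skew_from_points(points):
    """Estimate the dominant orientation (degrees, in (-90, 90]) of a
    2D point cloud via PCA; a simplified stand-in for the paper's
    minimum-area-rectangle step."""
    pts = points - points.mean(axis=0)
    # Principal axis = eigenvector of the covariance matrix with the
    # largest eigenvalue.
    cov = pts.T @ pts
    w, v = np.linalg.eigh(cov)
    axis = v[:, np.argmax(w)]
    ang = np.degrees(np.arctan2(axis[1], axis[0])) % 180.0
    return ang if ang <= 90.0 else ang - 180.0

# Points scattered tightly along a line rotated 30 degrees from the x-axis.
rng = np.random.default_rng(0)
t = rng.uniform(-50, 50, 400)
theta = np.deg2rad(30)
pts = np.column_stack([t * np.cos(theta), t * np.sin(theta)])
pts += rng.normal(0, 0.5, pts.shape)       # small perpendicular jitter
print(skew_from_points(pts))
```

PCA assumes one dominant elongation direction, which holds for the strongly anisotropic point sets that text edges produce; the minimum-area rectangle is more robust when outlier points stretch the cloud off-axis.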