10 research outputs found

    Design and Implementation Recognition System for Handwritten Hindi/Marathi Document

    In the present scenario, great importance is given to the "paperless office", whereby more and more communication and storage of documents is performed digitally. Documents and files in Hindi and Marathi that were once stored physically on paper are now being converted into electronic form to facilitate quicker additions, searches, and modifications, as well as to prolong the life of such records. Because of this, there is a great demand for software that automatically extracts, analyzes, recognizes, and stores information from physical documents for later retrieval. Skew detection is used for text line position determination in digitized documents, automated page orientation, skew angle detection for binary document images, skew detection in handwritten scripts, skew compensation for Internet audio applications, and the correction of scanned documents.
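    A minimal sketch of one classical skew-detection idea for binary document images is given below; it only illustrates the general projection-profile technique, not the system described in this work, and the angle range and step are assumptions.

```python
# Sketch: rotate a binary page over a small range of angles and keep the angle
# that maximises the variance of the horizontal projection profile.
# Illustrative only; parameters are guesses, not this system's settings.
import numpy as np
from scipy import ndimage

def estimate_skew(binary, angle_range=5.0, step=0.1):
    """binary: 2-D array, True where ink is present. Returns an angle in degrees."""
    best_angle, best_score = 0.0, -1.0
    for angle in np.arange(-angle_range, angle_range + step, step):
        rotated = ndimage.rotate(binary.astype(float), angle, reshape=False, order=0)
        profile = rotated.sum(axis=1)          # horizontal projection profile
        score = profile.var()                  # sharp text lines -> high variance
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle
```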

    An Unsupervised Classification Technique for Detection of Flipped Orientations in Document Images

    Detection of text orientation in document images is of preliminary concern prior to processing of documents by an Optical Character Reader. The text in document images should generally lie in a specific orientation, i.e., the standard upright reading direction, for any automated document reading system. A flipped text orientation leads to ambiguous results in such fully automated systems. In this paper, we focus on the development of a text orientation detection module which can be incorporated as a prerequisite process in an automatic reading system. Orientation detection is performed by employing directional gradient features of the document image and adopts an unsupervised learning approach to detect the flipped orientation at which the document was originally fed into the scanning device. The unsupervised learning is built on the directional gradient features of the document text over four possible orientations. The algorithm is experimented on document samples of printed plain English text as well as filled-in pre-printed forms in Telugu script. The outcome attained by the algorithm proves to be consistent and adequate, with an average accuracy of around 94%.
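    The directional-gradient-plus-clustering idea can be sketched roughly as follows; this is a simplified illustration under assumed settings (histogram bin count, k-means as the unsupervised learner), not the paper's exact pipeline.

```python
# Sketch: directional-gradient features per page, grouped without labels into
# four orientation clusters. Illustrative only; not the paper's exact method.
import numpy as np
from sklearn.cluster import KMeans

def gradient_direction_histogram(gray, bins=16):
    """Histogram of gradient directions over a grayscale page image."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                       # angles in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)              # normalise to a distribution

def cluster_orientations(page_images, n_orientations=4):
    """Group pages into orientation clusters (nominally 0, 90, 180, 270 degrees)."""
    feats = np.stack([gradient_direction_histogram(img) for img in page_images])
    km = KMeans(n_clusters=n_orientations, n_init=10, random_state=0).fit(feats)
    return km.labels_
```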

    User-driven Page Layout Analysis of historical printed Books

    In this paper, based on a study of the specificities of historical printed books, we first explain the main error sources in classical methods used for page layout analysis. We show that each class of method (bottom-up and top-down) provides different types of useful information that should not be ignored if we want to obtain both a generic method and good segmentation results. Next, we propose a hybrid segmentation algorithm that builds two maps: a shape map that focuses on connected components and a background map that provides information about white areas corresponding to block separations in the page. Using this first segmentation, a classification of the extracted blocks can be achieved according to scenarios produced by the user. These scenarios are defined very simply during an interactive stage. The user is able to build processing sequences adapted to the different kinds of images they are likely to meet, according to their needs. The proposed "user-driven approach" is capable of efficiently segmenting and labelling the user's high-level concepts and has achieved above 93% accuracy over the different data sets tested. User feedback and experimental results demonstrate the effectiveness and usability of our framework, mainly because the extraction rules can be defined without difficulty and the parameters are not sensitive to page layout variation.
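    The two-map idea (a shape map from connected components and a background map from wide white separators) can be sketched as below; the distance threshold is an assumption and the snippet only illustrates the structure, not the authors' algorithm.

```python
# Sketch of the two-map idea: a shape map from connected components and a
# background map from large white areas. Illustrative; the threshold is a guess.
import numpy as np
from scipy import ndimage

def shape_and_background_maps(binary, min_white_gap=25):
    """binary: 2-D array, True where ink is present."""
    # Shape map: label each connected component of ink.
    shape_map, n_components = ndimage.label(binary)

    # Background map: white pixels far from any ink are likely block separators.
    dist_to_ink = ndimage.distance_transform_edt(~binary)
    background_map = dist_to_ink > min_white_gap
    return shape_map, background_map
```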

    Detecção de Inclinação em Imagens de Documentos

    Document digitisation contributes to the preservation of information by preventing its loss due to the physical degradation of paper. Nowadays, automatic document image recognition systems are employed to convert the information contained in images into editable text automatically, quickly, and without requiring a person to be present, thus making that information searchable through keywords, for example. Skew is a frequent problem in these systems and is generally introduced during digitisation, when the paper is placed at an angle other than zero degrees with respect to the scanner axis. In the case of handwritten documents, skew can also arise while the document itself is being written, especially when the writer has no ruling line as a guide. Skew correction is essential for the good performance of automatic recognition systems. This work addresses the problem of skew detection in printed and handwritten documents, providing a review of the main skew detection methods published in the literature to date. The main techniques are presented in a categorised way, and the advantages and limitations of each method are discussed.

    Approches géométriques pour la détection de l'angle d'orientation d'un texte


    Information Preserving Processing of Noisy Handwritten Document Images

    Many pre-processing techniques that normalize artifacts and clean noise induce anomalies due to discretization of the document image. Important information that could be used at later stages may be lost. A proposed composite-model framework takes into account pre-printed information, user-added data, and digitization characteristics. Its benefits are demonstrated by experiments with statistically significant results. Separating pre-printed ruling lines from user-added handwriting shows how ruling lines impact people's handwriting and how they can be exploited for identifying writers. Ruling line detection based on multi-line linear regression reduces the mean error of counting them from 0.10 to 0.03, 6.70 to 0.06, and 0.13 to 0.02, compared to an HMM-based approach on three standard test datasets, thereby reducing human correction time by 50%, 83%, and 72% on average. On 61 page images from 16 rule-form templates, the precision and recall of form cell recognition are increased by 2.7% and 3.7%, compared to a cross-matrix approach. Compensating for and exploiting ruling lines during feature extraction rather than pre-processing raises the writer identification accuracy from 61.2% to 67.7% on a 61-writer noisy Arabic dataset. Similarly, counteracting page-wise skew by subtracting it or transforming contours in a continuous coordinate system during feature extraction improves the writer identification accuracy. An implementation study of contour-hinge features reveals that utilizing the full probability distribution function matrix improves the writer identification accuracy from 74.9% to 79.5%.
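    The multi-line linear-regression idea for ruling line detection can be illustrated with a least-squares fit over the pixels of one candidate line; the grouping of pixels into candidates and the residual threshold below are assumptions, not the dissertation's settings.

```python
# Minimal sketch: fit a candidate ruling line by least-squares regression and
# keep it only if the fit is tight. Not the dissertation's method.
import numpy as np

def fit_ruling_line(xs, ys, max_residual=1.5):
    """xs, ys: pixel coordinates of one candidate line.
    Returns (slope, intercept) or None if the points are too scattered."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    slope, intercept = np.polyfit(xs, ys, deg=1)
    residuals = ys - (slope * xs + intercept)
    if np.abs(residuals).mean() > max_residual:   # too wobbly to be a ruling line
        return None
    return slope, intercept
```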

    Article Segmentation in Digitised Newspapers

    Digitisation projects preserve and make available vast quantities of historical text. Among these, newspapers are an invaluable resource for the study of human culture and history. Article segmentation identifies each region in a digitised newspaper page that contains an article. Digital humanities, information retrieval (IR), and natural language processing (NLP) applications over digitised archives improve access to text and allow automatic information extraction. The lack of article segmentation impedes these applications. We contribute a thorough review of the existing approaches to article segmentation. Our analysis reveals divergent interpretations of the task, and inconsistent and often ambiguously defined evaluation metrics, making comparisons between systems challenging. We solve these issues by contributing a detailed task definition that examines the nuances and intricacies of article segmentation that are not immediately apparent. We provide practical guidelines on handling borderline cases and devise a new evaluation framework that allows insightful comparison of existing and future approaches. Our review also reveals that the lack of large datasets hinders meaningful evaluation and limits machine learning approaches. We solve these problems by contributing a distant supervision method for generating large datasets for article segmentation. We manually annotate a portion of our dataset and show that our method produces article segmentations over characters nearly as well as costly human annotators. We reimplement the seminal textual approach to article segmentation (Aiello and Pegoretti, 2006) and show that it does not generalise well when evaluated on a large dataset. We contribute a framework for textual article segmentation that divides the task into two distinct phases: block representation and clustering. We propose several techniques for block representation and contribute a novel highly-compressed semantic representation called similarity embeddings. We evaluate and compare different clustering techniques, and innovatively apply label propagation (Zhu and Ghahramani, 2002) to spread headline labels to similar blocks. Our similarity embeddings and label propagation approach substantially outperforms Aiello and Pegoretti but still falls short of human performance. Exploring visual approaches to article segmentation, we reimplement and analyse the state-of-the-art Bansal et al. (2014) approach. We contribute an innovative 2D Markov model approach that captures reading order dependencies and reduces the structured labelling problem to a Markov chain that we decode with Viterbi (1967). Our approach substantially outperforms Bansal et al., achieves accuracy as good as human annotators, and establishes a new state of the art in article segmentation. Our task definition, evaluation framework, and distant supervision dataset will encourage progress in the task of article segmentation. Our state-of-the-art textual and visual approaches will allow sophisticated IR and NLP applications over digitised newspaper archives, supporting research in the digital humanities
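    The label-propagation step described above can be sketched as a simple iterative vote over a block-similarity graph, with headline blocks as fixed seeds; this is a generic illustration, not the thesis's implementation of Zhu and Ghahramani (2002).

```python
# Sketch: headline blocks start with known article labels; labels spread to
# similar text blocks through a similarity graph until assignments stabilise.
# Illustrative only; the similarity function is an assumption.
import numpy as np

def propagate_labels(similarity, seed_labels, n_iter=20):
    """similarity: (n, n) matrix of block similarities; seed_labels: list where
    seeded (headline) blocks hold an article id and other blocks hold None."""
    labels = list(seed_labels)
    for _ in range(n_iter):
        changed = False
        for i, current in enumerate(labels):
            if seed_labels[i] is not None:
                continue                          # seeds stay fixed
            # Vote among already-labelled blocks, weighted by similarity.
            votes = {}
            for j, other in enumerate(labels):
                if other is not None and j != i:
                    votes[other] = votes.get(other, 0.0) + similarity[i, j]
            if votes:
                best = max(votes, key=votes.get)
                if best != current:
                    labels[i] = best
                    changed = True
        if not changed:
            break
    return labels
```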

    Fuzzy machine vision based inspection

    Machine vision systems have been developed to solve many practical problems in various fields. Their role in achieving superior quality and productivity is of paramount importance, but for such systems to be attractive they need to be fast, accurate, and cost-effective. This dissertation is based on a number of practical machine vision based inspection projects obtained from the automotive industry. It presents a collection of efficient fuzzy machine vision approaches endorsed with experimental results, and covers the conceptual design, development, and testing of various fuzzy machine vision based inspection approaches for different industrial applications. To assist in developing and evaluating the performance of the proposed approaches, several parts are tested under varying lighting conditions. This research deals with two important aspects of machine vision based inspection. The first part concentrates on component detection and component orientation identification. The components used in this part are metal clips mounted on a dash panel frame that is installed in the doors of trucks. We therefore propose a fuzzy machine vision based clip detection model and a fuzzy machine vision based clip orientation identification model to inspect the proper placement of clips on dash panels. Both models are efficient and fast in terms of accuracy and processing time. The second part of the research deals with machined part defects such as broken edges, porosity, and tool marks. These defects occur on the surface of die-cast aluminum automotive pump housings. As a result, an automated fuzzy machine vision based broken edge detection method, an efficient fuzzy machine vision based porosity detection technique, and a neuro-fuzzy part classification model based on tool marks are developed. Computational results show that the proposed approaches are effective in yielding satisfactory results on the tested image databases. There are four main contributions in this work. The first is the development of the concept of composite matrices in conjunction with an XOR feature extractor using fuzzy subtractive clustering for clip detection. The second is a proposed model based on grouping and counting pixels in pre-selected areas, which tracks pixel colors in separate RGB channels to determine whether the orientation of the clip is acceptable or not. The construction of three novel edge-based features embedded in fuzzy C-means clustering for broken edge detection marks the third contribution. Finally, the fourth contribution presents the concept of the core of porosity candidates and its correlation with twelve developed matrices, which in turn results in the development of five different features used in our fuzzy machine vision based porosity detection approach.
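    The fuzzy C-means clustering used in the broken-edge detection contribution can be illustrated with a generic, textbook implementation; the edge-based features fed into it are not reproduced here.

```python
# Generic fuzzy C-means on arbitrary feature vectors. Textbook sketch only;
# not the dissertation's edge features or parameter choices.
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """X: (n_samples, n_features). Returns (membership matrix U, cluster centers)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)             # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        new_U = 1.0 / (dist ** (2.0 / (m - 1.0)))
        new_U /= new_U.sum(axis=1, keepdims=True)
        if np.abs(new_U - U).max() < tol:
            U = new_U
            break
        U = new_U
    return U, centers
```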

    Contributions au tri automatique de documents et de courrier d'entreprises

    This thesis deals with the development of industrial vision systems for the automatic sorting of business documents and mail. Such systems require very fast processing and high accuracy and precision, and current systems are mostly built from sequential modules that demand fast, efficient algorithms throughout the processing line, from low-level to high-level stages of analysis and content recognition. The existing architectures, whose specificities we survey in the first three chapters of the thesis, show weaknesses that manifest as reading errors and rejections still too often blamed on the OCR. Yet the modules responsible for these rejections and reading errors are among the first to act in the process: image segmentation and the location of regions of interest, two interdependent steps that are fundamental for system performance and the efficiency of automatic sorting lines. In this thesis, we therefore focus on the segmentation of mail images and the location of their relevant zones (such as the address block) by investing in a new pyramidal modelling approach based on hierarchical graph coloring; to date, graph coloring had never been exploited in such a context. It intervenes in our contribution at every stage of document layout analysis as well as in the decision tasks for recognition (recognition of the type of document to process and recognition of the address block). The recognition stage relies on a training process built around a single graph b-coloring model. Our architecture is designed to carry out the layout analysis and recognition stages while guaranteeing real cooperation between the different analysis and decision modules. It is composed of three main parts: low-level segmentation (binarisation and connected component labeling), physical layout extraction by hierarchical graph coloring, and address block location together with document classification. The algorithms involved in the system were designed for execution speed (in line with real-time constraints), robustness, and compatibility. The experiments carried out in this context are very encouraging and also open new perspectives towards a greater diversity of document images.
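    To illustrate the kind of structure a graph-coloring layout model operates on, a generic greedy coloring of a block-adjacency graph is sketched below; it is not the hierarchical b-coloring algorithm developed in the thesis.

```python
# Generic greedy coloring of a block-adjacency graph. This is NOT the
# hierarchical b-coloring algorithm of the thesis; it only shows the structure.
def greedy_coloring(adjacency):
    """adjacency: dict mapping a block id to the set of neighbouring block ids.
    Returns a dict block id -> color index, with neighbours never sharing a color."""
    colors = {}
    for node in sorted(adjacency, key=lambda n: -len(adjacency[n])):  # densest first
        used = {colors[nb] for nb in adjacency[node] if nb in colors}
        color = 0
        while color in used:
            color += 1
        colors[node] = color
    return colors

# Example: four blocks where block 0 touches 1 and 2, and block 3 touches only 2.
blocks = {0: {1, 2}, 1: {0}, 2: {0, 3}, 3: {2}}
print(greedy_coloring(blocks))
```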

    Évaluation de la qualité des documents anciens numérisés

    The research presented in this manuscript describes several contributions to the topic of quality evaluation of digitised document images. We propose new descriptors that quantify the degradations most commonly encountered in digitised document images, together with a methodology that uses these descriptors to predict the performance of document image processing and analysis algorithms. The descriptors are defined by analysing the influence of each degradation on the performance of different algorithms, and are then used to build prediction models with statistical regressors. The relevance of the proposed descriptors and of the prediction methodology is validated in several ways: first, by predicting the performance of eleven binarisation algorithms; second, by building an automatic procedure that selects the best-performing binarisation algorithm for each image; and finally, by predicting the performance of two OCRs as a function of the severity of bleed-through (ink from the recto showing through on the verso of a document). This work on predicting algorithm performance is also an opportunity to address the scientific problems of creating ground truth and evaluating performance.
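    The prediction methodology can be sketched as fitting a statistical regressor from per-image degradation descriptors to an algorithm's quality score; the descriptor names, toy values, and choice of regressor below are assumptions rather than the thesis's actual setup.

```python
# Sketch: map per-image degradation descriptors to a binarisation quality score
# (e.g. F-measure) with a statistical regressor. Values are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: descriptors for one image, e.g. [noise_level, contrast, bleed_through].
X_train = np.array([[0.10, 0.80, 0.05],
                    [0.40, 0.55, 0.30],
                    [0.25, 0.70, 0.10]])
y_train = np.array([0.93, 0.61, 0.84])     # measured binarisation F-measure

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
predicted_quality = model.predict(np.array([[0.30, 0.65, 0.20]]))
```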