28 research outputs found

    EFFICIENT IMAGE COMPRESSION AND DECOMPRESSION ALGORITHMS FOR OCR SYSTEMS

    This paper presents efficient new image compression and decompression methods for document images, intended for use in the pre-processing stage of an OCR system designed for the needs of the “Nikola Tesla Museum” in Belgrade. The proposed compression methods exploit the Run-Length Encoding (RLE) algorithm and an algorithm based on document character contour extraction, while an iterative scanline fill algorithm is used for decompression. The compression and decompression methods are compared with the JBIG2 and JPEG2000 image compression standards. Segmentation accuracy results for ground-truth documents are obtained in order to evaluate the proposed methods. The results show that the proposed methods outperform JBIG2 compression in terms of time complexity, providing up to 25 times lower processing time at the expense of a worse compression ratio, and outperform the JPEG2000 standard, providing up to a 4-fold improvement in compression ratio. Finally, the time complexity results show that the presented methods are sufficiently fast for a real-time character segmentation system.
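
    The RLE component lends itself to a compact illustration. Below is a minimal Python sketch of run-length encoding and decoding a single row of a binarized document image; it only illustrates the general RLE idea, not the paper's actual encoder, its contour-based coder, or the scanline-fill decompressor.

```python
import numpy as np

def rle_encode_row(row):
    """Run-length encode one row of a binary image (0 = background, 1 = ink).

    Returns a list of (value, run_length) pairs.
    """
    runs = []
    if row.size == 0:
        return runs
    change = np.flatnonzero(np.diff(row)) + 1          # indices where the value changes
    starts = np.concatenate(([0], change))
    ends = np.concatenate((change, [row.size]))
    for s, e in zip(starts, ends):
        runs.append((int(row[s]), int(e - s)))
    return runs

def rle_decode_row(runs):
    """Inverse of rle_encode_row."""
    return np.concatenate([np.full(n, v, dtype=np.uint8) for v, n in runs])

if __name__ == "__main__":
    row = np.array([0, 0, 0, 1, 1, 0, 1, 1, 1, 0], dtype=np.uint8)
    runs = rle_encode_row(row)
    assert np.array_equal(rle_decode_row(runs), row)
    print(runs)  # [(0, 3), (1, 2), (0, 1), (1, 3), (0, 1)]
```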

    Document image analysis and recognition: a survey

    This paper analyzes the problems of document image recognition and the existing solutions. Document recognition algorithms have been studied for a long time, yet the topic remains relevant and research continues, as evidenced by the large number of associated publications and reviews. However, most of these works and reviews are devoted to individual recognition tasks. In this review, the entire set of methods, approaches, and algorithms necessary for document recognition is considered. A preliminary systematization allowed us to distinguish groups of methods for extracting information from documents of different types: single-page and multi-page, with printed and handwritten content, with a fixed template or a flexible structure, and digitized in different ways: scanning, photographing, and video recording. We consider methods of document recognition and analysis applied to a wide range of tasks: identification and verification of identity, due diligence, machine learning algorithms, questionnaires, and audits. The groups of methods necessary for the recognition of a single page image are examined: classical computer vision algorithms, i.e., keypoints, local feature descriptors, Fast Hough Transforms, and image binarization; modern neural network models for document boundary detection, document classification, and document structure analysis, i.e., localization of text blocks and tables; extraction and recognition of details; and post-processing of recognition results. The review also describes publicly available datasets for training and testing recognition algorithms, as well as methods for optimizing the performance of document image analysis and recognition. The reported study was funded by RFBR, project number 20-17-50177. The authors thank Sc. D. Vladimir L. Arlazarov (FRC CSC RAS), Pavel Bezmaternykh (FRC CSC RAS), Elena Limonova (FRC CSC RAS), Ph. D. Dmitry Polevoy (FRC CSC RAS), Daniil Tropin (LLC “Smart Engines Service”), Yuliya Chernysheva (LLC “Smart Engines Service”), and Yuliya Shemyakina (LLC “Smart Engines Service”) for valuable comments and suggestions.
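
    Among the classical preprocessing steps the survey enumerates, global binarization is simple enough to sketch. The following is an illustrative Otsu thresholding routine in Python/NumPy, assuming a uint8 grayscale page with dark ink on a light background; it stands in for the far richer, often locally adaptive, binarization methods the survey actually covers.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for a uint8 grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                     # cumulative class probability
    mu = np.cumsum(prob * np.arange(256))       # cumulative class mean
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

def binarize(gray):
    """Foreground (ink) as 1, background as 0, assuming dark ink on light paper."""
    t = otsu_threshold(gray)
    return (gray <= t).astype(np.uint8)
```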

    A framework for ancient and machine-printed manuscripts categorization

    Document image understanding (DIU) has attracted a lot of attention and has become an active field of research. Although the ultimate goal of DIU is to extract the textual information of a document image, many steps are involved in such a process, such as categorization, segmentation, and layout analysis. All of these steps are needed in order to obtain an accurate result from character or word recognition of a document image. One of the important steps in DIU is document image categorization (DIC), which is needed in many situations, such as document images written or printed in more than one script, font, or language. This step provides useful information for the recognition system and helps reduce its error by allowing a category-specific Optical Character Recognition (OCR) or word recognition (WR) system to be incorporated. This research focuses on the problem of DIC across different categories of scripts, styles, and languages, and establishes a framework for flexible representation and feature extraction that can be adapted to many DIC problems. The current methods for DIC have many limitations and drawbacks that restrict their practical usage. We propose an efficient framework for document image categorization based on patch representation and Non-negative Matrix Factorization (NMF). This framework is flexible and can be adapted to different categorization problems. Many methods exist for script identification of document images, but few of them address the problem in handwritten manuscripts, and those that do have many limitations and drawbacks. Therefore, our first goal is to introduce a novel method for script identification of ancient manuscripts. The proposed method is based on a patch representation in which the patches are extracted using the skeleton map of a document image. This representation overcomes the current methods' restriction to a fixed level of layout. The proposed feature extraction scheme, based on Projective Non-negative Matrix Factorization (PNMF), is robust against noise and handwriting variation and can be used for different scripts. The proposed method outperforms state-of-the-art methods and can be applied to different levels of layout. The current methods for font (style) identification are mostly designed for machine-printed document images, and many of them can only be used at a specific level of layout. Therefore, we propose a new method for font and style identification of printed and handwritten manuscripts based on patch representation and Non-negative Matrix Tri-Factorization (NMTF). The images are represented by overlapping patches obtained from the foreground pixels. The positions of these patches are set based on the skeleton map to reduce the number of patches. NMTF is used to learn bases for each font (style), and these bases are then used to classify a new image based on the minimum representation error. The proposed method can easily be extended to new fonts, as the bases for each font are learned separately from the other fonts. This method is tested on two datasets of machine-printed and ancient manuscripts, and the results confirm its performance compared to state-of-the-art methods. Finally, we propose a novel method for language identification of printed and handwritten manuscripts based on patch representation and NMTF.
The current methods for language identification are based either on textual data obtained by an OCR engine or on image data through coding and comparison with textual data. The OCR-based methods require a lot of processing, and the current image-based methods are not applicable to cursive scripts such as Arabic. In this work we introduce a new method for language identification of machine-printed and handwritten manuscripts based on patch representation and NMTF. The patch representation provides the components of the Arabic script (letters) that cannot be extracted simply by segmentation methods. NMTF is then used for dictionary learning and for generating codebooks that represent a document image with a histogram. The proposed method is tested on two datasets of machine-printed and handwritten manuscripts and compared to n-gram features (text-based), texture features, and codebook features (image-based) to validate its performance. The proposed methods are robust against variation in handwriting, changes in font (handwriting style), and the presence of degradation, and they are flexible enough to be used at various levels of layout (from a text line to a paragraph). The methods in this research have been tested on datasets of handwritten and machine-printed manuscripts and compared to state-of-the-art methods. All of the evaluations show the efficiency, robustness, and flexibility of the proposed methods for document image categorization. As mentioned before, the proposed strategies provide a framework for efficient and flexible representation and feature extraction for document image categorization. This framework can be applied to different levels of layout, information from different levels of layout can be merged, and the framework can be extended to more complex situations and different tasks.
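
The core decision rule of the framework, learning per-category non-negative bases and assigning a new document to the category whose bases reconstruct its patches with minimum error, can be sketched as follows. The snippet uses plain NMF from scikit-learn as a stand-in for the thesis's PNMF/NMTF variants and assumes patches have already been extracted along the skeleton map into non-negative feature matrices, so it only illustrates the shape of the pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

def learn_bases(patches_by_class, n_components=20):
    """Fit one NMF model per category on its (n_patches, patch_dim) matrix."""
    models = {}
    for label, patches in patches_by_class.items():
        model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
        model.fit(patches)
        models[label] = model
    return models

def classify(patches, models):
    """Assign the patch set of one document to the class whose bases give the
    smallest mean reconstruction error."""
    errors = {}
    for label, model in models.items():
        coeffs = model.transform(patches)        # non-negative encodings
        recon = coeffs @ model.components_       # back-projection onto the bases
        errors[label] = np.mean(np.linalg.norm(patches - recon, axis=1))
    return min(errors, key=errors.get)
```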

    A Study on Monocular Camera-Based Real-Time Driving Environment Recognition for Autonomous Vehicles

    Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Electrical Engineering, February 2014. Advisor: Seung-Woo Seo. Homo Faber refers to humans as beings who control their environment through tools. From the beginning, humans have created tools in pursuit of a more convenient life. The desire for rapid movement led humans to ride on horseback, build wagons, and finally build vehicles. Vehicles made it possible to travel long distances quickly and conveniently. However, since human beings are imperfect, many people have died in car accidents, and people are dying at this very moment. Research on autonomous vehicles has been conducted as the best alternative to satisfy the human desire for safety, and the dream of the autonomous vehicle will come true in the near future. Implementing an autonomous vehicle requires many kinds of techniques, among which recognition of the environment around the vehicle is one of the most fundamental and important problems. Many kinds of sensors can be used to recognize surrounding objects; however, the monocular camera collects the largest amount of information among them, can be used for a variety of purposes, and can be adopted for various vehicle types thanks to its price competitiveness. I therefore expect that research using the monocular camera for autonomous vehicles is very practical and useful. In this dissertation, I cover four important recognition problems for autonomous driving using a monocular camera in a vehicular environment. Firstly, to drive autonomously, the vehicle has to recognize lanes and keep its lane. However, detecting lane markings under varying illumination is very difficult in image processing, and yet it must be solved for autonomous driving. The first research topic is robust lane marking extraction under illumination variations for multilane detection. I propose a new lane marking extraction filter that can detect imperfect lane markings, together with a new false positive cancelling algorithm that eliminates noise markings. This approach can extract lane markings successfully even under bad illumination conditions. Secondly, if there is no lane marking on the road, how can the autonomous vehicle recognize the road to drive on? In addition, which lane of the road is the vehicle currently in? The latter question is important, since the decision to change or keep a lane depends on the current lane position. The second research topic handles these two problems, and I propose an approach that fuses road detection and lane position estimation. Thirdly, to drive more safely, keeping a safe distance is very important, and much driving-safety equipment requires distance information. Measuring accurate inter-vehicle distance using a monocular camera and a line laser is the third research topic. To measure the inter-vehicle distance, I project the line laser onto the front side of the vehicle and measure the length of the laser line and the lane width in the image. Based on the imaging geometry, the distance can then be calculated accurately. Many important problems still remain to be solved, and I propose monocular-camera-based approaches to handle some of them.
I expect that active research will continue to be conducted and that, based on this research, the era of the autonomous vehicle will come in the near future.
Contents:
1 Introduction — 1.1 Background and Motivations; 1.2 Contributions and Outline of the Dissertation; 1.2.1 Illumination-Tolerant Lane Marking Extraction for Multilane Detection; 1.2.2 Fusing Road Detection and Lane Position Estimation for the Robust Road Boundary Estimation; 1.2.3 Accurate Inter-Vehicle Distance Measurement based on Monocular Camera and Line Laser
2 Illumination-Tolerant Lane Marking Extraction for Multilane Detection — 2.1 Introduction; 2.2 Lane Marking Candidate Extraction Filter; 2.2.1 Requirements of the Filter; 2.2.2 A Comparison of Filter Characteristics; 2.2.3 Cone Hat Filter; 2.3 Overview of the Proposed Algorithm; 2.3.1 Filter Width Estimation; 2.3.2 Top Hat (Cone Hat) Filtering; 2.3.3 Reiterated Extraction; 2.3.4 False Positive Cancelling; 2.3.4.1 Lane Marking Center Point Extraction; 2.3.4.2 Fast Center Point Segmentation; 2.3.4.3 Vanishing Point Detection; 2.3.4.4 Segment Extraction; 2.3.4.5 False Positive Filtering; 2.4 Experiments and Evaluation; 2.4.1 Experimental Set-up; 2.4.2 Conventional Algorithms for Evaluation; 2.4.2.1 Global Threshold; 2.4.2.2 Positive Negative Gradient; 2.4.2.3 Local Threshold; 2.4.2.4 Symmetry Local Threshold; 2.4.2.5 Double Extraction using Symmetry Local Threshold; 2.4.2.6 Gaussian Filter; 2.4.3 Experimental Results; 2.4.4 Summary
3 Fusing Road Detection and Lane Position Estimation for the Robust Road Boundary Estimation — 3.1 Introduction; 3.2 Chromaticity-based Flood-fill Method; 3.2.1 Illuminant-Invariant Space; 3.2.2 Road Pixel Selection; 3.2.3 Flood-fill Algorithm; 3.3 Lane Position Estimation; 3.3.1 Lane Marking Extraction; 3.3.2 Proposed Lane Position Detection Algorithm; 3.3.3 Bird's-eye View Transformation using the Proposed Dynamic Homography Matrix Generation; 3.3.4 Next Lane Position Estimation based on the Cross-ratio; 3.3.5 Forward-looking View Transformation; 3.4 Information Fusion Between Road Detection and Lane Position Estimation; 3.4.1 The Case of Detection Failures; 3.4.2 The Benefit of Information Fusion; 3.5 Experiments and Evaluation; 3.6 Summary
4 Accurate Inter-Vehicle Distance Measurement based on Monocular Camera and Line Laser — 4.1 Introduction; 4.2 Proposed Distance Measurement Algorithm; 4.3 Experiments and Evaluation; 4.3.1 Experimental System Set-up; 4.3.2 Experimental Results; 4.4 Summary
5 Conclusion
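
The inter-vehicle distance chapter rests on pinhole imaging geometry: a target of known physical width W that spans w pixels in an image taken at focal length f (expressed in pixels) lies at a range of roughly Z = f·W / w. The sketch below illustrates only this basic relation; the dissertation's actual formulation, which combines the measured laser-line length with the lane width, is not reproduced here.

```python
def distance_from_known_width(real_width_m, pixel_width, focal_length_px):
    """Pinhole-camera range estimate: Z = f * W / w.

    real_width_m    -- physical width of the observed target (e.g. the laser line span), metres
    pixel_width     -- width of that target in the image, pixels
    focal_length_px -- camera focal length expressed in pixels
    """
    if pixel_width <= 0:
        raise ValueError("pixel width must be positive")
    return focal_length_px * real_width_m / pixel_width

# Example: a 1.8 m wide laser span imaged over 90 px with f = 1200 px -> 24 m ahead
print(distance_from_known_width(1.8, 90, 1200.0))  # 24.0
```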

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging at the interface between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible by rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while an understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. This book is intended to collect representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology, and applications of pattern recognition.

    Exploiting Spatio-Temporal Coherence for Video Object Detection in Robotics

    This paper proposes a method to enhance video object detection for indoor environments in robotics. Concretely, it exploits knowledge about the camera motion between frames to propagate previously detected objects to successive frames. The proposal is rooted in the concepts of planar homography, used to propose regions of interest in which to find objects, and recursive Bayesian filtering, used to integrate observations over time. The proposal is evaluated in six virtual indoor environments, accounting for the detection of nine object classes over a total of ∼7k frames. Results show that our proposal improves recall and F1-score by factors of 1.41 and 1.27, respectively, and achieves a significant reduction of the object categorization entropy (58.8%) when compared to a two-stage video object detection method used as a baseline, at the cost of a small time overhead (120 ms) and a precision loss (0.92).
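
    Both ingredients of the proposal are standard and easy to sketch: warping a previous detection into the current frame with a planar homography to obtain a region of interest, and fusing per-frame class scores with a recursive Bayesian update. The snippet below assumes the 3x3 homography H relating the two frames is already known (the paper derives it from the known camera motion) and is only an illustration, not the authors' implementation.

```python
import numpy as np

def propagate_box(box, H):
    """Warp an axis-aligned box (x1, y1, x2, y2) from the previous frame into the
    current one with a 3x3 planar homography, returning its bounding box."""
    x1, y1, x2, y2 = box
    corners = np.array([[x1, y1, 1], [x2, y1, 1], [x2, y2, 1], [x1, y2, 1]], dtype=float).T
    warped = H @ corners
    warped = warped[:2] / warped[2]        # back to inhomogeneous coordinates
    return (warped[0].min(), warped[1].min(), warped[0].max(), warped[1].max())

def bayes_update(prior, likelihood):
    """Recursive Bayesian fusion of class probabilities over time."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Example: a weak current-frame detection reinforced by the history of the track
prior = np.array([0.6, 0.3, 0.1])          # belief over 3 classes from previous frames
likelihood = np.array([0.5, 0.4, 0.1])     # normalized scores from the current detector
print(bayes_update(prior, likelihood))     # belief sharpens towards class 0
```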

    Visual Analysis of Maya Glyphs via Crowdsourcing and Deep Learning

    In this dissertation, we study visual analysis methods for complex ancient Maya writings. The unit sign of a Maya text is called a glyph, and may have either semantic or syllabic significance. There are over 800 identified glyph categories, and over 1400 variations across these categories. To enable fast manipulation of data by scholars in the Humanities, it is desirable to have automatic visual analysis tools for tasks such as glyph categorization, localization, and visualization. Analysis and recognition of glyphs are challenging problems. The same patterns may be observed in different signs but with different compositions, so the inter-class variance can be significantly low. Conversely, the intra-class variance can be high, as the visual variants within the same semantic category may differ to a large extent except for some patterns specific to the category. Another related challenge of Maya writings is the lack of a large dataset to study the glyph patterns. Consequently, we study local shape representations, both knowledge-driven and data-driven, over a set of frequent syllabic glyphs as well as other binary shapes, i.e. sketches. This comparative study indicates that a large data corpus and a deep network architecture are needed to learn data-driven representations that can capture the complex compositions of local patterns. To build a large glyph dataset in a short period of time, we study a crowdsourcing approach as an alternative to the time-consuming data preparation by experts. Specifically, we work on segmenting individual glyphs out of glyph-blocks from the three remaining codices (i.e. folded bark pages painted with a brush). Through gradual steps in our crowdsourcing approach, we observe that providing supervision and careful task design are key for non-experts to generate high-quality annotations. In this way, we obtain a large dataset (over 9000 instances) of individual Maya glyphs. We analyze this crowdsourced glyph dataset with both knowledge-driven and data-driven visual representations. First, we evaluate two competitive knowledge-driven representations, namely Histogram of Oriented Shape Context and Histogram of Oriented Gradients. Secondly, thanks to the large size of the crowdsourced dataset, we study visual representation learning with deep Convolutional Neural Networks. We adopt three data-driven approaches: assessing representations from pretrained networks, fine-tuning the last convolutional block of a pretrained network, and training a network from scratch. Finally, we investigate different glyph visualization tasks based on the studied representations. First, we explore the visual structure of several glyph corpora by applying a non-linear dimensionality reduction method, namely t-distributed Stochastic Neighborhood Embedding. Secondly, we propose a way to inspect the discriminative parts of individual glyphs according to the trained deep networks. For this purpose, we use the Gradient-weighted Class Activation Mapping method and highlight the network activations as a heatmap visualization over an input image. We assess whether the highlighted parts correspond to distinguishing parts of glyphs in a perceptual crowdsourcing study. Overall, this thesis presents a promising crowdsourcing approach, competitive data-driven visual representations, and interpretable visualization methods that can be applied to explore various other Digital Humanities datasets.
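
    One of the visualization steps, embedding glyph features with t-SNE, is straightforward to sketch with scikit-learn. The snippet assumes CNN features have already been extracted into a matrix of shape (n_glyphs, feature_dim); the fine-tuning and Grad-CAM steps of the thesis are not reproduced here.

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_glyph_features(features, perplexity=30, seed=0):
    """Project (n_glyphs, feature_dim) CNN features into 2-D for visual inspection."""
    tsne = TSNE(n_components=2, perplexity=perplexity, init="pca", random_state=seed)
    return tsne.fit_transform(features)

if __name__ == "__main__":
    feats = np.random.rand(200, 128).astype(np.float32)  # placeholder features
    coords = embed_glyph_features(feats)
    print(coords.shape)  # (200, 2)
```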

    Contribution to the analysis of the dynamics of ancient handwriting to support paleographic expertise

    This thesis work was carried out within the ANR GRAPHEM project (Grapheme-based Retrieval and Analysis for PaleograpHic Expertise of Middle Age Manuscripts). It presents a methodological contribution applicable to the automatic analysis of ancient writings, to assist experts in paleography in the delicate work of studying and deciphering handwriting. The main objective is to contribute to an instrumentation of the corpus of medieval manuscripts held by the Institut de Recherche en Histoire des Textes (IRHT, Paris), by helping the paleographers specialized in this field in their work of understanding the evolution of written forms, through effective methods for accessing the content of manuscripts based on a fine analysis of the shapes described as small fragments (graphemes). In this PhD work, I chose to study the dynamics of the most basic element of writing, called the ductus, which according to paleographers carries a lot of information about the writing style and the era in which the manuscript was produced. My major contributions are situated at two levels. The first is a preprocessing step for severely degraded images that ensures an optimal decomposition of the shapes into graphemes containing the ductus information. For this decomposition step, we established a complete stroke-tracking methodology based on the extraction of a skeleton obtained from contrast enhancement and gradient diffusion procedures. The complete tracking of the strokes was obtained by applying the fundamental stroke execution rules taught to the scribes of the Middle Ages; this dynamic information about stroke formation essentially concerns indications of privileged directions. In a second step, we sought to characterize these graphemes with visual shape descriptors understandable by both paleographers and computer scientists, guaranteeing the most complete possible representation of the writing from a geometrical and morphological point of view. From this characterization, we proposed a clustering approach that groups graphemes into homogeneous classes using an unsupervised classification algorithm based on graph coloring. The clustering of graphemes led to the formation of shape codebooks characterizing each processed manuscript in an individual and discriminating way. We also studied the discriminative power of these descriptors in order to obtain the best codebook representation of a manuscript; this study exploited genetic algorithms for their ability to produce good feature selections. All of these contributions were tested in a CBIR application on three manuscript databases, two of them medieval (manuscripts from the Oxford database and manuscripts from the IRHT, the main database of the project) and one containing contemporary manuscripts used in the writer identification competition of ICDAR 2011. Our description and classification method was also applied to the contemporary database in order to position our contribution with respect to other work in the field of writer identification and to study its generalization to other types of documents. The very encouraging results obtained on the medieval and contemporary databases show the robustness of our approach to variations in shape and style, and its readily generalizable character to all types of handwritten documents.