16 research outputs found

    Adaptive Analysis and Processing of Structured Multilingual Documents

    Digital document processing is becoming popular in applications such as office and library automation, bank and postal services, publishing houses, and communication management. In recent years, the demand for tools capable of searching written and spoken sources of multilingual information has increased tremendously, and the bilingual dictionary is one of the most important resources for providing the required information. Processing and analyzing bilingual dictionaries raises the challenge of dealing with many different scripts, some of which are unknown to the designer. A framework is presented to adaptively analyze and process structured multilingual documents, where adaptability is applied at every step. The proposed framework involves: (1) general word-level script identification using Gabor filters; (2) font classification using the grating cell operator; (3) general word-level style identification using a Gaussian mixture model; (4) an adaptable Hindi OCR based on generalized Hausdorff image comparison; (5) a retargetable OCR with automatic training-sample creation and its applications to different scripts; and (6) bootstrapping entry segmentation, which segments each page into functional entries for parsing. Experimental results on different scripts, such as Chinese, Korean, Arabic, Devanagari, and Khmer, demonstrate that the proposed framework can save significant human effort by making each phase adaptive.
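    The adaptable OCR in step (4) rests on generalized Hausdorff image comparison. As a rough illustration of that kind of measure (not the paper's implementation; the function names, the rank parameter, and the use of SciPy's distance transform are assumptions of this sketch), a rank-based directed Hausdorff distance between two binary glyph images can be computed as follows:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def directed_hausdorff(a, b, rank=0.9):
    """Directed generalized Hausdorff distance from glyph a to glyph b.

    a, b: 2-D boolean arrays of equal shape (True = ink). With
    rank=1.0 this reduces to the classical directed Hausdorff
    distance; smaller ranks make the comparison robust to noise.
    """
    # Distance of every pixel to the nearest ink pixel of b.
    dist_to_b = distance_transform_edt(~b)
    # Distances of a's ink pixels to b, in ascending order.
    d = np.sort(dist_to_b[a])          # assumes a has ink pixels
    # Rank-th quantile instead of the maximum (the "generalized" part).
    return d[min(int(rank * d.size), d.size - 1)]

def glyph_distance(a, b, rank=0.9):
    # Symmetric score: the worse of the two directed distances.
    return max(directed_hausdorff(a, b, rank),
               directed_hausdorff(b, a, rank))
```

    Matching an unknown glyph against a template set then amounts to picking the template with the smallest glyph_distance.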

    A Technique for Character Segmentation in Middle zone of Handwritten Hindi words using Hybrid Approach

    India is a multilingual country where people speak many languages and write in multiple scripts. Devanagari is one of the most popular scripts in India and is used to write Hindi, Sanskrit, Sindhi, Marathi, and Nepali. This research work focuses on the Hindi language. A large number of precious and essential documents are available in handwritten form and need to be converted into editable form; Optical Character Recognition (OCR) makes this conversion easier. Character segmentation, an important phase of OCR, separates the characters of handwritten words and thereby enhances the accuracy of the OCR system. In this paper, a hybrid approach is used to segment words containing single and multiple touching characters. The proposed system is tested on a dataset of handwritten words written by different writers; the dataset contains more than 300 handwritten Hindi words. The accuracy of the proposed hybrid system is evaluated at 96%, which is better than that of existing techniques.
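    To make the middle-zone idea concrete, here is a minimal, hypothetical profile-based sketch of the common Devanagari pipeline (detect and remove the shirorekha, then cut at vertical-projection valleys). It does not reproduce the paper's hybrid treatment of touching characters, and all names and margins in it are assumptions:

```python
import numpy as np

def segment_middle_zone(word):
    """word: 2-D binary array of a Devanagari word image (1 = ink).

    Returns the header-line row and candidate segmentation columns.
    Touching characters need further splitting, which this sketch
    deliberately leaves out."""
    # 1. The shirorekha (header line) is the densest ink row.
    header = int(np.argmax(word.sum(axis=1)))
    # 2. Erase a small band around it so the characters below separate.
    body = word.copy()
    body[max(header - 1, 0):header + 2, :] = 0
    # 3. Cut wherever the vertical projection drops to zero.
    cols = body.sum(axis=0)
    cuts = [c for c in range(1, body.shape[1])
            if cols[c] == 0 and cols[c - 1] > 0]
    return header, cuts
```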

    Adaptive Algorithms for Automated Processing of Document Images

    Large-scale document digitization projects continue to motivate interesting document understanding technologies such as script and language identification, page classification, segmentation, and enhancement. Typically, however, solutions are still limited to narrow domains or regular formats such as books, forms, articles, or letters, and operate best on clean documents scanned in a controlled environment. More general collections of heterogeneous documents challenge the basic assumptions of state-of-the-art technology regarding quality, script, content, and layout. Our work explores the use of adaptive algorithms for the automated analysis of noisy and complex document collections. We first propose, implement, and evaluate an adaptive clutter detection and removal technique for complex binary documents. Our distance-transform-based technique aims to remove irregular and independent unwanted foreground content while leaving text content untouched. The novelty of this approach lies in its determination of the best approximation to the clutter-content boundary in the presence of text-like structures. Second, we describe a page segmentation technique called Voronoi++ for complex layouts, which builds upon the state-of-the-art method proposed by Kise [Kise1999]. Our approach does not assume structured text zones and is designed to handle multilingual text in both handwritten and printed form. Voronoi++ is a dynamically adaptive and contextually aware approach that considers component separation features combined with Docstrum-based [O'Gorman1993] angular and neighborhood features to form provisional zone hypotheses. These provisional zones are then verified based on the context built from local separation and high-level content features. Finally, our research proposes a generic model to segment and recognize characters for any complex syllabic or non-syllabic script, using font models. This concept is based on the fact that font files contain all the information necessary to render text, and thus a model for how to decompose it. Instead of script-specific routines, this work is a step towards a generic character segmentation and recognition scheme for both Latin and non-Latin scripts.
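    As a hedged sketch of how a distance-transform-based clutter filter can work (the thesis's adaptive estimation of the clutter-content boundary is not reproduced here; the per-component thickness test and its threshold are this sketch's assumptions), consider:

```python
import numpy as np
from scipy import ndimage

def remove_clutter(binary, thickness_factor=2.5):
    """binary: 2-D boolean array (True = foreground).

    Removes connected components whose maximum stroke thickness far
    exceeds the document-wide median, a crude stand-in for the
    adaptive clutter-content boundary described in the abstract."""
    # Inner distance transform: distance of each ink pixel to the
    # background, i.e. half the local stroke width.
    inner = ndimage.distance_transform_edt(binary)
    labels, n = ndimage.label(binary)
    # Thickest point of every connected component.
    thickness = ndimage.maximum(inner, labels, index=np.arange(1, n + 1))
    median_t = np.median(thickness)
    # Keep components whose thickness is close to the text norm.
    keep = np.flatnonzero(thickness <= thickness_factor * median_t) + 1
    return np.isin(labels, keep)
```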

    Character Recognition

    Character recognition is one of the most widely used pattern recognition technologies in practical applications. This book presents recent advances relevant to character recognition, from technical topics such as image processing, feature extraction, and classification, to new applications including human-computer interfaces. The goal of this book is to provide a reference source for academic research and for professionals working in the character recognition field.

    Adaptive Methods for Robust Document Image Understanding

    A vast amount of digital document material is continuously being produced as part of major digitization efforts around the world. In this context, generic and efficient automatic solutions for document image understanding represent a pressing necessity. We propose a generic framework for document image understanding systems, usable for practically any document type available in digital form. Following the introduced workflow, we turn our attention to each of the following processing stages: quality assurance, image enhancement, color reduction and binarization, skew and orientation detection, page segmentation, and logical layout analysis. We review the state of the art in each area, identify current deficiencies, point out promising directions, and give specific guidelines for future investigation. We address some of the identified issues by means of novel algorithmic solutions, putting special focus on generality, computational efficiency, and the exploitation of all available sources of information. More specifically, we introduce the following original methods: fully automatic detection of color reference targets in digitized material, accurate foreground extraction from color historical documents, font enhancement for hot-metal typeset prints, a theoretically optimal solution to the document binarization problem from both the computational complexity and the threshold selection points of view, layout-independent skew and orientation detection, a robust and versatile page segmentation method, a semi-automatic front page detection algorithm, and a complete framework for article segmentation in periodical publications. The proposed methods are experimentally evaluated on large datasets consisting of real-life heterogeneous document scans. The obtained results show that a document understanding system combining these modules is able to robustly process a wide variety of documents with good overall accuracy.
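    For a concrete sense of the threshold-selection view of binarization, the classical Otsu criterion below maximizes the between-class variance over all candidate thresholds. It is a standard baseline shown only for orientation, not the theoretically optimal method proposed in the thesis:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold of an 8-bit grayscale image by
    maximizing the between-class variance over all 256 candidates."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # P(class 0) per threshold
    mu = np.cumsum(p * np.arange(256))      # first moment of class 0
    mu_total = mu[-1]                       # global mean gray level
    with np.errstate(divide='ignore', invalid='ignore'):
        # Between-class variance sigma_b^2(t) for every threshold t.
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

# Usage: binary = gray > otsu_threshold(gray)
```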

    Contribution to the Analysis of the Dynamics of Ancient Handwriting in Support of Paleographic Expertise

    This thesis work is part of the ANR GRAPHEM project (Grapheme-based Retrieval and Analysis for PaleograpHic Expertise of Middle Age Manuscripts). It presents a methodological contribution applicable to the automatic analysis of ancient writings, intended to assist experts in paleography in the delicate work of studying and deciphering handwriting. The main objective is to contribute to an instrumentation of the corpus of medieval manuscripts held by the Institut de Recherche en Histoire des Textes (IRHT, Paris) by helping the paleographers specialized in this field in their work of understanding the evolution of writing forms, through effective methods for accessing the content of manuscripts based on a fine analysis of the shapes described as small fragments (graphemes). In this PhD work, I chose to study the dynamics of the most basic element of writing, the ductus, which according to the paleographers carries much information about the writing style and the period in which the manuscript was produced. My major contributions lie at two levels. The first is a preprocessing step for severely degraded images that ensures an optimal decomposition of the shapes into graphemes carrying the ductus information. For this decomposition step, we established a complete stroke-tracking methodology based on the extraction of a skeleton obtained through contrast enhancement and gradient diffusion procedures; complete tracking of the strokes was achieved by applying the fundamental stroke-execution rules taught to the scribes of the Middle Ages, dynamic information about stroke formation consisting essentially of indications of preferred directions. In a second step, we sought to characterize these graphemes by visual shape descriptors that are understandable by both paleographers and computer scientists and that guarantee the most complete possible representation of the writing from a geometric and morphological point of view. From this characterization, we proposed a clustering approach that groups graphemes into homogeneous classes using an unsupervised classification algorithm based on graph coloring. The clustering of the graphemes led to the formation of shape codebooks that characterize each processed manuscript in an individual and discriminating way. We also studied the discriminative power of these descriptors in order to obtain the best codebook representation of a manuscript; this study exploited genetic algorithms for their ability to produce good feature selections. All of these contributions were tested through a CBIR application on three manuscript databases, two of them medieval (manuscripts from the Oxford database and from the IRHT database, the main database of the project) and one containing contemporary manuscripts used in the writer identification contest of ICDAR 2011. Our description and classification method was applied to the contemporary database in order to position our contribution with respect to other work in the field of writer identification and to study its power of generalization to other types of documents. The very encouraging results obtained on the medieval and contemporary databases demonstrate the robustness of our approach to variations of shape and style and its resolutely generalizable character to all types of handwritten documents.
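    The graph-coloring clustering step can be pictured as follows. This is a minimal sketch under assumed conventions (the distance measure, the threshold, and the use of networkx's greedy coloring are this sketch's assumptions, not the thesis's algorithm): edges join graphemes that are too dissimilar to share a class, so each color class of a proper coloring contains only mutually similar graphemes.

```python
import numpy as np
import networkx as nx

def color_clustering(features, threshold):
    """features: (n, d) array of grapheme shape descriptors.

    Builds a dissimilarity graph joining graphemes farther apart
    than `threshold`; a proper coloring assigns different colors to
    dissimilar graphemes, so each color class is a set of mutually
    similar graphemes, i.e. a cluster."""
    n = len(features)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(features[i] - features[j]) > threshold:
                g.add_edge(i, j)
    coloring = nx.greedy_color(g, strategy="largest_first")
    clusters = {}
    for node, color in coloring.items():
        clusters.setdefault(color, []).append(node)
    return list(clusters.values())
```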

    Computer-Assisted Content Analysis of Digital Video Archives

    The transition from analog to digital video has led to major changes within film archives in recent years. Digitization of the films, in particular, opens up new possibilities for the archives. Wear and aging of the film reels are ruled out, so the quality is preserved unchanged. In addition, network-based and thus considerably simpler access to the videos in the archives becomes possible. Additional services become available to archivists and users, providing extended search capabilities and easing navigation during playback. Searching within video archives relies on metadata that provide further information about the videos. A large part of this metadata is entered manually by archivists, which is very time-consuming and expensive. Computer-assisted analysis of a digital video makes it possible to reduce the effort required to produce metadata for video archives. The first part of this dissertation presents new methods for recognizing important semantic content in videos; in particular, newly developed algorithms for shot cut detection, camera motion analysis, object segmentation and classification, text recognition, and face recognition are introduced. The automatically extracted semantic information is very valuable because it eases the work with digital video archives. The information not only supports search in the archives but also leads to new applications, which are presented in the second part of the dissertation. For example, computer-generated video summaries can be produced, or videos can automatically be adapted to the properties of a playback device. A further focus of this dissertation is the analysis of historical films. Four European film archives have provided a large number of historical video documentaries, shot in the early to middle part of the last century and digitized in recent years. Due to the storage and wear of the film reels over several decades, many videos are very noisy and contain clearly visible image defects. The image quality of the historical black-and-white films differs significantly from that of current videos, so a reliable analysis with existing methods is often not possible. Within this dissertation, new algorithms are presented that enable reliable recognition of semantic content even in historical videos.
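    Among the algorithms mentioned, shot cut detection has a particularly simple classical baseline: a hard cut produces a sharp jump between the gray-level histograms of consecutive frames. The sketch below shows this histogram-difference detector, a rough stand-in for the dissertation's detectors for degraded historical footage; the bin count and threshold are assumptions:

```python
import numpy as np

def detect_cuts(frames, threshold=0.5):
    """frames: iterable of 2-D grayscale arrays with values 0..255.

    Flags frame indices whose normalized histogram differs sharply
    from the previous frame's; the L1 distance lies in [0, 2]."""
    cuts, prev = [], None
    for idx, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=64, range=(0, 256))
        hist = hist / hist.sum()
        if prev is not None and np.abs(hist - prev).sum() > threshold:
            cuts.append(idx)
        prev = hist
    return cuts
```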

    FEATURE EXTRACTION AND CLASSIFICATION THROUGH ENTROPY MEASURES

    Entropy is a universal concept that represents the uncertainty of a series of random events. The notion of "entropy" is understood differently in different disciplines: in physics, it is a thermodynamic state variable; in statistics, it measures the degree of disorder; in computer science, it is a powerful tool for measuring the regularity (or complexity) of signals or time series. In this work, we have studied entropy-based features in the context of signal processing. The purpose of feature extraction is to select the relevant features of an entity; the type of features depends on the signal characteristics and the classification purpose. Many real-world signals are nonlinear and nonstationary and contain information that cannot be described by time- and frequency-domain parameters but may be well described by entropy. In practice, however, the estimation of entropy suffers from some limitations and is highly dependent on the series length. To reduce this dependence, we have proposed parametric estimation of various entropy indices and derived analytical expressions where possible. We have then studied the feasibility of parametric estimation of entropy measures on both synthetic and real signals. The entropy-based features have finally been employed for classification problems related to clinical applications, activity recognition, and handwritten character recognition. From a methodological point of view, our study thus deals with feature extraction, machine learning, and classification methods. Different versions of entropy measures are found in the signal analysis literature; among them, approximate entropy (ApEn) and sample entropy (SampEn), followed by corrected conditional entropy (CcEn), are the ones most used for physiological signal analysis. Recently, entropy features have also been used for image segmentation. A related measure is Lempel-Ziv complexity (LZC), which measures the complexity of a time series, signal, or sequence; the estimation of LZC also depends on the series length. In particular, in this study, analytical expressions have been derived for the ApEn, SampEn, and CcEn of auto-regressive (AR) models. It should be mentioned that AR models have been employed for maximum entropy spectral estimation for many years. The feasibility of parametric estimates of these entropy measures has been studied on both synthetic series and real data. In the feasibility study, we examined the agreement between numerical estimates of entropy and estimates obtained from a number of realizations of the AR model using Monte Carlo simulations; this agreement or disagreement provides information about the nonlinearity, nonstationarity, or non-Gaussianity present in the series, and in some classification problems the probability of agreement or disagreement has proved to be one of the most relevant features. After the feasibility study of the parametric entropy estimates, the entropy and related measures have been applied to heart rate and arterial blood pressure variability analysis. The entropy and related features have proved particularly relevant for developing sleep classification, handwritten character recognition, and physical activity recognition systems. The novel feature extraction methods researched in this thesis give good classification or recognition accuracy, in many cases superior to the features reported in the literature of the application domains concerned, and often at lower computational cost.
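    Since sample entropy is central to the discussion, a direct (non-parametric) estimator is sketched below. It follows the common definition with embedding dimension m = 2 and tolerance r set to 0.2 times the standard deviation, and is distinct from the thesis's parametric AR-based estimates; its reliance on counting template matches in the observed data is exactly what makes the direct estimate sensitive to series length.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Direct SampEn(m, r) estimator for a 1-D series (O(n^2) in
    time and memory, fine for short series). Parameter defaults
    follow common practice."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()              # usual tolerance choice

    def match_pairs(dim):
        # n - m overlapping template vectors of length `dim`.
        emb = np.array([x[i:i + dim] for i in range(n - m)])
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
        # Number of distinct pairs (i != j) within tolerance r.
        return (np.count_nonzero(d <= r) - len(emb)) / 2

    b = match_pairs(m)                 # template matches at length m
    a = match_pairs(m + 1)             # template matches at length m + 1
    # SampEn = -ln(A/B); undefined when no matches are found, which
    # is precisely the short-series problem discussed above.
    return -np.log(a / b)
```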

    EVALITA Evaluation of NLP and Speech Tools for Italian - December 17th, 2020

    Welcome to EVALITA 2020! EVALITA is the evaluation campaign of Natural Language Processing and Speech Tools for Italian. EVALITA is an initiative of the Italian Association for Computational Linguistics (AILC, http://www.ai-lc.it) and is endorsed by the Italian Association for Artificial Intelligence (AIxIA, http://www.aixia.it) and the Italian Association for Speech Sciences (AISV, http://www.aisv.it).