762 research outputs found

    A complete document analysis and recognition system for GNU/Linux

    Regular OCR engines simply "read" an image without considering its structure or layout. A document's layout is a very important matter in the understanding of a document. Hence, using OCR engines alone is not enough to faithfully convert an image of a document into an editable format. Document Analysis and Recognition (DAR) encompasses the task of recognizing a document's structure, which, combined with an OCR engine, can result in a faithful conversion of a document into an editable format. Such systems exist as commercial applications with no real Free Software equivalent nowadays, and none are available for the GNU/Linux operating system. The work described in this report attempts to answer this problem by offering a solution that combines only Free Software components and is comparable, even at its early stage, to available commercial solutions.
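As a minimal illustration of why layout matters beyond raw OCR, the sketch below orders recognized text blocks from a two-column page into logical reading order; a naive top-to-bottom read would interleave the columns. The `reading_order` helper and the block coordinates are invented for illustration and are not part of the system described.

```python
def reading_order(blocks, page_width, n_columns=2):
    """Sort OCR blocks (x, y, text) into column-major reading order:
    assign each block to a column by its x coordinate, then read each
    column top to bottom."""
    col_width = page_width / n_columns

    def key(block):
        x, y, _text = block
        return (int(x // col_width), y)

    return [text for _x, _y, text in sorted(blocks, key=key)]

# Toy two-column page: naive y-order would alternate left/right.
blocks = [
    (40, 100, "Left column, first paragraph."),
    (340, 100, "Right column, first paragraph."),
    (40, 300, "Left column, second paragraph."),
    (340, 300, "Right column, second paragraph."),
]
print(reading_order(blocks, page_width=600))
```

Real layout analysis must also handle headers, footers, and non-rectangular regions, but even this simple column split shows structure recovery that an OCR engine alone does not provide.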

    Feature Type Analysis in Automated Genre Classification

    In this paper, we compare classifiers based on language-model, image, and stylistic features for automated genre classification. The majority of previous studies in genre classification have created models based on an amalgamated representation of a document using a multitude of features. In these models, the inseparable roles of different features make it difficult to determine a means of improving the classifier when it exhibits poor performance in detecting selected genres. By independently modeling and comparing classifiers based on features of three types, describing visual, stylistic, and topical properties, we demonstrate that different genres have distinctive feature strengths.
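The core idea above, training one classifier per feature type rather than one model over concatenated features, can be sketched as follows. The nearest-centroid classifier, the genre labels, and the toy feature vectors are all invented for illustration; the paper's actual models and features are not reproduced here.

```python
def centroid(vectors):
    """Mean feature vector of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid_classifier(train):
    """Build a classifier from {genre: [feature vectors]} for ONE feature
    type, so each type's strength per genre can be measured separately."""
    cents = {genre: centroid(vs) for genre, vs in train.items()}

    def classify(x):
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(x, c))
        return min(cents, key=lambda g: dist(cents[g]))

    return classify

# Toy "visual" features for two genres; the same constructor would be
# reused for stylistic and topical feature sets, yielding three
# independently comparable classifiers.
visual = {"paper": [[0.9, 0.1], [0.8, 0.2]], "brochure": [[0.2, 0.8], [0.3, 0.7]]}
clf_visual = nearest_centroid_classifier(visual)
print(clf_visual([0.85, 0.15]))  # -> paper
```

Because each feature type gets its own model, a drop in accuracy for one genre can be traced to the feature type responsible, which is exactly what an amalgamated model obscures.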

    SIGHT - A Tool for Building Multi-Media Structured-Document Interactive Editing and Formatting Applications

    SIGHT is a tool for building applications that edit and format multi-media structured documents. The media supported include text, line graphics, handwriting, images and audio. These information media are maintained in a single integrated hierarchical database. The document architecture models documents as trees in which nodes can be shared, i.e., as directed acyclic graphs. For each document there is a logical (or abstract) representation tree and one or more physical (or layout) representation trees. A physical representation is the result of applying the formatter to a logical representation. Both trees are separate but share document content data. The physical representation is displayable and printable, but all editing effectively occurs in the logical representation. Any number of document types can be supported. A document type is defined by the node types it can contain, by how these node types can be hierarchically organized, by what each node type can contain and by the format specifications used in formatting the document. SIGHT provides applications with a language to define new document types, a Core Editor, various specialized editors and a formatter. The Core Editor is further subdivided into a generic Tree Editor and a generic Node Editor. Both are not limited by document types but are sensitive to them. The Core Editor is the primary editing system.
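The logical-versus-physical split and the node sharing described above can be sketched in a few lines: a logical tree whose subtrees may be shared (making it a DAG), and a formatter that derives a physical layout view while both representations share the same content objects. The `Node` class and `format_physical` function are illustrative stand-ins, not SIGHT's actual API.

```python
class Node:
    """A logical document node; children may be shared between parents,
    so the structure is a directed acyclic graph, not a strict tree."""
    def __init__(self, ntype, content=None, children=()):
        self.ntype = ntype
        self.content = content
        self.children = list(children)

def format_physical(node, indent=0):
    """A toy 'formatter': derive a physical (layout) rendering from the
    logical tree. Editing happens on Node objects; this view is derived."""
    lines = []
    if node.content is not None:
        lines.append("  " * indent + node.content)
    for child in node.children:
        lines.extend(format_physical(child, indent + 1))
    return lines

# One paragraph node shared by two sections: edited once, rendered twice.
shared = Node("paragraph", "Shared boilerplate paragraph.")
doc = Node("report", children=[
    Node("section", "Introduction", [shared]),
    Node("section", "Appendix", [shared]),  # same object: a shared subtree
])
print("\n".join(format_physical(doc)))
```

Editing `shared.content` would change both rendered occurrences, which is the practical payoff of modeling documents as DAGs rather than plain trees.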

    Advanced document analysis and automatic classification of PDF documents

    This thesis explores the domain of document analysis and document classification within the PDF document environment. The main focus is the creation of a document classification technique which can identify the logical class of a PDF document and so provide the necessary information to document-class-specific algorithms (such as document understanding techniques). The thesis describes a page decomposition technique which is tailored to render the information contained in an unstructured PDF file into a set of blocks. The new technique is based on published research but contains many modifications which enable it to competently analyse the internal document model of PDF documents. A new level of document processing is presented: advanced document analysis. The aim of advanced document analysis is to extract information from the PDF file which can be used to help identify the logical class of that PDF file. A blackboard framework is used in a process of block labelling in which the blocks created by earlier segmentation techniques are classified into one of eight basic categories. The blackboard's knowledge sources are programmed to find recurring patterns amongst the document's blocks and to formulate document-specific heuristics which can be used to tag those blocks. Meaningful document features are found from three information sources: a statistical evaluation of the document's esthetic components, a logic-based evaluation of the labelled document blocks, and an appearance-based evaluation of the labelled document blocks. The features are used to train and test a neural-net classification system which identifies the recurring patterns amongst these features for four basic document classes: newspapers, brochures, forms, and academic documents. In summary, this thesis shows that it is possible to classify a PDF document (which is logically unstructured) into a basic logical document class. This has important ramifications for document processing systems which have traditionally relied upon a priori knowledge of the logical class of the document they are processing.
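A hypothetical sketch of the rule-based block-labelling stage described above: each segmented block is tagged with a basic category using simple heuristics over its attributes. The categories, thresholds, and block attributes here are invented for illustration; they are not the thesis's actual eight categories or its blackboard knowledge sources.

```python
def label_block(block, body_font_size):
    """Tag one segmented block with a basic category using
    document-relative heuristics (all thresholds are illustrative)."""
    size = block["font"]        # font size of the block's text
    n_chars = block["chars"]    # number of characters in the block
    numeric = block["numeric"]  # True if the block is digits only
    if numeric and n_chars <= 4:
        return "page-number"
    if size >= 1.5 * body_font_size:
        return "title"
    if size > body_font_size:
        return "heading"
    if n_chars < 40:
        return "caption"
    return "body-text"

blocks = [
    {"font": 24, "chars": 30, "numeric": False},
    {"font": 10, "chars": 500, "numeric": False},
    {"font": 10, "chars": 2, "numeric": True},
]
print([label_block(b, body_font_size=10) for b in blocks])
# -> ['title', 'body-text', 'page-number']
```

Note that the thresholds are relative to the document's own body font size, mirroring the thesis's point that the heuristics must be document-specific rather than fixed.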

    Automatic document classification and extraction system (ADoCES)

    Document processing is a critical element of office automation. Document image processing begins with the Optical Character Recognition (OCR) phase and continues with complex processing for document classification and extraction. Document classification is a process that classifies an incoming document into a particular predefined document type. Document extraction is a process that extracts information pertinent to the users from the content of a document and assigns the information as the values of the “logical structure” of the document type. Therefore, after document classification and extraction, a paper document is represented in a digital form, called a frame instance, instead of its original image file format. A frame instance is an operable and efficient form that can be processed and manipulated during document filing and retrieval. This dissertation describes a system that supports the complete procedure, which begins with the scanning of the paper document into the system and ends with the output of an effective digital form of the original document. It is a general-purpose system with “learning” ability and can therefore be adapted easily to many application domains. In this dissertation, the “logical closeness” segmentation method is proposed. A novel representation of document layout structure, the Labeled Directed Weighted Graph (LDWG), and a methodology for transforming document segmentation into the LDWG representation are described. To find a match between two LDWGs, string-representation matching is applied first instead of comparing the graphs directly, which reduces the time needed for the comparison. Applying artificial intelligence, the system is able to learn from experience and build samples of LDWGs to represent each document type. In addition, the concept of frame templates is used for representing the document's logical structure. The concept of a Document Type Hierarchy (DTH) is also enhanced to express the hierarchical relation among the logical structures of the documents.
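The string-before-graph matching idea above can be sketched briefly: serialize each labelled layout graph to a canonical string, compare strings first, and fall back to an expensive graph comparison only when the cheap check passes. The serialization scheme below is invented for illustration and is not the dissertation's actual LDWG encoding.

```python
def graph_signature(nodes, edges):
    """Canonical string for a labeled, weighted graph: sorted node
    labels, then sorted (source, weight, target) edge triples."""
    node_part = ",".join(sorted(nodes))
    edge_part = ";".join(f"{a}-{w}-{b}" for a, w, b in sorted(edges))
    return node_part + "|" + edge_part

def maybe_same_layout(g1, g2):
    """Cheap O(n log n) filter: only candidates whose signatures match
    would proceed to a full graph comparison."""
    return graph_signature(*g1) == graph_signature(*g2)

# Toy layout graphs: (node labels, weighted edges).
invoice = (["header", "table", "total"],
           [("header", 1, "table"), ("table", 2, "total")])
letter = (["header", "body", "signature"],
          [("header", 1, "body"), ("body", 1, "signature")])
print(maybe_same_layout(invoice, invoice), maybe_same_layout(invoice, letter))
# -> True False
```

Sorting makes the signature independent of the order in which nodes and edges were recorded, which is what lets string equality stand in for a first-pass structural comparison.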

    FVI-BD: Multiple File Extraction using Fusion Vector Investigation (FVI) in Big Data Hadoop Environment

    The Information Extraction (IE) approach extracts useful data from unstructured and semi-structured data. Big Data, with its rising volume of multidimensional unstructured data, poses new challenges for IE. Traditional IE systems are incapable of appropriately handling this massive flood of unstructured data, and their processing capability must be enhanced because of the volume and variety of Big Data. Existing IE techniques for data preparation, extraction, and transformation, as well as representations of massive amounts of multidimensional unstructured data, must be evaluated in terms of their capabilities and limits. This paper proposes the FVI-BD framework for IoT device Information Extraction in Big Data. The unstructured data is cleaned and integrated using POS tagging, and similarity is found using the LTA method. Features are extracted using TF and IDF. Information is extracted using NLP with WordNet, and classification is done with the FVI algorithm. This research found that big data analytics may be enhanced by extracting document feature terms with synonymous similarity, increasing IE accuracy.
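The TF and IDF feature extraction mentioned above can be shown on a toy corpus. This is a minimal sketch of standard TF-IDF weighting; the corpus, the whitespace tokenizer, and the unsmoothed IDF formula are illustrative choices, not details taken from the FVI-BD pipeline.

```python
import math

def tf_idf(corpus):
    """Per-document TF-IDF weights: term frequency within the document
    times log of (corpus size / number of documents containing the term)."""
    docs = [doc.lower().split() for doc in corpus]
    n = len(docs)
    df = {}  # document frequency per term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    scores = []
    for doc in docs:
        tf = {t: doc.count(t) / len(doc) for t in set(doc)}
        scores.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return scores

corpus = ["big data extraction", "big data analytics", "sensor data extraction"]
weights = tf_idf(corpus)
# "analytics" (1 of 3 docs) outweighs "big" (2 of 3) in the second document.
print(weights[1]["analytics"] > weights[1]["big"])  # -> True
```

Terms that appear in every document (here, "data") receive weight zero under this unsmoothed IDF, which is why production systems usually add smoothing.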

    Collaborative software agents support for the TEXPROS document management system

    This dissertation investigates the use of active rules that are embedded in markup documents. Active rules are used in a markup representation by integrating Collaborative Software Agents with TEXPROS (an abbreviation for TEXt PROcessing System) [Liu and Ng 1996] to create a powerful distributed document management system. Such markup documents with embedded active rules are called Active Documents. For fast retrieval purposes, when we need to generate a customized Internet folder organization, we first define the Folder Organization Query Language (FO-QL) to solve data categorization problems. FO-QL defines the folder organization query process that automatically retrieves links of documents deposited into folders and then constructs a folder organization in either a centralized document repository or multiple distributed document repositories. Traditional documents are stored as static data that do not provide any dynamic capabilities for accessing or interacting with the document environment. The dynamic and distributed nature of both markup data and markup rules does not merely respond to requests for information but intelligently anticipates, adapts, and actively seeks ways to support the computing processes. This feature overcomes the static nature of traditional documents. An Office Automation Definition Language (OADL) with active rules is defined for constructing TEXPROS's dual modeling approach and workflow event representation. Active Documents are such agent-supported OADL documents. With embedded rules and self-describing data features, Active Documents provide the capability for collaborative interaction with software agents. Data transformation and data integration are both data-processing problems, but little research has focused on using markup documents to generate a versatile folder organization. Some of the research merely provides manual browsing in a document repository to find the right document. This browsing is time-consuming and unrealistic, especially across multiple document repositories. With FO-QL, one can create a customized folder organization on demand.
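The outcome FO-QL is described as producing, a customized folder organization built on demand by depositing links to documents rather than copies, can be sketched as a grouping operation over document attributes. The attribute names, the `group_by` specification, and the dictionary-based folder tree are all invented for illustration; FO-QL's actual syntax and semantics are not reproduced here.

```python
def build_folders(documents, group_by):
    """Nest document ids (links, not copies) into folders keyed by
    successive attribute values, e.g. year -> type -> links."""
    root = {}
    for doc in documents:
        node = root
        for attr in group_by:
            node = node.setdefault(doc[attr], {})
        node.setdefault("_links", []).append(doc["id"])
    return root

docs = [
    {"id": "d1", "year": 2003, "type": "memo"},
    {"id": "d2", "year": 2003, "type": "invoice"},
    {"id": "d3", "year": 2004, "type": "memo"},
]
print(build_folders(docs, group_by=["year", "type"]))
```

Because only ids are stored, the same document can appear in several folder organizations built from different `group_by` specifications, which is the "customized, on demand" property the abstract emphasizes.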

    Website Personalization Based on Demographic Data

    This study focuses on website personalization based on users' demographic data. The main demographic data used in this study are age, gender, race and occupation. These data are obtained through a user-profiling technique conducted during the study. The gathered data are analysed to find the relationship between users' demographic data and their preferences for a website design, and these findings are used as a guideline to develop a website that fulfils visitors' needs. The topic chosen was obesity. HCI issues, namely effectiveness and satisfaction, are considered among the important factors in this study. The methodologies used are the website personalization process, the incremental model, a combination of these two methods, and Cascading Style Sheets (CSS), which are discussed in detail in Chapter 3. After that, we discuss the effectiveness and evaluation of the personalization website that has been built. Last but not least, a conclusion presents the respondents' evaluation of the websites.