35 research outputs found

    Tab this Folder of Documents: Page Stream Segmentation of Business Documents

    Get PDF
    In the midst of digital transformation, automatically understanding the structure and composition of scanned documents is important for correct indexing, archiving, and processing. In many organizations, different types of documents are usually scanned together in folders, so it is essential to automate the task of segmenting folders into individual documents, which then proceed to further analysis tailored to specific document types. This task is known as Page Stream Segmentation (PSS). In this paper, we propose a deep learning solution that determines whether or not a page is a breaking point, given a sequence of scanned pages (a folder) as input. We also provide a dataset called TABME (TAB this folder of docuMEnts) generated specifically for this task. Our proposed architecture combines LayoutLM and ResNet to exploit both the textual and visual features of the document pages and achieves an F1 score of 0.953. The dataset and code used to run the experiments in this paper are available at https://github.com/aldolipani/TABME.
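    The abstract above frames PSS as predicting, per page, whether it is a breaking point. The model itself is not reproduced here; the following is a minimal sketch (all names and the boolean encoding are illustrative, not from the paper) of how such per-page predictions turn a flat page stream back into separate documents:

```python
# Sketch: turning per-page break-point predictions into document segments.
# breaks[i] is True when page i is predicted to start a new document.

def segment_stream(pages, breaks):
    """Group a flat list of pages into documents at predicted break points."""
    documents = []
    current = []
    for page, is_break in zip(pages, breaks):
        if is_break and current:  # a break point closes the current document
            documents.append(current)
            current = []
        current.append(page)
    if current:
        documents.append(current)
    return documents

pages = ["p1", "p2", "p3", "p4", "p5"]
breaks = [True, False, True, False, False]
print(segment_stream(pages, breaks))  # [['p1', 'p2'], ['p3', 'p4', 'p5']]
```

    Once the stream is segmented this way, each recovered document can be routed to type-specific downstream analysis, as the abstract describes.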

    Use of language models for document stream segmentation

    Get PDF
    Page stream segmentation into single documents is a very common task in companies and administrations when processing their incoming mail. It is not straightforward, because the boundaries between documents are not always obvious, and it is not always easy to find common features between the pages of the same document. In this paper, we compare existing segmentation models and propose a new one based on GRUs (Gated Recurrent Units) and an attention mechanism, named AGRU. This model uses the text content of the previous page and the current page to determine whether both pages belong to the same document. Thanks to its attention mechanism, the model is capable of recognizing words that characterize the first page of a document. Training and evaluation are carried out on two datasets: Tobacco-800 and READ-Corpus. The former is a public dataset on which our model reaches an F1 score of 90%; the latter is private, and on it our model reaches an F1 score of 96%.
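    The pairwise scheme described above — score the previous and current page, then decide whether the current page starts a new document — can be sketched as follows. The Jaccard word-overlap score and the threshold value below are stand-ins for the AGRU model's learned score, not the paper's method:

```python
# Sketch of pairwise page-stream segmentation: a score compares the text of
# the previous and current page; a low score means a new document starts.

def same_document_score(prev_text, curr_text):
    """Jaccard word overlap as a toy stand-in for a learned similarity."""
    prev_words, curr_words = set(prev_text.split()), set(curr_text.split())
    if not prev_words or not curr_words:
        return 0.0
    return len(prev_words & curr_words) / len(prev_words | curr_words)

def find_breaks(page_texts, threshold=0.2):
    """Return indices of pages predicted to start a new document."""
    breaks = [0]  # the first page always starts a document
    for i in range(1, len(page_texts)):
        if same_document_score(page_texts[i - 1], page_texts[i]) < threshold:
            breaks.append(i)
    return breaks

pages = ["invoice total due amount", "total amount due date",
         "dear sir regarding your claim"]
print(find_breaks(pages))  # [0, 2]
```

    An attention mechanism, as in AGRU, would replace this bag-of-words overlap with learned weights that emphasise first-page cue words ("Dear", "Invoice", letterheads) when scoring the pair.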

    Classification et apprentissage actif à partir d'un flux de données évolutif en présence d'étiquetage incertain (Classification and active learning from an evolving data stream in the presence of uncertain labelling)

    Get PDF
    This thesis focuses on machine learning for data classification. To reduce the labelling cost, active learning queries a human labeller for the class labels of only the most important instances. We propose a new uncertainty measure that characterizes the importance of data and improves the performance of active learning compared to existing uncertainty measures. This measure is the smallest instance weight that, when associated with a new data point, makes the classifier change its prediction for that point. We then consider a setting where the data arrives continuously from a stream of infinite length. We propose an adaptive uncertainty threshold that is suitable for active learning in the streaming setting and achieves a compromise between the number of classification errors and the number of requested labels. Existing stream-based active learning methods are initialized with some labelled instances that cover all possible classes. However, in many applications, the evolving nature of the stream means that new classes can appear at any time. We propose an effective method for the active detection of novel classes in a multi-class data stream. This method incrementally maintains the region of feature space covered by the known classes, and detects instances that are mutually similar but external to that region as novel classes. Finally, it is often difficult to obtain completely reliable labels, because the human labeller is subject to labelling errors that reduce the performance of the learned classifier. We address this problem by introducing a measure that reflects the degree of disagreement between the manually assigned class and the predicted class, and a new informativeness measure that expresses the need for a mislabelled instance to be re-labelled by an alternative labeller.
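    The stream-based part of the thesis — query a label when uncertainty is high, and adapt the threshold to balance labelling cost against errors — can be sketched as below. The margin-based uncertainty and the multiplicative threshold update are illustrative assumptions, not the thesis's exact formulas:

```python
# Sketch of stream-based active learning with an adaptive uncertainty
# threshold: query the labeller when the classifier's margin is small,
# then tighten or loosen the threshold to manage the labelling budget.

def margin(w, x):
    """Unsigned decision margin of a linear classifier w on instance x."""
    return abs(sum(wi * xi for wi, xi in zip(w, x)))

def stream_active_learning(w, stream, theta=0.5, step=0.1):
    """Return indices of stream instances whose labels would be queried."""
    queried = []
    for i, x in enumerate(stream):
        if margin(w, x) < theta:   # uncertain: ask the labeller
            queried.append(i)
            theta *= (1 - step)    # budget spent: become more selective
        else:
            theta *= (1 + step)    # confident: loosen the threshold
    return queried

w = [1.0, -1.0]
stream = [[0.2, 0.1], [2.0, -1.0], [0.3, 0.35], [1.5, 0.2]]
print(stream_active_learning(w, stream))  # [0, 2]
```

    The thesis's own uncertainty measure is subtler: it is the smallest weight a new instance would need for the (incrementally retrained) classifier to flip its prediction, which reduces to a small margin only in the linear special case sketched here.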

    October 18, 2003 (Pages 5163-5284)

    Get PDF

    Vereinheitlichte Anfrageverarbeitung in heterogenen und verteilten Multimediadatenbanken (Unified Query Processing in Heterogeneous and Distributed Multimedia Databases)

    Get PDF
    Multimedia retrieval is an essential part of today's world, both in industrial domains such as medical imaging and in the private sector, as witnessed by the activity on manifold social media platforms. This trend has led to a huge environment of multimedia information retrieval services offering resources for almost any user request. The encompassed data is generally retrievable through (proprietary) APIs and query languages, but unified access is not available because of interoperability issues between these services. This thesis therefore focuses on two application scenarios: a medical retrieval system supporting a radiologist's workflow, and an interoperable image retrieval service interconnecting diverse data silos. The scientific contribution of this dissertation is split into three parts. The first part addresses the metadata interoperability issue; here, major contributions to a community-driven international standardization effort are proposed, leading to the specification of an API and an ontology that enable unified annotation and retrieval of media resources. The second part presents a metasearch engine designed for unified retrieval in distributed and heterogeneous multimedia retrieval environments; this engine can be operated in a federated as well as an autonomous manner inside the aforementioned application scenarios. The third part ensures efficient retrieval by integrating optimization techniques for multimedia retrieval into the metasearch engine's overall query execution process.
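    The core metasearch idea — fan a query out to heterogeneous backends whose scores are not comparable, then merge them into one ranked list — can be sketched as follows. The two mock services, the min-max score normalisation, and the CombMAX-style merge are illustrative assumptions, not the dissertation's actual architecture:

```python
# Sketch of federated metasearch: query several backends, normalise their
# incompatible score scales, and merge results into one ranking.

def normalise(results):
    """Map backend-specific scores into [0, 1] via min-max scaling."""
    scores = [s for _, s in results]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0
    return [(doc, (s - lo) / span) for doc, s in results]

def metasearch(query, backends):
    """Fan the query out and merge by taking each document's best score."""
    merged = {}
    for backend in backends:
        for doc, score in normalise(backend(query)):
            merged[doc] = max(merged.get(doc, 0.0), score)
    return sorted(merged.items(), key=lambda kv: -kv[1])

def image_service(q):    # mock backend scoring on an arbitrary scale
    return [("img1", 12.0), ("img2", 4.0)]

def medical_service(q):  # mock backend scoring in [0, 1]
    return [("scan1", 0.9), ("img2", 0.3)]

print(metasearch("lung nodule", [image_service, medical_service]))
```

    Normalisation is the step that makes heterogeneous sources mergeable at all; the optimization techniques mentioned in the third part would additionally decide which backends to contact and in what order.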

    European Language Grid

    Get PDF
    This open access book provides an in-depth description of the EU project European Language Grid (ELG). Its motivation lies in the fact that Europe is a multilingual society with 24 official European Union Member State languages and dozens of additional languages, including regional and minority languages. The only meaningful way to enable multilingualism and to benefit from this rich linguistic heritage is through Language Technologies (LT), including Natural Language Processing (NLP), Natural Language Understanding (NLU), Speech Technologies, and language-centric Artificial Intelligence (AI) applications. The European Language Grid provides a single umbrella platform for the European LT community, spanning research and industry, and effectively functions as a virtual home, marketplace, showroom, and deployment centre for all services, tools, resources, products, and organisations active in the field. Today the ELG cloud platform already offers access to more than 13,000 language processing tools and language resources, and it enables all stakeholders to deposit, upload, and deploy their technologies and datasets. The platform also supports the long-term objective of establishing digital language equality in Europe by 2030: a situation in which all European languages enjoy equal technological support. This is the very first book dedicated to Language Technology and NLP platforms; cloud technology has only recently matured enough to make the development of a platform like ELG feasible on a larger scale. The book comprehensively describes the results of the ELG project. Following an introduction, the content is divided into four main parts: (I) ELG Cloud Platform; (II) ELG Inventory of Technologies and Resources; (III) ELG Community and Initiative; and (IV) ELG Open Calls and Pilot Projects.
    Automatic keyword extraction for a partial search engine index

    Get PDF
    Full-text search engines play a critical role in many enterprise applications, where the quantity and complexity of the information are overwhelming. Promptly finding documents that contain relevant information for pressing questions is a necessity for efficient operation. This is especially the case for financial and legal teams executing Mergers and Acquisitions deals. The goal of the thesis is to provide search services for such teams without storing the sensitive documents involved, thereby minimising the risk of potential data leaks. A literature review of related methods and concepts is presented. As search engine technologies that use encrypted indices in commercial applications are still in their early stages, the solution proposed in the thesis is partial indexing by keyword extraction. A cosine similarity-based evaluation was used to measure the performance difference between the keyword-based partial index and the complete index. The partial indices were constructed using unsupervised keyword extraction methods based on term frequency, document graphs, and topic modelling: the frequency-based methods were term frequency, TF-IDF, and YAKE!; the graph-based method was TextRank; and the topic modelling-based methods were NMF, LDA, and LSI. The methods were evaluated by running 51 reference queries on the LEDGAR data set, which contains 60,540 contracts. The results show that, using only five keywords per document from the TF-IDF or YAKE! methods, the best-matching documents in the result lists have a cosine similarity of 0.7 on average. This value is reasonably high, particularly considering the small number of keywords. The topic modelling-based methods performed poorly because they were too general, while the term frequency and TextRank methods were mediocre.
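    The TF-IDF variant of the partial-index idea — keep only each document's top-k TF-IDF terms, then compare the pruned vector with the full one by cosine similarity — can be sketched as below. The toy three-document corpus stands in for LEDGAR, and the plain tf·log(N/df) weighting is one common TF-IDF formulation, not necessarily the exact one used in the thesis:

```python
# Sketch: TF-IDF keyword selection for a partial index, plus the cosine
# similarity between the pruned (partial) and complete document vectors.
import math
from collections import Counter

def tfidf_vectors(docs):
    """One sparse TF-IDF vector (dict) per document, weight tf * log(N/df)."""
    tokenised = [doc.lower().split() for doc in docs]
    df = Counter(w for toks in tokenised for w in set(toks))
    n = len(docs)
    return [{w: c / len(toks) * math.log(n / df[w])
             for w, c in Counter(toks).items()}
            for toks in tokenised]

def top_keywords(vec, k=5):
    """Keep only the k highest-weighted terms (the partial index entry)."""
    return dict(sorted(vec.items(), key=lambda kv: -kv[1])[:k])

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["the seller shall deliver the goods on time",
        "the buyer shall pay the purchase price",
        "confidential information shall not be disclosed"]
full = tfidf_vectors(docs)
partial = [top_keywords(v, k=5) for v in full]
for f, p in zip(full, partial):
    print(round(cosine(p, f), 3))
```

    The evaluation in the thesis follows the same pattern at scale: a similarity near 1 means the five kept keywords preserve most of a document's full-index representation, which is what was observed for TF-IDF and YAKE! on LEDGAR.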