
    Thesaurus-aided learning for rule-based categorization of OCR texts

    The question posed in this thesis is whether the effectiveness of the rule-based approach to automatic text categorization on OCR collections can be improved by using domain-specific thesauri. A rule-based categorizer was constructed, consisting of a C++ program called C-KANT which consults documents and creates a program that can be executed by the CLIPS expert system shell. A series of tests using domain-specific thesauri revealed that a query expansion approach to rule-based automatic text categorization with domain-dependent thesauri does not improve the categorization of OCR texts. Although some improvement to categorization could be made using rules over a mixture of thesauri, the improvements were not significant.
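
    As a concrete illustration of the query-expansion idea tested here, the following minimal Python sketch expands a keyword rule's trigger terms with thesaurus synonyms before matching against a document; the rule and thesaurus contents are invented placeholders, not the thesis's actual C-KANT/CLIPS pipeline.

```python
# Minimal sketch of thesaurus-based query expansion for a keyword rule,
# illustrating the idea tested in the thesis; the thesaurus and rule
# contents are invented placeholders, not the thesis's actual data.

def expand_terms(terms, thesaurus):
    """Expand each rule term with its thesaurus synonyms (if any)."""
    expanded = set(terms)
    for term in terms:
        expanded.update(thesaurus.get(term, []))
    return expanded

def categorize(text, rules, thesaurus):
    """Assign every category whose (expanded) trigger terms occur in the text."""
    tokens = set(text.lower().split())
    labels = []
    for category, terms in rules.items():
        if expand_terms(terms, thesaurus) & tokens:
            labels.append(category)
    return labels

rules = {"aviation": {"aircraft", "airfield"}}
thesaurus = {"aircraft": {"airplane", "aeroplane"}}  # domain-specific synonyms
print(categorize("the airplane landed safely", rules, thesaurus))  # ['aviation']
```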

    An improved MultiAnts-AODV routing protocol for ad hoc wireless networks

    Compared to conventional table-driven and on-demand routing protocols, a hybrid routing protocol [71], which uses mobile agents and reactive route discovery, offered a more realistic solution to this problem. However, the mobile agents were not fully exploited in that protocol. In this thesis research, we propose an improved MultiAnts-AODV routing protocol based on Ant-AODV. The goal of our design is to reduce end-to-end delay and route discovery latency. To achieve better performance, the communication scheme among the agents is strengthened. We also present an improved navigation algorithm for mobile agents to update the routing tables more efficiently, and we extend the routing table to reduce routing discovery latency in case of link failures. Simulation-based comparisons among several navigation algorithms are also presented.
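
    To make the agent-based routing-table maintenance concrete, here is a toy Python sketch in the spirit of Ant-AODV-style protocols, where an ant agent deposits route information at each node it visits; the data structures and update rule are illustrative assumptions, not the thesis's actual algorithm.

```python
# Toy sketch of ant-agent routing-table maintenance in the spirit of
# Ant-AODV-style protocols; node/agent structures are invented for
# illustration, not the protocol specification from the thesis.

class Node:
    def __init__(self, name):
        self.name = name
        # destination -> (next_hop, hop_count, freshness timestamp)
        self.routing_table = {}

class Ant:
    """A mobile agent that records its path and deposits route info."""
    def __init__(self):
        self.path = []  # nodes visited so far, oldest first

    def visit(self, node, time):
        # Every previously visited node is now reachable from `node`
        # by walking the ant's path backwards.
        for hops, origin in enumerate(reversed(self.path), start=1):
            entry = node.routing_table.get(origin.name)
            # Update if the route is new, shorter, or fresher.
            if entry is None or hops < entry[1] or time > entry[2]:
                next_hop = self.path[-1].name
                node.routing_table[origin.name] = (next_hop, hops, time)
        self.path.append(node)

a, b, c = Node("A"), Node("B"), Node("C")
ant = Ant()
for t, node in enumerate([a, b, c]):
    ant.visit(node, time=t)
print(c.routing_table)  # routes back toward B and A via the previous hop
```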

    The Archive Query Log: Mining Millions of Search Result Pages of Hundreds of Search Engines from 25 Years of Web Archives

    The Archive Query Log (AQL) is a previously unused, comprehensive query log collected at the Internet Archive over the last 25 years. Its first version includes 356 million queries, 166 million search result pages, and 1.7 billion search results across 550 search providers. Although many query logs have been studied in the literature, the search providers that own them generally do not publish their logs, to protect user privacy and vital business data. Of the few query logs publicly available, none combines size, scope, and diversity. The AQL is the first to do so, enabling research on new retrieval models and (diachronic) search engine analyses. Provided in a privacy-preserving manner, it promotes open research as well as more transparency and accountability in the search industry. (SIGIR 2023 resource paper, 13 pages.)
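
    A typical first step when working with such a log might look like the following Python sketch, which counts queries per search provider for one year; the JSONL layout and the field names (query, timestamp, search_provider) are assumptions for illustration, not the AQL's actual schema.

```python
# Sketch of a first analysis pass over a large query log, assuming JSONL
# records with `timestamp` and `search_provider` fields; these field
# names are assumptions, not the AQL's documented schema.

import json
from collections import Counter

def provider_counts(path, year):
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record["timestamp"].startswith(str(year)):
                counts[record["search_provider"]] += 1
    return counts

# counts = provider_counts("aql-sample.jsonl", 2020)  # hypothetical file
# print(counts.most_common(10))
```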

    Nomenclature and Benchmarking Models of Text Classification Models: Contemporary Affirmation of the Recent Literature

    In this paper we present automated text classification in text mining, which is gaining greater relevance in various fields every day. Text mining primarily focuses on developing text classification systems able to automatically classify huge volumes of documents comprising unstructured and semi-structured data. The process of retrieval, classification, and summarization simplifies the extraction of information by the user. The search for the ideal text classifier, feature generator, and dominant feature selection technique has received attention from researchers in diverse areas such as information retrieval, machine learning, and the theory of algorithms. To automatically classify and discover patterns in different types of documents, techniques like Machine Learning, Natural Language Processing (NLP), and Data Mining are applied together. In this paper we review some effective feature selection research and present the results in a table.
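
    As an example of the kind of feature selection technique surveyed in such work, the following Python sketch scores bag-of-words features with the chi-squared test in scikit-learn; the toy corpus is invented and stands in for a real document collection.

```python
# A common feature-selection baseline of the kind surveyed in such papers:
# chi-squared scoring of bag-of-words features with scikit-learn. The toy
# corpus is invented; this illustrates the technique, not the paper's setup.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

docs = ["cheap meds online", "meeting agenda attached",
        "win cash now", "quarterly report attached"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

vec = CountVectorizer()
X = vec.fit_transform(docs)

selector = SelectKBest(chi2, k=3)  # keep the 3 highest-scoring terms
selector.fit(X, labels)
print(vec.get_feature_names_out()[selector.get_support()])
```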

    Living analytics methods for the social web

    [no abstract]

    Users, Queries, and Bad Abandonment in Web Search

    After a user submits a query and receives a list of search results, the user may abandon the query without clicking on any of the results. A bad query abandonment occurs when a searcher abandons the SERP because they were dissatisfied with the quality of the search results, often leading them to reformulate the query in the hope of receiving better results. As we move closer to understanding when and why users abandon their queries under different qualities of search results, we move toward an overall understanding of user behavior with search engines. In this thesis, we describe three user studies investigating bad query abandonment. First, we report on a study of the rate and time at which users abandon their queries at different levels of search quality. We had users search for answers to questions, but showed them manipulated SERPs containing one relevant document placed at different ranks. We show that as the quality of search results decreases, the probability of abandonment increases, and that users decide to abandon their queries quickly. Users make their decisions fast, but not all users are the same: there appear to be two types of users who behave differently, with one group more likely to abandon their query and quicker to find answers than the other. Second, we describe an eye-tracking experiment focused on understanding possible causes of users' willingness to examine SERPs and what motivates them to continue or discontinue their examination. Using eye-tracking data, we found that a user's decision to abandon a query is best explained by their examination pattern not including a relevant search result. If a user sees a relevant result, they are very likely to click it. However, users' examination of results differs and may be influenced by other factors. The key factors we found are the rank of the search results, the user type, and the query quality. For example, we show that regardless of where the relevant document is placed in the SERP, the type of query submitted affects examination: if a user enters an ambiguous query, they are likely to examine fewer results. Third, we show how the nature of non-relevant material affects users' willingness to further explore a ranked list of search results. We constructed and showed participants manipulated SERPs with different types of non-relevant documents. We found that users' examination of search results and time to query abandonment are influenced by the coherence and type of non-relevant documents included in the SERP. For SERPs whose results are coherently off-topic, users spend the least time before abandoning and are less likely to request more results. The time spent increases as SERP quality improves, and users are more likely to request more results when the SERP contains diversified non-relevant results spanning multiple subtopics.
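
    A hedged sketch of the kind of measurement underlying such studies: given interaction logs annotated with the rank of the single relevant result, estimate the abandonment probability per rank. The log format below is invented for illustration.

```python
# Estimating the query-abandonment rate at each rank of the (single)
# relevant result; the session records are invented toy data, not the
# thesis's study logs.

from collections import defaultdict

# Each record: (rank of the relevant document on the SERP, abandoned?)
sessions = [(1, False), (1, False), (5, True), (5, False),
            (9, True), (9, True), (9, False)]

totals, abandoned = defaultdict(int), defaultdict(int)
for rank, was_abandoned in sessions:
    totals[rank] += 1
    abandoned[rank] += was_abandoned

for rank in sorted(totals):
    print(f"relevant@{rank}: P(abandon) ~ {abandoned[rank] / totals[rank]:.2f}")
```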

    Methods for redescription mining

    In scientific investigations, data are often of different natures. For instance, they might originate from distinct sources or be cast in separate terminologies. In order to gain insight into the phenomenon of interest, a natural task is to identify the correspondences that exist between these different aspects. This is the motivating idea of redescription mining, the data analysis task studied in this thesis. Redescription mining aims to find distinct common characterizations of the same objects and, vice versa, to identify sets of objects that admit multiple shared descriptions. A practical example in biology consists in finding geographical areas that admit two characterizations, one in terms of their climatic profile and one in terms of the occupying species. Discovering such redescriptions can help us better understand the influence of climate on species distribution. Besides biology, applications of redescription mining can be envisaged in medicine or sociology, among other fields. Previously, redescription mining was restricted to propositional queries over Boolean attributes. However, many conditions, like the aforementioned climate, cannot be expressed naturally in this limited formalism. In this thesis, we consider more general query languages and propose algorithms to find the corresponding redescriptions, making the task relevant to a broader range of domains and problems. Specifically, we start by extending redescription mining to non-Boolean attributes. In other words, we propose an algorithm to handle nominal and real-valued attributes natively. We then extend redescription mining to the relational setting, where the aim is to find corresponding connection patterns that relate almost the same object tuples in a network. We also study approaches for selecting high-quality redescriptions to be output by the mining process. The first approach relies on an interface for mining and visualizing redescriptions interactively, and allows the analyst to tailor the selection of results to their needs. The second approach, rooted in information theory, is a compression-based method for mining small sets of associations from two-view datasets. In summary, we take redescription mining outside the Boolean world and show its potential as a powerful exploratory method relevant in a broad range of domains.
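
    To make the task concrete, the following Python sketch evaluates a candidate redescription with the Jaccard similarity of the two queries' support sets, a quality measure commonly used in redescription mining; the climate/species toy data mirror the biology example above but are invented.

```python
# Minimal sketch of the core quality measure commonly used in redescription
# mining: the Jaccard similarity of the two queries' support sets (the
# objects each query selects). Data and queries here are toy placeholders.

def support(data, predicate):
    """Set of object ids whose attribute record satisfies the predicate."""
    return {oid for oid, record in data.items() if predicate(record)}

def jaccard(s1, s2):
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0

# Two "views" over the same geographic areas: climate and species.
climate = {1: {"t_max": 18.0}, 2: {"t_max": 21.5}, 3: {"t_max": 29.0}}
species = {1: {"moose": True}, 2: {"moose": True}, 3: {"moose": False}}

q1 = support(climate, lambda r: r["t_max"] <= 22)  # cool-summer areas
q2 = support(species, lambda r: r["moose"])        # areas with moose
print(jaccard(q1, q2))  # 1.0: the two queries redescribe the same areas
```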

    Machine learning for managing structured and semi-structured data

    As the digitalization of the private, commercial, and public sectors advances rapidly, an increasing amount of data is becoming available. In order to gain insights or knowledge from these enormous amounts of raw data, deep analysis is essential. The immense volume requires highly automated processes with minimal manual interaction. In recent years, machine learning methods have taken on a central role in this task. In addition to the individual data points, their interrelationships often play a decisive role, e.g., whether two patients are related to each other or whether they are treated by the same physician. Hence, relational learning is an important branch of research, which studies how to harness this explicitly available structural information between different data points. Recently, graph neural networks have gained importance. These can be considered an extension of convolutional neural networks from regular grids to general (irregular) graphs. Knowledge graphs play an essential role in representing facts about entities in a machine-readable way. While great efforts are made to store as many facts as possible in these graphs, they often remain incomplete, i.e., true facts are missing. Manual verification and expansion of the graphs is becoming increasingly difficult due to the large volume of data and must therefore be assisted or substituted by automated procedures which predict missing facts. The field of knowledge graph completion can be roughly divided into two categories: Link Prediction and Entity Alignment. In Link Prediction, machine learning models are trained to predict unknown facts between entities based on the known facts. Entity Alignment aims at identifying shared entities between graphs in order to link several such knowledge graphs based on some provided seed alignment pairs. In this thesis, we present important advances in the field of knowledge graph completion. For Entity Alignment, we show how to reduce the number of required seed alignments while maintaining performance by means of novel active learning techniques. We also discuss the power of textual features and show that graph-neural-network-based methods have difficulties with noisy alignment data. For Link Prediction, we demonstrate how to improve the prediction for entities unknown at training time by exploiting additional metadata on individual statements, often available in modern graphs. Supported by results from a large-scale experimental study, we present an analysis of the effect of individual components of machine learning models, e.g., the interaction function or the loss criterion, on the task of link prediction. We also introduce a software library that simplifies the implementation and study of such components and makes them accessible to a wide research community, ranging from relational learning researchers to applied fields such as the life sciences. Finally, we propose a novel metric for evaluating ranking results, as used in both completion tasks. It allows for easier interpretation and comparison, especially in cases with different numbers of ranking candidates, as encountered in the de facto standard evaluation protocols for both tasks.
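
    As a sketch of what such a size-adjusted ranking metric can look like, the following Python snippet normalizes observed ranks by their expectation under a random ordering, so that scores become comparable across candidate sets of different sizes; this follows the adjusted-mean-rank idea, and the thesis's exact definition may differ.

```python
# Hedged sketch of a size-adjusted rank metric in the spirit described:
# normalize observed ranks by their expectation under random ordering,
# so scores are comparable across different candidate-set sizes. This
# follows the adjusted-mean-rank idea; the thesis's exact definition
# may differ.

def adjusted_mean_rank_index(ranks_and_sizes):
    """1 = perfect ranking, 0 = random baseline, < 0 = worse than random."""
    num = sum(rank - 1 for rank, _ in ranks_and_sizes)
    # Expected rank of a random candidate among n candidates is (n + 1) / 2.
    den = sum((n + 1) / 2 - 1 for _, n in ranks_and_sizes)
    return 1.0 - num / den

# (rank of the true entity, number of candidates) per test triple
print(adjusted_mean_rank_index([(1, 100), (3, 10_000)]))       # close to 1
print(adjusted_mean_rank_index([(50, 100), (5_000, 10_000)]))  # ~ 0 (random)
```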

    On Term Selection Techniques for Patent Prior Art Search

    A patent is a set of exclusive rights granted to an inventor to protect an invention for a limited period of time. Patent prior art search involves finding previously granted patents, scientific articles, product descriptions, or any other published work that may be relevant to a new patent application. Many well-known information retrieval (IR) techniques (e.g., typical query expansion methods), which are proven effective for ad hoc search, are unsuccessful for patent prior art search. In this thesis, we investigate the reasons why generic IR techniques are not effective for prior art search on the CLEF-IP test collection. First, we analyse the errors caused by data curation and experimental settings, such as using the International Patent Classification codes assigned to the patent topics to filter the search results. Then, we investigate the influence of term selection on retrieval performance on the CLEF-IP prior art test collection, starting from the description section of the reference patent and using language model (LM) and BM25 scoring functions. We find that an oracular relevance feedback system, which extracts terms from the judged relevant documents, far outperforms the baseline (0.48 vs. 0.11 mean average precision, MAP) and performs twice as well as the best participant in CLEF-IP 2010 (0.48 vs. 0.22). We find a very clear threshold on the term selection value to use when choosing terms. We also notice that most of the useful feedback terms are actually present in the original query, and hypothesise that the baseline system can be substantially improved by removing negative query terms. We try four simple automated approaches to identify negative terms for query reduction, but are unable to improve on the baseline performance with any of them. However, we show that a simple, minimal-feedback interactive approach, where terms are selected from only the first retrieved relevant document, outperforms the best result from CLEF-IP 2010, suggesting the promise of interactive methods for term selection in patent prior art search.
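
    To illustrate feedback-based term selection of the kind examined here, the following Python sketch ranks the terms of one feedback document by a smoothed TF-IDF score against the collection and keeps the top candidates as query terms; the documents and the scoring details are illustrative assumptions, not the thesis's method.

```python
# Minimal sketch of feedback-based term selection: rank a feedback
# document's terms by TF-IDF against the collection and keep the top-k
# as query terms. The documents and scoring details are toy assumptions.

import math
from collections import Counter

collection = ["rotor blade assembly for wind turbine",
              "method for coating a turbine blade",
              "database index compression method"]
feedback_doc = "turbine blade with cooling channel in the rotor"

def idf(term, docs):
    df = sum(term in doc.split() for doc in docs)
    return math.log((len(docs) + 1) / (df + 1)) + 1  # smoothed IDF

tf = Counter(feedback_doc.split())
scores = {t: f * idf(t, collection) for t, f in tf.items()}
top_terms = sorted(scores, key=scores.get, reverse=True)[:5]
print(top_terms)  # candidate expansion terms for the prior-art query
```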