31 research outputs found

    Eighth Biennial Report : April 2005 – March 2007

    No full text

    Automatic extraction of facts, relations, and entities for web-scale knowledge base population

    Equipping machines with knowledge, through the construction of machine-readable knowledge bases, presents a key asset for semantic search, machine translation, question answering, and other formidable challenges in artificial intelligence. However, human knowledge predominantly resides in books and other natural language text forms. This means that knowledge bases must be extracted and synthesized from natural language text. When the source of text is the Web, extraction methods must cope with ambiguity, noise, scale, and updates. The goal of this dissertation is to develop knowledge base population methods that address the aforementioned characteristics of Web text. The dissertation makes three contributions. The first contribution is a method for mining high-quality facts at scale, through distributed constraint reasoning and a pattern representation model that is robust against noisy patterns. The second contribution is a method for mining a large, comprehensive collection of relation types beyond those commonly found in existing knowledge bases. The third contribution is a method for extracting facts from dynamic Web sources such as news articles and social media, where one of the key challenges is the constant emergence of new entities. All methods have been evaluated through experiments involving Web-scale text collections.
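To make the idea of noise-robust pattern mining concrete, here is a minimal, self-contained sketch (not the dissertation's actual system; the seed facts, patterns, and threshold are invented for illustration): candidate patterns are scored by how often they co-occur with known seed facts, so that spurious patterns contribute little to the extracted knowledge base.

```python
# Toy sketch: confidence-weighted pattern-based fact extraction.
# Seed facts, patterns, and the 0.5 threshold are invented for illustration.
from collections import defaultdict

seed_facts = {("AlbertEinstein", "bornIn", "Ulm")}          # assumed seed knowledge
occurrences = [                                             # (subject, pattern, object) seen in text
    ("AlbertEinstein", "was born in", "Ulm"),
    ("MarieCurie", "was born in", "Warsaw"),
    ("AlbertEinstein", "visited", "Ulm"),                   # noisy pattern
]

def pattern_confidence(occurrences, seed_facts, relation="bornIn"):
    """Fraction of each pattern's occurrences that agree with seed facts."""
    support, total = defaultdict(int), defaultdict(int)
    seeds = {(s, o) for s, r, o in seed_facts if r == relation}
    for s, p, o in occurrences:
        total[p] += 1
        if (s, o) in seeds:
            support[p] += 1
    return {p: support[p] / total[p] for p in total}

def extract_facts(occurrences, conf, relation="bornIn", threshold=0.5):
    """Accept new facts only from patterns whose confidence clears the threshold."""
    return {(s, relation, o) for s, p, o in occurrences if conf.get(p, 0.0) >= threshold}

conf = pattern_confidence(occurrences, seed_facts)
print(extract_facts(occurrences, conf))   # keeps the "was born in" facts, drops "visited"
```

The dissertation's method additionally distributes this reasoning and uses a richer pattern representation; the sketch only captures the confidence-weighting intuition.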

    Seventh Biennial Report : June 2003 - March 2005

    No full text

    Decentralized link analysis in peer-to-peer web search networks

    Analyzing the authority or reputation of entities that are connected by a graph structure, and ranking these entities, is an important issue that arises on the Web, in Web 2.0 communities, and in other applications. The problem is typically addressed by computing the dominant eigenvector of a matrix that is suitably derived from the underlying graph, or by performing a full spectral decomposition of the matrix. Although such analyses could be performed by a centralized server, there are good reasons, such as scalability, privacy, and censorship resistance, for running these computations in a decentralized manner across many peers. A number of approaches exist for speeding up the analysis by partitioning the graph into disjoint fragments. However, such methods are not suitable for a peer-to-peer network, where overlap among the fragments may occur. In addition, peer-to-peer approaches need to account for network characteristics such as peers being unaware of other peers' content, susceptibility to malicious attacks, and network dynamics (so-called churn). In this thesis we make the following major contributions. We present JXP, a decentralized algorithm for computing authority scores of entities distributed in a peer-to-peer (P2P) network that allows peers to have overlapping content and requires no a priori knowledge of other peers' content. We also show the benefits of JXP in the Minerva distributed Web search engine. We present an extension of JXP, coined TrustJXP, that contains a reputation model in order to deal with misbehaving peers. We present another extension of JXP that handles dynamics in peer-to-peer networks, as well as an algorithm for estimating the current number of entities in the network. This thesis also presents novel methods for embedding JXP in peer-to-peer networks and applications. We present an approach for creating links among peers, forming semantic overlay networks, where peers are free to decide which connections they create and which they want to avoid, based on various usefulness estimators. We show how peer-to-peer applications, like the JXP algorithm, can greatly benefit from these additional semantic relations.
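As a rough illustration of how a peer could compute authority scores while holding only a fragment of the global graph, the following sketch (a simplification under assumed semantics, not the JXP algorithm verbatim) runs a PageRank-style power iteration on the local subgraph plus one virtual "world" node that stands in for all pages stored elsewhere; in JXP, peers additionally meet and exchange information to refine that world-node estimate over time.

```python
# Simplified sketch: PageRank-style power iteration on a peer's local subgraph
# extended with one virtual "world" node that absorbs links to remote pages.
# Damping factor, iteration count, and the uniform world-to-local redistribution
# are illustrative simplifications.
def local_pagerank(local_links, local_pages, damping=0.85, iters=50):
    WORLD = "__world__"
    nodes = list(local_pages) + [WORLD]
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        for page in local_pages:
            targets = local_links.get(page, []) or [WORLD]   # dangling pages point to the world
            share = damping * score[page] / len(targets)
            for t in targets:
                nxt[t if t in local_pages else WORLD] += share
        # The world node redistributes its mass uniformly to local pages (simplification;
        # JXP instead refines this estimate through peer meetings).
        for page in local_pages:
            nxt[page] += damping * score[WORLD] / len(local_pages)
        score = nxt
    return score

pages = {"a", "b", "c"}
links = {"a": ["b", "x"], "b": ["c"], "c": ["a"]}   # "x" is stored on another peer
print(local_pagerank(links, pages))
```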

    Top-k aggregation queries in large-scale distributed systems

    Distributed top-k query processing has recently become an essential functionality in a large number of emerging application classes like Internet traffic monitoring and peer-to-peer Web search. This work addresses efficient algorithms for distributed top-k queries in wide-area networks where the index lists for the attribute values (or text terms) of a query are distributed across a number of data peers. More precisely, in this thesis, we make the following contributions: We present the family of KLEE algorithms, which are a fundamental building block towards efficient top-k query processing in distributed systems. We present means to model score distributions and show how these score models can be used to reason about parameter values that play an important role in the overall performance of KLEE. We present GRASS, a family of novel algorithms based on three optimization techniques that significantly increase the overall performance of KLEE and related algorithms. We present probabilistic guarantees for the result quality. Moreover, we present Minerva, a distributed search engine. Minerva offers a highly distributed (in both the data and the computational dimension), scalable, and efficient solution towards the development of Internet-scale search engines.
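To show the kind of distributed top-k processing that KLEE-style algorithms build on, here is a sketch of a threshold-style (NRA-like) baseline over per-peer index lists; the batch size, data, and bookkeeping are illustrative assumptions rather than KLEE's actual optimizations.

```python
# Illustrative threshold-style (NRA-like) distributed top-k over per-peer index
# lists; batch size, data, and the simple bookkeeping are assumptions, not KLEE.
from collections import defaultdict

def nra_topk(peer_lists, k, batch=2):
    """peer_lists: one list per peer of (doc_id, score) pairs, sorted by score desc."""
    seen = defaultdict(dict)                      # doc -> {list index: score seen there}
    pos = [0] * len(peer_lists)
    last = [lst[0][1] if lst else 0.0 for lst in peer_lists]   # last score read per list
    while True:
        for i, lst in enumerate(peer_lists):      # fetch one batch per peer per round
            for doc, s in lst[pos[i]:pos[i] + batch]:
                seen[doc][i] = s
                last[i] = s
            pos[i] = min(pos[i] + batch, len(lst))
            if pos[i] >= len(lst):
                last[i] = 0.0                     # exhausted list contributes nothing more
        worst = {d: sum(c.values()) for d, c in seen.items()}
        best = {d: worst[d] + sum(last[i] for i in range(len(peer_lists)) if i not in seen[d])
                for d in seen}
        ranked = sorted(seen, key=worst.get, reverse=True)
        top, rest = ranked[:k], ranked[k:]
        unseen_best = sum(last)                   # bound for documents never seen so far
        done = (len(top) == k and
                min(worst[d] for d in top) >= max([best[d] for d in rest] + [unseen_best]))
        if done or all(p >= len(lst) for p, lst in zip(pos, peer_lists)):
            return [(d, worst[d]) for d in top]

lists = [[("d1", 0.9), ("d2", 0.8), ("d3", 0.1)],
         [("d2", 0.7), ("d3", 0.6), ("d1", 0.2)]]
print(nra_topk(lists, k=2))                       # [('d2', 1.5), ('d1', ~1.1)]
```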

    Mining photographic collections to enhance the precision and recall of search results using semantically controlled query expansion

    Driven by a larger and more diverse user base and datasets, modern Information Retrieval techniques are striving to become contextually aware in order to provide users with a more satisfactory search experience. While text-only retrieval methods are significantly more accurate and faster to render results than purely visual retrieval methods, the latter provide a rich complementary medium which can be used to obtain relevant results different from those obtained using text-only retrieval. Moreover, the visual retrieval methods can be used to learn the user's context and preferences, in particular the user's relevance feedback, and exploit them to narrow down the search to more accurate results. Despite the overall deficiency in precision of visual retrieval results, the top results are accurate enough to be used for query expansion when expanded in a controlled manner. The method we propose overcomes the usual pitfalls of visual retrieval: (1) the hardware barrier giving rise to prohibitively slow systems; (2) results dominated by noise; and (3) a significant gap between the low-level features and the semantics of the query. In our thesis, the first barrier is overcome by employing simple block-based visual features which outperform a method based on MPEG-7 features, especially at early precision (precision of the top results). For the second obstacle, we form lists of words semantically weighted according to their degree of relation to the original query or to relevance feedback from example images. These lists provide filters through which the confidence in the candidate results is assessed for inclusion in the results. This allows for more reliable Pseudo-Relevance Feedback (PRF). This technique is then used to bridge the third barrier, the semantic gap: a second-step query re-queries the dataset with a query expanded with weighted words obtained from the initial query and semantically filtered (SF) without human intervention. We developed our PRF-SF method on the IAPR TC-12 benchmark dataset of 20,000 tourist images, obtaining promising results, and tested it on the different and much larger Belga benchmark dataset of approximately 500,000 news images originating from a different source. Our experiments confirmed the potential of the method in improving the overall Mean Average Precision, recall, as well as the level of diversity of the results measured using cluster recall
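A minimal sketch of the semantically controlled expansion step described above (the relatedness scores, thresholds, and document annotations are made-up stand-ins, not the thesis's features or datasets): words from the pseudo-relevant top results are admitted into the expanded query only if they are sufficiently related to the original query.

```python
# Toy sketch of semantically filtered pseudo-relevance feedback; the relatedness
# scores, thresholds, and document annotations are made-up stand-ins.
from collections import Counter

def expand_query(query_terms, top_docs, relatedness, top_m=3, min_rel=0.4):
    """Add words from pseudo-relevant docs, keeping only those related to the query."""
    candidates = Counter(w for doc in top_docs for w in doc if w not in query_terms)
    rel = lambda w: max(relatedness.get((q, w), 0.0) for q in query_terms)
    scored = [(w, c * rel(w)) for w, c in candidates.items() if rel(w) >= min_rel]
    expansion = [w for w, _ in sorted(scored, key=lambda x: x[1], reverse=True)[:top_m]]
    return list(query_terms) + expansion

top_docs = [["sand", "sea", "palm"], ["sea", "boat"], ["car", "street"]]   # annotations of top visual hits
relatedness = {("beach", "sand"): 0.9, ("beach", "sea"): 0.8, ("beach", "palm"): 0.5,
               ("beach", "boat"): 0.3, ("beach", "car"): 0.05, ("beach", "street"): 0.05}
print(expand_query(["beach"], top_docs, relatedness))   # ['beach', 'sea', 'sand', 'palm']
```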

    Otimização de consultas SPARQL em bases RDF distribuídas (Optimization of SPARQL queries over distributed RDF datastores)

    Advisor: Prof. Dr. Carmem Satie Hara. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Graduate Program in Informatics. Defense: Curitiba, 07/04/2017. Includes references: f. 83-85.
    RDF has been used by many applications due to its simplicity and flexibility in data modeling. Due to the huge volume of RDF data that exists nowadays, many distributed query processing approaches have been proposed aiming to ensure scalability for these applications. In general, these approaches propose data distribution methods promoting distributed and parallel SPARQL query processing. However, while distribution may provide storage scalability, it may also incur high communication costs for processing queries. This work presents a parallel and distributed query processing approach that aims to minimize the communication cost. The approach explores the existence of data allocation patterns (PAs) for data distribution, provided by a controlled data distribution method, that determine how RDF triples should be grouped and stored on the same server. Fragments of the RDF datastore follow a given allocation pattern. The approach generates execution plans based on this distribution model, making it possible to choose between two communication strategies for query processing: get-frag and send-result. With the get-frag strategy, a server requests remote servers to send fragments that contain data required by a query. The send-result strategy, on the other hand, forwards intermediate results to other servers to continue the query processing. These strategies are combined in a method, called 2ways, that chooses the adequate communication strategy whenever queries traverse fragment boundaries. The choice of the communication strategy is based on the number of messages and the volume of data to be transmitted. Experimental results show that the proposed technique effectively reduces the communication cost and improves the response time for processing SPARQL queries on a distributed RDF datastore. Finally, considering that RDF datasets are dynamic and may be updated by delete/insert operations, this work extends the query processing approach considering that not all newly inserted data may conform to the predefined allocation patterns. We define a special-purpose type of PA, called PaOverflow, for storing data that cannot be categorized by existing PAs. Consequently, PaOverflow fragments must also be considered in query planning and processing. An initial experimental study shows that, as expected, adopting PaOverflow can increase the response time for processing queries. Keywords: RDF, SPARQL, Distributed Query Processing, Query Optimization
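The choice between get-frag and send-result can be illustrated with a toy cost model (the cost constants and parameter names below are assumptions, not the thesis's actual estimator): whichever strategy implies fewer messages and less transferred data is selected when the query crosses a fragment boundary.

```python
# Toy cost model for picking a communication strategy at a fragment boundary;
# the constants and parameter names are assumptions, not the thesis's estimator.
def choose_strategy(intermediate_rows, row_bytes, fragment_triples, triple_bytes,
                    msg_cost=1.0, byte_cost=0.001):
    # get-frag: one request plus shipping the remote fragment to the local server
    get_frag = 2 * msg_cost + byte_cost * fragment_triples * triple_bytes
    # send-result: one message shipping the intermediate bindings to the remote server
    send_result = msg_cost + byte_cost * intermediate_rows * row_bytes
    return ("send-result", send_result) if send_result <= get_frag else ("get-frag", get_frag)

# Small intermediate result vs. large remote fragment -> ship the result onward
print(choose_strategy(intermediate_rows=50, row_bytes=64,
                      fragment_triples=100_000, triple_bytes=80))
```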

    Automated Structural and Spatial Comprehension of Data Tables

    Data tables on the Web hold large quantities of information, but are difficult to search, browse, and merge using existing systems. This dissertation presents a collection of techniques for extracting, processing, and querying tables that contain geographic data, by harnessing the coherence of table structures for retrieval tasks. Data tables, including spreadsheets, HTML tables, and those found in rich document formats, are the standard way of communicating structured data for typical computer users. Notably, geographic tables (i.e., those containing names of locations) constitute a large fraction of publicly-available data tables and are ripe for exposure to Internet users who are increasingly comfortable interacting with geographic data using web-based maps. Of particular interest is the creation of a large repository of geographic data tables that would enable novel queries such as "find vacation itineraries geographically similar to mine" for use in trip planning or "find demographic datasets that cover regions X, Y, and Z" for sociological research. In support of these goals, this dissertation identifies several methods for using the structure and context of data tables to improve the interpretation of the contents, even in the presence of ambiguity. First, a method for identifying functional components of data tables is presented, capitalizing on techniques for sequence labeling that are used in natural language processing. Next, a novel automated method for converting place references to physical latitude/longitude values, a process known as geotagging, is applied to tables with high accuracy. A classification procedure for identifying a specific class of geographic table, the travel itinerary, is also described, which borrows inspiration from optimization techniques for the traveling salesman problem (TSP). Finally, methods for querying spatially similar tables are introduced and several mechanisms for visualizing and interacting with the extracted geographic data are explored
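As an illustration of exploiting table coherence for geotagging, the following sketch (toy gazetteer and brute-force search, not the dissertation's method) resolves ambiguous place names in a table column by choosing the combination of interpretations that keeps the places geographically closest together.

```python
# Toy gazetteer-based geotagging of a table column: choose one interpretation per
# place name so that the column stays geographically coherent. Brute-force search
# and Euclidean distance in degrees are crude stand-ins used only for illustration.
from itertools import product
from math import dist

GAZETTEER = {  # hypothetical candidate (lat, lon) interpretations
    "Paris":      [(48.86, 2.35), (33.66, -95.56)],   # France vs. Texas
    "Versailles": [(48.80, 2.13), (38.05, -84.73)],   # France vs. Kentucky
    "Orleans":    [(47.90, 1.90), (29.95, -90.07)],   # France vs. Louisiana
}

def geotag_column(names):
    """Pick the combination of interpretations with minimal total pairwise distance."""
    best, best_cost = None, float("inf")
    for combo in product(*(GAZETTEER[n] for n in names)):
        cost = sum(dist(a, b) for i, a in enumerate(combo) for b in combo[i + 1:])
        if cost < best_cost:
            best, best_cost = combo, cost
    return dict(zip(names, best))

print(geotag_column(["Paris", "Versailles", "Orleans"]))   # resolves to the French cluster
```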