
    A Bootstrapping architecture for time expression recognition in unlabelled corpora via syntactic-semantic patterns

    In this paper we describe a semi-supervised approach to the extraction of time expression mentions in large unlabelled corpora based on bootstrapping. Bootstrapping techniques rely on a relatively small set of initial human-supplied examples (termed “seeds”) of the type of entity or concept to be learned, in order to capture an initial set of patterns or rules from the unlabelled text that extract the supplied data. In turn, the learned patterns are employed to find new potential examples, and the process is repeated to grow the set of patterns and (optionally) the set of examples. To prevent the learned pattern set from producing spurious results, it is essential to implement a ranking and selection procedure that filters out “bad” patterns and, depending on the case, new candidate examples. Therefore, the type of patterns employed (knowledge representation) as well as the ranking and selection procedure are paramount to the quality of the results. We present a complete bootstrapping algorithm for the recognition of time expressions, with special emphasis on the type of patterns used (a combination of semantic and morpho-syntactic elements) and on the ranking and selection criteria. Bootstrapping techniques have previously been employed with limited success for several NLP problems, both of recognition and classification, but their application to time expression recognition is, to the best of our knowledge, novel. As of this writing, the described architecture is in the final stages of implementation, with experimentation and evaluation already underway.
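
    A minimal sketch of the generic bootstrapping loop the abstract describes. The window-based context patterns and the frequency-based filtering below are simplified stand-ins, not the paper's semantic and morpho-syntactic patterns or its actual ranking and selection criteria.

    ```python
    # Sketch of a bootstrapping loop: seeds -> patterns -> new candidates,
    # repeated, with simple frequency filtering standing in for the
    # paper's ranking and selection procedure.
    from collections import Counter

    def extract_patterns(sentences, examples):
        """Collect (left, right) context windows around known mentions."""
        patterns = Counter()
        for sent in sentences:
            tokens = sent.split()
            for i, tok in enumerate(tokens):
                if tok.lower() in examples:
                    left = tokens[i - 1].lower() if i > 0 else "<S>"
                    right = tokens[i + 1].lower() if i < len(tokens) - 1 else "</S>"
                    patterns[(left, right)] += 1
        return patterns

    def match_candidates(sentences, patterns):
        """Apply learned context patterns to propose new candidate mentions."""
        candidates = Counter()
        for sent in sentences:
            tokens = sent.split()
            for i in range(1, len(tokens) - 1):
                key = (tokens[i - 1].lower(), tokens[i + 1].lower())
                if key in patterns:
                    candidates[tokens[i].lower()] += 1
        return candidates

    def bootstrap(sentences, seeds, iterations=3, top_k=5):
        examples = set(seeds)
        for _ in range(iterations):
            patterns = extract_patterns(sentences, examples)
            # Keep only the most frequent patterns to filter out "bad" ones.
            best = dict(patterns.most_common(top_k))
            candidates = match_candidates(sentences, best)
            # Grow the example set with the highest-scoring new candidates.
            examples |= {c for c, _ in candidates.most_common(top_k)}
        return examples

    corpus = ["The meeting is on Monday at noon .",
              "She arrived on Tuesday in Paris .",
              "We leave on Friday at dawn ."]
    print(bootstrap(corpus, {"monday"}))  # picks up "friday" via (on, at)
    ```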

    Analysing Lexical Semantic Change with Contextualised Word Representations

    This paper presents the first unsupervised approach to lexical semantic change that makes use of contextualised word representations. We propose a novel method that exploits the BERT neural language model to obtain representations of word usages, clusters these representations into usage types, and measures change along time with three proposed metrics. We create a new evaluation dataset and show that the model representations and the detected semantic shifts are positively correlated with human judgements. Our extensive qualitative analysis demonstrates that our method captures a variety of synchronic and diachronic linguistic phenomena. We expect our work to inspire further research in this direction. To appear in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020).
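
    A minimal sketch of the first two steps described above: extracting contextualised usage representations from BERT, then clustering them into usage types. The model name, token-matching scheme, and KMeans setup are illustrative assumptions, not necessarily the paper's configuration.

    ```python
    # Sketch: contextualised usage vectors from BERT, clustered into
    # usage types. 'bert-base-uncased' and k=2 are illustrative choices.
    import torch
    from transformers import BertTokenizer, BertModel
    from sklearn.cluster import KMeans

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")
    model.eval()

    def usage_vector(sentence, target):
        """Embedding of `target` in context: its token vectors averaged."""
        enc = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]      # (seq_len, 768)
        target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
        ids = enc["input_ids"][0].tolist()
        for i in range(len(ids) - len(target_ids) + 1):
            if ids[i:i + len(target_ids)] == target_ids:
                return hidden[i:i + len(target_ids)].mean(dim=0).numpy()
        raise ValueError(f"{target!r} not found in sentence")

    usages = ["He sat on the bank of the river .",
              "The bank raised its interest rates .",
              "They fished from the muddy bank ."]
    vectors = [usage_vector(s, "bank") for s in usages]
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
    print(labels)  # usage-type assignment for each occurrence of "bank"
    ```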

    ColNet: Embedding the Semantics of Web Tables for Column Type Prediction

    Automatically annotating column types with knowledge base (KB) concepts is a critical task for gaining a basic understanding of web tables. Current methods rely on either table metadata like column names or entity correspondences of cells in the KB, and may fail to deal with growing web tables with incomplete meta information. In this paper we propose a neural network based column type annotation framework named ColNet which is able to integrate KB reasoning and lookup with machine learning and can automatically train Convolutional Neural Networks for prediction. The prediction model not only considers the contextual semantics within a cell using word representations, but also embeds the semantics of a column by learning locality features from multiple cells. The method is evaluated with DBpedia and two different web table datasets, T2Dv2 from the general Web and Limaye from Wikipedia pages, and achieves higher performance than the state-of-the-art approaches.
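
    A minimal PyTorch sketch of the core idea: a 1-D convolution over token embeddings of a column's cell values, pooled into a column vector and classified into a KB type. The layer sizes, toy vocabulary, and two-type label set are illustrative assumptions, not ColNet's actual architecture or training setup.

    ```python
    # Sketch: Conv1d over token embeddings of a column's cells, max-pooled
    # and classified into a KB type. All dimensions are illustrative.
    import torch
    import torch.nn as nn

    class ColumnTypeCNN(nn.Module):
        def __init__(self, vocab_size, embed_dim=32, n_filters=16, n_types=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size=3, padding=1)
            self.classify = nn.Linear(n_filters, n_types)

        def forward(self, token_ids):                  # (batch, seq_len)
            x = self.embed(token_ids).transpose(1, 2)  # (batch, embed, seq)
            x = torch.relu(self.conv(x))
            x = x.max(dim=2).values                    # locality features pooled
            return self.classify(x)                    # logits per KB type

    vocab = {"<pad>": 0, "paris": 1, "berlin": 2, "einstein": 3, "curie": 4}
    model = ColumnTypeCNN(vocab_size=len(vocab))
    # A toy "column" of two city cells, padded to a fixed length.
    column = torch.tensor([[1, 2, 0]])
    print(model(column).softmax(dim=1))  # e.g. P(dbo:City) vs P(dbo:Person)
    ```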

    Basic tasks of sentiment analysis

    Subjectivity detection is the task of identifying objective and subjective sentences. Objective sentences are those which do not exhibit any sentiment, so a sentiment analysis engine needs to find and set aside objective sentences before further analysis, e.g., polarity detection. In subjective sentences, opinions can often be expressed on one or multiple topics. Aspect extraction is a subtask of sentiment analysis that consists in identifying opinion targets in opinionated text, i.e., in detecting the specific aspects of a product or service the opinion holder is either praising or complaining about.
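
    A minimal sketch of the pipeline this paragraph describes: filter out objective sentences and forward only subjective ones for polarity detection and aspect extraction. The tiny opinion lexicon is a hypothetical stand-in for a trained subjectivity classifier.

    ```python
    # Sketch: lexicon-based subjectivity filter in front of the rest of
    # the sentiment pipeline. The lexicon is an illustrative stand-in.
    OPINION_WORDS = {"great", "terrible", "love", "hate", "awful", "excellent"}

    def is_subjective(sentence):
        return any(tok.strip(".,!?").lower() in OPINION_WORDS
                   for tok in sentence.split())

    reviews = ["The phone has a 6.1-inch screen.",
               "I love the camera but the battery is terrible.",
               "It ships with a USB-C cable."]
    subjective = [s for s in reviews if is_subjective(s)]
    print(subjective)  # only these go on to polarity / aspect extraction
    ```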

    Community-Driven Engineering of the DBpedia Infobox Ontology and DBpedia Live Extraction

    The DBpedia project aims at extracting information based on semi-structured data present in Wikipedia articles, interlinking it with other knowledge bases, and publishing this information as RDF freely on the Web. So far, the DBpedia project has succeeded in creating one of the largest knowledge bases on the Data Web, which is used in many applications and research prototypes. However, the manual effort required to produce and publish a new version of the dataset – which was already partially outdated the moment it was released – has been a drawback. Additionally, the maintenance of the DBpedia Ontology, an ontology serving as a structural backbone for the extracted data, made the release cycles even more heavyweight. In the course of this thesis, we make two contributions: firstly, we develop a wiki-based solution for maintaining the DBpedia Ontology; by allowing anyone to edit, we aim to distribute the maintenance work among the DBpedia community. Secondly, we extend DBpedia with a Live Extraction Framework, which is capable of extracting RDF data from articles that have recently been edited on the English Wikipedia. By making this RDF data automatically public in near real time, namely via SPARQL and Linked Data, we overcome many of the drawbacks of the former release cycles.
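
    A minimal sketch of consuming the extracted RDF via SPARQL, as the abstract describes. The public endpoint URL and the example query are illustrative; the Live Extraction Framework's own endpoint may differ.

    ```python
    # Sketch: querying DBpedia's extracted RDF over SPARQL.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?city ?population WHERE {
            ?city a dbo:City ;
                  dbo:populationTotal ?population .
            FILTER (?population > 5000000)
        } LIMIT 5
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["city"]["value"], row["population"]["value"])
    ```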

    Alignment of multi-cultural knowledge repositories

    The ability to interconnect multiple knowledge repositories within a single framework is a key asset for various use cases such as document retrieval and question answering. However, independently created repositories are inherently heterogeneous, reflecting their diverse origins. Thus, there is a need to align concepts and entities across knowledge repositories. A limitation of prior work is the assumption of high affinity between the repositories at hand, in terms of structure and terminology. The goal of this dissertation is to develop methods for constructing and curating alignments between multi-cultural knowledge repositories. The first contribution is a system, ACROSS, for reducing the terminological gap between repositories. The second contribution is two alignment methods, LILIANA and SESAME, that cope with structural diversity. The third contribution, LAIKA, is an approach to compute alignments between dynamic repositories. Experiments with a suite of Web-scale knowledge repositories show high-quality alignments. In addition, the application benefits of LILIANA and SESAME are demonstrated by use cases in search and exploration.
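
    A minimal sketch of the underlying alignment task: matching concept labels across two repositories by lexical similarity. Plain string similarity with a fixed threshold is an illustrative stand-in for the dissertation's actual methods (ACROSS, LILIANA, SESAME, LAIKA), which also exploit structure and terminology; the cross-lingual example hints at why pure lexical affinity falls short.

    ```python
    # Sketch: label-level alignment between two repositories. A fixed
    # similarity threshold stands in for a real ranking procedure.
    from difflib import SequenceMatcher

    def similarity(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def align(repo_a, repo_b, threshold=0.5):
        pairs = []
        for a in repo_a:
            best = max(repo_b, key=lambda b: similarity(a, b))
            score = similarity(a, best)
            if score >= threshold:
                pairs.append((a, best, round(score, 2)))
        return pairs

    english = ["Federal Republic of Germany", "University of Heidelberg"]
    german = ["Bundesrepublik Deutschland", "UniversitÀt Heidelberg"]
    # Only the lexically close pair survives; the terminological gap
    # between the other two labels is exactly what ACROSS targets.
    print(align(english, german))
    ```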

    User-guided Disambiguation for Semantic-enriched Search

    In this thesis, we develop, implement and evaluate a strategy for user-guided disambiguation for semantic-enriched search. We use the prototype system Capisco, which combines features of traditional text-based search engines and semantic search engines. Unlike many semantic search approaches that require formulating queries in quite complex semantic languages, the Capisco system is a keyword-based search engine. In this work, we focus on user-guided disambiguation, which helps to capture the users’ information needs. The user-guided disambiguation developed in this thesis explores the semantic meanings of a user-provided keyword and displays these to the users for selection. We explored three different orderings in which the possible keyword meanings may be shown. Using our newly-developed disambiguation interface, the user selects one or several of these meanings from the list offered. The selected semantic meanings are then transferred to the search engine to identify matching documents. This thesis provides an analysis of related work on search interfaces in semantic search and on methods for semantic disambiguation. We developed the concept of user-guided disambiguation and implemented it in a prototype that explored three interface variations. We conducted a user study on the three interface alternatives. Finally, we discuss the insights gained during this project, compare our approach to existing literature, and explore possible future work.
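
    A minimal sketch of the interaction loop described above: list the candidate meanings of a keyword in a chosen ordering, let the user pick one or several, and hand the selected senses to the search engine. The hardcoded sense inventory is a hypothetical stand-in for Capisco's semantic backend.

    ```python
    # Sketch: user-guided disambiguation loop. The sense inventory and
    # frequency scores are illustrative placeholders.
    SENSES = {
        "jaguar": [("Jaguar (animal)", 120), ("Jaguar Cars", 95),
                   ("Jacksonville Jaguars", 40)],
    }

    def disambiguate(keyword, ordering="frequency"):
        senses = SENSES.get(keyword.lower(), [])
        if ordering == "frequency":  # one of several possible orderings
            senses = sorted(senses, key=lambda s: -s[1])
        for i, (label, _) in enumerate(senses, 1):
            print(f"{i}. {label}")
        picks = input("Select meanings (e.g. 1,3): ")
        return [senses[int(p) - 1][0] for p in picks.split(",")]

    # chosen = disambiguate("jaguar")  # selected senses go to the search engine
    ```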

    Low-cost semantic enhancement to digital library metadata and indexing: simple yet effective strategies

    Most existing digital libraries use traditional lexically-based retrieval techniques. For established systems, completely replacing, or even making significant changes to, the document retrieval mechanism (document analysis, indexing strategy, query processing and query interface) would require major technological effort, and would most likely be disruptive. In this paper, we describe ways to use the results of semantic analysis and disambiguation while retaining an existing keyword-based search and lexicographic index. We engineer this so that the output of semantic analysis (performed off-line) can be imported directly into existing digital library metadata and index structures, and thus incorporated without the need for architecture modifications.
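
    A minimal sketch of the strategy described above: off-line semantic analysis yields concept identifiers that are simply appended to each document's terms, so an unmodified keyword index also answers concept queries. The CONCEPT: prefix notation and concept names are illustrative assumptions, not the paper's actual import format.

    ```python
    # Sketch: semantic identifiers folded into an ordinary inverted index,
    # so concept queries reuse the existing keyword lookup path unchanged.
    from collections import defaultdict

    index = defaultdict(set)  # an ordinary term -> doc-ids inverted index

    def index_document(doc_id, text, concepts):
        for term in text.lower().split():
            index[term].add(doc_id)
        for concept in concepts:  # output of off-line semantic analysis
            index[f"CONCEPT:{concept}"].add(doc_id)

    index_document("d1", "treating heart attacks", ["MyocardialInfarction"])
    index_document("d2", "the attack on the castle", [])

    # A concept query works through the same keyword lookup path:
    print(index["CONCEPT:MyocardialInfarction"])  # {'d1'}
    print(index["attack"])                        # {'d2'}, lexical only
    ```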

    mARC: Memory by Association and Reinforcement of Contexts

    This paper introduces Memory by Association and Reinforcement of Contexts (mARC). mARC is a novel data modeling technology rooted in the second quantization formulation of quantum mechanics. It is an all-purpose incremental and unsupervised data storage and retrieval system which can be applied to all types of signal or data, structured or unstructured, textual or not. mARC can be applied to a wide range of information classification and retrieval problems like e-Discovery or contextual navigation. It can also be formulated in the artificial life framework, a.k.a. Conway’s “Game of Life”; in contrast to Conway’s approach, the objects evolve in a massively multidimensional space. In order to start evaluating the potential of mARC, we have built a mARC-based Internet search engine demonstrator with contextual functionality. We compare the behavior of the mARC demonstrator with Google search both in terms of performance and relevance. In the study we find that the mARC search engine demonstrator outperforms Google search by an order of magnitude in response time while providing more relevant results for some classes of queries.
    • 

    corecore