Towards Population of Knowledge Bases from Conversational Sources
With an increasing amount of data created daily, it is challenging for users to organize and discover information from massive collections of digital content (e.g., text and speech). The population of knowledge bases requires linking information from unstructured sources (e.g., news articles and web pages) to structured external knowledge bases (e.g., Wikipedia), which has the potential to advance information archiving and access, and to support knowledge discovery and reasoning. Because of the complexity of this task, knowledge base population is composed of multiple sub-tasks, including the entity linking task, defined as linking mentions of entities (e.g., persons, organizations, and locations) found in documents to their referents in external knowledge bases, and the event task, defined as extracting related information for events that should be entered in the knowledge base.
Most prior work on tasks related to knowledge base population has focused on dissemination-oriented sources written in the third person (e.g., news articles) that benefit from two characteristics: the content is written in formal language and is to some degree self-contextualized, and the entities mentioned (e.g., persons) are likely to be widely known to the public, so that rich information can be found in existing general knowledge bases (e.g., Wikipedia and DBpedia). The work proposed in this thesis focuses on tasks related to knowledge base population for conversational sources written in the first person (e.g., emails and phone recordings), which offer new challenges. One challenge is that most conversations (e.g., 68% of the person names and 53% of the organization names in Enron emails) refer to entities that are known to the conversational participants but not widely known. Thus, existing entity linking techniques relying on general knowledge bases are not appropriate. Another challenge is that some of the shared context between participants in first-person conversations may be implicit and thus challenging to model, increasing the difficulty, even for human annotators, of identifying the true referents.
This thesis focuses on several tasks relating to the population of knowledge bases for conversational content: the population of collection-specific knowledge bases for organization entities and meetings from email collections; the entity linking task that resolves mentions of three types of entities (person, organization, and location) found in both conversational text (emails) and speech (phone recordings) sources to multiple knowledge bases, including a general knowledge base built from Wikipedia and collection-specific knowledge bases; the meeting linking task that links meeting-related email messages to the referenced meeting entries in the collection-specific meeting knowledge base; and speaker identification techniques to improve the entity linking task for phone recordings without known speakers. Following the model-based evaluation paradigm, three collections (namely, Enron emails, Avocado emails, and Enron phone recordings) are used as representative conversational sources, new test collections are created for each task, and experiments are conducted for each task to evaluate the efficacy of the proposed methods and to provide a comparison to existing state-of-the-art systems. This work has implications in the research fields of e-discovery, scientific collaboration, speaker identification, speech retrieval, and privacy protection.
Enhancing knowledge acquisition systems with user generated and crowdsourced resources
This thesis is about enhancing knowledge acquisition systems with collaboratively created data and crowdsourced work from the internet. We propose two strategies and apply them to building effective entity linking and question answering (QA) systems. The first strategy integrates an information extraction system with online collaborative knowledge bases, such as Wikipedia and Freebase. We construct a Cross-Lingual Entity Linking (CLEL) system to connect Chinese entities, such as people and locations, with their corresponding English pages in Wikipedia. The main focus is to break the language barrier between Chinese entities and the English KB, and to resolve the synonymy and polysemy of Chinese entities. To address these problems, we create a cross-lingual taxonomy and a Chinese knowledge base (KB). We investigate two methods of connecting the query representation with the KB representation. Building on our CLEL system's participation in the TAC KBP 2011 evaluation, we finally propose a simple and effective generative model, which achieves much better performance.
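The generative-model idea can be illustrated with a minimal sketch: score each candidate entity by a prior P(entity) times a smoothed mention likelihood P(mention | entity), both estimated from annotated (mention, entity) pairs. This is a generic naive-Bayes-style formulation for illustration, not the thesis's actual model; function names and smoothing constants are assumptions.

```python
from collections import Counter

def train_generative_linker(annotated_pairs):
    """Estimate P(entity) and P(mention | entity) from (mention, entity) pairs.

    Returns a linking function that scores each candidate entity e for a
    mention m as score(e) = P(e) * P(m | e), with additive smoothing so
    unseen mention/entity combinations still get nonzero probability.
    """
    entity_counts = Counter()
    mention_entity_counts = Counter()
    for mention, entity in annotated_pairs:
        entity_counts[entity] += 1
        mention_entity_counts[(mention, entity)] += 1
    total = sum(entity_counts.values())

    def link(mention, candidates, alpha=0.1, vocab_size=1000):
        # Pick the candidate maximizing the generative score.
        def score(entity):
            prior = entity_counts[entity] / total
            likelihood = (mention_entity_counts[(mention, entity)] + alpha) / (
                entity_counts[entity] + alpha * vocab_size)
            return prior * likelihood
        return max(candidates, key=score)

    return link
```

Under this formulation, a frequent entity with a strong mention association wins over a rarer one that shares the same surface form.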
The second strategy creates annotation for QA systems with the help of crowdsourcing. Crowdsourcing distributes a task via the internet and recruits many people to complete it simultaneously. Various kinds of annotated data are required to train the data-driven statistical machine learning algorithms underlying the components of our QA system. This thesis demonstrates how to convert the annotation task into crowdsourcing micro-tasks, investigates different statistical methods for enhancing the quality of crowdsourced annotation, and finally uses the enhanced annotation to train learning-to-rank models for passage ranking in QA.
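The simplest statistical method for enhancing crowdsourced annotation quality is majority voting over redundant judgments, which serves as the usual baseline for the more sophisticated aggregation models the thesis investigates. The sketch below is a generic baseline under that assumption, not the thesis's specific method.

```python
from collections import Counter, defaultdict

def aggregate_labels(judgments):
    """Aggregate redundant crowd judgments per item by majority vote.

    judgments: iterable of (item_id, worker_id, label) triples, where each
    item has been labeled by several workers. Returns {item_id: label}.
    More advanced aggregators additionally weight workers by estimated
    reliability; this baseline treats every worker equally.
    """
    votes = defaultdict(Counter)
    for item, worker, label in judgments:
        votes[item][label] += 1
    # most_common(1) returns [(label, count)] for the top label.
    return {item: counts.most_common(1)[0][0] for item, counts in votes.items()}
```

With three judgments per item, a single careless worker is outvoted by the other two, which is why redundancy is the standard quality lever in crowdsourcing micro-tasks.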
Linking named entities to Wikipedia
Natural language is fraught with problems of ambiguity, including name reference. A name in text can refer to multiple entities, just as an entity can be known by different names. This thesis examines how a mention in text can be linked to an external knowledge base (KB), in our case, Wikipedia. The named entity linking (NEL) task requires systems to identify the KB entry, or Wikipedia article, that a mention refers to; or, if the KB does not contain the correct entry, return NIL. Entity linking systems can be complex, and we present a framework for analysing their different components. We use this framework to analyse three seminal systems evaluated on a common dataset, and we show the importance of precise search for linking. The Text Analysis Conference (TAC) is a major venue for NEL research. We report on our submissions to the entity linking shared task in 2010, 2011 and 2012. The information required to disambiguate entities is often found in the text, close to the mention. We explore apposition, a common way for authors to provide information about entities. We model syntactic and semantic restrictions with a joint model that achieves state-of-the-art apposition extraction performance. We generalise from apposition to examine local descriptions specified close to the mention. We add local description to our state-of-the-art linker by using patterns to extract the descriptions and matching against this restricted context. Not only does this make for a more precise match, we are also able to model failure to match. Local descriptions help disambiguate entities, further improving our state-of-the-art linker. The work in this thesis seeks to link textual entity mentions to knowledge bases. Linking is important for any task where external world knowledge is used, and resolving ambiguity is fundamental to advancing research into these problems.
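The NEL task contract described above can be sketched in a few lines: search the KB for candidate entries, score them against the mention, and return NIL when no candidate clears a threshold. The alias-overlap scoring below is a toy stand-in for the richer context models the thesis develops; the KB schema and threshold are assumptions for illustration.

```python
def link_mention(mention, kb, min_score=0.5):
    """Link a mention to a KB entry, or return "NIL" when nothing matches.

    kb maps entry titles to sets of known alias strings. Each candidate is
    scored by the best Jaccard word overlap between the mention and any of
    its aliases; a mention below min_score is declared NIL, mirroring the
    TAC entity linking task definition.
    """
    def jaccard(a, b):
        a, b = set(a.lower().split()), set(b.lower().split())
        return len(a & b) / len(a | b)

    best_entry, best_score = "NIL", 0.0
    for entry, aliases in kb.items():
        score = max(jaccard(mention, alias) for alias in aliases)
        if score > best_score:
            best_entry, best_score = entry, score
    return best_entry if best_score >= min_score else "NIL"
```

The NIL branch matters as much as the match: a linker that must return its best guess for every mention cannot represent entities absent from the KB.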
Entity-based Enrichment for Information Extraction and Retrieval
The goal of this work is to leverage cross-document entity relationships for improved understanding of queries and documents. We define an entity to be a thing or concept that exists in the world, such as a politician, a battle, a film, or a color. Entity-based enrichment (EBE) is a new expansion model for both queries and documents using features from similar entity mentions in the document collection and external knowledge resources. It uses task-specific features from entities beyond words, including name aliases, fine-grained entity types, categories, and relationships to other entities. EBE addresses the problem of sparse or noisy local evidence due to multiple topics, implicit context, or informal writing. With the ultimate goal of improving information retrieval effectiveness, we start from unstructured text and, through information extraction, build up rich entity-based representations linked to external knowledge resources. We study the application of entity-based enrichment to each step in the pipeline: 1) named entity recognition, 2) entity linking, and 3) ad hoc document retrieval. The empirical results for EBE in each of these tasks show significant improvements. Our first application of entity-based enrichment is the task of detecting entities in named entity recognition. We enrich the representation of observed words likely to represent entities. We perform weighted feature copying of recognition features from similar tokens in the corpus and external collections. The evaluation shows statistically significant improvements on in-domain newswire accuracy and demonstrates that the models are more robust on out-of-domain data. In the second part of this work, we apply EBE to the task of entity linking. The proposed entity linking method for disambiguating the detected mentions to entries in an external knowledge base is based on information retrieval.
The neighborhood relevance model, an enrichment model, identifies salient associations between an entity mention and other entity mentions in the document. The results show that the enrichment model outperforms other context models and results in a system that is competitive with leading methods. Using the constructed entity representation, the final task is ad hoc document retrieval. We enrich the query representation with entity features. Retrieval is performed over documents annotated with entities linked to Wikipedia and Freebase from our system as well as the publicly available Google FACC1 annotations. To effectively leverage linked entity features, we extend dependency-based retrieval models to include structured attributes. We also define a new query-specific entity context model which builds a model for disambiguated entities from retrieved documents. Our results demonstrate significant improvements over competitive query expansion baselines for several standard test collections.
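The query-side enrichment step described above can be sketched as expanding the original query terms with features of the entities linked in the query, downweighted relative to the user's own words. The field names ("aliases", "categories") and the weighting scheme here are illustrative assumptions, not the thesis's actual schema.

```python
def enrich_query(query_terms, linked_entities, entity_kb, expansion_weight=0.3):
    """Expand a query with features of its linked entities.

    query_terms: the user's terms; linked_entities: entity ids detected in
    the query; entity_kb maps each id to a feature dict. Returns a list of
    (term, weight) pairs: original terms keep weight 1.0, while entity-
    derived aliases and categories enter at a reduced expansion weight.
    """
    weighted = [(term, 1.0) for term in query_terms]
    for entity in linked_entities:
        features = entity_kb.get(entity, {})
        for alias in features.get("aliases", []):
            weighted.append((alias, expansion_weight))
        for category in features.get("categories", []):
            weighted.append((category, expansion_weight))
    return weighted
```

Downweighting the expansion terms is the standard guard against query drift: the added evidence helps sparse queries without letting it overwhelm the original intent.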
Report Linking: Information Extraction for Building Topical Knowledge Bases
Human language artifacts represent a plentiful source of rich, unstructured information created by reporters, scientists, and analysts. In this thesis we provide approaches for adding structure: extracting and linking entities, events, and relationships from a collection of documents about a common topic. We pursue this linking at two levels of abstraction. At the document level we propose models for aligning the entities and events described in coherent and related discourses: these models are useful for deduplicating repeated claims, finding implicit arguments to events, and measuring semantic overlap between documents. Then, at a higher level of abstraction, we construct knowledge graphs containing salient entities and relations linked to supporting documents: these graphs can be augmented with facts and summaries to give users a structured understanding of the information in a large collection.
Slot Filling
Slot filling (SF) is the task of automatically extracting facts about particular entities from unstructured text, and populating a knowledge base (KB) with these facts. These structured KBs enable applications such as structured web queries and question answering. SF is typically framed as a query-oriented setting of the related task of relation extraction. Throughout this thesis, we reflect on how SF is a task with many distinct problems. We demonstrate that recall is a major limiter on SF system performance. We contribute an analysis of typical SF recall loss, and find a substantial amount of loss occurs early in the SF pipeline. We confirm that accurate NER and coreference resolution are required for high-recall SF. We measure upper bounds using a naïve graph-based semi-supervised bootstrapping technique, and find that only 39% of results are reachable using a typical feature space. We expect that this graph-based technique will be directly useful for extraction, and this leads us to frame SF as a label propagation task. We focus on a detailed graph representation of the task which reflects the behaviour and assumptions we want to model based on our analysis, including modifying the label propagation process to model multiple types of label interaction. Analysing the graph, we find that a large number of errors occur in very close proximity to training data, and identify that this is of major concern for propagation. While there are some conflicts caused by a lack of sufficient disambiguating context (we explore adding additional contextual features to address this), many of these conflicts are caused by subtle annotation problems. We find that lack of a standard for how explicit expressions of relations must be in text makes consistent annotation difficult. Using a strict definition of explicitness results in 20% of correct annotations being removed from a standard dataset.
We contribute several annotation-driven analyses of this problem, exploring the definition of slots and the effect of the lack of a concrete definition of explicitness: annotation schemas do not detail how explicit expressions of relations need to be, and there is large scope for disagreement between annotators. Additionally, applications may require relatively strict or relaxed evidence for extractions, but this is not considered in annotation tasks. We demonstrate that annotators frequently disagree on instances, depending on differences in annotator world knowledge and thresholds for making probabilistic inference. SF is fundamental to enabling many knowledge-based applications, and this work motivates modelling and evaluating SF to better target these tasks.
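Framing SF as label propagation, as the abstract above describes, means spreading slot labels from a few seed instances over a similarity graph of candidate extractions. The sketch below is a generic iterative propagation baseline with clamped seeds, under an assumed adjacency-list graph format; the thesis's detailed graph representation and multi-type label interactions are not reproduced here.

```python
def propagate_labels(graph, seeds, iterations=10):
    """Propagate seed labels over a weighted similarity graph.

    graph: {node: [(neighbor, weight), ...]}; seeds: {node: label}.
    Each unlabeled node repeatedly adopts the label with the greatest
    total edge weight among its labeled neighbors; seed labels stay
    clamped to their original values throughout.
    """
    labels = dict(seeds)
    for _ in range(iterations):
        updates = {}
        for node, neighbors in graph.items():
            if node in seeds:
                continue  # seeds are never overwritten
            scores = {}
            for neighbor, weight in neighbors:
                if neighbor in labels:
                    scores[labels[neighbor]] = scores.get(labels[neighbor], 0.0) + weight
            if scores:
                updates[node] = max(scores, key=scores.get)
        labels.update(updates)
    return labels
```

The abstract's observation that many errors occur close to the training data is easy to see in this setting: a mislabeled or ambiguous node adjacent to a seed propagates its conflict outward on the very first iteration.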