Fixed Size Ordinally-Forgetting Encoding and its Applications
In this thesis, we propose the new Fixed-size Ordinally-Forgetting Encoding (FOFE) method, which can almost uniquely encode any variable-length sequence of words into a fixed-size representation. FOFE can model the word order in a sequence using a simple ordinally-forgetting mechanism according to the positions of words. We address two fundamental problems in natural language processing, namely, Language Modeling (LM) and Named Entity Recognition (NER).
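The ordinally-forgetting mechanism can be sketched in a few lines: each word's one-hot vector is added to a running encoding that is decayed by a constant forgetting factor at every position, i.e., the recursion z_t = alpha * z_{t-1} + e_t. The forgetting factor and the tiny vocabulary below are illustrative choices, not the thesis's settings.

```python
def fofe_encode(sequence, vocab_size, alpha=0.7):
    """Encode a variable-length sequence of word ids into a fixed-size
    vector via z_t = alpha * z_{t-1} + e_t, where e_t is the one-hot
    vector of the t-th word and alpha is the forgetting factor."""
    z = [0.0] * vocab_size
    for word_id in sequence:
        z = [alpha * v for v in z]   # decay the weights of earlier words
        z[word_id] += 1.0            # add the one-hot for the current word
    return z

# Earlier words receive geometrically smaller weights, so the encoding
# reflects word order: [0, 1] and [1, 0] map to distinct vectors.
code_a = fofe_encode([0, 1], vocab_size=3, alpha=0.5)  # [0.5, 1.0, 0.0]
code_b = fofe_encode([1, 0], vocab_size=3, alpha=0.5)  # [1.0, 0.5, 0.0]
```

With alpha in (0, 0.5] the mapping is injective, which is the sense in which FOFE can "almost uniquely" encode any sequence.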
We have applied FOFE to FeedForward Neural Network Language Models (FFNN-LMs). Experimental results have shown that, without using any recurrent feedback, FOFE-FFNN-LMs significantly outperform not only the standard fixed-input FFNN-LMs but also some popular Recurrent Neural Network Language Models (RNN-LMs).
Instead of treating NER as a sequence labeling problem, we propose a new local detection approach that relies on FOFE to fully encode each sentence fragment and its left/right contexts into a fixed-size representation. This local detection approach has shown many advantages over traditional sequence labeling methods, and our method has yielded strong performance in all tasks we have examined.
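A rough sketch of the local detection idea: enumerate candidate fragments up to a length cutoff and build, for each, a fixed-size feature vector from FOFE-style encodings of its left context, the fragment itself, and its right context. The helper names, the bag-of-words treatment of the fragment, and the `max_len` cutoff are illustrative assumptions, not the thesis's exact design.

```python
def fofe(seq, vocab_size, alpha=0.5):
    """Fixed-size ordinally-forgetting encoding of a word-id sequence."""
    z = [0.0] * vocab_size
    for w in seq:
        z = [alpha * v for v in z]
        z[w] += 1.0
    return z

def fragment_features(sentence, vocab_size, max_len=3, alpha=0.5):
    """Enumerate every fragment of up to max_len words and build a
    fixed-size representation from its left context, the fragment itself
    (as a bag of words), and its right context (encoded right-to-left,
    so the word nearest the fragment carries the largest weight)."""
    candidates = []
    n = len(sentence)
    for i in range(n):
        for j in range(i + 1, min(i + max_len, n) + 1):
            left = fofe(sentence[:i], vocab_size, alpha)
            frag = fofe(sentence[i:j], vocab_size, alpha=1.0)  # order-free bag
            right = fofe(sentence[j:][::-1], vocab_size, alpha)
            candidates.append(((i, j), left + frag + right))
    return candidates

# Each candidate span gets a 3 * vocab_size feature vector that a plain
# feed-forward classifier can then score as an entity type or NONE.
spans = fragment_features([2, 0, 1, 3], vocab_size=4)
```

The key property is that every candidate, regardless of its length or position, is mapped to a vector of the same dimensionality, which is what lets a simple feed-forward network replace a sequence labeler.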
Explainable improved ensembling for natural language and vision
Ensemble methods are well-known in machine learning for improving prediction
accuracy. However, they do not adequately discriminate among underlying
component models. The measure of how good a model is can sometimes be estimated
from “why” it made a specific prediction. We propose a novel approach
called Stacking With Auxiliary Features (SWAF) that effectively leverages component
models by integrating such relevant information from context to improve
ensembling. Using auxiliary features, our algorithm learns to rely on systems that
agree not only on an output prediction but also on the source or origin of that output.
We demonstrate our approach to challenging structured prediction problems
in Natural Language Processing and Vision, including Information Extraction, Object
Detection, and Visual Question Answering. We also present a variant of SWAF
for combining systems that do not have training data in an unsupervised ensemble
with systems that do have training data. Our combined approach obtains a new
state-of-the-art, beating our prior performance on Information Extraction.
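The stacking idea can be sketched as a meta-classifier over component outputs: for each candidate output, a small model scores features that combine component confidences with auxiliary signals such as inter-system agreement and provenance. The hand-rolled logistic regression and the toy feature layout below are illustrative stand-ins for SWAF's actual feature set.

```python
import math

def train_stacker(examples, epochs=200, lr=0.5):
    """Train a logistic-regression meta-classifier that decides whether
    to accept a candidate output. Each example pairs a feature vector
    with a 0/1 label; features mix component confidences with auxiliary
    signals such as agreement and provenance (toy layout below)."""
    dim = len(examples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in examples:
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            g = p - y  # gradient of the log loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def accept(w, b, x):
    """Accept the candidate when the learned logit is positive."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0.0

# Features: [system-1 confidence, system-2 confidence, fraction of
# systems agreeing, auxiliary provenance score]; labels mark candidates
# that turned out to be correct.
data = [
    ([0.9, 0.8, 1.0, 0.7], 1),
    ([0.2, 0.3, 0.0, 0.1], 0),
    ([0.7, 0.1, 0.5, 0.9], 1),
    ([0.4, 0.4, 0.5, 0.0], 0),
]
w, b = train_stacker(data)
```

The point of the auxiliary columns is that the stacker can learn to trust a lone system with strong provenance over several agreeing but weakly-sourced ones, which plain confidence-weighted voting cannot do.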
The state-of-the-art systems on many AI applications are ensembles of deep-learning
models. These models are hard to interpret and can sometimes make odd
mistakes. Explanations make AI systems more transparent and also justify their
predictions. We propose a scalable approach to generate visual explanations for
ensemble methods using the localization maps of the component systems. Crowdsourced
human evaluation on two new metrics indicates that our ensemble’s explanations
qualitatively outperform individual systems’ explanations by a significant margin.
Aspects of Coherence for Entity Analysis
Natural language understanding is an important topic in natural language processing. Given a text, a computer program should, at the very least, be able to understand what the text is about, and ideally also situate it in its extra-textual context and understand what purpose it serves. What exactly it means to understand what a text is about is an open question, but it is generally accepted that, at a minimum, understanding involves being able to answer questions like “Who did what to whom? Where? When? How? And Why?”. Entity analysis, the computational analysis of entities mentioned in a text, aims to support answering the questions “Who?” and “Whom?” by identifying entities mentioned in a text. If the answers to “Where?” and “When?” are specific, named locations and events, entity analysis can also provide these answers. Entity analysis aims to answer these questions by performing entity linking, that is, linking mentions of entities to their corresponding entry in a knowledge base; coreference resolution, that is, identifying all mentions in a text that refer to the same entity; and entity typing, that is, assigning a label such as Person to mentions of entities.
In this thesis, we study how different aspects of coherence can be exploited to improve entity analysis. Our main contribution is a method that allows exploiting knowledge-rich, specific aspects of coherence, namely geographic, temporal, and entity type coherence. Geographic coherence expresses the intuition that entities mentioned in a text tend to be geographically close. Similarly, temporal coherence captures the intuition that entities mentioned in a text tend to be close in the temporal dimension. Entity type coherence is based on the observation that in a text about a certain topic, such as sports, the entities mentioned in it tend to have the same or related entity types, such as sports team or athlete. We show how to integrate features modeling these aspects of coherence into entity linking systems and establish their utility in extensive experiments covering different datasets and systems.
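Geographic coherence, for instance, can be turned into a concrete linking feature by scoring each candidate entity against the coordinates of entities already linked in the document. The haversine distance and the negated-mean scoring below are a hedged sketch, not the thesis's exact feature; the coordinates are approximate.

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def geographic_coherence(candidate, others):
    """Score a candidate entity by its mean distance (in km) to the
    coordinates of the other entities in the document; smaller distances
    are more coherent, so the negated mean serves as the feature value."""
    if not others:
        return 0.0
    return -sum(haversine_km(candidate, o) for o in others) / len(others)

# A document mentioning Heidelberg and Mannheim: the nearby reading of an
# ambiguous mention should score higher than a distant one.
context = [(49.40, 8.69), (49.49, 8.47)]     # Heidelberg, Mannheim
near, far = (49.01, 8.40), (40.71, -74.01)   # Karlsruhe vs. New York City
assert geographic_coherence(near, context) > geographic_coherence(far, context)
```

A real linker would feed this score into its candidate ranker alongside the usual popularity and context-similarity features rather than using it as a hard rule.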
Since entity linking often requires computationally expensive joint, global optimization, we propose a simple but effective rule-based approach that enjoys some of the benefits of joint, global approaches, while avoiding some of their drawbacks. To enable convenient error analysis for system developers, we introduce a tool for visual analysis of entity linking system output. Investigating another aspect of coherence, namely the coherence between a predicate and its arguments, we devise a distributed model of selectional preferences and assess its impact on a neural coreference resolution system. Our final contribution examines how multilingual entity typing can be improved by incorporating subword information. We train and make publicly available subword embeddings in 275 languages and show their utility in a multilingual entity typing task.
Exploiting Cross-Lingual Representations For Natural Language Processing
Traditional approaches to supervised learning require a generous amount of labeled data for good generalization. While such annotation-heavy approaches have proven useful for some Natural Language Processing (NLP) tasks in high-resource languages (like English), they are unlikely to scale to languages where collecting labeled data is difficult and time-consuming. Translating supervision available in English is also not a viable solution, because developing a good machine translation system requires expensive-to-annotate resources which are not available for most languages.
In this thesis, I argue that cross-lingual representations are an effective means of extending NLP tools to languages beyond English without resorting to generous amounts of annotated data or expensive machine translation. These representations can be learned in an inexpensive manner, often from signals completely unrelated to the task of interest. I begin with a review of different ways of inducing such representations using a variety of cross-lingual signals and study algorithmic approaches to using them in a diverse set of downstream tasks. Examples of such tasks covered in this thesis include learning representations to transfer a trained model across languages for document classification, to assist in monolingual lexical semantics like word sense induction, to identify asymmetric lexical relationships like hypernymy between words in different languages, or to combine supervision across languages through a shared feature space for cross-lingual entity linking. In all these applications, the representations make information expressed in other languages available in English, while requiring minimal additional supervision in the language of interest.
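The transfer principle can be illustrated with a toy shared embedding space: if translations land near each other, a classifier fit only on English documents applies to another language unchanged. The embeddings, labels, and nearest-centroid classifier below are invented for illustration; real cross-lingual representations are learned, higher-dimensional, and noisier.

```python
def doc_vector(words, embeddings):
    """Represent a document as the average of its words' vectors in a
    shared cross-lingual embedding space (toy 2-d vectors below)."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    return [sum(c) / len(vecs) for c in zip(*vecs)]

def nearest_centroid(doc, centroids):
    """Assign the label whose class centroid is closest to the document."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(doc, centroids[label]))

# Hypothetical shared space: translations land near one another, so a
# classifier fit on English documents transfers to German unchanged.
emb = {
    "economy": [0.9, 0.1], "market": [0.8, 0.2],    # English, finance
    "football": [0.1, 0.9], "match": [0.2, 0.8],    # English, sports
    "wirtschaft": [0.85, 0.15],                     # German "economy"
    "fussball": [0.15, 0.85],                       # German "football"
}
centroids = {  # fit on English training documents only
    "finance": doc_vector(["economy", "market"], emb),
    "sports": doc_vector(["football", "match"], emb),
}
label = nearest_centroid(doc_vector(["wirtschaft"], emb), centroids)  # "finance"
```

The only cross-lingual supervision consumed here is whatever signal aligned the two vocabularies into one space; the task-specific labels never leave English.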
Thinking outside the graph: scholarly knowledge graph construction leveraging natural language processing
Despite improved digital access to scholarly knowledge in recent decades, scholarly communication remains exclusively document-based.
The document-oriented workflows in science publication have reached the limits of adequacy, as highlighted by recent discussions on the increasing proliferation of scientific literature, the deficiencies of peer review, and the reproducibility crisis.
In this form, scientific knowledge remains locked in representations that are inadequate for machine processing.
As long as scholarly communication remains in this form, we cannot take advantage of all the advancements taking place in machine learning and natural language processing techniques.
Such techniques would facilitate the transformation from purely text-based representations into (semi-)structured semantic descriptions that are interlinked in a collection of big federated graphs.
We are in dire need of a new age of semantically enabled infrastructure adept at storing, manipulating, and querying scholarly knowledge.
Equally important is a suite of machine assistance tools designed to populate, curate, and explore the resulting scholarly knowledge graph.
In this thesis, we address the issue of constructing a scholarly knowledge graph using natural language processing techniques.
First, we tackle the issue of developing a scholarly knowledge graph for structured scholarly communication, that can be populated and constructed automatically.
We co-design and co-implement the Open Research Knowledge Graph (ORKG), an infrastructure capable of modeling, storing, and automatically curating scholarly communications.
Then, we propose a method to automatically extract information into knowledge graphs.
With Plumber, we create a framework to dynamically compose open information extraction pipelines based on the input text.
Such pipelines are composed from community-created information extraction components in an effort to consolidate individual research contributions under one umbrella.
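The dynamic-composition idea behind such a framework can be sketched as a registry of candidate components per extraction stage, with one component chosen per stage based on properties of the input text. The stage names, applicability tests, and toy components below are invented for illustration and are not Plumber's actual interface.

```python
def compose_pipeline(text, registry):
    """For each extraction stage, pick the first registered component
    whose applicability test matches the input text, then chain the
    chosen components into a single callable pipeline."""
    stages = ["coreference", "triple_extraction", "linking"]
    chosen = []
    for stage in stages:
        for name, (applies, run) in registry[stage].items():
            if applies(text):
                chosen.append((name, run))
                break
    def pipeline(t):
        for _, run in chosen:
            t = run(t)
        return t
    return [name for name, _ in chosen], pipeline

# Toy component registry: each entry maps a component name to an
# (applicability test, run function) pair. The run functions here are
# trivial stand-ins for real community-created extractors.
registry = {
    "coreference": {
        "short_doc_coref": (lambda t: len(t.split()) < 50, lambda t: t),
        "long_doc_coref": (lambda t: True, lambda t: t),
    },
    "triple_extraction": {
        "openie_style": (lambda t: True, lambda t: [tuple(t.split()[:3])]),
    },
    "linking": {
        "string_match_linker": (lambda t: True, lambda triples: triples),
    },
}
names, pipe = compose_pipeline("FOFE encodes sequences compactly", registry)
```

Swapping in a different component is just another registry entry, which is how community contributions can be consolidated under one umbrella without changing the orchestration code.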
We further present MORTY as a more targeted approach that leverages automatic text summarization to create, from the scholarly article's text, structured summaries containing all required information.
In contrast to the pipeline approach, MORTY only extracts the information it is instructed to, making it a more valuable tool for various curation and contribution use cases.
Moreover, we study the problem of knowledge graph completion.
exBERT is able to perform knowledge graph completion tasks, such as relation and entity prediction, on scholarly knowledge graphs by means of textual triple classification.
Lastly, we use the structured descriptions collected from manual and automated sources alike with a question answering approach that builds on the machine-actionable descriptions in the ORKG.
We propose JarvisQA, a question answering interface operating on tabular views of scholarly knowledge graphs, i.e., ORKG comparisons.
JarvisQA is able to answer a variety of natural language questions, and retrieve complex answers on pre-selected sub-graphs.
These contributions are key to the broader agenda of studying the feasibility of natural language processing methods on scholarly knowledge graphs, and lay the foundation for determining which methods can be used in which cases. Our work indicates the challenges and issues involved in automatically constructing scholarly knowledge graphs, and opens up future research directions.
Improving Cross-Lingual Transfer Learning for Event Detection
The widespread adoption of applications powered by Artificial Intelligence (AI) backbones has unquestionably changed the way we interact with the world around us. Applications such as automated personal assistants, automatic question answering, and machine-based translation systems have become mainstays of modern culture thanks to the recent considerable advances in Natural Language Processing (NLP) research. Nonetheless, with over 7,000 spoken languages in the world, there still remain a considerable number of marginalized communities that are unable to benefit from these technological advancements, largely due to the language they speak. Cross-Lingual Learning (CLL) looks to address this issue by transferring the knowledge acquired from a popular, high-resource source language (e.g., English, Chinese, or Spanish) to a less favored, lower-resourced target language (e.g., Urdu or Swahili). This dissertation leverages the Event Detection (ED) sub-task of Information Extraction (IE) as a testbed and presents three novel approaches that improve cross-lingual transfer learning from distinct perspectives: (1) direct knowledge transfer, (2) hybrid knowledge transfer, and (3) few-shot learning.