Improving Cross-Lingual Transfer Learning for Event Detection
The widespread adoption of applications powered by Artificial Intelligence (AI) backbones has unquestionably changed the way we interact with the world around us. Applications such as automated personal assistants, automatic question answering, and machine translation systems have become mainstays of modern culture thanks to the recent considerable advances in Natural Language Processing (NLP) research. Nonetheless, with over 7000 spoken languages in the world, there remain a considerable number of marginalized communities that are unable to benefit from these technological advancements, largely due to the language they speak. Cross-Lingual Learning (CLL) looks to address this issue by transferring the knowledge acquired from a popular, high-resource source language (e.g., English, Chinese, or Spanish) to a less favored, lower-resourced target language (e.g., Urdu or Swahili). This dissertation leverages the Event Detection (ED) sub-task of Information Extraction (IE) as a testbed and presents three novel approaches that improve cross-lingual transfer learning from distinct perspectives: (1) direct knowledge transfer, (2) hybrid knowledge transfer, and (3) few-shot learning.
Neural Concept-to-text Generation with Knowledge Graphs
Modern language models are strong at generating grammatically correct, natural language. However, they still struggle with commonsense reasoning - a task involving making inferences about common everyday situations without explicitly stated information. Prior research into the topic has shown that providing additional information from external sources helps language models generate better outputs. In this thesis, we explore methods of extracting information from knowledge graphs and using it as additional input for a pre-trained generative language model. We do this by either extracting a subgraph relevant to the context or by using graph neural networks to predict which information is relevant. Moreover, we experiment with a post-editing approach and with a model trained in a multi-task setup (generation and consistency classification). Our methods are evaluated on the CommonGen benchmark for generative commonsense reasoning using both automatic metrics and a detailed error analysis on a small sample of outputs. We show that the methods improve over a simple language-model fine-tuning baseline, although they do not set a new state of the art. Institute of Formal and Applied Linguistics, Faculty of Mathematics and Physics
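The abstract's first strategy, extracting a subgraph relevant to the context, can be illustrated with a toy sketch. Everything below (the tiny knowledge graph, the triples, and the helper name) is invented for illustration and is not the thesis's actual implementation:

```python
# Toy sketch of the "extract a relevant subgraph" idea: keep only
# knowledge-graph edges whose endpoints match concepts mentioned in
# the generation context. Graph and triples are invented examples.

KG = [
    ("dog", "capable_of", "bark"),
    ("dog", "is_a", "animal"),
    ("piano", "used_for", "music"),
]

def relevant_subgraph(concepts, kg=KG):
    """Return triples whose head or tail is one of the input concepts."""
    wanted = set(concepts)
    return [(h, r, t) for h, r, t in kg if h in wanted or t in wanted]

print(relevant_subgraph(["dog", "bark"]))
# -> [('dog', 'capable_of', 'bark'), ('dog', 'is_a', 'animal')]
```

The retrieved triples would then be serialized and appended to the generative model's input; the graph-neural-network variant described above instead scores each triple's relevance rather than filtering by exact match.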
MAP's not dead yet: Uncovering true language model modes by conditioning away degeneracy
It has been widely observed that exact or approximate MAP (mode-seeking)
decoding from natural language generation (NLG) models consistently leads to
degenerate outputs (Stahlberg and Byrne, 2019, Holtzman et al., 2019). This has
generally been attributed to either a fundamental inadequacy of modes in models
or weaknesses in language modeling. In contrast, in this work we emphasize
that degenerate modes can even occur in the absence of any model error, due to
contamination of the training data. Specifically, we show that mixing even a
tiny amount of low-entropy noise with a population text distribution can cause
the data distribution's mode to become degenerate, implying that any models
trained on it will be as well. As the unconditional mode of NLG models will
often be degenerate, we therefore propose to apply MAP decoding to the model's
distribution conditional on avoiding specific degeneracies. Using exact search,
we empirically verify that the length-conditional modes of machine translation
models and language models are indeed more fluent and topical than their
unconditional modes. For the first time, we also share many examples of exact
modal sequences from these models, and from several variants of the LLaMA-7B
model. Notably, the modes of the LLaMA models are still degenerate, showing
that improvements in modeling have not fixed this issue. Because of the cost of
exact mode finding algorithms, we develop an approximate mode finding approach,
ACBS, which finds sequences that are both high-likelihood and high-quality. We
apply this approach to LLaMA-7B, a model which was not trained for instruction
following, and find that we are able to elicit reasonable outputs without any
fine-tuning. Comment: 49 pages, 3 figures
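The core object in this abstract, the length-conditional mode, can be made concrete with a toy example. The bigram model, vocabulary, and probabilities below are all invented, and exhaustive enumeration stands in for the paper's exact-search machinery, which operates on real neural models:

```python
# Toy illustration (not the paper's algorithm): find the length-conditional
# mode of a tiny bigram language model by exhaustive search.
import itertools
import math

VOCAB = ["<s>", "the", "cat", "sat", "</s>"]

# Invented bigram log-probabilities; unlisted pairs get a small floor value.
LOGP = {
    ("<s>", "the"): math.log(0.6), ("<s>", "cat"): math.log(0.4),
    ("the", "cat"): math.log(0.7), ("the", "sat"): math.log(0.3),
    ("cat", "sat"): math.log(0.8), ("cat", "</s>"): math.log(0.2),
    ("sat", "</s>"): math.log(0.9), ("sat", "the"): math.log(0.1),
}
FLOOR = math.log(1e-6)

def seq_logprob(tokens):
    """Score <s> tokens </s> under the bigram table."""
    path = ["<s>", *tokens, "</s>"]
    return sum(LOGP.get(pair, FLOOR) for pair in zip(path, path[1:]))

def length_conditional_mode(length):
    """Enumerate every token sequence of exactly the given length and
    return the highest-probability one: the mode of the model's
    distribution conditioned on that length."""
    inner = [w for w in VOCAB if w not in ("<s>", "</s>")]
    return max(itertools.product(inner, repeat=length), key=seq_logprob)

print(length_conditional_mode(3))  # -> ('the', 'cat', 'sat')
```

Conditioning on length rules out the empty-sequence degeneracy the paper discusses: the unconditional mode may be a very short (or empty) string, while the search above only compares sequences of the requested length.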
Document-Level Language Models for Machine Translation
Despite the known limitations, most machine translation systems today still
operate on the sentence level. One reason for this is that most parallel
training data is only sentence-level aligned, without document-level meta
information available. In this work, we set out to build context-aware
translation systems utilizing document-level monolingual data instead. This can
be achieved by combining any existing sentence-level translation model with a
document-level language model. We improve existing approaches by leveraging
recent advancements in model combination. Additionally, we propose novel
weighting techniques that make the system combination more flexible and
significantly reduce computational overhead. In a comprehensive evaluation on
four diverse translation tasks, we show that our extensions improve
document-targeted scores substantially and are also computationally more
efficient. However, we also find that in most scenarios, back-translation gives
even better results, at the cost of having to re-train the translation system.
Finally, we explore language model fusion in the light of recent advancements
in large language models. Our findings suggest that there might be strong
potential in utilizing large language models via model combination. Comment: accepted at WMT 202
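The combination of a sentence-level translation model with a document-level language model can be sketched as n-best rescoring under a weighted log-linear combination. The function names, the interpolation weight, and the stand-in document LM below are invented for illustration; the paper's actual combination and weighting techniques differ:

```python
# Minimal sketch: rescore a sentence-level MT system's n-best list with a
# document-level language model via a weighted log-linear combination.

def combine_scores(candidates, doc_context, doc_lm_score, lam=0.3):
    """candidates: list of (translation, mt_logprob) pairs.
    doc_lm_score(context, sentence) -> score of the sentence given the
    document context under the document-level LM."""
    def total(item):
        sentence, mt_logprob = item
        return mt_logprob + lam * doc_lm_score(doc_context, sentence)
    return max(candidates, key=total)[0]

# Tiny stand-in document LM: rewards candidates that reuse context words.
def toy_doc_lm_score(context, sentence):
    overlap = len(set(context.split()) & set(sentence.split()))
    return overlap - len(sentence.split())  # crude pseudo-log-probability

best = combine_scores(
    [("she signed the contract", -1.2), ("it signed the contract", -1.1)],
    doc_context="Ms. Meier reviewed the deal . she agreed",
    doc_lm_score=toy_doc_lm_score,
)
print(best)  # -> she signed the contract
```

The example shows the intended effect: the document LM's context score overrides the sentence-level model's slight preference for the wrong pronoun.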
Controlling Styles in Neural Machine Translation with Activation Prompt
Controlling styles in neural machine translation (NMT) has attracted wide
attention, as it is crucial for enhancing user experience. Earlier studies on
this topic typically concentrate on regulating the level of formality and
achieve some progress in this area. However, they still encounter two major
challenges. The first is the difficulty in style evaluation. The style
comprises various aspects such as lexis, syntax, and others that provide
abundant information. Nevertheless, only formality has been thoroughly
investigated. The second challenge involves excessive dependence on incremental
adjustments, particularly when new styles are necessary. To address both
challenges, this paper presents a new benchmark and approach. A multiway
stylized machine translation (MSMT) benchmark is introduced, incorporating
diverse categories of styles across four linguistic domains. Then, we propose a
method named style activation prompt (StyleAP) that retrieves prompts from a
stylized monolingual corpus and requires no extra fine-tuning.
Experiments show that StyleAP can effectively control the style of the
translation and achieves strong performance. Comment: Accepted by Findings of ACL 2023; the code is available at
https://github.com/IvanWang0730/StyleA
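The retrieve-then-prompt idea behind this abstract can be sketched in a few lines. Everything here is a toy stand-in: the real StyleAP retrieval and prompt format differ, and the Jaccard similarity, separator token, and example corpus are invented:

```python
# Hedged sketch of retrieve-then-prompt style control: pick the most
# similar sentence from a stylized monolingual corpus and prepend it as
# a prompt prefix, requiring no extra fine-tuning.

def jaccard(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def build_styled_prompt(source_sentence, stylized_corpus):
    """Retrieve the closest stylized sentence and use it as a prompt prefix."""
    exemplar = max(stylized_corpus, key=lambda s: jaccard(source_sentence, s))
    return f"{exemplar} </sep> {source_sentence}"

corpus = [
    "Pray tell, whither goest thou on this fine morn?",
    "I would be much obliged if you could attend the meeting.",
]
prompt = build_styled_prompt("Could you attend the meeting tomorrow?", corpus)
print(prompt)
```

A decoder conditioned on such a prompt tends to continue in the exemplar's register, which is why no parameter updates are needed when a new style is introduced.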
A distributional investigation of German verbs
This dissertation provides an empirical investigation of German verbs conducted on the basis of statistical descriptions acquired from a large corpus of German text. In a brief overview of the linguistic theory pertaining to the lexical semantics of verbs, I outline the idea that verb meaning is composed of argument structure (the number and types of arguments that co-occur with a verb) and aspectual structure (properties describing the temporal progression of an event referenced by the verb).
I then produce statistical descriptions of verbs according to these two distinct facets of meaning: in particular, I examine verbal subcategorisation, selectional preferences, and aspectual type. All three of these modelling strategies are evaluated on a common task, automatic verb classification. I demonstrate that automatically acquired features capturing verbal lexical aspect are beneficial for an application that concerns argument structure, namely semantic role labelling. Furthermore, I demonstrate that features capturing verbal argument structure perform well on the task of classifying a verb for its aspectual type. These findings suggest that these two facets of verb meaning are related in an underlying way.
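The verb-classification setup described above, distributional features scored against labelled verbs, can be illustrated with a toy nearest-neighbour classifier. The verbs, frame labels, counts, and class names below are invented for this sketch and are not the dissertation's actual data or method:

```python
# Illustrative sketch: represent each verb by its subcategorisation-frame
# counts and assign an unseen verb to the class of the most similar
# training verb under cosine similarity. All data here is invented.
import math

TRAIN = {
    "geben":  ({"NP NP": 8, "NP PP": 2}, "transfer"),
    "laufen": ({"intrans": 9, "PP": 1}, "motion"),
}

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(frame_counts):
    """Nearest-neighbour class over cosine similarity of frame profiles."""
    best = max(TRAIN, key=lambda v: cosine(frame_counts, TRAIN[v][0]))
    return TRAIN[best][1]

print(classify({"NP NP": 5, "NP PP": 1}))  # -> transfer
```

The same scaffold works for any of the three feature sets the dissertation compares; only the feature dictionaries change.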
Citance-Contextualized Summarization of Scientific Papers
Current approaches to automatic summarization of scientific papers generate
informative summaries in the form of abstracts. However, abstracts are not
intended to show the relationship between a paper and the references cited in
it. We propose a new contextualized summarization approach that can generate an
informative summary conditioned on a given sentence containing the citation of
a reference (a so-called "citance"). This summary outlines the content of the
cited paper relevant to the citation location. Thus, our approach extracts and
models the citances of a paper, retrieves relevant passages from cited papers,
and generates abstractive summaries tailored to each citance. We evaluate our
approach using a new dataset containing 540K computer science papers and 4.6M citances therein. Comment: Accepted at EMNLP 2023 Findings
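One stage of the pipeline above, retrieving the cited paper's passages most relevant to a citance, can be sketched with simple word-overlap scoring. The function, example passages, and citance are invented; the actual approach uses stronger retrieval models:

```python
# Toy sketch of citance-driven passage retrieval: rank a cited paper's
# passages by word overlap with the citing sentence (the "citance").

def retrieve_passages(citance, passages, k=1):
    """Return the k passages sharing the most words with the citance."""
    cit = set(citance.lower().split())
    scored = sorted(
        passages,
        key=lambda p: len(cit & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

passages = [
    "We train a transformer encoder on parsed citation graphs.",
    "Our dataset contains papers from open-access venues.",
]
top = retrieve_passages(
    "Their transformer encoder improves citation graph parsing", passages)
print(top[0])
```

The retrieved passages would then be fed, together with the citance, to an abstractive summarizer to produce the citance-contextualized summary.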