Data-driven sentence simplification: Survey and benchmark
Sentence Simplification (SS) aims to modify a sentence in order to make it easier to read and understand. To do so, several rewriting transformations can be performed, such as replacement, reordering, and splitting. Executing these transformations while keeping sentences grammatical, preserving their main idea, and generating simpler output is a challenging and still far from solved problem. In this article, we survey research on SS, focusing on approaches that attempt to learn how to simplify using corpora of aligned original-simplified sentence pairs in English, which is the dominant paradigm nowadays. We also include a benchmark of different approaches on common datasets so as to compare them and highlight their strengths and limitations. We expect that this survey will serve as a starting point for researchers interested in the task and help spark new ideas for future developments.
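A toy illustration of two of these rewriting transformations, lexical replacement and sentence splitting, is sketched below; it is not drawn from any system in the survey, and the substitution lexicon and splitting rule are invented for the example.

```python
# Toy sketch of two SS transformations: lexical replacement and splitting.
# Real systems learn such operations from aligned original-simplified pairs;
# the lexicon and the splitting rule here are invented for illustration.

SUBSTITUTIONS = {
    "utilize": "use",
    "approximately": "about",
    "commence": "start",
}

def replace_lexical(sentence: str) -> str:
    """Swap complex words for simpler synonyms, token by token."""
    return " ".join(SUBSTITUTIONS.get(tok.lower(), tok) for tok in sentence.split())

def split_on_conjunction(sentence: str) -> list[str]:
    """Naively split one long sentence at ', and ' into two shorter ones."""
    if ", and " in sentence:
        left, right = sentence.split(", and ", 1)
        return [left.strip() + ".", right.strip().capitalize()]
    return [sentence]

if __name__ == "__main__":
    s = "The committee will commence the review, and it will utilize approximately ten criteria."
    for part in split_on_conjunction(replace_lexical(s)):
        print(part)
```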
Text simplification without using a simplified corpus
Tokyo Metropolitan University, 2018-03-25, Doctor of Engineering, Tokyo Metropolitan University
Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation
This paper surveys the current state of the art in Natural Language
Generation (NLG), defined as the task of generating text or speech from
non-linguistic input. A survey of NLG is timely in view of the changes that the
field has undergone over the past decade or so, especially in relation to new
(usually data-driven) methods, as well as new applications of NLG technology.
This survey therefore aims to (a) give an up-to-date synthesis of research on
the core tasks in NLG and the architectures adopted in which such tasks are
organised; (b) highlight a number of relatively recent research topics that
have arisen partly as a result of growing synergies between NLG and other areas
of artificial intelligence; (c) draw attention to the challenges in NLG
evaluation, relating them to similar challenges faced in other areas of Natural
Language Processing, with an emphasis on different evaluation methods and the
relationships between them.
Comment: Published in the Journal of AI Research (JAIR), volume 61, pp. 75-170; 118 pages, 8 figures, 1 table.
Deep Learning for Audio Signal Processing
Given the recent surge in developments of deep learning, this article
provides a review of the state-of-the-art deep learning techniques for audio
signal processing. Speech, music, and environmental sound processing are
considered side-by-side, in order to point out similarities and differences
between the domains, highlighting general methods, problems, key references,
and potential for cross-fertilization between areas. The dominant feature
representations (in particular, log-mel spectra and raw waveform) and deep
learning models are reviewed, including convolutional neural networks, variants
of the long short-term memory architecture, as well as more audio-specific
neural network models. Subsequently, prominent deep learning application areas
are covered, i.e. audio recognition (automatic speech recognition, music
information retrieval, environmental sound detection, localization and
tracking) and synthesis and transformation (source separation, audio
enhancement, generative models for speech, sound, and music synthesis).
Finally, key issues and future questions regarding deep learning applied to audio signal processing are identified.
Comment: 15 pages, 2 PDF figures.
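As a minimal, hedged sketch of the log-mel representation mentioned above, the snippet below computes a log-mel spectrogram with the librosa library; the file name and all parameter values are arbitrary examples.

```python
# Minimal sketch of log-mel feature extraction with librosa.
# "audio.wav" and the parameter values are placeholder choices.
import librosa

y, sr = librosa.load("audio.wav", sr=16000)           # mono waveform, resampled to 16 kHz
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512,
                                     hop_length=160, n_mels=64)
log_mel = librosa.power_to_db(mel)                    # log compression
print(log_mel.shape)                                  # (n_mels, n_frames)
```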
Learning from Noisy Data in Statistical Machine Translation
This thesis develops methods that reduce the negative effects of noisy data in SMT systems and thereby improve system performance. The problem is addressed at two different stages of the learning process: during preprocessing and during modeling. In preprocessing, two methods are developed that improve the statistical models by raising the quality of the training data. In modeling, several ways of weighting data according to their usefulness are presented.
First, the effect of removing false positives from the parallel corpus is shown. A parallel corpus consists of a text in two languages in which each sentence of one language is paired with the corresponding sentence of the other language; it is assumed that both language versions contain the same number of sentences. False positives in this sense are sentence pairs that are paired in the parallel corpus but are not translations of each other. To detect them, a small, error-free parallel corpus (the clean corpus) is assumed to be available. Using several lexical features, false positives are reliably filtered out before the modeling phase. One important lexical feature is the bilingual lexicon generated from the clean corpus. Several heuristics are implemented in the extraction of this bilingual lexicon that lead to improved performance.
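A minimal sketch of this filtering idea follows, assuming the bilingual lexicon is a plain dictionary mapping source words to candidate translations; the coverage score and threshold are illustrative choices, not the exact heuristics of the thesis.

```python
# Sketch: drop sentence pairs whose words are poorly covered by a bilingual
# lexicon extracted from a small clean corpus. Threshold and scoring are
# illustrative assumptions.

def lexicon_coverage(src_tokens, tgt_tokens, lexicon):
    """Fraction of source tokens with at least one lexicon translation
    that actually appears in the target sentence."""
    if not src_tokens:
        return 0.0
    covered = sum(
        1 for s in src_tokens
        if any(t in tgt_tokens for t in lexicon.get(s, ()))
    )
    return covered / len(src_tokens)

def filter_parallel_corpus(pairs, lexicon, threshold=0.3):
    """Keep only pairs whose lexical coverage reaches the threshold."""
    kept = []
    for src, tgt in pairs:
        src_tok = src.lower().split()
        tgt_tok = set(tgt.lower().split())
        if lexicon_coverage(src_tok, tgt_tok, lexicon) >= threshold:
            kept.append((src, tgt))
    return kept
```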
Next, we consider the problem of extracting the most useful parts of the training data. We rank the data according to their relevance to the target domain, under the assumption that a good, representative tuning set exists. Since such tuning data are typically limited in size, word similarities are used to extend the coverage of the tuning data.
The word similarities used in the previous step are decisive for the quality of the method. For this reason, the thesis presents several automatic methods for deriving such word similarities from monolingual and bilingual corpora. Interestingly, this is possible even with limited data, since monolingual data, which are available in large quantities, can also be used to estimate word similarity. For bilingual data, which are often available only in limited amounts, additional language pairs that share at least one language with the given language pair can be exploited as well.
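The following sketch illustrates the relevance-ranking idea under simple assumptions: the target domain is represented by the tuning-set vocabulary, expanded with a precomputed word-similarity table, and sentences are scored by overlap with that expanded vocabulary.

```python
# Sketch: rank training sentences by relevance to a target domain represented
# by a tuning set whose vocabulary is expanded with word similarities.
# The similarity table and the overlap score are illustrative stand-ins.

def expand_vocabulary(tuning_vocab, similar_words, k=3):
    """Add up to k similar words for every tuning-set word."""
    expanded = set(tuning_vocab)
    for w in tuning_vocab:
        expanded.update(similar_words.get(w, [])[:k])
    return expanded

def rank_by_domain_relevance(sentences, tuning_vocab, similar_words):
    """Order training sentences by overlap with the expanded vocabulary."""
    vocab = expand_vocabulary(tuning_vocab, similar_words)

    def score(sentence):
        tokens = sentence.lower().split()
        return sum(tok in vocab for tok in tokens) / max(len(tokens), 1)

    return sorted(sentences, key=score, reverse=True)
```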
In the modeling step, we address the problem of noisy data by weighting the training data according to the quality of the corpus. We use statistical significance measures to identify the less reliable sequences and reduce their weight. As in the previous approaches, word similarities are used to deal with the limited-data problem. A further problem arises, however, as soon as absolute frequencies are replaced by weighted frequencies; this thesis therefore develops techniques for smoothing the probabilities in this situation.
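A small sketch of the weighted-count idea follows; additive smoothing stands in for the smoothing techniques developed in the thesis, and the data layout is an assumption made for the example.

```python
# Sketch: estimate p(tgt | src) from per-pair weights instead of absolute
# counts, with additive smoothing as a placeholder smoothing technique.
from collections import defaultdict

def weighted_translation_probs(phrase_pairs, weights, alpha=0.1):
    """phrase_pairs: list of (src, tgt) phrases; weights: one weight per pair."""
    joint = defaultdict(float)
    marginal = defaultdict(float)
    targets = set()
    for (src, tgt), w in zip(phrase_pairs, weights):
        joint[(src, tgt)] += w
        marginal[src] += w
        targets.add(tgt)

    v = len(targets)  # size of the observed target inventory
    return {
        (src, tgt): (c + alpha) / (marginal[src] + alpha * v)
        for (src, tgt), c in joint.items()
    }
```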
The size of the training data becomes problematic when working with corpora of substantial volume. Two main difficulties arise: long training times and limited main memory. For the training-time problem, an algorithm is developed that distributes the computationally expensive calculations across several processors with shared memory. For the memory problem, special data structures and external-memory algorithms are used. This allows extremely large models to be trained efficiently on hardware with limited memory.
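As a very rough sketch of the parallelization aspect only (the shared-memory algorithm and external-memory structures of the thesis are considerably more involved), the snippet below distributes an n-gram counting step over several worker processes.

```python
# Rough sketch: parallelize an expensive counting step over processes.
# This only illustrates the idea of spreading work across processors.
from collections import Counter
from multiprocessing import Pool

def count_ngrams(sentences, n=3):
    counts = Counter()
    for sent in sentences:
        toks = sent.split()
        counts.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return counts

def parallel_ngram_counts(sentences, workers=4, n=3):
    chunks = [sentences[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partial = pool.starmap(count_ngrams, [(chunk, n) for chunk in chunks])
    total = Counter()
    for c in partial:
        total.update(c)
    return total
```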
Coherence in Machine Translation
Coherence ensures individual sentences work together to form a meaningful document. When properly translated, a coherent document in one language should result in a coherent document in another language. In Machine Translation, however, for reasons of modeling and computational complexity, sentences are pieced together from words or phrases based on short context windows and
with no access to extra-sentential context.
In this thesis I propose ways to automatically assess the coherence of machine translation output. The work is structured around three dimensions: entity-based coherence, coherence as evidenced via syntactic patterns, and coherence as
evidenced via discourse relations.
For the first time, I evaluate existing monolingual coherence models on this new task, identifying issues and challenges that are specific to the machine translation setting. To address these issues, I adapt a state-of-the-art syntax model, which also improves performance on the monolingual task. The results clearly indicate how much more difficult the new task is than detecting shuffled texts. I propose a new coherence model that explores the crosslingual transfer of discourse relations in machine translation; it is novel in that it measures the correctness of a discourse relation by comparison with the source text rather than with a reference translation. I identify patterns of incoherence common across different language pairs and create a corpus of machine-translated output annotated with coherence errors for evaluation purposes. I then examine lexical coherence in a multilingual context, as a preliminary study for crosslingual transfer. Finally, I determine how the new and adapted models correlate with human judgements of translation quality, and suggest that general evaluation in machine translation would benefit from a coherence component that evaluates the translation output with respect to the source text.
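A crude sketch of the entity-based view of coherence follows; it treats capitalized tokens as entities and adjacent-sentence overlap as the coherence signal, which is a drastic simplification of entity-grid models with syntactic roles and learned transitions.

```python
# Crude sketch of entity-based coherence: entities that recur in adjacent
# sentences signal continuity. Capitalized-token "entities" and plain overlap
# are simplifying assumptions, not a real entity-grid model.

def entities(sentence):
    """Naive entity detector: capitalized, non-sentence-initial tokens."""
    tokens = sentence.split()
    return {t.strip(".,;") for t in tokens[1:] if t[:1].isupper()}

def entity_overlap_coherence(sentences):
    """Average entity overlap between adjacent sentences (0 = no continuity)."""
    overlaps = []
    for a, b in zip(sentences, sentences[1:]):
        ea, eb = entities(a), entities(b)
        overlaps.append(len(ea & eb) / max(len(ea | eb), 1))
    return sum(overlaps) / max(len(overlaps), 1)
```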
Adapting Automatic Summarization to New Sources of Information
English-language news articles are no longer necessarily the best source of information. The Web allows information to spread more quickly and travel farther: first-person accounts of breaking news events pop up on social media, and foreign-language news articles are accessible to, if not immediately understandable by, English-speaking users. This thesis focuses on developing automatic summarization techniques for these new sources of information.
We focus on summarizing two specific new sources of information: personal narratives, first-person accounts of exciting or unusual events that are readily found in blog entries and other social media posts, and non-English documents, which must first be translated into English, often introducing translation errors that complicate the summarization process. Personal narratives are a very new area of interest in natural language processing research, and they present two key challenges for summarization. First, unlike many news articles, whose lead sentences serve as summaries of the most important ideas in the articles, personal narratives provide no such shortcuts for determining where important information occurs within them; second, personal narratives are written informally and colloquially, and unlike news articles, they are rarely edited, so they require heavier editing and rewriting during the summarization process. Non-English documents, whether news or narrative, present yet another source of difficulty on top of any challenges inherent to their genre: they must be translated into English, potentially introducing translation errors and disfluencies that must be identified and corrected during summarization.
The bulk of this thesis is dedicated to addressing the challenges of summarizing personal narratives found on the Web. We develop a two-stage summarization system for personal narrative that first extracts sentences containing important content and then rewrites those sentences into summary-appropriate forms. Our content extraction system is inspired by contextualist narrative theory, using changes in writing style throughout a narrative to detect sentences containing important information; it outperforms both graph-based and neural network approaches to sentence extraction for this genre. Our paraphrasing system rewrites the extracted sentences into shorter, standalone summary sentences, learning to mimic the paraphrasing choices of human summarizers more closely than can traditional lexicon- or translation-based paraphrasing approaches.
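The skeleton below mirrors the two-stage extract-then-rewrite structure described above, with placeholder components: sentence length stands in for the style-change salience detector, and deletion of parentheticals stands in for the learned paraphraser.

```python
# Skeleton of a two-stage "extract then rewrite" summarizer. The salience
# score and the rewriting rule are placeholders, not the thesis's models.
import re

def extract(sentences, k=3):
    """Stage 1: pick k candidate sentences by a placeholder salience score."""
    return sorted(sentences, key=len, reverse=True)[:k]

def rewrite(sentence):
    """Stage 2: compress a sentence, here by dropping parentheticals."""
    return re.sub(r"\s*\([^)]*\)", "", sentence).strip()

def summarize(sentences, k=3):
    return [rewrite(s) for s in extract(sentences, k)]
```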
We conclude with a chapter dedicated to summarizing non-English documents written in low-resource languages, documents that would otherwise be unreadable for English-speaking users. We develop a cross-lingual summarization system that performs even heavier editing and rewriting than does our personal narrative paraphrasing system; we create and train on large amounts of synthetic errorful translations of foreign-language documents. Our approach produces fluent English summaries from disfluent translations of non-English documents, and it generalizes across languages.
The Circle of Meaning: From Translation to Paraphrasing and Back
The preservation of meaning between inputs and outputs is perhaps
the most ambitious and, often, the most elusive goal of systems
that attempt to process natural language. Nowhere is this goal of
more obvious importance than for the tasks of machine translation
and paraphrase generation. Preserving meaning between the input and
the output is paramount for both, the monolingual vs bilingual distinction
notwithstanding. In this thesis, I present a novel, symbiotic relationship
between these two tasks that I term the "circle of meaning".
Today's statistical machine translation (SMT) systems require high
quality human translations for parameter tuning, in addition to
large bi-texts for learning the translation units. This parameter
tuning usually involves generating translations at different points
in the parameter space and obtaining feedback against human-authored
reference translations as to how good the translations are. This feedback
then dictates what point in the parameter space should be explored
next. To measure this feedback, it is generally considered wise to have
multiple (usually 4) reference translations to avoid unfair penalization of translation
hypotheses which could easily happen given the large number of ways in which
a sentence can be translated from one language to another. However, this reliance on multiple reference translations
creates a problem since they are labor intensive and expensive to obtain.
Therefore, most current MT datasets only contain a single reference.
This leads to the problem of reference sparsity---the primary open problem
that I address in this dissertation---one that has a serious effect on the
SMT parameter tuning process.
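The toy example below illustrates why a single reference penalizes legitimate alternative phrasings while additional references do not; it uses NLTK's sentence-level BLEU purely for illustration, and the sentences are invented.

```python
# Toy illustration of reference sparsity: the same hypothesis scores low
# against one reference and higher once an alternative reference is added.
# NLTK's BLEU is used only as a convenient stand-in metric.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

hyp = "the committee approved the proposal yesterday".split()
refs_single = ["the panel signed off on the plan yesterday".split()]
refs_multi = refs_single + ["the committee approved the proposal yesterday".split()]

smooth = SmoothingFunction().method1
print(sentence_bleu(refs_single, hyp, smoothing_function=smooth))  # low score
print(sentence_bleu(refs_multi, hyp, smoothing_function=smooth))   # much higher
```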
Bannard and Callison-Burch (2005) were the first to provide a practical
connection between phrase-based statistical machine translation and paraphrase
generation. However, their technique is restricted to generating phrasal
paraphrases. I build upon their approach and augment a phrasal paraphrase
extractor into a sentential paraphraser with extremely broad coverage.
The novelty in this augmentation lies in the further strengthening of
the connection between statistical machine translation and paraphrase
generation; whereas Bannard and Callison-Burch only relied on SMT machinery
to extract phrasal paraphrase rules and stopped there, I take it a few
steps further and build a full English-to-English SMT system. This system
can, as expected, "translate" any English input sentence into a new English
sentence with the same degree of meaning preservation that exists in a bilingual
SMT system. In fact, being a state-of-the-art SMT system, it is able to generate
n-best "translations" for any given input sentence. This sentential
paraphraser, built almost entirely from existing SMT machinery, represents
the first 180 degrees of the circle of meaning.
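The pivoting step behind the Bannard and Callison-Burch approach can be written as p(e2 | e1) = ÎŁ_f p(e2 | f) p(f | e1), summing over foreign phrases f that align to both English phrases. The sketch below implements that sum over toy phrase tables; the tables themselves are invented, not output of a real SMT system.

```python
# Pivot-based phrasal paraphrasing: p(e2 | e1) = sum_f p(e2 | f) * p(f | e1).
# The phrase tables below are toy dictionaries for illustration only.
from collections import defaultdict

def paraphrase_probs(e_given_f, f_given_e, phrase):
    """Return p(e2 | phrase) for every candidate paraphrase e2, via pivoting."""
    probs = defaultdict(float)
    for f, p_f in f_given_e.get(phrase, {}).items():    # p(f | e1)
        for e2, p_e in e_given_f.get(f, {}).items():     # p(e2 | f)
            if e2 != phrase:
                probs[e2] += p_e * p_f
    return dict(probs)

f_given_e = {"under control": {"unter kontrolle": 0.9}}
e_given_f = {"unter kontrolle": {"under control": 0.7, "in check": 0.3}}
print(paraphrase_probs(e_given_f, f_given_e, "under control"))  # {'in check': 0.27}
```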
To complete the circle, I describe a novel connection in the other direction.
I claim that the sentential paraphraser, once built in this fashion, can
provide a solution to the reference sparsity problem and, hence, be used
to improve the performance of a bilingual SMT system. I discuss two different instantiations of the sentential paraphraser and show several results that provide empirical validation for this connection.