Generating High-Quality Surface Realizations Using Data Augmentation and Factored Sequence Models
This work presents a new state of the art in reconstruction of surface
realizations from obfuscated text. We identify the lack of sufficient training
data as the major obstacle to training high-performing models, and solve this
issue by generating large amounts of synthetic training data. We also propose
preprocessing techniques which make the structure contained in the input
features more accessible to sequence models. Our models were ranked first on
all evaluation metrics in the English portion of the 2018 Surface Realization
shared task.
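As a rough illustration of the two ideas above, the sketch below shows how synthetic training pairs can be produced by obfuscating ordinary sentences, and how parallel annotation layers can be joined into factored input tokens. The function names and the exact obfuscation (word-order scrambling) are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of the data-augmentation and factored-input ideas above.
# Any parsed corpus sentence can become a synthetic training pair by
# applying the same obfuscation the task applies to its inputs; here we
# scramble word order. Names are hypothetical, not the authors' code.
import random

def make_synthetic_pair(tokens, seed=None):
    """Return (obfuscated_input, target) for a surface-realization model."""
    rng = random.Random(seed)
    scrambled = tokens[:]        # copy so the target stays intact
    rng.shuffle(scrambled)       # destroy word order, as in the task input
    return scrambled, tokens

def to_factored_tokens(tokens, pos_tags, deprels):
    """Join parallel annotation factors into word|POS|deprel units that a
    factored sequence model can split back into separate embeddings."""
    return ["|".join(factors) for factors in zip(tokens, pos_tags, deprels)]

pair = make_synthetic_pair(["the", "cat", "sat", "down"], seed=0)
print(pair)  # (scrambled tokens, original order)
```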
Abstract Meaning Representation for Multi-Document Summarization
Generating an abstract from a collection of documents is a desirable
capability for many real-world applications. However, abstractive approaches to
multi-document summarization have not been thoroughly investigated. This paper
studies the feasibility of using Abstract Meaning Representation (AMR), a
semantic representation of natural language grounded in linguistic theory, as a
form of content representation. Our approach condenses source documents to a
set of summary graphs following the AMR formalism. The summary graphs are then
transformed to a set of summary sentences in a surface realization step. The
framework is fully data-driven and flexible. Each component can be optimized
independently using small-scale, in-domain training data. We perform
experiments on benchmark summarization datasets and report promising results.
We also describe opportunities and challenges for advancing this line of
research.
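The following skeleton illustrates the three-stage shape of such a pipeline: parse each source sentence to an AMR graph, condense the graphs into summary graphs, and realize text from them. Every component here is a labeled placeholder standing in for a separately trained module; none of these names come from the authors' implementation.

```python
# Schematic sketch of the parse -> condense -> realize pipeline described
# in the abstract. All three functions are placeholders for learned
# components; the graph encoding is deliberately simplistic.
from typing import List

def parse_to_amr(sentence: str) -> dict:
    """Placeholder: run an AMR parser, returning a concept/relation graph."""
    return {"concepts": sentence.lower().split(), "relations": []}

def condense(graphs: List[dict], budget: int) -> dict:
    """Placeholder: merge source graphs and keep the `budget` most salient
    concepts, standing in for the paper's learned subgraph selection."""
    seen: List[str] = []
    for g in graphs:
        for concept in g["concepts"]:
            if concept not in seen:
                seen.append(concept)
    return {"concepts": seen[:budget], "relations": []}

def realize(graph: dict) -> str:
    """Placeholder: a surface-realization step mapping the graph to text."""
    return " ".join(graph["concepts"])

docs = ["AMR represents meaning as graphs.", "Graphs can be merged."]
summary_graph = condense([parse_to_amr(s) for s in docs], budget=8)
print(realize(summary_graph))
```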
Automatic Accuracy Prediction for AMR Parsing
Abstract Meaning Representation (AMR) represents sentences as directed,
acyclic and rooted graphs, aiming to capture their meaning in a machine-readable format. AMR parsing converts natural language sentences into such
graphs. However, evaluating a parser on new data by means of comparison to
manually created AMR graphs is very costly. We would also like to be able to detect parses of questionable quality, or to prefer the output of one system over another by selecting the parse for which we can predict higher quality. We propose AMR accuracy prediction as the task of predicting several metrics of correctness for an automatically generated AMR parse, in the absence of the corresponding gold parse. We develop a neural end-to-end multi-output
regression model and perform three case studies: first, we evaluate the model's capacity to predict AMR parse accuracies and test whether it can reliably assign high scores to gold parses. Second, we perform parse
selection based on predicted parse accuracies of candidate parses from
alternative systems, with the aim of improving overall results. Finally, we
predict system ranks for submissions from two AMR shared tasks on the basis of
their predicted parse accuracy averages. All experiments are carried out across
two different domains and show that our method is effective.
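A minimal sketch of what a multi-output regression model for this task might look like is given below, using PyTorch. The encoder dimension, hidden size, and metric names are assumptions for illustration, not the paper's architecture; in particular, the input is assumed to be some fixed-size encoding of the (sentence, candidate parse) pair.

```python
# Hedged sketch: a multi-output regression head mapping a fixed-size
# encoding of (sentence, candidate AMR parse) to several predicted
# correctness metrics at once. Architecture and metric names are
# illustrative assumptions, not the authors' model.
import torch
import torch.nn as nn

METRICS = ["smatch", "concepts", "relations"]  # illustrative targets

class AccuracyPredictor(nn.Module):
    def __init__(self, enc_dim: int = 256, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(enc_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, len(METRICS)),
            nn.Sigmoid(),  # each metric predicted in [0, 1]
        )

    def forward(self, pair_encoding: torch.Tensor) -> torch.Tensor:
        return self.net(pair_encoding)

model = AccuracyPredictor()
fake_encoding = torch.randn(4, 256)   # stand-in for real pair encodings
pred = model(fake_encoding)           # shape: (batch, num_metrics)
loss = nn.functional.mse_loss(pred, torch.rand(4, len(METRICS)))
```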
AMR Dependency Parsing with a Typed Semantic Algebra
We present a semantic parser for Abstract Meaning Representations which
learns to parse strings into tree representations of the compositional
structure of an AMR graph. This allows us to use standard neural techniques for
supertagging and dependency tree parsing, constrained by a linguistically
principled type system. We present two approximative decoding algorithms, which
achieve state-of-the-art accuracy and outperform strong baselines.
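To make the idea of type-constrained decoding concrete, here is a toy sketch in which per-token supertag scores are combined under a type-compatibility check. Note two loud caveats: the two-tag "type system" is invented for illustration, and this sketch searches exhaustively over a tiny tag space, whereas the paper's contribution is precisely the approximative algorithms that avoid such search.

```python
# Toy sketch of type-constrained decoding: a model scores supertags per
# token, and the decoder discards tag sequences the type system rules
# out. Exhaustive search over a tiny space, for illustration only.
from itertools import product

def decode(scored_tags, composes):
    """Pick the highest-scoring tag sequence whose types compose.
    scored_tags: per token, a list of (tag, score) pairs."""
    best, best_score = None, float("-inf")
    for seq in product(*scored_tags):
        tags = [tag for tag, _ in seq]
        score = sum(s for _, s in seq)
        if composes(tags) and score > best_score:
            best, best_score = tags, score
    return best

# Two hypothetical types: a sequence composes iff it has exactly one head.
composes = lambda tags: tags.count("HEAD") == 1
scored = [[("HEAD", 0.9), ("MOD", 0.4)], [("HEAD", 0.8), ("MOD", 0.7)]]
print(decode(scored, composes))  # ['HEAD', 'MOD']
```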
Disentangling the Properties of Human Evaluation Methods: A Classification System to Support Comparability, Meta-Evaluation and Reproducibility Testing
Current standards for designing and reporting human evaluations in NLP mean it is generally unclear which evaluations are comparable and can be expected to yield similar results when applied to the same system outputs. This has serious implications for reproducibility testing and meta-evaluation, in particular given that human evaluation is considered the gold standard against which the trustworthiness of automatic metrics is gauged. Using examples from NLG, we propose a classification system for evaluations based on disentangling (i) what is being evaluated (which aspect of quality), and (ii) how it is evaluated, in terms of specific (a) evaluation modes and (b) experimental designs. We show that this approach provides a basis for determining comparability, hence for comparison of evaluations across papers, meta-evaluation experiments, and reproducibility testing.
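One hedged way to picture the proposed classification is as a record with one field per dimension, so that comparability reduces to agreement on all fields. The field names below paraphrase the abstract's dimensions and are not the paper's actual scheme.

```python
# Illustrative encoding of the abstract's three classification
# dimensions; field names paraphrase the abstract, not the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class EvaluationRecord:
    quality_aspect: str        # (i) what is evaluated, e.g. "fluency"
    evaluation_mode: str       # (ii.a) how, e.g. "absolute rating"
    experimental_design: str   # (ii.b) e.g. "5-point scale, 3 raters"

def comparable(a: EvaluationRecord, b: EvaluationRecord) -> bool:
    """Two evaluations are candidates for direct comparison only if they
    agree on all three classification dimensions."""
    return a == b

e1 = EvaluationRecord("fluency", "absolute rating", "5-point scale, 3 raters")
e2 = EvaluationRecord("fluency", "relative ranking", "pairwise, 2 raters")
print(comparable(e1, e2))  # False: same aspect, different mode and design
```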