Automatic Accuracy Prediction for AMR Parsing
Abstract Meaning Representation (AMR) represents sentences as directed,
acyclic, rooted graphs, aiming to capture their meaning in a machine-readable
format. AMR parsing converts natural language sentences into such graphs.
However, evaluating a parser on new data by comparison to manually created
AMR graphs is very costly. We would also like to detect parses of questionable
quality, or to prefer the output of alternative systems by selecting the
candidates assessed as high quality. We propose AMR accuracy prediction as the
task of predicting several metrics of correctness for an automatically
generated AMR parse, in the absence of the corresponding gold parse. We
develop a neural end-to-end multi-output regression model and perform three
case studies: first, we evaluate the model's capacity to predict AMR parse
accuracies and test whether it can reliably assign high scores to gold parses.
Second, we perform parse selection based on predicted accuracies of candidate
parses from alternative systems, with the aim of improving overall results.
Finally, we predict system ranks for submissions from two AMR shared tasks on
the basis of their average predicted parse accuracy. All experiments are
carried out across two different domains and show that our method is
effective. Comment: accepted at *SEM 201
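The multi-output regression setup described above can be sketched as follows. This is a minimal illustration, not the authors' neural model: a single linear layer mapping hypothetical parse features to several accuracy metrics (e.g. precision, recall, F1) at once, fit in closed form. All feature and target values here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 parses, 8 hypothetical parse features, 3 target metrics.
X = rng.normal(size=(100, 8))
true_W = rng.normal(size=(8, 3))
Y = X @ true_W + 0.01 * rng.normal(size=(100, 3))

# One closed-form least-squares fit covers all three outputs jointly;
# a neural model would replace this linear map with stacked layers.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predicted metrics for each parse -- no gold AMR graph is consulted.
Y_pred = X @ W
print(Y_pred.shape)  # (100, 3): one score per metric per parse
```

The point of the joint formulation is that all correctness metrics are predicted by one model from the same features, rather than by training a separate regressor per metric.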
Bootstrapping Multilingual AMR with Contextual Word Alignments
We develop high-performance multilingual Abstract Meaning Representation (AMR)
systems by projecting English AMR annotations to other languages with weak
supervision. We achieve this goal by bootstrapping transformer-based
multilingual word embeddings, in particular those from cross-lingual RoBERTa
(XLM-R large). We develop a novel technique for foreign-text-to-English AMR
alignment, using the contextual word alignment between English and foreign
language tokens. This word alignment is weakly supervised and relies on the
contextualized XLM-R word embeddings. We achieve a highly competitive
performance that surpasses the best published results for German, Italian,
Spanish and Chinese.
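The core alignment idea can be sketched in a few lines. This is an illustration of embedding-based word alignment, not the paper's exact procedure: each foreign token is linked to the English token whose contextual embedding is most similar. A real system would take the vectors from XLM-R; here, noisy copies of random vectors stand in for contextual embeddings so that the intended alignment is recoverable.

```python
import numpy as np

rng = np.random.default_rng(1)

def cosine_matrix(a, b):
    """Pairwise cosine similarity between row vectors of a and b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Hypothetical embeddings: 4 English tokens, 5 foreign tokens, dim 32.
en = rng.normal(size=(4, 32))
# Each foreign token is a near-copy of one English token's embedding,
# standing in for "same word in context" under a shared encoder.
gold = np.array([0, 1, 1, 2, 3])
fr = en[gold] + 0.01 * rng.normal(size=(5, 32))

sim = cosine_matrix(fr, en)       # (5, 4) similarity matrix
alignment = sim.argmax(axis=1)    # greedy one-best English token per foreign token
print(alignment.tolist())
```

With aligned tokens in hand, English AMR concept annotations can be projected onto the foreign sentence, which is the weak supervision the abstract describes.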
Abstract Meaning Representation Parsing with Rich Linguistic Features
Lexical and syntactic information have been shown to play important roles in semantic parsing. However, there is still no solid research on the relationship between semantic parsing and the different types of linguistic knowledge that support it, e.g., lexical cues, dependency structures, and semantic roles. It is also known that dependency structures provide rich syntactic information for various NLP applications, yet few applications use dependency structures in an underlying neural network framework. This dissertation introduces a complete framework designed to parse Abstract Meaning Representations (AMRs), a semantic representation that expresses the meaning of a sentence as a directed acyclic graph. To enhance our AMR parser, we first develop a light verb construction (LVC) detector using an SVM. We also link input dependency parses to AMR concepts, taking an EM-based approach to generate alignment pairs.
The main parser is split into three sub-components: a frame identifier, a concept identifier, and a transition action identifier. To support these components, we develop a Recursive Neural Network (RevNN) based model as the underlying framework of all three. RevNN is based on dependency structures combined with distinct linguistic features: it generates a vector representation for each dependency node and passes these vectors to the three identifiers. By integrating all the above components, we design a transition-based parser which generates AMR graphs from input dependency parses.
Results show that our LVC detector surpasses comparable systems by 3 to 4% in F1 score, and that this LVC detector supports the AMR parser. Our aligner improves F1 score by 2 to 5% with LVC information. Moreover, the resulting AMR parser achieves the best Smatch scores among transition-based AMR parsers. We also show that the RevNN framework helps to integrate different linguistic features, improving the accuracy of individual components.
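The transition-based parsing style mentioned above can be illustrated with a toy transition system. This is a hypothetical action set for exposition, not the dissertation's exact transitions: a buffer of concept nodes is consumed by SHIFT, and ARC(label) attaches the top stack item to the one beneath it, gradually assembling a directed graph like an AMR.

```python
# Hypothetical two-action transition system (SHIFT / ARC) for building
# a directed concept graph; real AMR transition systems use richer
# action inventories predicted by a classifier at each step.

def parse(concepts, actions):
    stack, buffer, edges = [], list(concepts), []
    for act in actions:
        if act == "SHIFT":
            stack.append(buffer.pop(0))      # move next concept onto the stack
        else:                                # ("ARC", label): head is under the top
            _, label = act
            head, dep = stack[-2], stack[-1]
            edges.append((head, label, dep)) # add a labeled directed edge
            stack.pop()                      # dependent is fully attached
    return edges

# "The boy wants to go": want-01 takes boy as :arg0 and go-01 as :arg1.
concepts = ["want-01", "boy", "go-01"]
actions = ["SHIFT", "SHIFT", ("ARC", ":arg0"), "SHIFT", ("ARC", ":arg1")]
print(parse(concepts, actions))
# [('want-01', ':arg0', 'boy'), ('want-01', ':arg1', 'go-01')]
```

In the dissertation's setting, the action at each step is not given but predicted by the transition action identifier from the RevNN node vectors.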