11,459 research outputs found
Automatic Accuracy Prediction for AMR Parsing
Abstract Meaning Representation (AMR) represents sentences as directed,
acyclic and rooted graphs, aiming at capturing their meaning in a machine
readable format. AMR parsing converts natural language sentences into such
graphs. However, evaluating a parser on new data by means of comparison to
manually created AMR graphs is very costly. Moreover, we would like to detect
parses of questionable quality, or to prefer the results of alternative
systems by selecting those for which we can predict good quality. We propose
AMR accuracy prediction as the task of predicting several metrics of
correctness for an automatically generated AMR parse - in absence of the
corresponding gold parse. We develop a neural end-to-end multi-output
regression model and perform three case studies: firstly, we evaluate the
model's capacity of predicting AMR parse accuracies and test whether it can
reliably assign high scores to gold parses. Secondly, we perform parse
selection based on predicted parse accuracies of candidate parses from
alternative systems, with the aim of improving overall results. Finally, we
predict system ranks for submissions from two AMR shared tasks on the basis of
their predicted parse accuracy averages. All experiments are carried out across
two different domains and show that our method is effective.
Comment: accepted at *SEM 201
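The abstract frames accuracy prediction as multi-output regression: one feature vector per automatically generated parse, several correctness metrics predicted jointly. A minimal sketch of that setup, using a plain linear least-squares model on synthetic data (the paper itself uses a neural end-to-end model; the feature count, metric count, and data below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical setup: each row of X is a feature vector for one parse,
# each column of Y is one accuracy metric (e.g. three Smatch-style scores).
rng = np.random.default_rng(0)
n_parses, n_features, n_metrics = 200, 8, 3

X = rng.normal(size=(n_parses, n_features))        # per-parse features
W_true = rng.normal(size=(n_features, n_metrics))  # unknown feature-to-metric map
Y = X @ W_true + 0.1 * rng.normal(size=(n_parses, n_metrics))  # noisy metrics

# Multi-output regression: fit one shared linear map for all metrics at once.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_pred = X @ W_hat                                 # predicted metrics, no gold parse needed

mse = np.mean((Y - Y_pred) ** 2)
```

With the predicted metrics in hand, parse selection (the paper's second case study) reduces to picking, per sentence, the candidate parse with the highest predicted score.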
Learning Graph Embeddings from WordNet-based Similarity Measures
We present path2vec, a new approach for learning graph embeddings that relies
on structural measures of pairwise node similarities. The model learns
representations for nodes in a dense space that approximate a given
user-defined graph distance measure, such as the shortest path distance or
distance measures that take information beyond the graph structure into
account. Evaluation of the proposed model on semantic similarity and word sense
disambiguation tasks, using various WordNet-based similarity measures, shows
that our approach yields competitive results, outperforming strong graph
embedding baselines. The model is computationally efficient, being orders of
magnitude faster than the direct computation of graph-based distances.
Comment: Accepted to StarSem 201
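The core idea described here, learning dense node vectors whose dot products approximate a user-defined pairwise similarity, can be sketched with plain gradient descent on a toy 4-node similarity matrix. This is a simplified assumption-laden illustration, not the path2vec training procedure itself (which works at WordNet scale and uses additional regularization):

```python
import numpy as np

# Toy target: a symmetric node-node similarity matrix (illustrative values).
S = np.array([[1.0, 0.8, 0.3, 0.1],
              [0.8, 1.0, 0.4, 0.2],
              [0.3, 0.4, 1.0, 0.7],
              [0.1, 0.2, 0.7, 1.0]])

rng = np.random.default_rng(1)
V = 0.1 * rng.normal(size=(4, 2))   # one 2-dimensional embedding per node

# Minimize sum over pairs of (v_i . v_j - S_ij)^2 by gradient descent.
for _ in range(2000):
    D = V @ V.T - S                 # current error on every node pair
    V -= 0.05 * (2 * D @ V)         # gradient step (D is symmetric here)

loss = np.mean((V @ V.T - S) ** 2)
```

Once trained, similarity queries become dot products between stored vectors, which is what makes the embedding approach orders of magnitude faster than recomputing graph distances directly.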