
    Automatic Accuracy Prediction for AMR Parsing

    Abstract Meaning Representation (AMR) represents sentences as directed, acyclic, rooted graphs, aiming to capture their meaning in a machine-readable format. AMR parsing converts natural language sentences into such graphs. However, evaluating a parser on new data by comparing its output to manually created AMR graphs is very costly. We would also like to detect parses of questionable quality, or to prefer the output of one of several alternative systems by selecting the parse for which we can expect good quality. We propose AMR accuracy prediction as the task of predicting several metrics of correctness for an automatically generated AMR parse, in the absence of the corresponding gold parse. We develop a neural end-to-end multi-output regression model and perform three case studies: first, we evaluate the model's capacity to predict AMR parse accuracies and test whether it can reliably assign high scores to gold parses. Second, we perform parse selection based on the predicted parse accuracies of candidate parses from alternative systems, with the aim of improving overall results. Finally, we predict system ranks for submissions from two AMR shared tasks on the basis of their average predicted parse accuracy. All experiments are carried out across two different domains and show that our method is effective. Comment: accepted at *SEM 2019.
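
    As an illustrative sketch only (none of this is taken from the paper's implementation), the parse-selection step of the second case study can be pictured as follows: a small multi-output regression head predicts several accuracy metrics for each candidate parse from a precomputed joint encoding of the sentence and the parse, and the candidate with the highest predicted main score is kept. The PyTorch model, the feature size, and the metric names below are assumed placeholders, not the authors' architecture.

    import torch
    import torch.nn as nn

    METRICS = ["smatch", "unlabeled", "concepts", "reentrancies"]  # assumed target metrics

    class AccuracyPredictor(nn.Module):
        """Multi-output regression head: one bounded score per metric."""

        def __init__(self, feat_dim: int = 512, hidden: int = 256):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, len(METRICS)),
                nn.Sigmoid(),  # accuracy metrics live in [0, 1]
            )

        def forward(self, pair_encoding: torch.Tensor) -> torch.Tensor:
            # pair_encoding: (n_candidates, feat_dim) joint encodings of the
            # sentence and each candidate parse (encoder not shown here).
            return self.mlp(pair_encoding)

    def select_parse(predictor: AccuracyPredictor, pair_encodings: torch.Tensor) -> int:
        """Return the index of the candidate with the highest predicted Smatch."""
        with torch.no_grad():
            scores = predictor(pair_encodings)[:, METRICS.index("smatch")]
        return int(scores.argmax())

    Averaging the same predicted scores over a whole test set, rather than picking per-sentence winners, gives the system-level ranking used in the third case study.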

    From text to graph: a general transition-based AMR parsing using neural network

    © 2020, Springer-Verlag London Ltd., part of Springer Nature. Semantic understanding, which reveals the inner meaning behind linguistic form, is an essential research issue for many applications, such as social network analysis, collective intelligence and content computing. Recently, Abstract Meaning Representation (AMR) has attracted many researchers because it represents the semantics of an entire sentence. However, because of the non-projectivity and reentrancy properties of AMR graphs, parsers lose some important semantic information when converting sentences into graphs. In this paper, we propose a general AMR parsing model that uses a two-stack-based transition algorithm for both Chinese and English datasets. It parses sentences into AMR graphs incrementally, in linear time. Experimental results demonstrate that it is superior at recovering reentrancy and handling arcs while remaining competitive with other transition-based neural network models on both English and Chinese datasets.
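
    The following is a hypothetical sketch, not the paper's actual transition system, of why a second stack helps: one stack holds the nodes currently being attached, while the other temporarily stores nodes so they can be recalled and attached again later, which is what makes reentrant nodes (nodes with more than one incoming edge) and non-projective configurations reachable. The action names and the next_action classifier are placeholders.

    from collections import namedtuple

    Arc = namedtuple("Arc", "head label dependent")

    def parse(tokens, next_action):
        """next_action(stack1, stack2, buffer, arcs) returns an (action, label)
        pair; in a real parser it is a neural classifier over the current state."""
        stack1, stack2, buffer, arcs = [], [], list(tokens), []
        while buffer or stack1 or stack2:
            action, label = next_action(stack1, stack2, buffer, arcs)
            if action == "SHIFT" and buffer:
                # Move the next input token onto the primary stack.
                stack1.append(buffer.pop(0))
            elif action == "ARC" and len(stack1) >= 2:
                # Add a labeled edge between the two topmost nodes without popping
                # either one, so a node can still receive further incoming edges
                # later; this is what permits reentrancies.
                arcs.append(Arc(head=stack1[-2], label=label, dependent=stack1[-1]))
            elif action == "REDUCE" and stack1:
                # Discard the top node once all of its edges have been built.
                stack1.pop()
            elif action == "STORE" and stack1:
                # Park the top node on the secondary stack to reach nodes below it...
                stack2.append(stack1.pop())
            elif action == "RECALL" and stack2:
                # ...and bring it back when it is needed again.
                stack1.append(stack2.pop())
            else:
                # The predicted action is not applicable in this state: stop.
                break
        return arcs

    This sketch does not bound the number of transitions; the actual algorithm does, which is how it parses incrementally in linear time as stated in the abstract.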