
    Automatic Accuracy Prediction for AMR Parsing

    Abstract Meaning Representation (AMR) represents sentences as directed, acyclic, rooted graphs, aiming to capture their meaning in a machine-readable format. AMR parsing converts natural language sentences into such graphs. However, evaluating a parser on new data by comparison to manually created AMR graphs is very costly. We would also like to be able to detect parses of questionable quality, or to prefer the output of an alternative system by selecting the parse for which we can predict higher quality. We propose AMR accuracy prediction as the task of predicting several metrics of correctness for an automatically generated AMR parse, in the absence of the corresponding gold parse. We develop a neural end-to-end multi-output regression model and perform three case studies: first, we evaluate the model's capacity to predict AMR parse accuracies and test whether it can reliably assign high scores to gold parses. Second, we perform parse selection based on the predicted accuracies of candidate parses from alternative systems, with the aim of improving overall results. Finally, we predict system ranks for submissions from two AMR shared tasks on the basis of their predicted average parse accuracy. All experiments are carried out across two different domains and show that our method is effective. (Comment: accepted at *SEM 2019.)
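    The abstract describes the model only at a high level; a minimal PyTorch sketch of such a multi-output regression setup (the input encoding, layer sizes, and metric names below are illustrative assumptions, not the authors' architecture) could look like this:

```python
# Sketch: end-to-end multi-output regression for AMR accuracy prediction.
# Assumes each (sentence, candidate parse) pair has already been encoded
# into a fixed-size feature vector; the encoder and metrics are hypothetical.
import torch
import torch.nn as nn

METRICS = ["smatch", "unlabeled", "concepts"]  # hypothetical correctness metrics

class AccuracyPredictor(nn.Module):
    def __init__(self, input_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One shared trunk, one output per metric; sigmoid keeps scores in [0, 1].
        self.head = nn.Linear(hidden_dim, len(METRICS))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.body(x)))

# One training step: joint MSE over all metric outputs.
model = AccuracyPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
features = torch.randn(32, 512)             # stand-in (sentence, parse) encodings
gold_scores = torch.rand(32, len(METRICS))  # gold metric values in [0, 1]
loss = nn.functional.mse_loss(model(features), gold_scores)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

    Parse selection (the second case study) then reduces to scoring each system's candidate parse with the trained model and keeping the one with the highest predicted accuracy.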

    An empirical evaluation of AMR parsing for legal documents

    Many approaches have been proposed to tackle Abstract Meaning Representation (AMR) parsing, which has recently helped to solve various natural language processing problems. In this paper, we provide an overview of different AMR parsing methods and their performance on legal documents. We evaluate several AMR parsers on our annotated dataset extracted from the English version of the Japanese Civil Code. Our results show the limitations of current parsing techniques, as well as room for improvement, when they are applied in this complex domain.
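    Comparing parsers against an annotated dataset like this one typically comes down to Smatch scoring of predicted versus gold graphs. A minimal sketch, assuming the pip-installable smatch package (its get_amr_match and compute_f functions) and placeholder file paths:

```python
# Sketch: scoring parser output against gold AMRs with corpus-level Smatch.
# Assumes the pip-installable `smatch` package; inputs are AMR strings in
# PENMAN notation, paired parse/gold for the same sentence.
import smatch

def corpus_smatch(pred_amrs, gold_amrs):
    """Micro-averaged Smatch precision/recall/F1 over paired AMR strings."""
    total_match = total_test = total_gold = 0
    for pred, gold in zip(pred_amrs, gold_amrs):
        match, test_n, gold_n = smatch.get_amr_match(pred, gold)
        total_match += match
        total_test += test_n
        total_gold += gold_n
        smatch.match_triple_dict.clear()  # reset memoization between pairs
    return smatch.compute_f(total_match, total_test, total_gold)

# Placeholder paths; AMRs separated by blank lines, one corpus per file.
precision, recall, f1 = corpus_smatch(
    open("parser_output.amr").read().strip().split("\n\n"),
    open("gold_civil_code.amr").read().strip().split("\n\n"),
)
print(f"Smatch P={precision:.3f} R={recall:.3f} F1={f1:.3f}")
```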

    Modeling Meaning for Description and Interaction

    Language is a powerful tool for communication and coordination, allowing us to share thoughts, ideas, and instructions with others. Accordingly, enabling people to communicate linguistically with digital agents has been among the longest-standing goals in artificial intelligence (AI). However, unlike humans, machines do not naturally acquire the ability to extract meaning from language. One natural solution to this problem is to represent meaning in a structured format and then develop models for processing language into such structures. Unlike natural language, these structured representations can be directly processed and interpreted by existing algorithms. Indeed, much of the digital infrastructure we have built is mediated by structured representations (e.g., programs and APIs). Furthermore, unlike the internal representations of current neural models, structured representations are built to be used and interpreted by people. I focus on methods for parsing language into these dually-interpretable representations of meaning. I introduce models that learn to predict structure from language and apply them to a variety of tasks, ranging from linguistic description to interaction with robots and digital assistants. I address three thematic challenges in modeling meaning: abstraction, sensitivity, and ambiguity. In order to be useful, meaning representations must abstract away from the linguistic input. Abstractions differ for each representation used, and must be learned by the model. The process of abstraction entails a kind of invariance: different linguistic inputs mapping to the same meaning. At the same time, meaning is sensitive to slight changes in the linguistic input; here, similar inputs might map to very different meanings. Finally, language is often ambiguous, and many utterances have multiple meanings. In cases of ambiguity, models of meaning must learn that the same input can map to different meanings.