
    Semantics, Modelling, and the Problem of Representation of Meaning -- a Brief Survey of Recent Literature

    Over the past 50 years, many have debated which representation should be used to capture the meaning of natural language utterances. Recently, new requirements for such representations have emerged in research. Here I survey some of the interesting representations that have been proposed to meet these new needs. (Comment: 15 pages, no figures)

    An information extraction tool for microbial characters

    Automated extraction of phenotypic and metabolic characters from microbial taxonomic descriptions will benefit biological research. In this poster, we describe the Microbial Phenomics Information Extractor (MicroPIE) system. MicroPIE takes taxonomic descriptions in XML files as input and can extract 58 types of microbial characters. The main extraction steps are: 1) splitting paragraphs into sentences; 2) predicting the characters described in each sentence using automated classifiers; 3) extracting character values from the sentences by applying a variety of methods, such as regular-expression rules, term matching, and unsupervised semantic parsing. Parts of the system have been implemented and are currently being optimized for better performance. Results from optimizing the sentence classifiers show that SVMs (Support Vector Machines) outperformed the Naive Bayes classifiers; in addition, resolving the problem of unbalanced training instances helped improve the performance of the SVMs.
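
    The pipeline lends itself to a compact illustration. Below is a minimal Python sketch of steps 1 and 3 (sentence splitting plus regular-expression rules). The two character patterns and the plain-text input are illustrative assumptions: the actual system handles XML input and 58 character types, and its classifier-based step 2 is omitted here.

        import re

        # Illustrative regular-expression rules for two microbial characters.
        # These patterns are assumptions, not MicroPIE's actual rule set.
        RULES = {
            "gram_stain": re.compile(r"\bGram[- ](positive|negative)\b", re.IGNORECASE),
            "cell_shape": re.compile(r"\b(rod|coccus|cocci|spiral|filamentous)\b", re.IGNORECASE),
        }

        def split_sentences(paragraph):
            # Naive splitter; the real system would use a proper NLP sentence tokenizer.
            return [s.strip() for s in re.split(r"(?<=[.;])\s+", paragraph) if s.strip()]

        def extract_characters(paragraph):
            # Step 1: split into sentences; step 3: apply regex rules per sentence.
            # (Step 2, classifying which characters a sentence describes, is skipped.)
            results = []
            for sentence in split_sentences(paragraph):
                for character, pattern in RULES.items():
                    match = pattern.search(sentence)
                    if match:
                        results.append((character, match.group(0), sentence))
            return results

        description = ("Cells are Gram-negative, rod-shaped, and motile. "
                       "Growth occurs at 20-37 degrees C.")
        for character, value, sentence in extract_characters(description):
            print(character, "->", value)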

    Neural Semantic Parsing in Low-Resource Settings with Back-Translation and Meta-Learning

    Neural semantic parsing has achieved impressive results in recent years, yet its success relies on the availability of large amounts of supervised data. Our goal is to learn a neural semantic parser when only prior knowledge about a limited number of simple rules is available, without access to either annotated programs or execution results. Our approach is initialized by rules and improved in a back-translation paradigm using question-program pairs generated by the semantic parser and the question generator. A phrase table with frequent mapping patterns is automatically derived, and updated as training progresses, to measure the quality of generated instances. We train the model with model-agnostic meta-learning to guarantee accuracy and stability on examples covered by rules, while acquiring the versatility to generalize well to examples not covered by rules. Results on three benchmark datasets with different domains and programs show that our approach incrementally improves accuracy. On WikiSQL, our best model is comparable to the SOTA system learned from denotations.
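
    A minimal sketch may make the back-translation loop concrete. The ToyModel, seed rules, unlabeled questions, and phrase-table quality score below are all illustrative stand-ins assumed for this example; the paper's parser and generator are neural seq2seq models, and its quality measure and meta-learning step are more involved.

        from collections import Counter

        def tokens(text):
            # Crude tokenizer that also splits program syntax into tokens.
            return text.replace("(", " ").replace(")", " ").split()

        class ToyModel:
            # Memorizes training pairs; stands in for a neural seq2seq model.
            def __init__(self):
                self.memory = {}
            def train(self, pairs):
                self.memory.update(pairs)
            def predict(self, x):
                return self.memory.get(x)

        def build_phrase_table(pairs):
            # Frequent (question-token, program-token) co-occurrences, used to
            # judge whether a generated pair resembles known-good pairs.
            table = Counter()
            for question, program in pairs:
                for qt in tokens(question):
                    for pt in tokens(program):
                        table[(qt, pt)] += 1
            return table

        def quality(question, program, table):
            # Fraction of token links already attested in the phrase table.
            links = [(qt, pt) for qt in tokens(question) for pt in tokens(program)]
            return sum(table[link] > 0 for link in links) / max(len(links), 1)

        # Seed supervision: a few simple question -> program rules.
        seed = {"how many rivers": "count(river)", "largest city": "argmax(city size)"}
        parser, generator = ToyModel(), ToyModel()
        parser.train(seed.items())
        generator.train((p, q) for q, p in seed.items())

        # Back-translation rounds: label unlabeled questions with the parser,
        # keep only pairs the phrase table scores highly, then retrain.
        labeled = list(seed.items())
        for _ in range(3):
            table = build_phrase_table(labeled)
            for question in ["how many rivers", "largest river"]:
                program = parser.predict(question)
                if program and quality(question, program, table) > 0.5:
                    labeled.append((question, program))
            parser.train(labeled)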

    Transfer Learning for Neural Semantic Parsing

    The goal of semantic parsing is to map natural language to a machine-interpretable meaning representation language (MRL). One of the constraints that limits full exploration of deep learning technologies for semantic parsing is the lack of sufficient annotated training data. In this paper, we propose using sequence-to-sequence models in a multi-task setup for semantic parsing, with a focus on transfer learning. We explore three multi-task architectures for sequence-to-sequence modeling and compare their performance with an independently trained model. Our experiments show that the multi-task setup aids transfer learning from an auxiliary task with large labeled data to a target task with smaller labeled data. We see absolute accuracy gains ranging from 1.0% to 4.4% on our in-house data set, and we also see good gains ranging from 2.5% to 7.0% on the ATIS semantic parsing tasks with syntactic and semantic auxiliary tasks. (Comment: Accepted for ACL Repl4NLP 2017)
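
    To make the multi-task idea concrete, here is a minimal PyTorch sketch of one plausible architecture of this kind: a shared encoder with one decoder and output layer per task, so the low-resource target task can reuse encoder representations shaped by the data-rich auxiliary task. The task names, layer sizes, shared source/target embedding, and random batch are all assumptions for illustration; the paper's three architectures differ in exactly which components are shared.

        import torch
        import torch.nn as nn

        class SharedEncoderSeq2Seq(nn.Module):
            """Shared encoder, per-task decoders: both tasks' gradients
            update the encoder, while each task keeps its own decoder."""
            def __init__(self, vocab_size, hidden, task_vocabs):
                super().__init__()
                # One embedding for source and target tokens (a simplification).
                self.embed = nn.Embedding(vocab_size, hidden)
                self.encoder = nn.LSTM(hidden, hidden, batch_first=True)  # shared
                self.decoders = nn.ModuleDict(
                    {task: nn.LSTM(hidden, hidden, batch_first=True)
                     for task in task_vocabs})
                self.outputs = nn.ModuleDict(
                    {task: nn.Linear(hidden, size)
                     for task, size in task_vocabs.items()})

            def forward(self, src, tgt, task):
                _, state = self.encoder(self.embed(src))          # shared encoding
                dec_out, _ = self.decoders[task](self.embed(tgt), state)
                return self.outputs[task](dec_out)                # per-task logits

        model = SharedEncoderSeq2Seq(
            vocab_size=1000, hidden=64,
            task_vocabs={"target": 200, "auxiliary": 500})
        src = torch.randint(0, 1000, (4, 12))   # batch of source token ids
        tgt = torch.randint(0, 1000, (4, 9))    # teacher-forced target prefix
        print(model(src, tgt, task="auxiliary").shape)  # torch.Size([4, 9, 500])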