78 research outputs found

    Improved CCG Parsing with Semi-supervised Supertagging

    Current supervised parsers are limited by the size of their labelled training data, making improving them with unlabelled data an important goal. We show how a state-of-the-art CCG parser can be enhanced by predicting lexical categories using unsupervised vector-space embeddings of words. The use of word embeddings enables our model to better generalize from the labelled data, and allows us to accurately assign lexical categories without depending on a POS-tagger. Our approach leads to substantial improvements in dependency parsing results over the standard supervised CCG parser when evaluated on Wall Street Journal (0.8%), Wikipedia (1.8%) and biomedical (3.4%) text. We compare the performance of two recently proposed approaches for classification using a wide variety of word embeddings. We also give a detailed error analysis demonstrating where using embeddings outperforms traditional feature sets, and showing how including POS features can decrease accuracy.
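
    A rough sketch of the general idea, not the paper's actual model: classify each token's CCG category from a window of pre-trained word embeddings. The embedding table, window size, and category set below are toy stand-ins.

        import numpy as np

        EMB_DIM = 50
        CATEGORIES = ["NP", "N", "(S\\NP)/NP", "NP/N"]   # toy category set

        rng = np.random.default_rng(0)
        # Stand-in for pre-trained embeddings (e.g. skip-gram vectors).
        embeddings = {w: rng.standard_normal(EMB_DIM)
                      for w in ["the", "cat", "sat", "on", "mat"]}

        def features(sentence, i, window=1):
            # Concatenate embeddings in a window around position i;
            # zeros for out-of-range positions and unknown words.
            parts = []
            for j in range(i - window, i + window + 1):
                if 0 <= j < len(sentence) and sentence[j] in embeddings:
                    parts.append(embeddings[sentence[j]])
                else:
                    parts.append(np.zeros(EMB_DIM))
            return np.concatenate(parts)

        # Untrained linear classifier over the window features.
        W = 0.01 * rng.standard_normal((len(CATEGORIES), 3 * EMB_DIM))

        def predict(sentence, i):
            scores = W @ features(sentence, i)
            probs = np.exp(scores - scores.max())
            return CATEGORIES[int(np.argmax(probs))], probs / probs.sum()

        print(predict(["the", "cat", "sat"], 1))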

    Keystroke dynamics as signal for shallow syntactic parsing

    Keystroke dynamics have been extensively used in psycholinguistic and writing research to gain insights into cognitive processing. But do keystroke logs contain actual signal that can be used to learn better natural language processing models? We postulate that keystroke dynamics contain information about syntactic structure that can inform shallow syntactic parsing. To test this hypothesis, we explore labels derived from keystroke logs as an auxiliary task in a multi-task bidirectional Long Short-Term Memory (bi-LSTM). Our experiments show promising results on two shallow syntactic parsing tasks, chunking and CCG supertagging. Our model is simple, has the advantage that data can come from distinct sources, and produces models that are significantly better than models trained on the text annotations alone. Comment: In COLING 2016.
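
    A minimal sketch of the kind of architecture the abstract describes, under assumed details: a shared bi-LSTM encoder with two softmax heads, one for the main tagging task and one for keystroke-derived auxiliary labels. The PyTorch framing, dimensions, and label counts are illustrative, not taken from the paper.

        import torch
        import torch.nn as nn

        class MultiTaskBiLSTM(nn.Module):
            def __init__(self, vocab_size, emb_dim=64, hidden=100,
                         n_main=23, n_aux=3):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, emb_dim)
                self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                                       bidirectional=True)
                self.main_head = nn.Linear(2 * hidden, n_main)  # e.g. chunk tags
                self.aux_head = nn.Linear(2 * hidden, n_aux)    # e.g. binned pauses

            def forward(self, token_ids):
                states, _ = self.encoder(self.embed(token_ids))
                return self.main_head(states), self.aux_head(states)

        model = MultiTaskBiLSTM(vocab_size=1000)
        loss_fn = nn.CrossEntropyLoss()
        tokens = torch.randint(0, 1000, (2, 7))     # batch of 2 toy sentences
        main_gold = torch.randint(0, 23, (2, 7))    # main-task labels
        aux_gold = torch.randint(0, 3, (2, 7))      # keystroke-derived labels

        main_logits, aux_logits = model(tokens)
        # The two tasks may come from distinct corpora; here they share a batch.
        loss = (loss_fn(main_logits.reshape(-1, 23), main_gold.reshape(-1))
                + loss_fn(aux_logits.reshape(-1, 3), aux_gold.reshape(-1)))
        loss.backward()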

    Generating CCG Categories

    Previous CCG supertaggers usually predict categories using multi-class classification. Despite their simplicity, these models usually ignore the internal structure of categories. The rich semantics inside these structures may help us to better handle relations among categories and bring more robustness into existing supertaggers. In this work, we propose to generate categories rather than classify them: each category is decomposed into a sequence of smaller atomic tags, and the tagger aims to generate the correct sequence. We show that with this finer-grained view of categories, annotations can be shared across different categories and interactions with sentence context can be enhanced. The proposed category generator achieves state-of-the-art tagging (95.5% accuracy) and parsing (89.8% labeled F1) performance on the standard CCGBank. Furthermore, its performance on infrequent (even unseen) categories, out-of-domain texts, and a low-resource language gives promising results for introducing generation models into general CCG analysis. Comment: Accepted by AAAI 2021.
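
    The core move can be illustrated with a toy decomposition of category strings into atomic tags; the paper's actual tag inventory and generation model may differ.

        import re

        def decompose(category):
            # Split a category into atomic tags: brackets, slashes, and
            # basic categories (with optional features such as [dcl]).
            return re.findall(r"[()\\/]|[A-Za-z]+(?:\[[a-z]+\])?", category)

        def compose(atoms):
            # Inverse step: join generated atomic tags back into a category.
            return "".join(atoms)

        cat = "(S[dcl]\\NP)/NP"        # a transitive-verb category
        atoms = decompose(cat)
        print(atoms)                   # ['(', 'S[dcl]', '\\', 'NP', ')', '/', 'NP']
        assert compose(atoms) == cat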

    Shift-Reduce CCG Parsing using Neural Network Models

    A* CCG Parsing with a Supertag-factored Model

    We introduce a new CCG parsing model which is factored on lexical category assignments. Parsing is then simply a deterministic search for the most probable category sequence that supports a CCG derivation. The parser is extremely simple, with a tiny feature set, no POS tagger, and no statistical model of the derivation or dependencies. Formulating the model in this way allows a highly effective heuristic for A* parsing, which makes parsing extremely fast. Compared to the standard C&C CCG parser, our model is more accurate out-of-domain, is four times faster, has higher coverage, and is greatly simplified. We also show that using our parser improves the performance of a state-of-the-art question answering system.
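
    A small sketch of the supertag-factored A* idea as the abstract presents it: an item's priority is its inside score plus an admissible outside estimate, here the sum of each outside word's best tag log-probability. The probabilities are toy numbers, not model output.

        import math

        # Toy supertagger output: P(category | word) per position.
        tag_probs = [
            {"NP": 0.9, "N": 0.1},              # "John"
            {"(S\\NP)/NP": 0.8, "NP": 0.2},     # "saw"
            {"NP": 0.7, "N": 0.3},              # "Mary"
        ]
        best_logp = [max(math.log(p) for p in d.values()) for d in tag_probs]

        def priority(start, end, inside_logp):
            # A* priority of a chart item over words [start, end):
            # inside score plus an upper bound on the score outside the span.
            outside = sum(best_logp[:start]) + sum(best_logp[end:])
            return inside_logp + outside

        # Item covering "saw Mary" with tags (S\NP)/NP and NP:
        inside = math.log(0.8) + math.log(0.7)
        print(priority(1, 3, inside))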