
    Do It Like a Syntactician: Using Binary Grammaticality Judgements to Train Sentence Encoders and Assess Their Sensitivity to Syntactic Structure

    The binary nature of grammaticality judgements and their use to access the structure of syntax are a staple of modern linguistics. However, computational models of natural language rarely make use of grammaticality in their training or application. Furthermore, developments in modern neural NLP have produced a myriad of methods that push the baselines in many complex tasks, but those methods are typically not evaluated from a linguistic perspective. In this dissertation I use grammaticality judgements over artificially generated ungrammatical sentences to assess the performance of several neural encoders, and I propose them as a suitable training target to make models learn specific syntactic rules. I generate artificial ungrammatical sentences in two ways: first, by randomly sampling words following the n-gram distribution of a corpus of real sentences (I call these word salads); second, by corrupting sentences from a real corpus (changing verbal or adjectival agreement or removing the main verb). I then train models consisting of an encoder built from word embeddings and long short-term memory networks (LSTMs) to discriminate between real and ungrammatical sentences. I show that the model distinguishes word salads well for low-order n-grams but does not generalize well to higher orders, and that word salads do not help the model recognize corrupted sentences. I then test the contributions of pre-trained word embeddings, deep LSTMs, and bidirectional LSTMs, and find that pre-trained word embeddings make the largest contribution. I also find that additional layers contribute differently to the performance of unidirectional and bidirectional models, and that deeper models show more variability in performance across training runs.
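    As a rough illustration of the word-salad idea described in this abstract, the sketch below samples pseudo-sentences from the bigram distribution of a small toy corpus; such samples follow the local statistics of real text but are very unlikely to be grammatical. The function names and the toy corpus are illustrative assumptions, not the dissertation's actual pipeline.

```python
# Minimal sketch of the "word salad" idea: sample pseudo-sentences from the
# n-gram (here, bigram) distribution of a real corpus so a classifier could
# later be trained to tell them apart from genuine sentences.
import random
from collections import defaultdict, Counter

def build_bigram_model(sentences):
    """Count bigram continuations, with <s>/</s> boundary markers."""
    model = defaultdict(Counter)
    for sent in sentences:
        tokens = ["<s>"] + sent.split() + ["</s>"]
        for prev, nxt in zip(tokens, tokens[1:]):
            model[prev][nxt] += 1
    return model

def sample_word_salad(model, max_len=20):
    """Sample tokens from the bigram distribution; the output mimics local
    corpus statistics but rarely forms a grammatical sentence."""
    token, salad = "<s>", []
    for _ in range(max_len):
        counts = model[token]
        token = random.choices(list(counts), weights=list(counts.values()))[0]
        if token == "</s>":
            break
        salad.append(token)
    return " ".join(salad)

if __name__ == "__main__":
    toy_corpus = [
        "the cat sat on the mat",
        "the dog chased the cat",
        "a dog sat on a log",
    ]
    model = build_bigram_model(toy_corpus)
    print(sample_word_salad(model))  # e.g. "the cat sat on the dog chased the cat"
```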

    Investigating BERT’s Knowledge of Language: Five Analysis Methods with NPIs

    Though state-of-the-art sentence representation models can perform tasks requiring significant knowledge of grammar, it is an open question how best to evaluate their grammatical knowledge. We explore five experimental methods inspired by prior work evaluating pretrained sentence representation models. We use a single linguistic phenomenon, negative polarity item (NPI) licensing, as a case study for our experiments. NPIs like any are grammatical only if they appear in a licensing environment like negation (Sue doesn’t have any cats vs. *Sue has any cats). This phenomenon is challenging because of the variety of NPI licensing environments that exist. We introduce an artificially generated dataset that manipulates key features of NPI licensing for the experiments. We find that BERT has significant knowledge of these features, but its success varies widely across different experimental methods. We conclude that a variety of methods is necessary to reveal all relevant aspects of a model’s grammatical knowledge in a given domain.
    This project was a joint effort by the participants in the Spring 2019 NYU Linguistics seminar course Linguistic Knowledge in Reusable Sentence Encoders. We are grateful to the department for making this seminar possible. This material is based upon work supported by the National Science Foundation under Grant No. 1850208. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. This project has also benefited from financial support to SB by Samsung Research under the project Improving Deep Learning using Latent Structure and from the donation of a Titan V GPU by NVIDIA Corporation.
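    One simple way to probe a masked language model with an NPI minimal pair like the one above is to compare pseudo-log-likelihood scores for the licensed and unlicensed sentences. The sketch below, using the Hugging Face transformers library, is a hedged illustration of this kind of minimal-pair evaluation, not the paper's exact protocol; the checkpoint name and the per-token scoring choice are assumptions.

```python
# Hedged sketch: score an NPI minimal pair with BERT by masking one token at a
# time and averaging log-probabilities (a length-normalized pseudo-log-likelihood).
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumed checkpoint, not necessarily the paper's
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME).eval()

def per_token_pll(sentence: str) -> float:
    """Average log P(token | rest of sentence), masking one position at a time."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Skip [CLS] at position 0 and [SEP] at the last position.
    for i in range(1, input_ids.size(0) - 1):
        masked = input_ids.clone().unsqueeze(0)
        masked[0, i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked).logits
        log_probs = torch.log_softmax(logits[0, i], dim=-1)
        total += log_probs[input_ids[i]].item()
    return total / (input_ids.size(0) - 2)

licensed = "Sue doesn't have any cats."
unlicensed = "Sue has any cats."
# The NPI should be preferred in a licensing (negative) environment.
print(per_token_pll(licensed) > per_token_pll(unlicensed))  # expected: True
```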

    On the Difference of BERT-style and CLIP-style Text Encoders

    Masked language modeling (MLM) has been one of the most popular pretraining recipes in natural language processing, with BERT as one of its representative models. Recently, contrastive language-image pretraining (CLIP) has also attracted attention, especially for its vision models, which achieve excellent performance on a broad range of vision tasks. However, few studies have examined the text encoders learned by CLIP. In this paper, we analyze the difference between BERT-style and CLIP-style text encoders through three experiments: (i) general text understanding, (ii) vision-centric text understanding, and (iii) text-to-image generation. Experimental analyses show that although CLIP-style text encoders underperform BERT-style ones on general text understanding tasks, they are equipped with a unique ability for cross-modal association, which we call synesthesia, and which is more similar to human senses.
    Comment: Natural Language Processing. 10 pages, 1 figure. Findings of ACL-202
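    To make the comparison concrete, the sketch below embeds the same pair of sentences with a BERT-style encoder and with CLIP's text encoder and reports their cosine similarities. It is a minimal illustration of how such encoders can be contrasted, assuming standard Hugging Face checkpoints and mean pooling for BERT; it does not reproduce the paper's three experiments.

```python
# Hedged sketch: contrast sentence embeddings from a BERT-style encoder and
# from CLIP's text encoder on the same inputs.
import torch
from transformers import AutoModel, AutoTokenizer, CLIPModel, CLIPTokenizer

sentences = ["a photo of a red apple", "a crimson fruit on a table"]

# BERT-style: mean-pool the last hidden states over non-padding tokens.
bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()
with torch.no_grad():
    enc = bert_tok(sentences, padding=True, return_tensors="pt")
    hidden = bert(**enc).last_hidden_state
    mask = enc["attention_mask"].unsqueeze(-1)
    bert_emb = (hidden * mask).sum(1) / mask.sum(1)

# CLIP-style: use the projected text features that CLIP aligns with images.
clip_tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
with torch.no_grad():
    enc = clip_tok(sentences, padding=True, return_tensors="pt")
    clip_emb = clip.get_text_features(**enc)

cos = torch.nn.functional.cosine_similarity
print("BERT similarity:", cos(bert_emb[0:1], bert_emb[1:2]).item())
print("CLIP similarity:", cos(clip_emb[0:1], clip_emb[1:2]).item())
```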