10,272 research outputs found

    Gradient-based Inference for Networks with Output Constraints

    Practitioners apply neural networks to increasingly complex problems in natural language processing, such as syntactic parsing and semantic role labeling, which have rich output structures. Many such structured-prediction problems require deterministic constraints on the output values; for example, in sequence-to-sequence syntactic parsing, we require that the sequential outputs encode valid trees. While hidden units might capture such properties, the network is not always able to learn such constraints from the training data alone, and practitioners must then resort to post-processing. In this paper, we present an inference method for neural networks that enforces deterministic constraints on outputs without performing rule-based post-processing or expensive discrete search. Instead, in the spirit of gradient-based training, we enforce constraints with gradient-based inference (GBI): for each input at test time, we nudge continuous model weights until the network's unconstrained inference procedure generates an output that satisfies the constraints. We study the efficacy of GBI on three tasks with hard constraints: semantic role labeling, syntactic parsing, and sequence transduction. In each case, the algorithm not only satisfies constraints but improves accuracy, even when the underlying network is state-of-the-art.
    Comment: AAAI 2019
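
    As a rough illustration of the GBI procedure described in the abstract, the sketch below nudges a per-input copy of the model's weights until unconstrained decoding satisfies the constraints. The names `model`, `decode`, and `constraint_loss` are placeholders assumed for illustration (e.g., `constraint_loss` could be the model's probability mass on the constraint-violating parts of the output), not the authors' implementation.

    ```python
    # Minimal GBI sketch (PyTorch), under the assumptions stated above.
    import copy
    import torch

    def gbi_decode(model, x, decode, constraint_loss, steps=30, lr=1e-3):
        """Nudge a per-input copy of the weights toward constraint satisfaction."""
        model = copy.deepcopy(model)  # never mutate the deployed weights
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(steps):
            y = decode(model, x)                  # unconstrained inference
            loss = constraint_loss(model, x, y)   # differentiable violation score
            if loss.item() == 0.0:                # output already satisfies constraints
                break
            opt.zero_grad()
            loss.backward()
            opt.step()                            # gradient step on weights, not on y
        return decode(model, x)
    ```

    Note the design choice this mirrors: the search is over continuous weights rather than discrete outputs, so the same off-the-shelf decoder is reused at every step.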

    Exploring notions of genre in 'academic literacies' and 'writing across the curriculum': approaches across countries and contexts

    The SIGET IV panel on genre in Writing Across the Curriculum (WAC) and "academic literacies" (ACLITS) has set rolling a discussion of the similarities and differences between the two traditions, the former originating in the US in the early 1970s, the latter in England in the early 1990s. This paper maps out some elements of each in relation to the other and to genre, which we hope will encourage further discussion and cross-fertilization.

    Multilingual and multimodal resources in L2 English content classrooms


    The structural role of the core literature in history

    The intellectual landscapes of the humanities are mostly uncharted territory. Little is known about the ways the published research of humanist scholars defines areas of intellectual activity. An open question concerns the structural role of the core literature: highly cited sources, which naturally play a disproportionate role in the definition of intellectual landscapes. We introduce four indicators to map the structural role played by core sources in connecting different areas of the intellectual landscape of citing publications (i.e., communities in the bibliographic coupling network). All indicators factor out the influence of degree distributions by internalizing a configuration-model null. By considering several datasets focused on history, we show that the core literature performs two distinct structural actions: a global one, connecting otherwise separated communities in the landscape, and a local one, raising connectivity within communities. In our study, the global action is performed mainly by small sets of scholarly monographs, reference works, and primary sources, while the rest of the core, and especially most journal articles, acts mostly locally.
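
    To make the global/local distinction concrete, here is a simplified Python sketch counting how many of the couplings a single core source induces fall within one community versus across communities. The `citations` mapping (citing paper to its set of cited sources) is an assumed input format, and the paper's four degree-corrected indicators with their configuration-model null are deliberately omitted.

    ```python
    # Simplified within- vs. cross-community coupling counts for one source.
    import itertools

    def source_action(citations, communities, source):
        """citations: {paper: set(cited sources)}; communities: list of sets of papers."""
        comm_of = {p: i for i, comm in enumerate(communities) for p in comm}
        citing = [p for p, refs in citations.items() if source in refs]
        local = cross = 0
        for a, b in itertools.combinations(citing, 2):
            if comm_of[a] == comm_of[b]:
                local += 1   # raises connectivity within a community
            else:
                cross += 1   # bridges otherwise separated communities
        return local, cross
    ```

    A source with a high cross count acts "globally" in the paper's sense; a high local count corresponds to the "local" action the study attributes to most journal articles.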

    Revisiting Pre-Trained Models for Chinese Natural Language Processing

    Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks, and successive variants have been proposed to further improve the performance of pre-trained language models. In this paper, we revisit Chinese pre-trained language models to examine their effectiveness in a non-English language and release the Chinese pre-trained language model series to the community. We also propose a simple but effective model called MacBERT, which improves upon RoBERTa in several ways, especially its masking strategy, which adopts MLM as correction (Mac). We carried out extensive experiments on eight Chinese NLP tasks to revisit the existing pre-trained language models as well as the proposed MacBERT. Experimental results show that MacBERT achieves state-of-the-art performance on many NLP tasks, and we also ablate design details and report several findings that may help future research. Resources available: https://github.com/ymcui/MacBERT
    Comment: 12 pages, to appear at Findings of EMNLP 2020
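
    The masking strategy mentioned above can be sketched as follows: tokens selected for prediction are replaced with similar words rather than an artificial [MASK] placeholder, so pre-training never sees a token that is absent at fine-tuning time. The `similar_word` function (e.g., a nearest neighbour in a word-embedding space) and the replacement ratios are assumptions for illustration; MacBERT additionally uses whole-word and n-gram masking not shown here.

    ```python
    # Illustrative "MLM as correction" (Mac) masking sketch.
    import random

    def mac_mask(tokens, similar_word, mask_rate=0.15):
        inputs, labels = [], []
        for tok in tokens:
            if random.random() < mask_rate:
                r = random.random()
                if r < 0.8:
                    inputs.append(similar_word(tok))      # similar word, no [MASK]
                elif r < 0.9:
                    inputs.append(random.choice(tokens))  # random token
                else:
                    inputs.append(tok)                    # left unchanged
                labels.append(tok)   # model must recover the original token
            else:
                inputs.append(tok)
                labels.append(None)  # excluded from the MLM loss
        return inputs, labels
    ```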

    Investigating Multilingual Coreference Resolution by Universal Annotations

    Multilingual coreference resolution (MCR) has been a long-standing and challenging task. With the newly proposed multilingual coreference dataset CorefUD (Nedoluzhko et al., 2022), we investigate the task using its harmonized universal morphosyntactic and coreference annotations. First, we study coreference by examining the ground-truth data at different linguistic levels, namely the mention, entity, and document levels, and across different genres, to gain insights into the characteristics of coreference across multiple languages. Second, we perform an error analysis of the most challenging cases that the state-of-the-art system failed to resolve in the CRAC 2022 shared task, using the universal annotations. Last, based on this analysis, we extract features from the universal morphosyntactic annotations and integrate them into a baseline system to assess their potential benefits for the MCR task. Our results show that our best feature configuration improves the baseline by 0.9% F1 score.
    Comment: Accepted at Findings of EMNLP 2023
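
    As an illustration of the kind of features that can be read off universal annotations, the sketch below derives simple agreement features for a mention pair from CoNLL-U-style UPOS and FEATS fields of the mention heads; the input format and feature names are hypothetical, and the actual feature set integrated into the baseline may differ.

    ```python
    # Hypothetical mention-pair agreement features from universal annotations.
    def agreement_features(head_a, head_b):
        """head_*: dicts like {"upos": "PRON", "feats": {"Gender": "Fem", ...}}."""
        feats = {}
        for key in ("Gender", "Number", "Person"):
            va, vb = head_a["feats"].get(key), head_b["feats"].get(key)
            feats[f"{key}_match"] = va is not None and va == vb
        feats["upos_pair"] = (head_a["upos"], head_b["upos"])
        return feats

    # Example: a feminine singular pronoun and a feminine singular noun agree.
    print(agreement_features(
        {"upos": "PRON", "feats": {"Gender": "Fem", "Number": "Sing"}},
        {"upos": "NOUN", "feats": {"Gender": "Fem", "Number": "Sing"}},
    ))
    ```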