
    Multi-task Active Learning for Pre-trained Transformer-based Models

    Multi-task learning, in which several tasks are jointly learned by a single model, allows NLP models to share information across multiple annotations and may facilitate better predictions when the tasks are inter-related. This technique, however, requires annotating the same text with multiple annotation schemes, which can be costly and laborious. Active learning (AL) has been demonstrated to optimize annotation processes by iteratively selecting unlabeled examples whose annotation is most valuable for the NLP model. Yet, multi-task active learning (MT-AL) has not been applied to state-of-the-art pre-trained Transformer-based NLP models. This paper aims to close this gap. We explore various multi-task selection criteria in three realistic multi-task scenarios, reflecting different relations between the participating tasks, and demonstrate the effectiveness of multi-task selection compared to single-task selection. Our results suggest that MT-AL can be effectively used to minimize annotation efforts for multi-task NLP models.
    Comment: Accepted for publication in Transactions of the Association for Computational Linguistics (TACL), 2022. Pre-MIT Press publication version.
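
    As a minimal sketch of the selection step this abstract describes, the following Python fragment scores an unlabeled pool by averaging per-task predictive entropy and picks the most uncertain batch. The function names, the entropy criterion, and the max-normalization are illustrative assumptions, not the paper's actual selection criteria.

        import numpy as np

        def entropy(probs):
            # Predictive entropy of a categorical distribution; higher = more uncertain.
            return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

        def select_batch(unlabeled_probs_per_task, batch_size):
            """Rank unlabeled examples by averaged per-task uncertainty.

            unlabeled_probs_per_task: dict mapping task name -> array of shape
            (num_unlabeled, num_classes) with the current model's probabilities.
            """
            per_task_scores = []
            for probs in unlabeled_probs_per_task.values():
                scores = entropy(probs)
                # Normalize so tasks with different class counts contribute comparably.
                per_task_scores.append(scores / (scores.max() + 1e-12))
            combined = np.mean(per_task_scores, axis=0)
            return np.argsort(-combined)[:batch_size]

    Each AL iteration would score the pool with the current multi-task model, send the selected indices for annotation on all tasks, retrain, and repeat.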

    Tissue-specific expression of calcium channels

    The high-voltage-activated calcium channel is a multimeric protein complex containing α1, α2/δ, β, and γ subunits. The α1 subunit is the ion-conducting channel and contains the binding sites for calcium channel blockers and toxins. Three genes code for distinct L-type, dihydropyridine-sensitive α1 subunits; one gene codes for the neuronal P-type (Purkinje) α1 subunit; and one gene codes for the neuronal N-type α1 subunit. The smooth and cardiac muscle L-type calcium channel α1 subunits are splice variants of the same gene. The α1 subunits are coexpressed with a common α2/δ subunit and with tissue-specific β subunits (at least three genes). The γ subunit apparently is expressed only in skeletal muscle. The properties of these cloned and expressed calcium channels are discussed here.

    Introduction: Toward an Engaged Feminist Heritage Praxis

    We advocate a feminist approach to archaeological heritage work in order to transform heritage practice and the production of archaeological knowledge. We use an engaged feminist standpoint and situate intersubjectivity and intersectionality as critical components of this practice. An engaged feminist approach to heritage work allows the discipline to consider women’s, men’s, and gender non-conforming persons’ positions in the field, to reveal their contributions, to develop critical pedagogical approaches, and to rethink forms of representation. Throughout, we emphasize the intellectual labor of women of color, queer and gender non-conforming persons, and early white feminists in archaeology.

    Learning Discrete Structured Variational Auto-Encoder using Natural Evolution Strategies

    Discrete variational auto-encoders (VAEs) are able to represent semantic latent spaces in generative learning. In many real-life settings, the discrete latent space consists of high-dimensional structures, and propagating gradients through the relevant structures often requires enumerating over an exponentially large latent space. Recently, various approaches were devised to propagate approximated gradients without enumerating over the space of possible structures. In this work, we use Natural Evolution Strategies (NES), a class of gradient-free black-box optimization algorithms, to learn discrete structured VAEs. The NES algorithms are computationally appealing as they estimate gradients with forward-pass evaluations only, and thus do not require propagating gradients through the discrete structures. We demonstrate empirically that optimizing discrete structured VAEs using NES is as effective as gradient-based approximations. Lastly, we prove that NES converges for non-Lipschitz functions, such as those that appear in discrete structured VAEs.
    Comment: Published as a conference paper at ICLR 2022.
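
    As a rough illustration of the gradient-free idea mentioned above, the following Python sketch implements a standard NES gradient estimator with antithetic sampling, using only forward evaluations of a black-box objective. The objective f, the Gaussian search distribution, and the hyperparameters are assumptions for illustration; the paper's exact estimator and parameterization may differ.

        import numpy as np

        def nes_gradient(f, theta, sigma=0.1, num_samples=50, rng=None):
            """Estimate the gradient of E[f(theta + sigma * eps)] using
            forward evaluations of f only (no backpropagation through f)."""
            if rng is None:
                rng = np.random.default_rng(0)
            grad = np.zeros_like(theta, dtype=float)
            for _ in range(num_samples):
                eps = rng.standard_normal(theta.shape)
                # Antithetic (mirrored) pair reduces the estimator's variance.
                grad += (f(theta + sigma * eps) - f(theta - sigma * eps)) * eps
            return grad / (2.0 * sigma * num_samples)

        # Usage sketch: f decodes a discrete structure from theta (here a toy
        # rounding step), so f itself is non-differentiable.
        f = lambda t: -np.sum(np.round(t) ** 2)
        theta = np.array([0.5, -1.2, 2.0])
        theta = theta + 0.1 * nes_gradient(f, theta)  # one ascent step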