
    Self-Guided Contrastive Learning for BERT Sentence Representations

    Although BERT and its variants have reshaped the NLP landscape, it remains unclear how best to derive sentence embeddings from such pre-trained Transformers. In this work, we propose a contrastive learning method that utilizes self-guidance for improving the quality of BERT sentence representations. Our method fine-tunes BERT in a self-supervised fashion, does not rely on data augmentation, and enables the usual [CLS] token embeddings to function as sentence vectors. Moreover, we redesign the contrastive learning objective (NT-Xent) and apply it to sentence representation learning. We demonstrate with extensive experiments that our approach is more effective than competitive baselines on diverse sentence-related tasks. We also show it is efficient at inference and robust to domain shifts. Comment: ACL 2021
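
    The NT-Xent objective mentioned in the abstract can be illustrated with a short, self-contained sketch. This is not the authors' implementation: the temperature value, the choice of the two "views" (e.g. the [CLS] embedding versus a pooled intermediate-layer embedding used for self-guidance), and all names are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def nt_xent_loss(view_a, view_b, temperature=0.1):
        """view_a, view_b: (batch, dim) embeddings of the same sentences under two
        views, e.g. the [CLS] vector and a pooled hidden-state vector (self-guidance).
        Each sentence's other view is its positive; all remaining pairs are negatives."""
        batch_size = view_a.size(0)
        z = F.normalize(torch.cat([view_a, view_b], dim=0), dim=1)   # (2B, dim), unit norm
        sim = z @ z.t() / temperature                                # scaled cosine similarities
        mask = torch.eye(2 * batch_size, dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(mask, float("-inf"))                   # exclude self-similarity
        # The positive for row i is its counterpart in the other view (i + B mod 2B).
        targets = torch.cat([torch.arange(batch_size) + batch_size,
                             torch.arange(batch_size)]).to(z.device)
        return F.cross_entropy(sim, targets)

    # Example usage: loss = nt_xent_loss(cls_embeddings, hidden_view_embeddings)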

    The full repertoire of Drosophila gustatory receptors for detecting an aversive compound.

    The ability to detect toxic compounds in foods is essential for animal survival. However, the minimal subunit composition of gustatory receptors required for sensing aversive chemicals in Drosophila is unknown. Here we report that three gustatory receptors, GR8a, GR66a and GR98b, function together in the detection of L-canavanine, a plant-derived insecticide. Ectopic co-expression of Gr8a and Gr98b in Gr66a-expressing, bitter-sensing gustatory receptor neurons (GRNs) confers responsiveness to L-canavanine. Furthermore, misexpression of all three Grs enables salt- or sweet-sensing GRNs to respond to L-canavanine. Introduction of these Grs in sweet-sensing GRNs switches L-canavanine from an aversive to an attractive compound. Co-expression of GR8a, GR66a and GR98b in Drosophila S2 cells induces an L-canavanine-activated nonselective cation conductance. We conclude that three GRs collaborate to produce a functional L-canavanine receptor. Thus, our results clarify the full set of GRs underlying the detection of a toxic tastant that drives avoidance behaviour in an insect.

    Prompt-Augmented Linear Probing: Scaling Beyond The Limit of Few-shot In-Context Learners

    Through in-context learning (ICL), large-scale language models are effective few-shot learners without additional model fine-tuning. However, ICL performance does not scale well with the number of available training samples, as it is limited by the inherent input length constraint of the underlying language model. Meanwhile, many studies have revealed that language models are also powerful feature extractors, allowing them to be utilized in a black-box manner and enabling the linear probing paradigm, where lightweight discriminators are trained on top of the pre-extracted input representations. This paper proposes prompt-augmented linear probing (PALP), a hybrid of linear probing and ICL, which leverages the best of both worlds. PALP inherits the scalability of linear probing and the capability of enforcing language models to derive more meaningful representations via tailoring the input into a more conceivable form. Through in-depth investigations on various datasets, we verified that PALP significantly enhances the input representations, closing the gap between ICL in the data-hungry scenario and fine-tuning in the data-abundant scenario with little training overhead, potentially making PALP a strong alternative in a black-box scenario. Comment: AAAI 2023
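
    The PALP pipeline described in the abstract reduces to three steps: wrap each example in a task prompt, extract a fixed representation from the frozen language model, and train a lightweight linear discriminator on those features. The sketch below illustrates this flow; the backbone ("gpt2"), the prompt template, and the last-token pooling are illustrative assumptions, not the paper's exact configuration.

    import torch
    from transformers import AutoModel, AutoTokenizer
    from sklearn.linear_model import LogisticRegression

    tokenizer = AutoTokenizer.from_pretrained("gpt2")             # assumed backbone
    model = AutoModel.from_pretrained("gpt2").eval()              # frozen feature extractor

    def extract_features(texts):
        feats = []
        for text in texts:
            prompt = f"Review: {text}\nSentiment:"                # hypothetical prompt template
            inputs = tokenizer(prompt, return_tensors="pt")
            with torch.no_grad():
                hidden = model(**inputs).last_hidden_state        # (1, seq_len, dim)
            feats.append(hidden[0, -1].numpy())                   # last-token representation
        return feats

    # Linear probing: a lightweight classifier trained on the pre-extracted features.
    train_texts = ["a joy to watch", "a tedious mess"]            # toy data for illustration
    train_labels = [1, 0]
    probe = LogisticRegression(max_iter=1000).fit(extract_features(train_texts), train_labels)
    print(probe.predict(extract_features(["an enjoyable film"])))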