
    READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises

    For many real-world applications, user-generated inputs often contain various noises due to speech recognition errors caused by linguistic variations or typographical errors (typos). It is therefore crucial to test model performance on data with realistic input noises to ensure robustness and fairness. However, little work has been done to construct such benchmarks for Chinese, where various language-specific input noises occur in the real world. To fill this important gap, we construct READIN: a Chinese multi-task benchmark with REalistic And Diverse Input Noises. READIN contains four diverse tasks and asks annotators to re-enter the original test data with two commonly used Chinese input methods: Pinyin input and speech input. We designed our annotation pipeline to maximize diversity, for example by instructing the annotators to use diverse input method editors (IMEs) for keyboard noises and recruiting speakers from diverse dialect groups for speech noises. We experiment with a series of strong pretrained language models as well as robust training methods, and find that these models often suffer significant performance drops on READIN even with robustness methods such as data augmentation. As the first large-scale attempt at creating a benchmark with noises geared towards user-generated inputs, we believe READIN serves as an important complement to existing Chinese NLP benchmarks. The source code and dataset can be obtained from https://github.com/thunlp/READIN. Comment: Preprint.
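
    The abstract mentions data augmentation as one of the robustness methods evaluated against Pinyin keyboard noise. As a rough illustration of that general idea, not the READIN annotation pipeline or the authors' code, here is a minimal sketch of a homophone-substitution augmenter; it assumes the pypinyin package and a tiny hand-written homophone table standing in for a real confusion set.

    ```python
    # Hedged sketch of Pinyin homophone data augmentation (illustrative only).
    # Assumes `pip install pypinyin`; the homophone table is a made-up placeholder.
    import random
    from pypinyin import lazy_pinyin

    # Hypothetical homophone groups keyed by toneless pinyin.
    HOMOPHONES = {
        "shi": list("是事市式视"),
        "ta": list("他她它"),
        "zai": list("在再"),
    }

    def augment(sentence: str, prob: float = 0.2, seed: int = 0) -> str:
        """Randomly replace characters with homophones sharing the same pinyin."""
        rng = random.Random(seed)
        out = []
        for char in sentence:
            pinyin = lazy_pinyin(char)[0]  # toneless pinyin of this character
            candidates = [c for c in HOMOPHONES.get(pinyin, []) if c != char]
            out.append(rng.choice(candidates) if candidates and rng.random() < prob else char)
        return "".join(out)

    print(augment("他现在是学生"))  # e.g. "她现再是学生"
    ```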

    Id2 promotes the invasive growth of MCF-7 and SKOV-3 cells by a novel mechanism independent of dimerization to basic helix-loop-helix factors

    Background: Inhibitor of differentiation 2 (Id2) is a critical factor for cell proliferation and differentiation in normal vertebrate development. Most of the biological function of Id2 has been ascribed to its helix-loop-helix motif. Overexpression of Id2 is frequently observed in various human tumors, but its role in the invasion potential of tumor cells is disputed. We aimed to reveal the role of Id2 in the invasion potential of poorly invasive, estrogen receptor α (ERα)-positive MCF-7 and SKOV-3 cancer cells.

    Methods: MCF-7 and SKOV-3 cells were stably transfected with the wild-type, degradation-resistant full-length, or helix-loop-helix (HLH)-deleted Id2, respectively. Protein levels of Id2, its mutants, and E-cadherin were determined by western blot analysis, and mRNA levels of Id2 and its mutants were determined by RT-PCR. The effects of Id2 and its mutants on cell proliferation were determined by [3H]-thymidine incorporation assay and the 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide (MTT) dye method. The in vitro invasion potential of cells was evaluated by Transwell assay. Cell motility was assessed by scratch wound assay. The promoter activity of E-cadherin was determined by cotransfection and luciferase assays.

    Results: Ectopic transfection of the wild-type Id2 markedly increased the protein and mRNA expression of Id2 in MCF-7 and SKOV-3 cells; the protein level but not the mRNA level was further increased by transfection with the degradation-resistant Id2 form. The ectopic expression of Id2 or its mutants did not alter proliferation of either MCF-7 or SKOV-3 cells. Transfection of the wild-type Id2 significantly induced the invasion potential and migratory capacity of cells, which was further augmented by transfection with the degradation-resistant full-length or HLH-deleted Id2. E-cadherin protein expression and transactivation of the proximal E-cadherin promoter were markedly suppressed by the degradation-resistant full-length or HLH-deleted Id2, but not by wild-type Id2. Ectopic expression of E-cadherin in MCF-7 and SKOV-3 cells only partially blunted the invasion potential induced by the degradation-resistant HLH-deleted Id2.

    Conclusion: Overexpression of Id2 in ERα-positive epithelial tumor cells indeed increases the cells' invasive potential through a novel mechanism independent of dimerization to basic helix-loop-helix factors. E-cadherin contributes only in part to Id2-induced cell invasion when Id2 accumulates to a higher level in some specific cell types.

    Prompting GPT-3 To Be Reliable

    Large language models (LLMs) show impressive abilities via few-shot prompting. Commercialized APIs such as OpenAI GPT-3 further increase their use in real-world language applications. However, the crucial problem of how to improve the reliability of GPT-3 is still under-explored. While reliability is a broad and vaguely defined term, we decompose reliability into four main facets that correspond to the existing framework of ML safety and are well recognized to be important: generalizability, social biases, calibration, and factuality. Our core contribution is to establish simple and effective prompts that improve GPT-3's reliability in that it: 1) generalizes out-of-distribution, 2) balances demographic distributions and uses natural language instructions to reduce social biases, 3) calibrates output probabilities, and 4) updates the LLM's factual knowledge and reasoning chains. With appropriate prompts, GPT-3 is more reliable than smaller-scale supervised models on all these facets. We release all processed datasets, evaluation scripts, and model predictions. Our systematic empirical study not only offers new insights into the reliability of prompting LLMs, but, more importantly, our prompting strategies can help practitioners use LLMs like GPT-3 more reliably. Comment: ICLR 2023.
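
    As a rough illustration of the prompting ideas above (not the paper's exact prompts or datasets), the sketch below builds a few-shot prompt whose demonstrations are balanced across a demographic attribute and adds a natural-language fairness instruction; the task, labels, and examples are hypothetical placeholders.

    ```python
    # Hedged sketch: a demographically balanced few-shot prompt with an explicit
    # fairness instruction, in the spirit of the debiasing prompts described above.
    # Task, demonstrations, and attribute values are made-up placeholders.
    INSTRUCTION = (
        "Classify the sentiment of the sentence as Positive or Negative. "
        "Treat all demographic groups equally and ignore irrelevant attributes.\n\n"
    )

    # Equal numbers of demonstrations per (group, label) cell.
    DEMOS = [
        ("The nurse greeted him warmly.", "Positive"),
        ("The nurse greeted her warmly.", "Positive"),
        ("The engineer ignored him rudely.", "Negative"),
        ("The engineer ignored her rudely.", "Negative"),
    ]

    def build_prompt(query: str) -> str:
        shots = "".join(f"Sentence: {s}\nSentiment: {y}\n\n" for s, y in DEMOS)
        return INSTRUCTION + shots + f"Sentence: {query}\nSentiment:"

    prompt = build_prompt("The doctor explained the results patiently.")
    print(prompt)  # send this string to a completions API of your choice
    ```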

    Large Language Models Help Humans Verify Truthfulness -- Except When They Are Convincingly Wrong

    Large Language Models (LLMs) are increasingly used for accessing information on the web. Their truthfulness and factuality are thus of great interest. To help users make the right decisions about the information they are getting, LLMs should not only provide information but also help users fact-check it. In this paper, we conduct experiments with 80 crowdworkers in total to compare language models with search engines (information retrieval systems) at facilitating fact-checking by human users. We prompt LLMs to validate a given claim and provide corresponding explanations. Users reading LLM explanations are significantly more efficient than users of search engines, while achieving similar accuracy. However, they tend to over-rely on the LLMs when the explanation is wrong. To reduce over-reliance on LLMs, we ask LLMs to provide contrastive information, explaining both why the claim may be true and why it may be false, and then we present both sides of the explanation to users. This contrastive explanation mitigates users' over-reliance on LLMs, but cannot significantly outperform search engines. Moreover, showing both search engine results and LLM explanations offers no complementary benefits compared to search engines alone. Taken together, natural language explanations by LLMs may not be a reliable replacement for reading the retrieved passages yet, especially in high-stakes settings where over-relying on wrong AI explanations could lead to critical consequences. Comment: Preprint.
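
    A minimal sketch of the contrastive-explanation setup described above, assuming a generic LLM client; the claim, prompt wording, and `ask_llm` helper are hypothetical stand-ins rather than the authors' code.

    ```python
    # Hedged sketch of contrastive explanations for fact-checking: the model is
    # asked separately why a claim might be TRUE and why it might be FALSE, and
    # both sides are shown to the user. `ask_llm` is a hypothetical helper that
    # wraps whatever LLM API is available.
    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your LLM client here")

    def contrastive_explanations(claim: str) -> dict:
        support = ask_llm(
            f'Claim: "{claim}"\nExplain the strongest reasons this claim could be TRUE.'
        )
        refute = ask_llm(
            f'Claim: "{claim}"\nExplain the strongest reasons this claim could be FALSE.'
        )
        return {"claim": claim, "why_true": support, "why_false": refute}

    # The two explanations would then be displayed side by side so that the user,
    # not the model, makes the final judgment about the claim.
    ```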

    Sub-Character Tokenization for Chinese Pretrained Language Models

    Tokenization is fundamental to pretrained language models (PLMs). Existing tokenization methods for Chinese PLMs typically treat each character as an indivisible token. However, they ignore the unique feature of the Chinese writing system where additional linguistic information exists below the character level, i.e., at the sub-character level. To utilize such information, we propose sub-character (SubChar for short) tokenization. Specifically, we first encode the input text by converting each Chinese character into a short sequence based on its glyph or pronunciation, and then construct the vocabulary based on the encoded text with sub-word tokenization. Experimental results show that SubChar tokenizers have two main advantages over existing tokenizers: 1) they can tokenize inputs into much shorter sequences, thus improving computational efficiency; 2) pronunciation-based SubChar tokenizers can encode Chinese homophones into the same transliteration sequences and produce the same tokenization output, hence being robust to homophone typos. At the same time, models trained with SubChar tokenizers perform competitively on downstream tasks. We release our code at https://github.com/thunlp/SubCharTokenization to facilitate future work. Comment: This draft supersedes the previous version, "SHUOWEN-JIEZI: Linguistically Informed Tokenizers for Chinese Language Model Pretraining".
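
    To make the encode-then-subword idea above concrete, here is a minimal sketch of a pronunciation-based SubChar-style tokenizer, not the released implementation: it assumes the pypinyin package for transliteration and the Hugging Face tokenizers library for the sub-word (BPE) step, with a toy corpus and placeholder hyperparameters.

    ```python
    # Hedged sketch of pronunciation-based sub-character tokenization:
    # 1) transliterate each character into pinyin, 2) train a sub-word (BPE)
    # vocabulary on the transliterated text. Uses pypinyin and the Hugging Face
    # `tokenizers` library; the corpus and vocab size are placeholders.
    from pypinyin import lazy_pinyin
    from tokenizers import Tokenizer, models, pre_tokenizers, trainers

    def transliterate(text: str) -> str:
        # Homophones map to the same pinyin string, e.g. 她/他/它 -> "ta".
        return " ".join(lazy_pinyin(text))

    corpus = ["今天天气很好", "他是一名学生", "她是一名学生"]  # toy corpus
    encoded = [transliterate(s) for s in corpus]

    tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
    tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
    trainer = trainers.BpeTrainer(vocab_size=200, special_tokens=["[UNK]"])
    tokenizer.train_from_iterator(encoded, trainer)

    # Homophone typos yield identical token sequences after transliteration.
    print(tokenizer.encode(transliterate("他是一名学生")).tokens)
    print(tokenizer.encode(transliterate("她是一名学生")).tokens)
    ```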

    BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

    Large language models (LLMs) have been shown to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built through a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
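
    As a usage note, the released checkpoints can be loaded with the Hugging Face transformers library; the snippet below is a minimal sketch assuming the publicly hosted bigscience/bloom-560m variant (a smaller sibling of the 176B model), not an official example from the paper.

    ```python
    # Hedged sketch: generating text with an open BLOOM checkpoint via the
    # Hugging Face transformers library. The 560M variant is used here because
    # the full 176B model requires multi-GPU or offloaded inference.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "bigscience/bloom-560m"  # assumed public checkpoint on the Hub
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```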

    Dataset Mention Extraction and Classification

    DOI: 10.18653/v1/W19-2604. Proceedings of the Workshop on Extracting Structured Knowledge from Scientific Publications.