164 research outputs found

    Comparison of the effects of Tripterygii totorum and sulfasalazine on rheumatoid arthritis: A retrospective cohort study

    Purpose: To compare, in a retrospective study, the effects and safety profiles of Tripterygii totorum and sulfasalazine in patients with rheumatoid arthritis (RA) following 24 weeks of treatment. Methods: RA patients (n = 164) treated with Tripterygii totorum or sulfasalazine from August 2012 to February 2016 were included in this study. The primary end-point was ≥ 20 % improvement per American College of Rheumatology (ACR) criteria (ACR 20 response) after 24 weeks; ACR 50 and ACR 70 responses were also assessed. The safety parameters investigated comprised adverse events, vital signs, and hematological and biochemical indices (blood counts, electrolyte levels, and kidney and liver function). Results: At 24 weeks, the ACR 20 response was 57.32 % in patients on Tripterygii totorum, versus 39.02 % in patients on sulfasalazine (p = 0.02). In the Tripterygii totorum group, the ACR 50 response was 41.46 % and the ACR 70 response was 29.27 %; in the sulfasalazine group, the corresponding values were 26.83 % and 21.95 %. Adverse events were more frequent in the Tripterygii totorum group than in the sulfasalazine group. Conclusion: These results suggest that Tripterygii totorum significantly mitigates RA, with a tolerable safety profile. However, long-term or controlled trials are needed to ascertain its therapeutic potential in RA. Keywords: Traditional Chinese medicine, Tripterygii totorum, Sulfasalazine, Rheumatoid arthritis
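    As a rough sanity check on the reported comparison, the sketch below re-derives the ACR 20 contrast from the stated percentages, assuming an even 82/82 split between the two arms (the abstract gives only the total n = 164); the counts and test are illustrative, not taken from the study.

```python
# Illustrative only: reconstruct ACR 20 responder counts from the reported percentages,
# assuming 82 patients per arm (an assumption; the abstract states only n = 164 total).
from scipy.stats import chi2_contingency

n_per_arm = 82
resp_ttg = round(0.5732 * n_per_arm)   # ~47 responders on Tripterygii totorum
resp_ssz = round(0.3902 * n_per_arm)   # ~32 responders on sulfasalazine

table = [
    [resp_ttg, n_per_arm - resp_ttg],
    [resp_ssz, n_per_arm - resp_ssz],
]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"ACR 20: {resp_ttg}/82 vs {resp_ssz}/82, chi-square p = {p:.3f}")
# Without continuity correction this lands near the p = 0.02 reported in the abstract.
```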

    TIM: Teaching Large Language Models to Translate with Comparison

    Open-sourced large language models (LLMs) have demonstrated remarkable efficacy in various tasks with instruction tuning. However, these models can sometimes struggle with tasks that require more specialized knowledge, such as translation. One possible reason for this deficiency is that instruction tuning aims to generate fluent and coherent text that continues from a given instruction without being constrained by any task-specific requirements. Moreover, tuning smaller LLMs on lower-quality training data can be even more challenging. To address this issue, we propose a novel framework that uses examples in comparison to teach LLMs to translate. Our approach involves presenting the model with examples of correct and incorrect translations and using a preference loss to guide the model's learning. We evaluate our method on the WMT2022 test sets and show that it outperforms existing methods. Our findings offer a new perspective on fine-tuning LLMs for translation tasks and provide a promising solution for generating high-quality translations. Please refer to GitHub for more details: https://github.com/lemon0830/TIM
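    To make the "preference loss over correct vs. incorrect translations" idea concrete, here is a minimal, generic sketch of a pairwise margin loss on sequence log-probabilities. It is an assumed simplification for illustration; TIM's actual objective and training setup are defined in the linked repository, and the helper names below are not from the paper.

```python
# Sketch: pairwise preference loss that pushes the model to assign a higher
# sequence log-probability to the correct translation than to the incorrect one.
import torch
import torch.nn.functional as F

def sequence_logprob(logits, target_ids, pad_id=0):
    """Sum of per-token log-probabilities of target_ids, ignoring padding."""
    logp = F.log_softmax(logits, dim=-1)                       # (batch, seq, vocab)
    tok_logp = logp.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
    mask = (target_ids != pad_id).float()
    return (tok_logp * mask).sum(dim=-1)                       # (batch,)

def preference_loss(logits_good, ids_good, logits_bad, ids_bad, margin=1.0):
    """Margin ranking loss between correct and incorrect translation examples."""
    good = sequence_logprob(logits_good, ids_good)
    bad = sequence_logprob(logits_bad, ids_bad)
    return F.relu(margin - (good - bad)).mean()
```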

    Contrastive Learning with Prompt-derived Virtual Semantic Prototypes for Unsupervised Sentence Embedding

    Contrastive learning has become a new paradigm for unsupervised sentence embeddings. Previous studies focus on instance-wise contrastive learning, attempting to construct positive pairs with textual data augmentation. In this paper, we propose a novel Contrastive learning method with Prompt-derived Virtual semantic Prototypes (ConPVP). Specifically, with the help of prompts, we construct a virtual semantic prototype for each instance and derive negative prototypes by using the negative form of the prompts. Using a prototypical contrastive loss, we enforce the anchor sentence embedding to be close to its corresponding semantic prototype, and far apart from the negative prototypes as well as the prototypes of other sentences. Extensive experimental results on semantic textual similarity, transfer, and clustering tasks demonstrate the effectiveness of our proposed model compared to strong baselines. Code is available at https://github.com/lemon0830/promptCSE. Comment: Findings of EMNLP 2022
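    The sketch below illustrates a generic prototypical contrastive loss of the kind described above: each anchor embedding is pulled toward its own prototype and pushed away from a negative prototype and from other sentences' prototypes. It is an InfoNCE-style approximation for illustration; ConPVP's exact loss is defined in the linked repository.

```python
# Sketch: prototypical contrastive loss over anchors, their prototypes,
# and negative prototypes (all assumed to be (batch, dim) tensors).
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(anchors, prototypes, neg_prototypes, temperature=0.05):
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(prototypes, dim=-1)
    n = F.normalize(neg_prototypes, dim=-1)

    pos = (a * p).sum(dim=-1, keepdim=True)         # similarity to own prototype
    neg_own = (a * n).sum(dim=-1, keepdim=True)     # similarity to the negative prototype
    neg_others = a @ p.t()                          # similarities to all prototypes
    neg_others.fill_diagonal_(float("-inf"))        # drop the positive from the negatives

    logits = torch.cat([pos, neg_own, neg_others], dim=-1) / temperature
    labels = torch.zeros(a.size(0), dtype=torch.long, device=a.device)  # positive at index 0
    return F.cross_entropy(logits, labels)
```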

    Neural Simile Recognition with Cyclic Multitask Learning and Local Attention

    Simile recognition aims to detect simile sentences and to extract simile components, i.e., tenors and vehicles. It involves two subtasks: simile sentence classification and simile component extraction. Recent work has shown that standard multitask learning is effective for Chinese simile recognition, but it is still uncertain whether the mutual effects between the subtasks have been well captured by simple parameter sharing. We propose a novel cyclic multitask learning framework for neural simile recognition, which stacks the subtasks and makes them into a loop by connecting the last to the first. It iteratively performs each subtask, taking the outputs of the previous subtask as additional inputs to the current one, so that the interdependence between the subtasks can be better explored. Extensive experiments show that our framework significantly outperforms the current state-of-the-art model and our carefully designed baselines, and the gains remain remarkable when using BERT. Comment: AAAI 2020
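    The following sketch shows one way the cyclic idea could be wired up: the two subtasks are run repeatedly, with each subtask's soft output fed back as an extra input to the other on the next pass. The module names, encoder choice, and tensor shapes are assumptions for illustration, not the paper's actual architecture.

```python
# Sketch: cyclic multitask loop for simile sentence classification (subtask 1)
# and simile component extraction (subtask 2); outputs feed back across cycles.
import torch
import torch.nn as nn

class CyclicSimileModel(nn.Module):
    def __init__(self, emb_dim=300, hidden=256, num_labels=2, num_tags=5, cycles=2):
        super().__init__()
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.sent_clf = nn.Linear(2 * hidden + num_tags, num_labels)    # sentence classifier
        self.comp_ext = nn.Linear(2 * hidden + num_labels, num_tags)    # component tagger
        self.num_tags = num_tags
        self.cycles = cycles

    def forward(self, embeddings):                       # (batch, seq, emb_dim)
        h, _ = self.encoder(embeddings)                  # (batch, seq, 2*hidden)
        batch, seq, _ = h.shape
        tag_feedback = h.new_zeros(batch, seq, self.num_tags)
        for _ in range(self.cycles):                     # loop connects the last subtask to the first
            pooled = torch.cat([h, tag_feedback], dim=-1).mean(dim=1)
            sent_logits = self.sent_clf(pooled)          # subtask 1 output
            clf_feedback = sent_logits.softmax(-1).unsqueeze(1).expand(-1, seq, -1)
            tag_logits = self.comp_ext(torch.cat([h, clf_feedback], dim=-1))
            tag_feedback = tag_logits.softmax(-1)        # subtask 2 output feeds the next cycle
        return sent_logits, tag_logits
```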

    Soft Language Clustering for Multilingual Model Pre-training

    Multilingual pre-trained language models have demonstrated impressive (zero-shot) cross-lingual transfer abilities; however, their performance is hindered when the target language is typologically distant from the source languages or when pre-training data is limited in size. In this paper, we propose XLM-P, which contextually retrieves prompts as flexible guidance for encoding instances conditionally. Our XLM-P enables (1) lightweight modeling of language-invariant and language-specific knowledge across languages, and (2) easy integration with other multilingual pre-training methods. On the XTREME tasks, including text classification, sequence labeling, question answering, and sentence retrieval, both base- and large-size language models pre-trained with our proposed method exhibit consistent performance improvements. Furthermore, it provides substantial advantages for low-resource languages in unsupervised sentence retrieval and for target languages that differ greatly from the source language in cross-lingual transfer.
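    A minimal sketch of the "contextually retrieve prompts to guide encoding" idea: keep a small pool of learnable prompt vectors, score them against an instance representation, and prepend a softly retrieved mixture to the encoder input. This is an assumed simplification for illustration; it is not XLM-P's actual implementation, and all names below are hypothetical.

```python
# Sketch: soft retrieval over a pool of learnable prompts, conditioned on the instance.
import torch
import torch.nn as nn

class SoftPromptRetriever(nn.Module):
    def __init__(self, pool_size=16, prompt_len=4, dim=768):
        super().__init__()
        self.pool = nn.Parameter(torch.randn(pool_size, prompt_len, dim) * 0.02)
        self.query = nn.Linear(dim, dim)

    def forward(self, token_embeddings):                  # (batch, seq, dim)
        instance = token_embeddings.mean(dim=1)           # crude instance representation
        keys = self.pool.mean(dim=1)                      # (pool_size, dim), one key per prompt
        weights = (self.query(instance) @ keys.t()).softmax(dim=-1)   # (batch, pool_size)
        prompts = torch.einsum("bp,pld->bld", weights, self.pool)     # soft mixture of prompts
        return torch.cat([prompts, token_embeddings], dim=1)          # prepend before encoding
```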
    • …