88 research outputs found

    Key clinical studies on changing clinical practice of advanced breast cancer in 2022

    Get PDF
    With the improvement of comprehensive treatment of breast cancer and the continuous development of anti-tumor drugs, the survival time of breast cancer patients, especially those with advanced breast cancer, has been further extended. In recent years, the treatment of advanced breast cancer has ushered in an era of fine classification and precise tiered therapy. In 2022, many breakthroughs were made in the field of advanced breast cancer research. With changes in the treatment of each subtype, some treatment schemes affecting clinical practice have been incorporated into treatment guidelines. Treatment of hormone-receptor-positive advanced breast cancer focuses on patients whose disease has progressed on cyclin-dependent kinase 4 and 6 (CDK4/6) inhibitors. Novel anti-human epidermal growth factor receptor 2 (HER2) antibody-drug conjugates (ADCs) in advanced HER2-positive breast cancer are a focus of research. More evidence is needed for immunotherapy in advanced triple-negative breast cancer (TNBC), where treatment with ADCs targeting Trop-2 has been effective. ADC treatment in HER2-low breast cancer is changing clinical practice. In this article, we summarize the research progress in the different subtypes of advanced breast cancer over the past year, in order to better guide individualized treatment and improve the prognosis of patients with advanced breast cancer.

    Synthesis and antiviral activity of a series of novel N-phenylbenzamide and N-phenylacetophenone compounds as anti-HCV and anti-EV71 agents

    Get PDF
    A series of novel N-phenylbenzamide and N-phenylacetophenone compounds were synthesized and evaluated for their antiviral activity against HCV and EV71 (strain SZ-98). The biological results showed that three compounds (23, 25 and 41) exhibited considerable anti-HCV activity (IC50 = 0.57–7.12 μmol/L) and several compounds (23, 28, 29, 30, 31 and 42) displayed potent activity against EV71, with IC50 values below 5.00 μmol/L. The potency of compound 23 (IC50 = 0.57 μmol/L) as an anti-HCV agent was superior to that of the reported compounds IMB-1f (IC50 = 1.90 μmol/L) and IMB-1g (IC50 = 1.00 μmol/L), and compound 29 possessed the highest anti-EV71 activity, comparable to the comparator drug pirodavir. The in vivo efficacy and antiviral mechanism of these compounds warrant further investigation.

    Multi-Level Knowledge Distillation for Out-of-Distribution Detection in Text

    Full text link
    Self-supervised representation learning has proved to be a valuable component for out-of-distribution (OoD) detection using only the texts of in-distribution (ID) examples. These approaches either train a language model from scratch or fine-tune a pre-trained language model on ID examples, and then take the perplexity output by the language model as the OoD score. In this paper, we analyse the complementary characteristics of both OoD detection methods and propose a multi-level knowledge distillation approach that integrates their strengths while mitigating their limitations. Specifically, we use a fine-tuned model as the teacher to teach a randomly initialized student model on the ID examples. Besides prediction-layer distillation, we present a similarity-based intermediate-layer distillation method to facilitate the student's awareness of the information flow inside the teacher's layers. In this way, the derived student model gains the teacher's rich knowledge about the ID data manifold due to pre-training, while benefiting from seeing only ID examples during parameter learning, which promotes more distinguishable features for OoD detection. We conduct extensive experiments over multiple benchmark datasets, i.e., CLINC150, SST, 20 NewsGroups, and AG News, showing that the proposed method yields new state-of-the-art performance.
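
    The multi-level distillation described above combines two signals: a prediction-layer loss that matches the student's token distribution to the fine-tuned teacher's, and a similarity-based intermediate-layer loss that matches how tokens relate to one another inside the network; the student's perplexity on a test text is then used as the OoD score. Below is a minimal PyTorch sketch of these three pieces; the function names and hyper-parameters are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the two distillation losses and the perplexity-based OoD score.
# Both models are assumed to expose causal-LM logits and per-layer hidden states,
# as in Hugging Face transformers; all names here are illustrative.
import torch
import torch.nn.functional as F

def prediction_layer_distillation(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the student's and teacher's next-token distributions."""
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

def similarity_based_layer_distillation(student_hidden, teacher_hidden):
    """Match token-token similarity matrices of an intermediate layer so the
    student mimics the information flow inside the teacher's layers."""
    def sim(h):                       # h: (batch, seq_len, dim)
        h = F.normalize(h, dim=-1)
        return h @ h.transpose(1, 2)  # (batch, seq_len, seq_len)
    return F.mse_loss(sim(student_hidden), sim(teacher_hidden))

def ood_score(logits, input_ids):
    """Perplexity of the student language model on a text, used as its OoD score."""
    shift_logits = logits[:, :-1].contiguous()
    shift_labels = input_ids[:, 1:].contiguous()
    nll = F.cross_entropy(shift_logits.view(-1, shift_logits.size(-1)),
                          shift_labels.view(-1), reduction="mean")
    return torch.exp(nll)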

    CoLaDa: A Collaborative Label Denoising Framework for Cross-lingual Named Entity Recognition

    Full text link
    Cross-lingual named entity recognition (NER) aims to train an NER system that generalizes well to a target language by leveraging labeled data in a given source language. Previous work alleviates the data scarcity problem by translating source-language labeled data or performing knowledge distillation on target-language unlabeled data. However, these methods may suffer from label noise due to the automatic labeling process. In this paper, we propose CoLaDa, a Collaborative Label Denoising Framework, to address this problem. Specifically, we first explore a model-collaboration-based denoising scheme that enables models trained on different data sources to collaboratively denoise the pseudo labels used by each other. We then present an instance-collaboration-based strategy that considers the label consistency of each token's neighborhood in the representation space for denoising. Experiments on different benchmark datasets show that the proposed CoLaDa achieves superior results compared to previous methods, especially when generalizing to distant languages. (ACL 2023; code available at https://github.com/microsoft/vert-papers/tree/master/papers/CoLaD)
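
    The instance-collaboration step can be illustrated with a small sketch: a token's pseudo label is trusted in proportion to how many of its nearest neighbours in the representation space carry the same label. The weighting scheme below is a hedged approximation of that idea, not the paper's exact formulation.

# Sketch of neighborhood-based label-consistency weighting for pseudo labels.
# The k-NN voting rule and the use of the weights are illustrative assumptions.
import numpy as np

def neighborhood_consistency_weights(token_reps, pseudo_labels, k=5):
    """token_reps:    (N, dim) token representations from the current model.
    pseudo_labels: (N,) integer labels produced by the automatic labeling step.
    Returns one weight in [0, 1] per token: the fraction of its k nearest
    neighbours (by cosine similarity) that share the same pseudo label."""
    reps = token_reps / np.linalg.norm(token_reps, axis=1, keepdims=True)
    sims = reps @ reps.T                       # cosine similarity matrix
    np.fill_diagonal(sims, -np.inf)            # exclude the token itself
    neighbours = np.argsort(-sims, axis=1)[:, :k]
    agree = pseudo_labels[neighbours] == pseudo_labels[:, None]
    return agree.mean(axis=1)

# Tokens with low weights can then be down-weighted (or dropped) in the
# cross-entropy loss used to train the target-language model.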

    LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression

    Full text link
    In long context scenarios, large language models (LLMs) face three main challenges: higher computational/financial cost, longer latency, and inferior performance. Some studies reveal that the performance of LLMs depends on both the density and the position of the key (question-relevant) information in the input prompt. Inspired by these findings, we propose LongLLMLingua, a prompt compression method that improves LLMs' perception of the key information in order to address the three challenges simultaneously. We conduct evaluation on a wide range of long context scenarios including single-/multi-document QA, few-shot learning, summarization, synthetic tasks, and code completion. The experimental results show that prompts compressed by LongLLMLingua yield higher performance at much lower cost, and the latency of the end-to-end system is also reduced. For example, on the NaturalQuestions benchmark, LongLLMLingua gains a performance boost of up to 17.1% over the original prompt with ~4x fewer tokens as input to GPT-3.5-Turbo. It yields cost savings of $28.5 and $27.4 per 1,000 samples on the LongBench and ZeroScrolls benchmarks, respectively. Additionally, when compressing prompts of ~10k tokens at a compression rate of 2x-10x, LongLLMLingua can speed up the end-to-end latency by 1.4x-3.8x. Our code is available at https://aka.ms/LLMLingua
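
    Since the released code is linked above, a brief usage sketch may help. It assumes the llmlingua package's PromptCompressor / compress_prompt interface as documented in that repository; argument names and the demo strings below are placeholders and should be checked against the actual release.

# Hedged usage sketch of prompt compression with the llmlingua package
# (https://aka.ms/LLMLingua); the passages and question are placeholders.
from llmlingua import PromptCompressor

# Placeholder retrieved passages; in practice these are the long context
# (multi-document QA passages, few-shot demonstrations, code, etc.).
long_documents = [
    "Passage 1: ...",
    "Passage 2: ...",
]

compressor = PromptCompressor()  # loads a small causal LM to score token importance

result = compressor.compress_prompt(
    context=long_documents,
    instruction="Answer the question based on the given passages.",
    question="Which passage answers the user's question?",
    target_token=500,   # rough token budget for the compressed prompt
)

# The compressed prompt is then sent to the downstream LLM (e.g. GPT-3.5-Turbo),
# cutting token cost and end-to-end latency.
print(result["compressed_prompt"])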

    Attentive Mask CLIP

    Full text link
    Image token removal is an efficient augmentation strategy for reducing the cost of computing image features. However, this efficient augmentation strategy has been found to adversely affect the accuracy of CLIP-based training. We hypothesize that removing a large portion of image tokens may improperly discard the semantic content associated with a given text description, thus constituting an incorrect pairing target in CLIP training. To address this issue, we propose an attentive token removal approach for CLIP training, which retains tokens with a high semantic correlation to the text description. The correlation scores are computed in an online fashion using the EMA version of the visual encoder. Our experiments show that the proposed attentive masking approach performs better than the previous method of random token removal for CLIP training. The approach also makes it efficient to apply multiple augmentation views to the image, as well as to introduce instance contrastive learning tasks between these views into the CLIP framework. Compared to other CLIP improvements that combine different pre-training targets such as SLIP and MaskCLIP, our method is not only more effective, but also much more efficient. Specifically, using ViT-B and the YFCC-15M dataset, our approach achieves 43.9% top-1 accuracy on ImageNet-1K zero-shot classification, as well as 62.7/42.1 and 38.0/23.2 I2T/T2I retrieval accuracy on Flickr30K and MS COCO, which are +1.1%, +5.5/+0.9, and +4.4/+1.3 higher than the SLIP method, while being 2.30× faster. An efficient version of our approach running 1.16× faster than the plain CLIP model achieves significant gains of +5.3%, +11.3/+8.0, and +9.5/+4.9 on these benchmarks.
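
    The core operation, attentive token removal, can be sketched in a few lines: patches are ranked by the EMA visual encoder's [CLS]-to-patch attention and only the most semantically relevant fraction is kept for CLIP training. The snippet below is a hedged PyTorch illustration under that assumption, not the authors' code; how the attention scores are extracted from the ViT is implementation-specific and not shown.

# Sketch of attentive token removal: keep the patches the EMA encoder attends to most.
import torch

def attentive_token_removal(patch_tokens, cls_attention, keep_ratio=0.5):
    """patch_tokens:  (batch, num_patches, dim) patch embeddings of one image view.
    cls_attention: (batch, num_patches) attention from the EMA encoder's [CLS]
                   token to each patch, used as a semantic relevance score.
    Returns the keep_ratio fraction of patches with the highest scores, i.e.
    those most correlated with the image semantics and the paired text."""
    num_keep = max(1, int(patch_tokens.size(1) * keep_ratio))
    keep_idx = cls_attention.topk(num_keep, dim=1).indices            # (batch, num_keep)
    keep_idx = keep_idx.unsqueeze(-1).expand(-1, -1, patch_tokens.size(-1))
    return torch.gather(patch_tokens, dim=1, index=keep_idx)          # (batch, num_keep, dim)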