
    Magnetic Fe3O4 nanoparticles and chemotherapy agents interact synergistically to induce apoptosis in lymphoma cells

    The purpose of this study was to investigate the potential effects of combination therapy using magnetic nanoparticles of Fe3O4 (MNP-Fe3O4) and chemotherapeutic drugs on lymphoma cells. Proliferation inhibition and viability of Raji cells were measured by MTT assay and trypan blue exclusion. The percentage of cells undergoing apoptosis was detected by flow cytometry using fluorescein isothiocyanate-annexin V and propidium iodide staining. p53 and nuclear factor-κB (NF-κB) protein levels were measured by Western blot. The results showed that proliferation of Raji cells was inhibited by adriamycin or daunorubicin in a dose- and time-dependent manner. Cell sensitivity was improved, and the 50% inhibitory concentrations (IC50) of adriamycin and daunorubicin decreased, when the drugs were combined with an MNP-Fe3O4 carrier. Interestingly, increased apoptosis in Raji lymphoma cells was accompanied by upregulation of p53 protein and downregulation of NF-κB protein. Furthermore, the combination of MNP-Fe3O4 with adriamycin or daunorubicin increased p53 protein levels and decreased NF-κB protein levels more than adriamycin or daunorubicin alone, indicating that MNP-Fe3O4 can enhance the effect of chemotherapeutic drugs on p53 and NF-κB. Similar results for cell apoptosis and protein expression were not observed in the groups treated with dexamethasone ± MNP-Fe3O4 (P > 0.05). These findings suggest a potential clinical application for MNP-Fe3O4 in combination with daunorubicin or adriamycin in the treatment of lymphoma.
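    As context for the IC50 comparison above, here is a minimal, purely illustrative Python sketch (not from the paper) of how a 50% inhibitory concentration can be estimated from MTT-style dose-response data with a four-parameter logistic (Hill) fit; the drug labels, doses, and viability values are hypothetical placeholders.

    ```python
    # Illustrative sketch: estimate IC50 from dose-response data by fitting
    # a four-parameter logistic curve. All data below are hypothetical.
    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(dose, bottom, top, ic50, hill):
        """Four-parameter logistic: cell viability as a function of dose."""
        return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

    # Hypothetical viability readings (% of control) at increasing doses (ug/mL)
    doses = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
    viability_drug_alone = np.array([98.0, 90.0, 62.0, 30.0, 12.0])
    viability_with_mnp = np.array([95.0, 80.0, 45.0, 18.0, 8.0])

    for label, y in [("drug alone", viability_drug_alone),
                     ("drug + MNP-Fe3O4", viability_with_mnp)]:
        params, _ = curve_fit(four_pl, doses, y,
                              p0=[10.0, 100.0, 1.0, 1.0], maxfev=10000)
        print(f"{label}: estimated IC50 = {params[2]:.2f} ug/mL")
    ```

    A lower fitted IC50 for the combination arm would correspond to the improved chemosensitivity the abstract reports.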

    CTRLEval: An Unsupervised Reference-Free Metric for Evaluating Controlled Text Generation

    Existing reference-free metrics have obvious limitations for evaluating controlled text generation models. Unsupervised metrics can only provide task-agnostic evaluation results that correlate weakly with human judgments, whereas supervised ones may overfit to task-specific data and generalize poorly to other datasets. In this paper, we propose an unsupervised reference-free metric called CTRLEval, which evaluates controlled text generation from different aspects by formulating each aspect as multiple text infilling tasks. On top of these tasks, the metric assembles the generation probabilities from a pre-trained language model without any model training. Experimental results show that our metric correlates more strongly with human judgments than other baselines, while generalizing better when evaluating texts generated by different models and of different qualities. (Comment: Accepted by ACL 2022, Main Conference.)
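    To make the infilling idea concrete, here is a minimal Python sketch (using Hugging Face Transformers) of scoring one evaluation aspect as a text infilling task: mask a span, then use a seq2seq language model's generation probability of the original span as the score. T5, the prompt wording, and the example text are illustrative assumptions, not CTRLEval's actual model or prompt design.

    ```python
    # Sketch of infilling-based, training-free scoring (illustrative only).
    import torch
    from transformers import T5TokenizerFast, T5ForConditionalGeneration

    tokenizer = T5TokenizerFast.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("t5-base")
    model.eval()

    def infill_log_prob(masked_text: str, span: str) -> float:
        """Mean token log-probability of `span` filling <extra_id_0> in `masked_text`."""
        inputs = tokenizer(masked_text, return_tensors="pt")
        # T5's infilling target format: sentinel token followed by the span.
        target = tokenizer(f"<extra_id_0> {span}", return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(**inputs, labels=target).logits
        log_probs = torch.log_softmax(logits, dim=-1)
        token_lp = log_probs[0, torch.arange(target.size(1)), target[0]]
        return token_lp.mean().item()

    # Hypothetical example: proxy score for attribute relevance to "positive".
    text = "The movie was a delight from start to finish."
    score = infill_log_prob(f"Sentiment: <extra_id_0>. Text: {text}", "positive")
    print(f"attribute-relevance proxy score: {score:.3f}")
    ```

    Because the score is read off a frozen pre-trained model, no evaluation-specific training data is needed, which is the property the abstract emphasizes.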

    DecompEval: Evaluating Generated Texts as Unsupervised Decomposed Question Answering

    Existing evaluation metrics for natural language generation (NLG) tasks face challenges in generalization ability and interpretability. Specifically, most well-performing metrics must be trained on evaluation datasets for specific NLG tasks and evaluation dimensions, which can cause overfitting to task-specific data. Furthermore, existing metrics provide only an evaluation score for each dimension, without revealing the evidence that explains how the score was obtained. To address these challenges, we propose a simple yet effective metric called DecompEval. This metric formulates NLG evaluation as an instruction-style question answering task and uses instruction-tuned pre-trained language models (PLMs) without training on evaluation datasets, aiming to enhance generalization. To make the evaluation process more interpretable, we decompose our devised instruction-style question about the quality of generated texts into subquestions that measure the quality of each sentence. The subquestions, together with their answers generated by PLMs, are then recomposed as evidence from which the evaluation result is obtained. Experimental results show that DecompEval achieves state-of-the-art performance among untrained metrics for evaluating text summarization and dialogue generation, while also exhibiting strong dimension-level and task-level generalization ability and interpretability. (Comment: Accepted by ACL 2023, Main Conference.)
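    The decomposed question-answering loop can be sketched as follows; this is an illustrative approximation rather than the released DecompEval code, and FLAN-T5, the subquestion wording, and averaging as the aggregation step are all assumptions.

    ```python
    # Sketch of decomposed yes/no QA scoring (illustrative only): ask an
    # instruction-tuned PLM one subquestion per sentence, score P("Yes")
    # against P("No") on the first answer token, and average the results.
    import torch
    from transformers import T5TokenizerFast, T5ForConditionalGeneration

    tokenizer = T5TokenizerFast.from_pretrained("google/flan-t5-base")
    model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base")
    model.eval()

    def yes_probability(question: str) -> float:
        """P(first answer token = 'Yes'), normalized against 'No'."""
        inputs = tokenizer(question, return_tensors="pt")
        start = torch.tensor([[model.config.decoder_start_token_id]])
        with torch.no_grad():
            logits = model(**inputs, decoder_input_ids=start).logits[0, -1]
        yes_id = tokenizer("Yes", add_special_tokens=False).input_ids[0]
        no_id = tokenizer("No", add_special_tokens=False).input_ids[0]
        probs = torch.softmax(logits[[yes_id, no_id]], dim=-1)
        return probs[0].item()

    # Hypothetical generated summary, split into per-sentence subquestions.
    summary = ("The council approved the new budget. "
               "Spending on parks will rise next year.")
    sentences = [s.strip() + "." for s in summary.split(".") if s.strip()]
    scores = [yes_probability(
        f"Is the following sentence coherent with the rest of the summary?\n"
        f"Summary: {summary}\nSentence: {s}\nAnswer Yes or No.")
        for s in sentences]
    print(f"dimension score: {sum(scores) / len(scores):.3f}")
    ```

    Keeping the per-sentence questions and answers around, rather than just the averaged score, is what gives the metric its interpretable evidence trail.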