161 research outputs found

    MicroRNA-141-3p mediates epithelial cell proliferation, apoptosis, and epithelial-mesenchymal transition and alleviates pulmonary fibrosis in mice via Spred2

    Objective. This study probed the mechanism of microRNA (miR)-141-3p in the progression of pulmonary fibrosis (PF). Methods. Mice were administered bleomycin (BLM) intratracheally to establish a PF mouse model. To investigate the effects of miR-141-3p/Spred2 on PF in mice, PF mice received tail vein injections of agomir-141-3p and/or adenovirus vectors overexpressing Spred2 one week after BLM treatment. Then, the pathological changes in lung tissues were analyzed with H&E and Masson's trichrome staining, and hydroxyproline content in lung tissues was measured. For cell experiments, after loss- and gain-of-function assays, the role of miR-141-3p/Spred2 in the apoptosis and viability of TGF-β1-stimulated MLE-12 cells was examined by flow cytometry and CCK-8 assay, respectively. miR-141-3p, Spred2, COL I, and α-SMA expression was determined in cells and mice. The binding of miR-141-3p to Spred2 was then tested with a dual-luciferase reporter assay. Results. Spred2 was abnormally upregulated and miR-141-3p downregulated in lung tissues of PF mice. TGF-β1 reduced viability and augmented apoptosis and COL I and α-SMA expression in MLE-12 cells. Spred2 knockdown diminished apoptosis and α-SMA and COL I expression while enhancing proliferation in TGF-β1-treated MLE-12 cells. Mechanistically, Spred2 was a target gene of miR-141-3p. miR-141-3p upregulation accelerated proliferation and repressed apoptosis and α-SMA and COL I expression in TGF-β1-treated MLE-12 cells, and these effects were nullified by further overexpressing Spred2. miR-141-3p alleviated PF in mice by targeting Spred2. Conclusion. miR-141-3p negatively modulates Spred2 to promote proliferation and repress epithelial-mesenchymal transition and apoptosis of epithelial cells, thereby ameliorating PF in mice.

    Optimal treatment allocation for efficient policy evaluation in sequential decision making

    A/B testing is critical for modern technological companies to evaluate the effectiveness of newly developed products against standard baselines. This paper studies optimal designs that aim to maximize the amount of information obtained from online experiments to estimate treatment effects accurately. We propose three optimal allocation strategies in a dynamic setting where treatments are sequentially assigned over time. These strategies are designed to minimize the variance of the treatment effect estimator when data follow a non-Markov decision process or a (time-varying) Markov decision process. We further develop estimation procedures based on existing off-policy evaluation (OPE) methods and conduct extensive experiments in various environments to demonstrate the effectiveness of the proposed methodologies. In theory, we prove the optimality of the proposed treatment allocation design and establish upper bounds for the mean squared errors of the resulting treatment effect estimators.
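
    As a concrete but deliberately simplified illustration of variance-minimizing allocation, the sketch below implements the classic adaptive Neyman rule for a two-arm experiment: the next unit is assigned to whichever arm is under-sampled relative to the ratio of outcome standard deviations, which minimizes the variance of the difference-in-means estimator. This is a generic stand-in rather than the sequential (non-)Markov designs proposed in the paper, and all function names are hypothetical.

        import numpy as np

        def next_arm_neyman(outcomes_a, outcomes_b):
            """Pick the next arm to sample with an adaptive Neyman-style rule."""
            n_a, n_b = len(outcomes_a), len(outcomes_b)
            if n_a < 2 or n_b < 2:
                # Not enough data for variance estimates yet: sample the smaller arm.
                return "A" if n_a <= n_b else "B"
            s_a = np.std(outcomes_a, ddof=1)  # sample std of arm A outcomes
            s_b = np.std(outcomes_b, ddof=1)
            if s_a + s_b == 0:
                return "A" if n_a <= n_b else "B"
            target_frac_a = s_a / (s_a + s_b)  # Neyman proportion for arm A
            return "A" if n_a / (n_a + n_b) < target_frac_a else "B"

        def difference_in_means(outcomes_a, outcomes_b):
            """Plain difference-in-means estimate of the average treatment effect."""
            return float(np.mean(outcomes_a) - np.mean(outcomes_b))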

    Prompting Large Language Models with Chain-of-Thought for Few-Shot Knowledge Base Question Generation

    The task of Question Generation over Knowledge Bases (KBQG) aims to convert a logical form into a natural language question. Because large-scale question annotation is expensive, KBQG methods for low-resource scenarios urgently need to be developed. However, current methods heavily rely on annotated data for fine-tuning, which is not well suited to few-shot question generation. The emergence of Large Language Models (LLMs) has shown their impressive generalization ability in few-shot tasks. Inspired by Chain-of-Thought (CoT) prompting, an in-context learning strategy for reasoning, we formulate the KBQG task as a reasoning problem, where the generation of a complete question is split into a series of sub-question generation steps. Our proposed prompting method, KQG-CoT, first retrieves supportive logical forms from the unlabeled data pool, taking into account the characteristics of the logical form. Then, we write a prompt that makes explicit the reasoning chain for generating complicated questions based on the selected demonstrations. To further ensure prompt quality, we extend KQG-CoT into KQG-CoT+ by sorting the logical forms by their complexity. We conduct extensive experiments over three public KBQG datasets. The results demonstrate that our prompting method consistently outperforms other prompting baselines on the evaluated datasets. Remarkably, our KQG-CoT+ method surpasses existing few-shot SoTA results on the PathQuestions dataset by 18.25, 10.72, and 10.18 absolute points on BLEU-4, METEOR, and ROUGE-L, respectively. Comment: Accepted by EMNLP 2023 main conference.
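
    As a rough sketch of the prompting recipe described above (not the authors' implementation), the snippet below lays out a CoT-style prompt from a few hand-annotated demonstrations and orders them easy-to-hard by a crude complexity proxy (token count), mirroring the spirit of KQG-CoT+. The field names and the complexity measure are hypothetical.

        def build_kqg_cot_prompt(demonstrations, target_logical_form):
            """Assemble a chain-of-thought prompt for KB question generation.

            `demonstrations` is assumed to be a list of dicts with hand-written
            fields {"logical_form", "reasoning", "question"} for logical forms
            retrieved from the unlabeled pool.
            """
            # Easy-to-hard ordering via a crude complexity proxy (token count).
            demos = sorted(demonstrations,
                           key=lambda d: len(d["logical_form"].split()))
            parts = []
            for d in demos:
                parts.append(f"Logical form: {d['logical_form']}")
                parts.append(f"Reasoning: {d['reasoning']}")   # sub-question chain
                parts.append(f"Question: {d['question']}")
                parts.append("")
            parts.append(f"Logical form: {target_logical_form}")
            parts.append("Reasoning:")  # the LLM continues generation from here
            return "\n".join(parts)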

    Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability

    Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications. Previous paradigms either explore better scoring functions or utilize the knowledge of outliers to equip models with the ability of OOD detection. However, few of them pay attention to the intrinsic OOD detection capability of the given model. In this work, we observe that a model trained on in-distribution (ID) data generally passes through an intermediate stage with higher OOD detection performance than its final stage across different settings, and we further identify learning with atypical samples as one critical data-level attribution. Based on these insights, we propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capability of a well-trained model using its ID data. Our method utilizes a mask to identify the memorized atypical samples and then fine-tunes the model, or prunes it with the introduced mask, to forget them. Extensive experiments and analyses demonstrate the effectiveness of our method. The code is available at: https://github.com/tmlr-group/Unleashing-Mask. Comment: Accepted by ICML 2023.
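
    A minimal sketch of the mask-then-forget idea, assuming (purely for illustration) that memorized atypical samples are flagged by a per-sample loss threshold and then excluded during a short fine-tuning pass; the paper's actual mask construction and forgetting objective may differ, and all names and hyperparameters here are hypothetical.

        import torch
        import torch.nn.functional as F

        @torch.no_grad()
        def loss_based_mask(model, dataset, threshold, device="cpu"):
            """Mark ID training samples whose loss exceeds a threshold as atypical."""
            model.eval()
            keep = []
            for x, y in torch.utils.data.DataLoader(dataset, batch_size=256):
                loss = F.cross_entropy(model(x.to(device)), y.to(device),
                                       reduction="none")
                keep.append(loss.cpu() <= threshold)  # True -> keep, False -> forget
            return torch.cat(keep)

        def finetune_without_masked(model, dataset, mask, epochs=1, lr=1e-4,
                                    device="cpu"):
            """Briefly fine-tune on the kept subset so masked samples are forgotten."""
            kept_idx = torch.nonzero(mask).flatten().tolist()
            loader = torch.utils.data.DataLoader(
                torch.utils.data.Subset(dataset, kept_idx),
                batch_size=128, shuffle=True)
            opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
            model.train()
            for _ in range(epochs):
                for x, y in loader:
                    opt.zero_grad()
                    F.cross_entropy(model(x.to(device)), y.to(device)).backward()
                    opt.step()
            return model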

    Poly[diaqua(μ3-8-oxidoquinoline-5-sulfonato-κ4N,O8:O5:O8)nickel(II)]

    In the title compound, [Ni(C9H5NO4S)(H2O)2]n, the Ni(II) atom is coordinated by one N atom and two bridging O atoms from two 8-oxidoquinoline-5-sulfonate ligands, one sulfonate O atom from a third ligand, and two water molecules in a distorted octahedral geometry. Two Ni(II) atoms are linked to each other through the bridging O atoms, forming a dimer. Adjacent dimers are connected through coordination of the sulfonate O atoms into a two-dimensional coordination network parallel to (010). Hydrogen bonds between the coordinated water molecules and the uncoordinated O atoms of the sulfonate groups result in a three-dimensional supramolecular structure.

    Diversified Outlier Exposure for Out-of-Distribution Detection via Informative Extrapolation

    Out-of-distribution (OOD) detection is important for deploying reliable machine learning models in real-world applications. Recent advances in outlier exposure have shown promising results on OOD detection via fine-tuning the model with informatively sampled auxiliary outliers. However, previous methods assume that the collected outliers are sufficiently numerous and representative to cover the boundary between ID and OOD data, which might be impractical and challenging. In this work, we propose a novel framework, namely Diversified Outlier Exposure (DivOE), for effective OOD detection via informative extrapolation based on the given auxiliary outliers. Specifically, DivOE introduces a new learning objective, which diversifies the auxiliary distribution by explicitly synthesizing more informative outliers for extrapolation during training. It leverages a multi-step optimization method to generate novel outliers beyond the original ones, and it is compatible with many variants of outlier exposure. Extensive experiments and analyses have been conducted to characterize and demonstrate the effectiveness of the proposed DivOE. The code is publicly available at: https://github.com/tmlr-group/DivOE. Comment: Accepted by NeurIPS 2023.
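
    A hedged sketch of what multi-step informative extrapolation could look like: each auxiliary outlier is nudged for a few gradient steps toward the ID/OOD boundary (here, by raising its maximum softmax probability), and the synthesized points are then fed into a standard outlier-exposure objective. DivOE's actual objective and update rule may differ; the function names below are hypothetical.

        import torch
        import torch.nn.functional as F

        def extrapolate_outliers(model, x_aux, steps=5, step_size=0.01):
            """Perturb auxiliary outliers toward the ID/OOD decision boundary."""
            x = x_aux.clone().detach().requires_grad_(True)
            for _ in range(steps):
                msp = F.softmax(model(x), dim=1).max(dim=1).values  # max softmax prob
                grad = torch.autograd.grad(msp.sum(), x)[0]
                x = (x + step_size * grad.sign()).detach().requires_grad_(True)
            return x.detach()

        def outlier_exposure_loss(model, x_id, y_id, x_out, lam=0.5):
            """Cross-entropy on ID data plus a term pushing predictions on the
            (synthesized) outliers toward uniform, as in standard outlier exposure."""
            ce = F.cross_entropy(model(x_id), y_id)
            uniformity = -F.log_softmax(model(x_out), dim=1).mean()
            return ce + lam * uniformity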