
    Regulation of adipose tissue metabolism by NFkB P65 in transgenic mice

    Inflammation has been widely reported to regulate adipocyte function in adipose tissue. Our earlier studies suggest that the NFkB signaling pathway is activated by inflammation and is involved in inhibiting insulin sensitivity in adipocytes, and that NFkB inhibits PPARg function through several possible mechanisms in 3T3-L1 adipocytes. To test this possibility in vivo, we increased NFkB activity in adipocytes of transgenic (Tg) mice by expressing the NFkB p65 subunit under the aP2 gene promoter. Phenotyping shows that food intake, physical activity, and development are similar in Tg and control mice, and reproduction does not differ between the two groups. However, body weight gain and the increase in fat content are markedly lower in the Tg mice, which is associated with a significant increase in energy expenditure and a defect in adipogenesis. Chronic inflammation was observed in the adipose tissue of Tg mice, with macrophage infiltration and secretion of inflammatory cytokines. The data suggest that NFkB p65 inhibits PPARg function in adipose tissue and prevents adult-onset and diet-induced obesity, but that it does not help protect systemic insulin sensitivity.

    MVP: Multi-task Supervised Pre-training for Natural Language Generation

    Pre-trained language models (PLMs) have achieved remarkable success in natural language generation (NLG) tasks. To date, most NLG-oriented PLMs have been pre-trained in an unsupervised manner on large general-purpose corpora. Meanwhile, a growing number of models pre-trained with labeled data (i.e., "supervised pre-training") show superior performance compared to unsupervised pre-trained models. Motivated by the success of supervised pre-training, we propose Multi-task superVised Pre-training (MVP) for natural language generation. We collect a large-scale natural language generation corpus, MVPCorpus, from 77 datasets over 11 diverse NLG tasks. We then unify these examples into a general text-to-text format to pre-train the text generation model MVP in a supervised manner. For each task, we further pre-train task-specific soft prompts to stimulate the model's capacity to perform that task. Our MVP model can be seen as a practice that applies recent instruction tuning to relatively small PLMs. Extensive experiments demonstrate the effectiveness and generality of our MVP model on a number of NLG tasks: it achieves state-of-the-art performance on 13 out of 17 datasets, outperforming BART by 9.3% and Flan-T5 by 5.8%. Comment: Accepted by ACL 2023.
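    As a rough illustration of the text-to-text unification step described above, the following Python sketch maps examples from different NLG tasks into a shared (source, target) format with a task instruction prefix. The task names, field names, and instruction wording are hypothetical placeholders, not the actual MVPCorpus schema.

```python
# Minimal sketch: unify heterogeneous NLG examples into one text-to-text format.
# Task names, field names, and instruction wording are hypothetical placeholders,
# not the actual MVPCorpus schema.

TASK_INSTRUCTIONS = {
    "summarization": "Summarize the following document: ",
    "data_to_text": "Describe the following data in natural language: ",
    "question_answering": "Answer the question given the context: ",
}

def to_text_to_text(task: str, example: dict) -> tuple[str, str]:
    """Map a task-specific example to a (source, target) text pair."""
    source = TASK_INSTRUCTIONS[task] + example["input"]
    target = example["output"]
    return source, target

# Toy usage with a made-up summarization instance.
src, tgt = to_text_to_text(
    "summarization",
    {"input": "The cat sat on the mat all day.", "output": "A cat rested on a mat."},
)
print(src)
print(tgt)
```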

    Rethinking 1D-CNN for Time Series Classification: A Stronger Baseline

    For time series classification with a 1D-CNN, the choice of kernel size is critically important for the model to capture salient signals at the right scale from a long time series. Most existing work on 1D-CNNs treats the kernel size as a hyper-parameter and tries to find a proper kernel size through grid search, which is time-consuming and inefficient. This paper theoretically analyses how kernel size impacts the performance of a 1D-CNN. Given the importance of kernel size, we propose a novel Omni-Scale 1D-CNN (OS-CNN) architecture that captures the proper kernel sizes during model learning. We develop a specific kernel-size configuration scheme that allows a very small set of kernel-size options to cover a wide range of receptive fields. The proposed OS-CNN method is evaluated on the UCR archive of 85 datasets. The experimental results demonstrate that our method is a stronger baseline on multiple performance indicators, including the critical difference diagram, counts of wins, and average accuracy. We have also published the experimental source code on GitHub (https://github.com/Wensi-Tang/OS-CNN/).
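    To make the multi-scale kernel idea concrete, here is a minimal PyTorch sketch of a 1D convolutional block that applies several kernel sizes in parallel and concatenates the resulting feature maps. It illustrates only the general "cover many receptive fields" idea and is not the authors' OS-CNN implementation (see their GitHub repository for that); the kernel-size set and channel counts are arbitrary examples.

```python
import torch
import torch.nn as nn

class MultiScaleConv1dBlock(nn.Module):
    """Apply several 1D convolutions with different kernel sizes in parallel
    and concatenate their outputs along the channel dimension.

    Generic multi-scale block for illustration only; the actual OS-CNN uses a
    principled scheme for choosing the kernel-size set.
    """

    def __init__(self, in_channels: int, out_channels: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(in_channels, out_channels, kernel_size=k, padding=k // 2)
             for k in kernel_sizes]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); odd kernel sizes with padding=k//2
        # preserve the time dimension in every branch.
        return torch.cat([branch(x) for branch in self.branches], dim=1)

# Example: a batch of 8 univariate time series of length 128.
x = torch.randn(8, 1, 128)
block = MultiScaleConv1dBlock(in_channels=1, out_channels=16)
print(block(x).shape)  # torch.Size([8, 48, 128])
```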

    BAMBOO: A Comprehensive Benchmark for Evaluating Long Text Modeling Capacities of Large Language Models

    Large language models (LLMs) have achieved impressive proficiency on NLP tasks of normal length. Recently, multiple studies have worked on extending the context length and enhancing the long-text modeling capabilities of LLMs. To comprehensively evaluate the long-context ability of LLMs, we propose BAMBOO, a multi-task long-context benchmark. BAMBOO is designed around four principles: comprehensive capacity evaluation, avoidance of data contamination, accurate automatic evaluation, and different length levels. It consists of 10 datasets from 5 different long-text understanding tasks, i.e., question answering, hallucination detection, text sorting, language modeling, and code completion, covering the core capacities and various domains of LLMs. We conduct experiments with five long-context models on BAMBOO and further discuss four key research questions about long text. We also qualitatively analyze current long-context models and point out future directions for enhancing long-text modeling capacities. We release our data, prompts, and code at https://github.com/RUCAIBox/BAMBOO.
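    As a rough sketch of how a long-context benchmark of this kind might be run, the snippet below iterates over a JSONL file of (context, question, answer) records, builds a prompt, and scores exact-match accuracy for a question-answering-style task. The file layout, record fields, and `generate_answer` model call are hypothetical placeholders, not BAMBOO's actual data format or evaluation code.

```python
import json

def generate_answer(prompt: str) -> str:
    """Placeholder for a call to a long-context LLM; replace with a real model."""
    raise NotImplementedError

def evaluate_exact_match(path: str) -> float:
    """Score exact-match accuracy over a JSONL file of QA-style records.

    Each line is assumed to hold {"context": ..., "question": ..., "answer": ...};
    this schema is a hypothetical example, not BAMBOO's actual format.
    """
    correct, total = 0, 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            prompt = f"{record['context']}\n\nQuestion: {record['question']}\nAnswer:"
            prediction = generate_answer(prompt)
            correct += int(prediction.strip() == record["answer"].strip())
            total += 1
    return correct / max(total, 1)
```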