
    Involvement of HAND1 and CBS in maintenance of cardiac micro-architecture following obesity-induced heart failure

    Purpose: To study the role of heart and neural crest derivatives expressed 1 (HAND1) and cystathionine-beta-synthase (CBS) in the maintenance of cardiac architecture following high fat diet-induced obesity. Methods: Mouse models of initial and critical heart disease were established by continuous feeding of a high fat diet for 7 and 12 months, respectively. The expression of HAND1 and CBS was assayed using immunohistochemistry and Western blotting. Results: Obesity led to mild and severe forms of heart disease, which were confirmed by histological imaging. Initial obesity resulted in cardiac tissue remodeling along with early degeneration, while critical obesity resulted in tissue hardening. HAND1 expression was upregulated 4.3-fold in the mild form of cardiac failure, relative to its marginal expression in control tissue. However, as the disease progressed, HAND1 expression became limited in the severe form of cardiac failure. Moreover, CBS expression was upregulated 3.7-fold in the initial form of heart failure, but was subsequently reduced in the severe form of heart disease. Conclusion: These results reveal that in high fat diet-induced cardiac stress, the over-expression of HAND1 and CBS at the initial stages induces extensive alterations in cardiac architecture.

    A Liver-Enriched Long Non-Coding RNA, lncLSTR, Regulates Systemic Lipid Metabolism in Mice

    Long non-coding RNAs (lncRNAs) constitute a significant portion of the mammalian genome, yet the physiological importance of lncRNAs is largely unknown. Here, we identify a liver-enriched lncRNA in mouse that we term liver-specific triglyceride regulator (lncLSTR). Mice with a liver-specific depletion of lncLSTR exhibit a marked reduction in plasma triglyceride levels. We show that lncLSTR depletion enhances apoC2 expression, leading to robust lipoprotein lipase activation and increased plasma triglyceride clearance. We further demonstrate that the regulation of apoC2 expression occurs through an FXR-mediated pathway. LncLSTR forms a molecular complex with TDP-43 to regulate expression of Cyp8b1, a key enzyme in the bile acid synthesis pathway, and engenders an in vivo bile pool that induces apoC2 expression through FXR. Finally, we demonstrate that lncLSTR depletion can reduce triglyceride levels in a hyperlipidemia mouse model. Taken together, these data support a model in which lncLSTR regulates a TDP-43/FXR/apoC2-dependent pathway to maintain systemic lipid homeostasis.

    DQ-LoRe: Dual Queries with Low Rank Approximation Re-ranking for In-Context Learning

    Recent advances in natural language processing, primarily propelled by Large Language Models (LLMs), have showcased their remarkable capabilities grounded in in-context learning. A promising avenue for guiding LLMs in intricate reasoning tasks involves the utilization of intermediate reasoning steps within the Chain-of-Thought (CoT) paradigm. Nevertheless, the central challenge lies in the effective selection of exemplars for facilitating in-context learning. In this study, we introduce a framework that leverages Dual Queries and Low-rank approximation Re-ranking (DQ-LoRe) to automatically select exemplars for in-context learning. The dual queries first query the LLM to obtain LLM-generated knowledge such as a CoT, then query the retriever to obtain the final exemplars via both the question and the knowledge. Moreover, for the second query, LoRe employs dimensionality reduction techniques to refine exemplar selection, ensuring close alignment with the input question's knowledge. Through extensive experiments, we demonstrate that DQ-LoRe significantly outperforms prior state-of-the-art methods in the automatic selection of exemplars for GPT-4, enhancing performance from 92.5% to 94.2%. Our comprehensive analysis further reveals that DQ-LoRe consistently outperforms retrieval-based approaches in terms of both performance and adaptability, especially in scenarios characterized by distribution shifts. DQ-LoRe pushes the boundary of in-context learning and opens up new avenues for addressing complex reasoning challenges. Our code is released at https://github.com/AI4fun/DQ-LoRe. Comment: Accepted at ICLR 2024.
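
    To make the two-stage selection concrete, the Python sketch below shows one way the dual-query retrieval and low-rank re-ranking could be wired together. The callables llm_generate_cot and embed (returning a 1-D NumPy vector), and the use of PCA as the dimensionality-reduction step, are illustrative assumptions, not the released implementation.

    import numpy as np
    from sklearn.decomposition import PCA

    def select_exemplars(question, exemplar_pool, llm_generate_cot, embed,
                         n_candidates=32, k=8, n_components=16):
        # First query: ask the LLM for intermediate knowledge (a CoT draft).
        cot = llm_generate_cot(question)

        # Second query: retrieve candidates using both the question and the CoT.
        query_vec = embed(question + "\n" + cot)
        pool_vecs = np.stack([embed(ex["question"] + "\n" + ex["cot"])
                              for ex in exemplar_pool])
        sims = pool_vecs @ query_vec / (
            np.linalg.norm(pool_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-8)
        cand_idx = np.argsort(-sims)[:n_candidates]

        # LoRe: project candidate embeddings into a low-rank space and re-rank
        # there, so selection aligns with the dominant directions of the knowledge.
        pca = PCA(n_components=min(n_components, len(cand_idx)))
        reduced = pca.fit_transform(np.vstack([pool_vecs[cand_idx], query_vec[None]]))
        cand_low, query_low = reduced[:-1], reduced[-1]
        rerank = np.argsort(-(cand_low @ query_low))[:k]
        return [exemplar_pool[cand_idx[i]] for i in rerank]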

    LEGO-Prover: Neural Theorem Proving with Growing Libraries

    Despite the success of large language models (LLMs), theorem proving remains one of the hardest reasoning tasks and is far from being fully solved. Prior methods using language models have demonstrated promising results, but they still struggle to prove even middle-school-level theorems. One common limitation of these methods is that they assume a fixed theorem library during the whole theorem-proving process. However, creating new useful theorems, or even new theories, is not only helpful but crucial for advancing mathematics and proving harder and deeper results. In this work, we present LEGO-Prover, which employs a growing skill library containing verified lemmas as skills to augment the capability of LLMs used in theorem proving. By constructing the proof modularly, LEGO-Prover enables LLMs to utilize existing skills retrieved from the library and to create new skills during the proving process. These skills are further evolved (by prompting an LLM) to enrich the library on another scale. Modular and reusable skills are constantly added to the library to enable tackling increasingly intricate mathematical problems. Moreover, the learned library further bridges the gap between human proofs and formal proofs by making it easier to impute missing steps. LEGO-Prover advances the state-of-the-art pass rate on miniF2F-valid (48.0% to 57.0%) and miniF2F-test (45.5% to 47.1%). During the proving process, LEGO-Prover also manages to generate over 20,000 skills (theorems/lemmas) and adds them to the growing library. Our ablation study indicates that these newly added skills are indeed helpful for proving theorems, improving the success rate from 47.1% to 50.4%. We also release our code and all the generated skills.
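
    As a rough illustration of the growing-library idea, the Python sketch below maintains a library of verified lemmas, retrieves the most similar ones for a target statement, and adds back only lemmas that pass verification. The prover_llm, embed, and verify interfaces are assumed placeholders, not the paper's actual components.

    import numpy as np

    class SkillLibrary:
        def __init__(self, embed):
            self.embed = embed        # maps a statement string to a 1-D vector
            self.skills = []          # verified lemmas: {"statement", "proof"}
            self.vectors = []

        def add(self, skill):
            self.skills.append(skill)
            self.vectors.append(self.embed(skill["statement"]))

        def retrieve(self, statement, k=5):
            if not self.skills:
                return []
            q = self.embed(statement)
            sims = np.array([v @ q for v in self.vectors])
            return [self.skills[i] for i in np.argsort(-sims)[:k]]

    def prove(statement, library, prover_llm, verify):
        # Retrieve existing skills and ask the model for a modular proof that
        # may introduce new helper lemmas.
        skills = library.retrieve(statement)
        proof, new_lemmas = prover_llm(statement, skills)
        if not verify(statement, proof):
            return None
        # Only verified lemmas grow the library, so it stays sound over time.
        for lemma in new_lemmas:
            if verify(lemma["statement"], lemma["proof"]):
                library.add(lemma)
        return proof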

    TRIGO: Benchmarking Formal Mathematical Proof Reduction for Generative Language Models

    Automated theorem proving (ATP) has become an appealing domain for exploring the reasoning ability of the recent successful generative language models. However, current ATP benchmarks focus mainly on symbolic inference and rarely involve the understanding of complex number combination reasoning. In this work, we propose TRIGO, an ATP benchmark that not only requires a model to reduce a trigonometric expression with step-by-step proofs but also evaluates a generative LM's reasoning ability on formulas and its capability to manipulate, group, and factor number terms. We gather trigonometric expressions and their reduced forms from the web, annotate the simplification process manually, and translate it into the Lean formal language system. We then automatically generate additional examples from the annotated samples to expand the dataset. Furthermore, we develop an automatic generator based on Lean-Gym to create dataset splits of varying difficulties and distributions in order to thoroughly analyze the model's generalization ability. Our extensive experiments show that TRIGO poses a new challenge for advanced generative LMs, including GPT-4, which is pre-trained on a considerable amount of open-source formal theorem-proving language data, and provide a new tool to study generative LMs' abilities in both formal and mathematical reasoning. Comment: Accepted by EMNLP 2023. Code is available at https://github.com/menik1126/TRIG
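
    For intuition about the data-generation step, the sketch below uses SymPy to produce trigonometric reduction pairs whose difficulty is controlled by expression depth. The real TRIGO pipeline targets Lean via Lean-Gym; this Python stand-in is only an assumed approximation of the idea.

    import random
    import sympy as sp

    x = sp.symbols('x')
    ATOMS = [sp.sin(x), sp.cos(x), sp.tan(x)]

    def random_trig_expr(depth):
        # Difficulty is controlled by the nesting depth of the expression tree.
        if depth == 0:
            return random.choice(ATOMS) ** random.choice([1, 2])
        op = random.choice([sp.Add, sp.Mul])
        return op(random_trig_expr(depth - 1), random_trig_expr(depth - 1))

    def make_example(depth):
        expr = random_trig_expr(depth)
        reduced = sp.trigsimp(expr)
        return {"input": sp.srepr(expr), "target": sp.srepr(reduced), "depth": depth}

    # Build splits of increasing difficulty.
    splits = {d: [make_example(d) for _ in range(100)] for d in (1, 2, 3)}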

    Xin-Li-Fang efficacy and safety for patients with chronic heart failure: A study protocol for a randomized, double-blind, and placebo-controlled trial

    Introduction: Xin-Li-Fang (XLF), a representative Chinese patent medicine, was derived from years of clinical experience by academician Chen Keji and is widely used to treat chronic heart failure (CHF). However, there remains a lack of high-quality evidence to support clinical decision-making. Therefore, we designed a randomized controlled trial (RCT) to evaluate the efficacy and safety of XLF for CHF. Methods and design: This multicenter, double-blinded RCT will be conducted in China. 300 eligible participants will be randomly assigned to either an XLF group or a control group at a 1:1 ratio. Participants in the XLF group will receive XLF granules plus routine care, while those in the control group will receive placebo granules plus routine care. The study period is 26 weeks, including a 2-week run-in period, a 12-week treatment period, and a 12-week follow-up. The primary outcome is the proportion of patients whose serum NT-proBNP decreases by more than 30%. The secondary outcomes include quality of life, NYHA classification, the 6-min walking test, TCM symptom evaluations, echocardiography parameters, and clinical events (including hospitalization for worsening heart failure, all-cause death, and other major cardiovascular events). Discussion: The results of the study are expected to provide evidence of high methodological and reporting quality on the efficacy and safety of XLF for CHF. Clinical trial registration: Chinese Clinical Trial Registration Center (www.chictr.org.cn). The trial was registered on 13 April 2022 (ChiCTR2200058649).
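
    As a simple illustration of the primary endpoint described above, the Python sketch below computes the proportion of responders (patients whose NT-proBNP fell by more than 30%) in each arm and compares the arms with a chi-square test. The variable names and the choice of test are assumptions for illustration, not part of the protocol's statistical analysis plan.

    import numpy as np
    from scipy.stats import chi2_contingency

    def responder(baseline, followup, threshold=0.30):
        # A patient counts as a responder if NT-proBNP fell by more than 30%.
        return (baseline - followup) / baseline > threshold

    def compare_arms(xlf_baseline, xlf_followup, ctrl_baseline, ctrl_followup):
        xlf_resp = responder(np.asarray(xlf_baseline), np.asarray(xlf_followup))
        ctrl_resp = responder(np.asarray(ctrl_baseline), np.asarray(ctrl_followup))
        # 2x2 table of responders vs non-responders by arm.
        table = [[xlf_resp.sum(), len(xlf_resp) - xlf_resp.sum()],
                 [ctrl_resp.sum(), len(ctrl_resp) - ctrl_resp.sum()]]
        chi2, p, _, _ = chi2_contingency(table)
        return xlf_resp.mean(), ctrl_resp.mean(), p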

    A MAFG-lncRNA axis links systemic nutrient abundance to hepatic glucose metabolism

    Obesity and type 2 diabetes mellitus are global emergencies and long noncoding RNAs (lncRNAs) are regulatory transcripts with elusive functions in metabolism. Here we show that a high fraction of lncRNAs, but not protein-coding mRNAs, are repressed during diet-induced obesity (DIO) and refeeding, whilst nutrient deprivation induces lncRNAs in mouse liver. Similarly, lncRNAs are lost in diabetic humans. LncRNA promoter analyses, global cistrome and gain-of-function analyses confirm that increased MAFG signaling during DIO curbs lncRNA expression. Silencing Mafg in mouse hepatocytes and obese mice elicits a fasting-like gene expression profile, improves glucose metabolism, de-represses lncRNAs and impairs mammalian target of rapamycin (mTOR) activation. We find that obesity-repressed LincIRS2 is controlled by MAFG and observe that genetic and RNAi-mediated LincIRS2 loss causes elevated blood glucose, insulin resistance and aberrant glucose output in lean mice. Taken together, we identify a MAFG-lncRNA axis controlling hepatic glucose metabolism in health and metabolic disease.
