2,322 research outputs found

    Molecular Mechanisms Behind the Chemopreventive Effects of Anthocyanidins

    Anthocyanins are polyphenolic ring-based flavonoids that are widespread in red-to-blue colored fruits and vegetables. Epidemiological investigations and animal experiments indicate that anthocyanins may contribute to cancer chemoprevention, and studies of the underlying mechanisms have recently been carried out at the molecular level. This review summarizes the current molecular basis for the action of anthocyanidins on several key steps in cancer chemoprevention: (i) inhibition of cell transformation by targeting the mitogen-activated protein kinase (MAPK) pathway and the activator protein 1 (AP-1) factor; (ii) suppression of inflammation and carcinogenesis by targeting the nuclear factor kappa B (NF-κB) pathway and the cyclooxygenase 2 (COX-2) gene; (iii) induction of apoptosis in cancer cells through reactive oxygen species (ROS)/c-Jun NH(2)-terminal kinase (JNK)-mediated caspase activation. These data provide a first molecular view of how anthocyanidins contribute to cancer chemoprevention.

    Anti‐inflammatory effects and molecular mechanisms of 8‐prenyl quercetin

    Scope: 8-prenyl quercetin (PQ) is a typical prenylflavonoid found in plant foods and shows higher potential bioactivity than its parent compound quercetin (Q), although the mechanisms are not fully understood. This study aims to clarify the anti-inflammatory effects and molecular mechanisms of PQ in cell and animal models in comparison with Q. Methods and results: RAW264.7 cells were treated with PQ or Q, and the effects on the production of inducible nitric oxide synthase (iNOS), cyclooxygenase-2 (COX-2), and protein kinases were investigated by Western blotting. Nitric oxide (NO) and prostaglandin E2 (PGE2) were measured by the Griess method and ELISA, respectively. Cytokines were assayed by multiplex technology. Mouse paw edema was induced by lipopolysaccharide (LPS). The results revealed that PQ inhibited the production of iNOS, COX-2, NO, PGE2, and 12 cytokines more strongly than Q. PQ also showed an in vivo anti-inflammatory effect by attenuating mouse paw edema. Molecular data revealed that PQ did not compete with LPS for binding to Toll-like receptor 4 (TLR4) but directly targeted SEK1-JNK1/2 and MEK1-ERK1/2. Conclusion: PQ exerted anti-inflammatory effects in both cell and animal models, at least in part by targeting SEK1-JNK1/2 and MEK1-ERK1/2.

    DOMAIN: MilDly COnservative Model-BAsed OfflINe Reinforcement Learning

    Model-based reinforcement learning (RL), which learns an environment model from an offline dataset and uses it to generate additional out-of-distribution data, has become an effective approach to the distribution-shift problem in offline RL. Because of the gap between the learned and actual environments, conservatism must be incorporated into the algorithm to balance accurate offline data against imprecise model data. The conservatism of current algorithms mostly relies on model uncertainty estimation. However, uncertainty estimation is unreliable and leads to poor performance in certain scenarios, and previous methods ignore differences among the model data, which results in excessive conservatism. Therefore, this paper proposes a milDly cOnservative Model-bAsed offlINe RL algorithm (DOMAIN) that addresses these issues without estimating model uncertainty. DOMAIN introduces an adaptive sampling distribution over model samples, which allows the penalty on model data to be adjusted adaptively. We theoretically demonstrate that the Q value learned by DOMAIN outside the data region is a lower bound of the true Q value, that DOMAIN is less conservative than previous model-based offline RL algorithms, and that it guarantees safe policy improvement. Extensive experiments show that DOMAIN outperforms prior RL algorithms on the D4RL benchmark and achieves better performance than other RL algorithms on tasks that require generalization. Comment: 13 pages, 6 figures
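The adaptive per-sample penalty the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name `penalized_targets` and the specific form of the penalty (proportional to each model sample's normalized weight under the adaptive sampling distribution) are assumptions made for clarity.

```python
import numpy as np

def penalized_targets(rewards, next_q, sample_weights, gamma=0.99, penalty_scale=1.0):
    """Bellman targets for model-generated transitions with a per-sample penalty.

    sample_weights plays the role of an adaptive sampling distribution over
    model samples: samples weighted more heavily receive a larger reward
    penalty, so Q values learned from model data are pushed toward a lower
    bound without any explicit model-uncertainty estimate.
    """
    rewards = np.asarray(rewards, dtype=float)
    next_q = np.asarray(next_q, dtype=float)
    w = np.asarray(sample_weights, dtype=float)
    penalties = penalty_scale * w / w.sum()  # adaptive, data-dependent penalty
    return rewards - penalties + gamma * next_q
```

For example, two model samples with weights 1.0 and 3.0 receive penalties of 0.25 and 0.75 respectively, so the more heavily penalized transition contributes a lower learning target.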

    The density of macrophages in the invasive front is inversely correlated to liver metastasis in colon cancer

    Background: Although an abundance of evidence has indicated that tumor-associated macrophages (TAMs) are associated with a favorable prognosis in patients with colon cancer, it is still unknown how TAMs exert a protective effect. This study examined whether TAMs are involved in hepatic metastasis of colon cancer. Materials and methods: One hundred and sixty pathologically confirmed specimens were obtained from colon carcinoma patients with TNM stage IIIB and IV disease between January 1997 and July 2004 at the Cancer Center of Sun Yat-Sen University. The density of macrophages in the invasive front (CD68TF_Hotspot) was scored with an immunohistochemical assay. The relationships between CD68TF_Hotspot and the clinicopathologic parameters, the potential for hepatic metastasis, and the 5-year survival rate were analyzed. Results: TAMs were associated with the incidence of hepatic metastasis and the 5-year survival rate in patients with colon cancer. Both univariate and multivariate analyses revealed that CD68TF_Hotspot was independently prognostic of survival. Among stage IIIB patients after radical resection, the 5-year survival rate was higher in those with higher macrophage infiltration in the invasive front (81.0%) than in those with lower macrophage infiltration (48.6%). Most importantly, CD68TF_Hotspot was associated with both the potential for hepatic metastasis and the interval between colon resection and the occurrence of hepatic metastasis. Conclusion: This study provides evidence that TAMs infiltrating the invasive front are associated with reduced hepatic metastasis and improved overall survival in colon cancer, implying that TAMs have protective potential and might serve as a novel therapeutic target.

    CROP: Conservative Reward for Model-based Offline Policy Optimization

    Offline reinforcement learning (RL) aims to optimize a policy using collected data without online interaction. Model-based approaches are particularly appealing for addressing offline RL challenges because they can mitigate the limitations of offline data by generating data with a learned model. Prior research has demonstrated that introducing conservatism into the model or the Q-function during policy optimization can effectively alleviate the prevalent distribution-drift problem in offline RL. However, the impact of conservatism in reward estimation has not been investigated. This paper proposes a novel model-based offline RL algorithm, Conservative Reward for model-based Offline Policy optimization (CROP), which conservatively estimates the reward during model training. To achieve a conservative reward estimate, CROP simultaneously minimizes the estimation error and the reward of random actions. Theoretical analysis shows that this conservative reward mechanism leads to conservative policy evaluation and helps mitigate distribution drift. Experiments on D4RL benchmarks show that the performance of CROP is comparable to state-of-the-art baselines. Notably, CROP establishes a novel connection between offline and online RL, highlighting that offline RL problems can be tackled by applying online RL techniques to an empirical Markov decision process trained with a conservative reward. The source code is available at https://github.com/G0K0URURI/CROP.git
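The two-term objective the abstract describes (minimize the reward model's estimation error while also minimizing the predicted reward of random actions) can be sketched as a simple loss. This is an illustrative sketch only: the squared-error fit, the function name, and the trade-off coefficient `beta` are assumptions, not details taken from the paper.

```python
import numpy as np

def conservative_reward_loss(pred_dataset_r, true_dataset_r, pred_random_r, beta=0.5):
    """Sketch of a CROP-style conservative reward objective.

    Term (i): squared estimation error of the reward model on dataset actions.
    Term (ii): mean predicted reward of random (out-of-distribution) actions,
    which the loss pushes down. Together they yield a reward model that is
    accurate in-distribution and pessimistic out-of-distribution.
    """
    pred_dataset_r = np.asarray(pred_dataset_r, dtype=float)
    true_dataset_r = np.asarray(true_dataset_r, dtype=float)
    pred_random_r = np.asarray(pred_random_r, dtype=float)
    estimation_error = np.mean((pred_dataset_r - true_dataset_r) ** 2)
    random_action_reward = np.mean(pred_random_r)
    return estimation_error + beta * random_action_reward
```

Minimizing this loss with respect to the reward model's parameters trades off fit on the dataset against pessimism on unseen actions, controlled by `beta`.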