7 research outputs found

    Antiinflammatory Properties of a Plant-Derived Nonsteroidal, Dissociated Glucocorticoid Receptor Modulator in Experimental Autoimmune Encephalomyelitis

    No full text
    Compound A (CpdA), a plant-derived phenyl aziridine precursor, was recently characterized as a fully dissociated nonsteroidal antiinflammatory agent, acting via activation of the glucocorticoid receptor, thereby down-modulating nuclear factor-κB-mediated transactivation but not supporting glucocorticoid response element-driven gene expression. The present study demonstrates the effectiveness of CpdA in inhibiting disease progression in experimental autoimmune encephalomyelitis (EAE), a well-characterized animal model of multiple sclerosis. CpdA treatment of mice, both early and at the peak of the disease, markedly suppressed the clinical symptoms of EAE induced by myelin oligodendrocyte glycoprotein peptide immunization. Attenuation of the clinical symptoms of EAE by CpdA was accompanied by reduced leukocyte infiltration in the spinal cord, reduced expression of inflammatory cytokines and chemokines, and reduced neuronal damage and demyelination. In vivo CpdA therapy suppressed the encephalitogenicity of myelin oligodendrocyte glycoprotein peptide-specific T cells. Moreover, CpdA was able to inhibit TNF-α- and lipopolysaccharide-induced nuclear factor-κB activation in primary microglial cells in vitro, in a mechanistically distinct manner compared with dexamethasone. Finally, in EAE mice the therapeutic effect of CpdA, in contrast to that of dexamethasone, occurred in the absence of hyperinsulinemia and without suppression of the hypothalamic-pituitary-adrenal axis. Based on these results, we propose CpdA as a compound with promising antiinflammatory characteristics, useful for therapeutic intervention in multiple sclerosis and other neuroinflammatory diseases.

    The EADC-ADNI Harmonized Protocol for manual hippocampal segmentation on magnetic resonance: Evidence of validity

    No full text
    Background: An international Delphi panel has defined a harmonized protocol (HarP) for the manual segmentation of the hippocampus on MR. The aim of this study is to assess the concurrent validity of the HarP relative to local protocols and to identify its major sources of variance. Methods: Fourteen tracers segmented 10 Alzheimer's Disease Neuroimaging Initiative (ADNI) cases scanned at 1.5T and 3T following their local protocols, qualified for segmentation based on the HarP through a standard web platform, and then resegmented the cases following the HarP. The five most accurate tracers followed the HarP to segment 15 ADNI cases acquired at three time points at both 1.5T and 3T. Results: Agreement among tracers was relatively low with the local protocols (absolute left/right ICC 0.44/0.43) and much higher with the HarP (absolute left/right ICC 0.88/0.89). On the larger set of 15 cases, HarP agreement within tracers (left/right ICC range: 0.94/0.95 to 0.99/0.99) and among tracers (left/right ICC 0.89/0.90) was very high. The volume variance due to different tracers was 0.9% of the total, comparing favorably with the variance due to scanner manufacturer (1.2%), atrophy rates (3.5%), hemispheric asymmetry (3.7%), and field strength (4.4%), and significantly smaller than the variance due to atrophy (33.5%, P < .001) and physiological variability (49.2%, P < .001). Conclusions: The HarP has high measurement stability compared with local segmentation protocols and good reproducibility within and among human tracers. Hippocampi segmented with the HarP can be used as a reference for the qualification of human tracers and automated segmentation algorithms.
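
    The agreement figures above are absolute-agreement intraclass correlation coefficients. As a purely illustrative sketch, not part of the study's pipeline, the Python snippet below computes a two-way random-effects, absolute-agreement ICC (ICC(2,1) in the Shrout and Fleiss convention) from a hypothetical cases-by-tracers matrix of hippocampal volumes; the volumes are made-up numbers and serve only to show how such an agreement statistic can be obtained.

    import numpy as np

    def icc_absolute_agreement(volumes: np.ndarray) -> float:
        """ICC(2,1): two-way random effects, single rater, absolute agreement."""
        n, k = volumes.shape                      # n cases, k tracers
        grand = volumes.mean()
        row_means = volumes.mean(axis=1)          # per-case means
        col_means = volumes.mean(axis=0)          # per-tracer means

        # Two-way ANOVA sums of squares (no replication).
        ss_rows = k * ((row_means - grand) ** 2).sum()
        ss_cols = n * ((col_means - grand) ** 2).sum()
        ss_total = ((volumes - grand) ** 2).sum()
        ss_err = ss_total - ss_rows - ss_cols

        ms_rows = ss_rows / (n - 1)
        ms_cols = ss_cols / (k - 1)
        ms_err = ss_err / ((n - 1) * (k - 1))

        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
        )

    # Hypothetical left-hippocampus volumes (mm^3), 5 cases x 3 tracers.
    volumes = np.array([
        [3100.0, 3150.0, 3080.0],
        [2600.0, 2580.0, 2650.0],
        [3400.0, 3390.0, 3420.0],
        [2900.0, 2950.0, 2870.0],
        [3200.0, 3180.0, 3230.0],
    ])
    print(f"ICC(2,1) = {icc_absolute_agreement(volumes):.2f}")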

    Beyond the imitation game: Quantifying and extrapolating the capabilities of language models

    No full text
    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.

    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

    Get PDF
    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting. Comment: 27 pages, 17 figures + references and appendices, repo: https://github.com/google/BIG-bench
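
    One common way to evaluate a language model on a multiple-choice benchmark task is to compare the likelihood the model assigns to each candidate answer given the question. The sketch below is a hypothetical illustration of that approach using a small causal model from the Hugging Face transformers library; it is not the BIG-bench evaluation harness, and the question, options, and the choice of "gpt2" are stand-ins for illustration only.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def option_log_likelihood(prompt: str, option: str) -> float:
        """Sum of log-probabilities of the option tokens, conditioned on the prompt."""
        prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
        full_ids = tokenizer(prompt + option, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(full_ids).logits
        # Log-probabilities of each next token, predicted from the previous positions.
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
        targets = full_ids[0, 1:]
        token_scores = log_probs[torch.arange(targets.shape[0]), targets]
        # Keep only positions corresponding to the option tokens
        # (assumes the prompt tokenization is a prefix of the full tokenization).
        return token_scores[prompt_len - 1:].sum().item()

    question = "Q: Which planet is closest to the Sun?\nA:"
    options = [" Mercury", " Venus", " Mars"]
    scores = {opt.strip(): option_log_likelihood(question, opt) for opt in options}
    print(scores)
    print("model picks:", max(scores, key=scores.get))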