CSF1R inhibitor JNJ-40346527 attenuates microglial proliferation and neurodegeneration in P301S mice
Neuroinflammation and microglial activation are significant processes in Alzheimer's disease pathology. Recent genome-wide association studies have highlighted multiple immune-related genes in association with Alzheimer's disease, and experimental data have demonstrated microglial proliferation as a significant component of the neuropathology. In this study, we tested the efficacy of the selective CSF1R inhibitor JNJ-40346527 (JNJ-527) in the P301S mouse tauopathy model. We first demonstrated the anti-proliferative effects of JNJ-527 on microglia in the ME7 prion model and its impact on the inflammatory profile, and provided potential CNS biomarkers for clinical investigation with the compound, including pharmacokinetic/pharmacodynamic relationships and efficacy assessment by TSPO autoradiography and CSF proteomics. We then showed for the first time that blockade of microglial proliferation and modification of microglial phenotype lead to an attenuation of tau-induced neurodegeneration and result in functional improvement in P301S mice. Overall, this work strongly supports inhibition of CSF1R as a target for the treatment of Alzheimer's disease and other tau-mediated neurodegenerative diseases.
Inflammatory biomarkers in Alzheimer's disease plasma
Introduction: Plasma biomarkers for Alzheimer's disease (AD) diagnosis/stratification are a "Holy Grail" of AD research and intensively sought; however, there are no well-established plasma markers. Methods: A hypothesis-led plasma biomarker search was conducted in the context of international multicenter studies. The discovery phase measured 53 inflammatory proteins in elderly control (CTL; n = 259), mild cognitive impairment (MCI; n = 199), and AD (n = 262) subjects from AddNeuroMed. Results: Ten analytes showed significant intergroup differences. Logistic regression identified five (FB, FH, sCR1, MCP-1, eotaxin-1) that, adjusted for age and APOE ε4, optimally differentiated AD and CTL (AUC: 0.79), and three (sCR1, MCP-1, eotaxin-1) that optimally differentiated AD and MCI (AUC: 0.74). These models replicated in an independent cohort (EMIF; AUC: 0.81 and 0.67). Two analytes (FB, FH) plus age predicted MCI progression to AD (AUC: 0.71). Discussion: Plasma markers of inflammation and complement dysregulation support diagnosis and outcome prediction in AD and MCI. Further replication is needed before clinical translation.
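A minimal sketch of the kind of covariate-adjusted logistic regression classifier the abstract describes, using scikit-learn. The analyte names follow the abstract; the CSV file, column names, and train/test split are hypothetical stand-ins, not the study's actual pipeline.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical discovery-cohort file; column names are assumptions.
df = pd.read_csv("plasma_biomarkers.csv")

# Five analytes reported to differentiate AD vs CTL, plus the
# age and APOE e4 covariates used for adjustment.
features = ["FB", "FH", "sCR1", "MCP1", "eotaxin1", "age", "apoe_e4"]
X = df[features]
y = (df["diagnosis"] == "AD").astype(int)  # AD = 1, CTL = 0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AD vs CTL AUC: {auc:.2f}")  # abstract reports 0.79 in discovery
```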
The Scientific Foundations of Forecasting Magnetospheric Space Weather
The magnetosphere is the lens through which solar space weather phenomena are focused and directed towards the Earth. In particular, the non-linear interaction of the solar wind with the Earth's magnetic field leads to the formation of highly inhomogeneous electrical currents in the ionosphere, which can ultimately damage and disrupt the operation of power distribution networks. Since electric power is the fundamental cornerstone of modern life, the interruption of power is the primary pathway by which space weather affects human activity and technology. Consequently, in the context of space weather, it is the ability to predict geomagnetic activity that is of key importance. This is usually stated in terms of geomagnetic storms, but we argue that it is in fact the substorm phenomenon that contains the crucial physics; prediction of substorm occurrence, severity, and duration, whether within the context of a longer-lasting geomagnetic storm or as an isolated event, is therefore of critical importance. Here we review the physics of the magnetosphere in the frame of space weather forecasting, focusing on recent results, current understanding, and an assessment of probable future developments.
'Toxic' and 'Nontoxic': confirming critical terminology concepts and context for clear communication
If 'the dose makes the poison', and if the context of an exposure to a hazard shapes the risk as much as the innate character of the hazard itself, then what is 'toxic' and what is 'nontoxic'? This article is intended to help readers and communicators: anticipate that concepts such as 'toxic' and 'nontoxic' may have different meanings to different stakeholders in different contexts of general use, commerce, science, and the law; recognize specific situations in which terms and related information could potentially be misperceived or misinterpreted; evaluate the relevance, reliability, and other attributes of information for a given situation; control actions, assumptions, interpretations, conclusions, and decisions to avoid flaws and achieve a desired outcome; and confirm that the desired outcome has been achieved. To meet those objectives, we provide: examples of differing toxicology terminology concepts and contexts; a comprehensive decision-making framework for understanding and managing risk; a communication and education message- and audience-planning matrix to support the involvement of all relevant stakeholders; a set of CLEAR-communication assessment criteria for use by both readers and communicators; example flaws in decision-making; a suite of three tools to assign relevance vs reliability, align know vs show, and refine perception vs reality aspects of information; and four steps to foster effective community involvement and support. The framework and supporting process are generally applicable to meeting any objective.
Application of an informatics-based decision-making framework and process to the assessment of radiation safety in nanotechnology.
The National Council on Radiation Protection and Measurements (NCRP) established NCRP Scientific Committee 2-6 to develop a report on the current state of knowledge and guidance for radiation safety programs involved with nanotechnology. Nanotechnology is the understanding and control of matter at the nanoscale, at dimensions between ∼1 and 100 nm, where unique phenomena enable novel applications. While the full report is in preparation, this paper presents and applies an informatics-based decision-making framework and process through which the radiation protection community can anticipate that nano-enabled applications, processes, nanomaterials, and nanoparticles are likely to become present or are already present in radiation-related activities; recognize specific situations where environmental and worker safety, health, well-being, and productivity may be affected by nano-related activities; evaluate how radiation protection practices may need to be altered to improve protection; control information, interpretations, assumptions, and conclusions to implement scientifically sound decisions and actions; and confirm that desired protection outcomes have been achieved. This generally applicable framework and supporting process can be continuously applied to achieve health and safety at the convergence of nanotechnology and radiation-related activities.
EEG Reactivity Evaluation Practices for Adult and Pediatric Hypoxic-Ischemic Coma Prognostication in North America
PURPOSE: The aim of this study was to assess the variability in EEG reactivity evaluation practices during cardiac arrest prognostication. METHODS: A survey of institutional representatives from North American academic hospitals participating in the Critical Care EEG Monitoring Research Consortium was conducted to assess practice patterns involving EEG reactivity evaluation. This 10-question multiple-choice survey evaluated metrics related to technical, interpretation, personnel, and procedural aspects of bedside EEG reactivity testing and interpretation specific to cardiac arrest prognostication. One response per hospital was obtained. RESULTS: Responses were received from 25 hospitals, including 7 pediatric hospitals. A standardized EEG reactivity protocol was available in 44% of centers. Sixty percent of respondents believed that reactivity interpretation was subjective. Bedside reactivity testing always (100%) started during hypothermia and was performed daily during monitoring in the majority (71%) of hospitals. Stimulation was performed primarily by neurodiagnostic technologists (76%). The mean number of activation procedure modalities tested was 4.5 (SD 2.1). The most commonly used activation procedures were auditory (83.3%), nail bed pressure (63%), and light tactile stimuli (63%). Changes in EEG amplitude alone were not considered consistent with EEG reactivity in 21% of centers. CONCLUSIONS: There is substantial variability in EEG reactivity evaluation practices during cardiac arrest prognostication among North American academic hospitals. Efforts are needed to standardize protocols and nomenclature in accordance with national guidelines and to promote best practices in EEG reactivity evaluation.
Quantitative Electroencephalogram Trends Predict Recovery in Hypoxic-Ischemic Encephalopathy
OBJECTIVES: Electroencephalogram features predict neurologic recovery following cardiac arrest. Recent work has shown that the prognostic implications of some key electroencephalogram features change over time. We explore whether time dependence exists for an expanded selection of quantitative electroencephalogram features and whether accounting for this time dependence enables better prognostic predictions. DESIGN: Retrospective. SETTING: ICUs at four academic medical centers in the United States. PATIENTS: Comatose patients with acute hypoxic-ischemic encephalopathy. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: We analyzed 12,397 hours of electroencephalogram from 438 subjects. From the electroencephalogram, we extracted 52 features that quantify signal complexity, category, and connectivity. We modeled associations between dichotomized neurologic outcome (good vs poor) and quantitative electroencephalogram features in 12-hour intervals using sequential logistic regression with Elastic Net regularization. We compared a predictive model using time-varying features to a model using time-invariant features and to models based on two previously published approaches. Models were evaluated for their ability to predict binary outcomes using area under the receiver operating characteristic curve, model calibration (how closely the predicted probability of good outcomes matches the observed proportion of good outcomes), and sensitivity at several common specificity thresholds of interest. A model using time-dependent features outperformed (area under the receiver operating characteristic curve, 0.83 ± 0.08) one trained with time-invariant features (0.79 ± 0.07; p < 0.05) and a random forest approach (0.74 ± 0.13; p < 0.05). The time-sensitive model was also the best calibrated. CONCLUSIONS: The statistical association between quantitative electroencephalogram features and neurologic outcome changed over time, and accounting for these changes improved prognostication performance.
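A minimal sketch of fitting a separate Elastic Net-regularized logistic regression per 12-hour interval, so that feature weights (and hence each feature's prognostic implication) can vary over time, as the abstract describes. The synthetic arrays, epoch count, and regularization settings are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_subjects, n_features, n_epochs = 438, 52, 4  # epoch count is illustrative

# Hypothetical stand-in data: (epochs, subjects, features) qEEG array
# and one binary good/poor outcome label per subject.
X = rng.normal(size=(n_epochs, n_subjects, n_features))
y = rng.integers(0, 2, size=n_subjects)

# One model per 12-hour interval lets the coefficients differ across
# epochs, capturing time-dependent feature-outcome associations.
for epoch in range(n_epochs):
    model = LogisticRegression(
        penalty="elasticnet", solver="saga", l1_ratio=0.5, C=1.0, max_iter=5000
    ).fit(X[epoch], y)
    auc = roc_auc_score(y, model.predict_proba(X[epoch])[:, 1])
    print(f"epoch {epoch}: in-sample AUC = {auc:.2f}")
```

With real data one would evaluate on held-out subjects rather than in-sample, as the reported 0.83 ± 0.08 AUC implies cross-validated estimates.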