Environmental Tobacco Smoke Exposure in Relation to Family Characteristics, Stressors and Chemical Co-Exposures in California Girls.
Childhood environmental tobacco smoke (ETS) exposure is a risk factor for adverse health outcomes and may disproportionately burden lower socioeconomic status groups, exacerbating health disparities. We explored associations of demographic factors, stressful life events, and chemical co-exposures with cotinine levels among girls in the CYGNET Study. Data were collected from families of girls aged 6-8 years old in Northern California through clinic exams, questionnaires and biospecimens (n = 421). Linear regression and factor analysis were conducted to explore predictors of urinary cotinine and co-exposure body burdens, respectively. In unadjusted models, geometric mean cotinine concentrations were higher among Black (0.59 µg/g creatinine) than non-Hispanic white (0.27), Asian (0.32), or Hispanic (0.34) participants. Following adjustment, living in a rented home, lower primary caregiver education, and lack of two biologic parents in the home were associated with higher cotinine concentrations. Girls who experienced parental separation or unemployment in the family had higher unadjusted cotinine concentrations. Higher cotinine was also associated with higher polybrominated diphenyl ether and metals concentrations. Our findings have environmental justice implications, as Black and socio-economically disadvantaged young girls experienced higher ETS exposure, which was also associated with higher exposure to other chemicals. Efforts to reduce ETS and co-exposures should account for other disparity-related factors.
An Efficient, Highly Flexible Multi-Channel Digital Downconverter Architecture
In this innovation, a digital downconverter has been created that produces a large (16 or greater) number of output channels of smaller bandwidths. Additionally, this design has the flexibility to tune each channel independently to anywhere in the input bandwidth to cover a wide range of output bandwidths (from 32 MHz down to 1 kHz). Both the flexibility in channel frequency selection and the more than four orders of magnitude range in output bandwidths (decimation rates from 32 to 640,000) presented significant challenges to be solved. The solution involved breaking the digital downconversion process into two stages. The first stage is a 2× oversampled filter bank that divides the whole input bandwidth, as a real input signal, into seven overlapping, contiguous channels represented with complex samples. Using the symmetry of the sine and cosine functions in a similar way to that of an FFT (fast Fourier transform), this downconversion is very efficient and gives seven channels fixed in frequency. An arbitrary number of smaller-bandwidth channels can be formed from second-stage downconverters placed after the first stage of downconversion. Because of the overlapping of the first stage, there is no gap in coverage of the entire input bandwidth. The input to any of the second-stage downconverting channels has a multiplexer that chooses one of the seven wideband channels from the first stage. These second-stage downconverters take up fewer resources because they operate at lower bandwidths than would be needed to perform the entire downconversion from the input bandwidth for each independent channel. Each second-stage downconverter is independent, with fine frequency control tuning, providing extreme flexibility in positioning the center frequency of a downconverted channel.
Finally, these second-stage downconverters have flexible decimation factors over four orders of magnitude. The algorithm was developed to run in an FPGA (field programmable gate array) at input data sampling rates of up to 1,280 MHz. The current implementation takes a 1,280-MHz real input and first breaks it up into seven 160-MHz complex channels, each spaced 80 MHz apart. The eighth channel at baseband was not required for this implementation, which led to further optimization. Afterwards, 16 second-stage narrow-band channels with independently tunable center frequencies and bandwidth settings are implemented. A future implementation in a larger Xilinx FPGA will hold up to 32 independent second-stage channels.
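The tune-filter-decimate operation performed by each second-stage downconverter can be sketched in software. The following is a simplified NumPy model, not the FPGA implementation; the function name, filter length, and window choice are illustrative assumptions:

```python
import numpy as np

def second_stage_ddc(x, fs, f_center, decim, num_taps=129):
    """Illustrative model of one second-stage downconverter channel.

    x        -- complex samples from a first-stage wideband channel
    fs       -- sample rate of that channel (e.g. 160 MHz)
    f_center -- frequency within the channel to shift down to 0 Hz
    decim    -- integer decimation factor (32 to 640,000 in the text)
    """
    n = np.arange(len(x))
    # Numerically controlled oscillator: mix the selected frequency to baseband
    mixed = x * np.exp(-2j * np.pi * f_center / fs * n)
    # Windowed-sinc low-pass FIR with cutoff at the post-decimation Nyquist rate
    cutoff = 0.5 / decim  # normalized to fs
    k = np.arange(num_taps) - (num_taps - 1) / 2
    taps = np.sinc(2 * cutoff * k) * np.hamming(num_taps)
    taps /= taps.sum()  # unity gain at DC
    filtered = np.convolve(mixed, taps, mode="same")
    # Decimate: keep every decim-th sample
    return filtered[::decim]
```

In hardware the filtering and decimation would be fused (e.g. polyphase or CIC structures) so that only the retained samples are computed, which is why operating at the reduced first-stage bandwidth saves resources.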
Serum Renin and Major Adverse Kidney Events in Critically Ill Patients: A Multicenter Prospective Study
BACKGROUND: Preliminary studies have suggested that the renin-angiotensin system is activated in critical illness and associated with mortality and kidney outcomes. We sought to assess in a larger, multicenter study the relationship between serum renin and Major Adverse Kidney Events (MAKE) in intensive care unit (ICU) patients.
METHODS: Prospective, multicenter study at two institutions of patients with and without acute kidney injury (AKI). Blood samples were collected for renin measurement a median of 2 days into the index ICU admission and 5-7 days later. The primary outcome was MAKE at hospital discharge, a composite of mortality, kidney replacement therapy, or reduced estimated glomerular filtration rate to ≤ 75% of baseline.
RESULTS: Patients in the highest renin tertile were more severely ill overall, including more AKI, vasopressor-dependence, and greater severity of illness. MAKE were significantly greater in the highest renin tertile compared with the first and second tertiles. In multivariable logistic regression, this initial measurement of renin remained significantly associated with both MAKE and the individual component of mortality. The association of renin with MAKE in survivors was not statistically significant. Renin measurements at the second time point were also higher in patients with MAKE. The trajectory of the renin measurements between the two time points was distinct when comparing death versus survival, but not when comparing patients with MAKE versus those without.
CONCLUSIONS: In a broad cohort of critically ill patients, serum renin measured early in the ICU admission is associated with MAKE at discharge, particularly mortality.
Measuring the Refractive Index and Sub-Nanometre Surface Functionalisation of Nanoparticles in Suspension
Direct measurements to determine the degree of surface coverage of nanoparticles by functional moieties are rare, with current strategies requiring a high level of expertise and expensive equipment. Here, a practical method to determine the ratio of the volume of the functionalisation layer to the particle volume, based on measuring the refractive index of nanoparticles in suspension, is proposed. As a proof of concept, this technique is applied to poly(methyl methacrylate) (PMMA) nanoparticles and semicrystalline carbon dots functionalised with different surface moieties, yielding refractive indices commensurate with those reported in previous literature and predicted by Mie theory. In doing so, it is demonstrated that this technique is able to optically detect differences in surface functionalisation or composition of nanometre-sized particles. This non-destructive and rapid method is well-suited for in situ industrial particle characterisation and biological applications.
Mendelian randomization study of B-type natriuretic peptide and type 2 diabetes: evidence of causal association from population studies
Background: Genetic and epidemiological evidence suggests an inverse association between B-type natriuretic peptide (BNP) levels in blood and risk of type 2 diabetes (T2D), but the prospective association of BNP with T2D is uncertain, and it is unclear whether the association is confounded.
Methods and Findings: We analysed the association between levels of the N-terminal fragment of pro-BNP (NT-pro-BNP) in blood and risk of incident T2D in a prospective case-cohort study and genotyped the variant rs198389 within the BNP locus in three T2D case-control studies. We combined our results with existing data in a meta-analysis of 11 case-control studies. Using a Mendelian randomization approach, we compared the observed association between rs198389 and T2D to that expected from the NT-pro-BNP level to T2D association and the NT-pro-BNP difference per C allele of rs198389. In participants of our case-cohort study who were free of T2D and cardiovascular disease at baseline, we observed a 21% (95% CI 3%-36%) decreased risk of incident T2D per one standard deviation (SD) higher log-transformed NT-pro-BNP levels in analysis adjusted for age, sex, body mass index, systolic blood pressure, smoking, family history of T2D, history of hypertension, and levels of triglycerides, high-density lipoprotein cholesterol, and low-density lipoprotein cholesterol. The association between rs198389 and T2D observed in case-control studies (odds ratio = 0.94 per C allele, 95% CI 0.91-0.97) was similar to that expected (0.96, 0.93-0.98) based on the pooled estimate for the log-NT-pro-BNP level to T2D association derived from a meta-analysis of our study and published data (hazard ratio = 0.82 per SD, 0.74-0.90) and the difference in NT-pro-BNP levels (0.22 SD, 0.15-0.29) per C allele of rs198389. No significant associations were observed between the rs198389 genotype and potential confounders.
Conclusions: Our results provide evidence for a potential causal role of the BNP system in the aetiology of T2D. Further studies are needed to investigate the mechanisms underlying this association and possibilities for preventive interventions.
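The "expected" genotype-to-T2D association quoted in the findings above follows from simple arithmetic: under the Mendelian randomization model, the expected per-allele odds ratio is the per-SD hazard ratio raised to the power of the per-allele NT-pro-BNP difference in SD units. A minimal check, using the figures reported in the abstract:

```python
import math

# Figures taken from the abstract
hr_per_sd = 0.82      # T2D hazard ratio per SD higher log-NT-pro-BNP
sd_per_allele = 0.22  # NT-pro-BNP difference (in SD) per C allele of rs198389

# Expected per-allele odds ratio: exp(log(HR) * SD-per-allele)
expected_or = math.exp(math.log(hr_per_sd) * sd_per_allele)
print(round(expected_or, 2))  # 0.96, matching the expected value in the text
```

This reproduces the expected estimate of 0.96, which sits close to the observed per-allele odds ratio of 0.94, the consistency that supports the causal interpretation.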
Understanding and Addressing the Resilience Crisis of Europe’s Farming Systems
This chapter aims to synthesize key findings from the SURE-Farm project. We first discuss possible amendments to the framework to assess the resilience of farming systems. We then review why many of Europe’s farming systems face a formidable and structural resilience crisis. While emphasizing the diversity of resilience capacities, challenges and needs, we formulate cornerstones for possible resilience-enhancing strategies. The chapter concludes with critical reflections and suggestions for resilience-enhancing strategies that comprise the levels of farms, farming systems and enabling environments. We identify limitations of the research and suggest avenues for future research on the resilience of farming systems.
ENIGMA and global neuroscience: A decade of large-scale studies of the brain in health and disease across more than 40 countries.
This review summarizes the last decade of work by the ENIGMA (Enhancing NeuroImaging Genetics through Meta-Analysis) Consortium, a global alliance of over 1400 scientists across 43 countries studying the human brain in health and disease. Building on large-scale genetic studies that discovered the first robustly replicated genetic loci associated with brain metrics, ENIGMA has diversified into over 50 working groups (WGs), pooling worldwide data and expertise to answer fundamental questions in neuroscience, psychiatry, neurology, and genetics. Most ENIGMA WGs focus on specific psychiatric and neurological conditions; other WGs study normal variation due to sex and gender differences, or development and aging; still other WGs develop methodological pipelines and tools to facilitate harmonized analyses of "big data" (i.e., genetic and epigenetic data, multimodal MRI, and electroencephalography data). These international efforts have yielded the largest neuroimaging studies to date in schizophrenia, bipolar disorder, major depressive disorder, post-traumatic stress disorder, substance use disorders, obsessive-compulsive disorder, attention-deficit/hyperactivity disorder, autism spectrum disorders, epilepsy, and 22q11.2 deletion syndrome. More recent ENIGMA WGs have formed to study anxiety disorders, suicidal thoughts and behavior, sleep and insomnia, eating disorders, irritability, brain injury, antisocial personality and conduct disorder, and dissociative identity disorder. Here, we summarize the first decade of ENIGMA's activities and ongoing projects, and describe the successes and challenges encountered along the way. We highlight the advantages of collaborative large-scale coordinated data analyses for testing reproducibility and robustness of findings, offering the opportunity to identify brain systems involved in clinical syndromes across diverse samples and associated genetic, environmental, demographic, cognitive, and psychosocial factors.
Varespladib and cardiovascular events in patients with an acute coronary syndrome: the VISTA-16 randomized clinical trial
IMPORTANCE: Secretory phospholipase A2 (sPLA2) generates bioactive phospholipid products implicated in atherosclerosis. The sPLA2 inhibitor varespladib has favorable effects on lipid and inflammatory markers; however, its effect on cardiovascular outcomes is unknown. OBJECTIVE: To determine the effects of sPLA2 inhibition with varespladib on cardiovascular outcomes. DESIGN, SETTING, AND PARTICIPANTS: A double-blind, randomized, multicenter trial at 362 academic and community hospitals in Europe, Australia, New Zealand, India, and North America of 5145 patients randomized within 96 hours of presentation of an acute coronary syndrome (ACS) to either varespladib (n = 2572) or placebo (n = 2573), with enrollment between June 1, 2010, and March 7, 2012 (study termination on March 9, 2012). INTERVENTIONS: Participants were randomized to receive varespladib (500 mg) or placebo daily for 16 weeks, in addition to atorvastatin and other established therapies. MAIN OUTCOMES AND MEASURES: The primary efficacy measure was a composite of cardiovascular mortality, nonfatal myocardial infarction (MI), nonfatal stroke, or unstable angina with evidence of ischemia requiring hospitalization at 16 weeks. Six-month survival status was also evaluated. RESULTS: At a prespecified interim analysis, including 212 primary end point events, the independent data and safety monitoring board recommended termination of the trial for futility and possible harm. The primary end point occurred in 136 patients (6.1%) treated with varespladib compared with 109 patients (5.1%) treated with placebo (hazard ratio [HR], 1.25; 95% CI, 0.97-1.61; log-rank P = .08). Varespladib was associated with a greater risk of MI (78 [3.4%] vs 47 [2.2%]; HR, 1.66; 95% CI, 1.16-2.39; log-rank P = .005). The composite secondary end point of cardiovascular mortality, MI, and stroke was observed in 107 patients (4.6%) in the varespladib group and 79 patients (3.8%) in the placebo group (HR, 1.36; 95% CI, 1.02-1.82; P = .04).
CONCLUSIONS AND RELEVANCE: In patients with recent ACS, varespladib did not reduce the risk of recurrent cardiovascular events and significantly increased the risk of MI. sPLA2 inhibition with varespladib may be harmful and is not a useful strategy to reduce adverse cardiovascular outcomes after ACS. TRIAL REGISTRATION: clinicaltrials.gov Identifier: NCT01130246. Copyright 2014 American Medical Association. All rights reserved.
Mitochondrial calcium uniporter Mcu controls excitotoxicity and is transcriptionally repressed by neuroprotective nuclear calcium signals
The recent identification of the mitochondrial Ca(2+) uniporter gene (Mcu/Ccdc109a) has enabled us to address its role, and that of mitochondrial Ca(2+) uptake, in neuronal excitotoxicity. Here we show that exogenously expressed Mcu is mitochondrially localized and increases mitochondrial Ca(2+) levels following NMDA receptor activation, leading to increased mitochondrial membrane depolarization and excitotoxic cell death. Knockdown of endogenous Mcu expression reduces NMDA-induced increases in mitochondrial Ca(2+), resulting in lower levels of mitochondrial depolarization and resistance to excitotoxicity. Mcu is subject to dynamic regulation as part of an activity-dependent adaptive mechanism that limits mitochondrial Ca(2+) overload when cytoplasmic Ca(2+) levels are high. Specifically, synaptic activity transcriptionally represses Mcu, via a mechanism involving the nuclear Ca(2+) and CaM kinase-mediated induction of Npas4, resulting in the inhibition of NMDA receptor-induced mitochondrial Ca(2+) uptake and preventing excitotoxic death. This establishes Mcu and the pathways regulating its expression as important determinants of excitotoxicity, which may represent therapeutic targets for excitotoxic disorders.
A framework to assess the resilience of farming systems
Agricultural systems in Europe face accumulating economic, ecological and societal challenges, raising concerns about their resilience to shocks and stresses. These resilience issues need to be addressed with a focus on the regional context in which farming systems operate, because farms, farmers’ organizations, service suppliers and supply chain actors are embedded in local environments and functions of agriculture. We define the resilience of a farming system as its ability to ensure the provision of the system functions in the face of increasingly complex and accumulating economic, social, environmental and institutional shocks and stresses, through capacities of robustness, adaptability and transformability. We (i) develop a framework to assess the resilience of farming systems, and (ii) present a methodology to operationalize the framework with a view to Europe’s diverse farming systems. The framework is designed to assess resilience to specific challenges (specified resilience) as well as a farming system’s capacity to deal with the unknown, uncertainty and surprise (general resilience). The framework provides a heuristic to analyze system properties, challenges (shocks, long-term stresses), indicators to measure the performance of system functions, resilience capacities and resilience-enhancing attributes. Capacities and attributes refer to adaptive cycle processes of agricultural practices, farm demographics, governance and risk management. The novelty of the framework pertains to the focal scale of analysis, i.e. the farming system level; the consideration of accumulating challenges and various agricultural processes; and the consideration that farming systems provide multiple functions that can change over time. Furthermore, the distinction between three resilience capacities (robustness, adaptability, transformability) ensures that the framework goes beyond narrow definitions that limit resilience to robustness. The methodology deploys a mixed-methods approach: quantitative methods, such as statistics, econometrics and modelling, are used to identify underlying patterns, causal explanations and likely contributing factors, while qualitative methods, such as interviews, participatory approaches and stakeholder workshops, access experiential and contextual knowledge and provide more nuanced insights. More specifically, analysis along the framework explores multiple nested levels of farming systems (e.g. farm, farm household, supply chain, farming system) over a time horizon of 1-2 generations, thereby enabling reflection on potential temporal and scalar trade-offs across resilience attributes. The richness of the framework is illustrated for the arable farming system in the Veenkoloniën, the Netherlands. The analysis reveals a relatively low capacity of this farming system to transform, with farmers feeling distressed about transformation while other members of their households have experienced many examples of transformation.