
    A blood-based nutritional risk index explains cognitive enhancement and decline in the multidomain Alzheimer prevention trial

    Introduction: Multinutrient approaches may produce more robust effects on brain health through interactive qualities. We hypothesized that a blood-based nutritional risk index (NRI) including three biomarkers of diet quality can explain cognitive trajectories in the multidomain Alzheimer prevention trial (MAPT) over 3 years. Methods: The NRI included erythrocyte n-3 polyunsaturated fatty acids (n-3 PUFA 22:6n-3 and 20:5n-3), serum 25-hydroxyvitamin D, and plasma homocysteine. The NRI score reflects the number of nutritional risk factors (0-3). The primary outcome in MAPT was a within-participant cognitive composite Z score that was fit with linear mixed-effects models. Results: Eighty percent had at least one nutritional risk factor for cognitive decline (NRI ≥1: 573 of 712). Participants without nutritional risk factors (NRI = 0) exhibited cognitive enhancement (β = 0.03 standard units [SU]/y), whereas each NRI point increase corresponded to an incremental acceleration in the rate of cognitive decline (NRI-1: β = −0.04 SU/y, P = .03; NRI-2: β = −0.08 SU/y, P < .0001; and NRI-3: β = −0.11 SU/y, P = .0008). Discussion: Identifying and addressing these well-established nutritional risk factors may reduce age-related cognitive decline in older adults, an observation that warrants further study. Highlights:
    • Multi-nutrient approaches may produce more robust effects through interactive properties
    • A nutritional risk index can objectively quantify nutrition-related cognitive changes
    • Optimum nutritional status was associated with cognitive enhancement over 3 years
    • Suboptimum nutritional status was associated with cognitive decline over 3 years
    • Optimizing this nutritional risk index may promote cognitive health in older adults
    Peer Reviewed. https://deepblue.lib.umich.edu/bitstream/2027.42/152935/1/trc2jtrci201911004.pd
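
    The Methods describe two computable steps: counting risk factors into a 0-3 score and fitting the cognitive composite with linear mixed-effects models. Below is a minimal Python sketch of both; the column names, biomarker cutoffs, and random-effects structure are illustrative assumptions, not the trial's actual specification.

```python
# Sketch of the NRI construction and mixed-model fit described above.
# Column names, cutoffs, and the random-effects structure are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

def nutritional_risk_index(df: pd.DataFrame) -> pd.Series:
    """Count of nutritional risk factors (0-3), one flag per biomarker.
    Cutoffs here are illustrative placeholders, not the trial's values."""
    risk_n3 = df["rbc_n3_pufa_pct"] < 4.0        # low erythrocyte n-3 PUFA
    risk_vitd = df["serum_25ohd_ng_ml"] < 20.0   # low 25-hydroxyvitamin D
    risk_hcy = df["plasma_hcy_umol_l"] > 14.0    # elevated homocysteine
    return risk_n3.astype(int) + risk_vitd.astype(int) + risk_hcy.astype(int)

def fit_trajectories(long: pd.DataFrame):
    """long: one row per participant-visit, with columns subject_id,
    years (time since baseline), composite_z, and NRI (0-3 at baseline)."""
    # Random intercept and slope per participant; the NRI-by-time
    # interaction yields a separate decline rate for each NRI level.
    model = smf.mixedlm("composite_z ~ years * C(NRI)", data=long,
                        groups=long["subject_id"], re_formula="~years")
    return model.fit()
```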

    Increasing crop heterogeneity enhances multitrophic diversity across agricultural regions

    Agricultural landscape homogenization has detrimental effects on biodiversity and key ecosystem services. Increasing agricultural landscape heterogeneity by increasing seminatural cover can help to mitigate biodiversity loss. However, the amount of seminatural cover is generally low and difficult to increase in many intensively managed agricultural landscapes. We hypothesized that increasing the heterogeneity of the crop mosaic itself (hereafter “crop heterogeneity”) can also have positive effects on biodiversity. In 8 contrasting regions of Europe and North America, we selected 435 landscapes along independent gradients of crop diversity and mean field size. Within each landscape, we selected 3 sampling sites in 1, 2, or 3 crop types. We sampled 7 taxa (plants, bees, butterflies, hoverflies, carabids, spiders, and birds) and calculated a synthetic index of multitrophic diversity at the landscape level. Increasing crop heterogeneity was more beneficial for multitrophic diversity than increasing seminatural cover. For instance, the effect of decreasing mean field size from 5 to 2.8 ha was as strong as the effect of increasing seminatural cover from 0.5% to 11%. Decreasing mean field size benefited multitrophic diversity even in the absence of seminatural vegetation between fields. Increasing the number of crop types sampled had a positive effect on landscape-level multitrophic diversity. However, the effect of increasing crop diversity in the landscape surrounding the sampled fields depended on the amount of seminatural cover. Our study provides large-scale, multitrophic, cross-regional evidence that increasing crop heterogeneity can be an effective way to increase biodiversity in agricultural landscapes without taking land out of agricultural production.
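
    The landscape-level “synthetic index of multitrophic diversity” can be illustrated with a simple construction: standardize a diversity measure within each of the seven taxa, then average across taxa. The sketch below assumes species richness as the per-taxon measure and equal weighting; the study's exact index may differ.

```python
# Illustrative landscape-level multitrophic diversity index: the mean of
# z-standardized species richness across the seven sampled taxa.
import pandas as pd

TAXA = ["plants", "bees", "butterflies", "hoverflies",
        "carabids", "spiders", "birds"]

def multitrophic_index(richness: pd.DataFrame) -> pd.Series:
    """richness: one row per landscape, one species-richness column per taxon.
    Returns one synthetic index value per landscape."""
    z = (richness[TAXA] - richness[TAXA].mean()) / richness[TAXA].std(ddof=0)
    return z.mean(axis=1)  # equal weight to each trophic group
```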

    Prescreening for European Prevention of Alzheimer Dementia (EPAD) trial-ready cohort: impact of AD risk factors and recruitment settings

    Abstract: Background: Recruitment is often a bottleneck in secondary prevention trials in Alzheimer disease (AD). Furthermore, screen-failure rates in these trials are typically high due to the relatively low prevalence of AD pathology in individuals without dementia, especially among the cognitively unimpaired. Prescreening on AD risk factors may facilitate recruitment, but the efficiency will depend on how these factors link to participation rates and AD pathology. We investigated whether common AD-related factors predict trial-ready cohort participation and amyloid status across different prescreen settings. Methods: We monitored the prescreening in four cohorts linked to the European Prevention of Alzheimer Dementia (EPAD) Registry (n = 16,877; mean ± SD age = 64 ± 8 years). These included a clinical cohort, a research in-person cohort, a research online cohort, and a population-based cohort. Individuals were asked to participate in the EPAD longitudinal cohort study (EPAD-LCS), which serves as a trial-ready cohort for secondary prevention trials. Amyloid positivity was measured in cerebrospinal fluid as part of the EPAD-LCS assessment. We calculated participation rates and the number needed to prescreen (NNPS) per amyloid-positive participant. We tested whether age, sex, education level, APOE status, family history of dementia, memory complaints, or memory scores, previously collected in these cohorts, could predict participation and amyloid status. Results: A total of 2595 participants were contacted for participation in the EPAD-LCS. Participation rates varied by setting, ranging from 3% to 59%. The NNPS were 6.9 (clinical cohort), 7.5 (research in-person cohort), 8.4 (research online cohort), and 88.5 (population-based cohort). Participation in the EPAD-LCS (n = 413 (16%)) was associated with lower age (odds ratio (OR) age = 0.97 [0.95–0.99]), high education (OR = 1.64 [1.23–2.17]), male sex (OR = 1.56 [1.19–2.04]), and positive family history of dementia (OR = 1.66 [1.19–2.31]). Among participants in the EPAD-LCS, amyloid positivity (33%) was associated with higher age (OR = 1.06 [1.02–1.10]) and APOE ɛ4 allele carriership (OR = 2.99 [1.81–4.94]). These results were similar across prescreen settings. Conclusions: Numbers needed to prescreen varied greatly between settings. Understanding how common AD risk factors link to study participation and amyloid positivity is informative for the recruitment strategies of studies on secondary prevention of AD.
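
    The number needed to prescreen follows directly from the counts reported: contacts divided by the number of enrolled, amyloid-positive participants. A small sketch of the arithmetic, using the overall figures from the abstract (the per-setting counts behind the 6.9-88.5 range are not given here):

```python
# Number needed to prescreen (NNPS) per amyloid-positive participant:
# how many individuals must be contacted to yield one enrolled participant
# who is amyloid-positive.
def nnps(n_contacted: int, n_participated: int, amyloid_rate: float) -> float:
    n_amyloid_positive = n_participated * amyloid_rate
    return n_contacted / n_amyloid_positive

# Overall figures from the abstract: 2595 contacted, 413 participated,
# 33% amyloid-positive among participants.
print(round(nnps(2595, 413, 0.33), 1))  # ~19.0 contacts per amyloid-positive
```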

    Familial hypercholesterolaemia in children and adolescents from 48 countries: a cross-sectional study

    Background: Approximately 450 000 children are born with familial hypercholesterolaemia worldwide every year, yet only 2·1% of adults with familial hypercholesterolaemia were diagnosed before age 18 years via current diagnostic approaches, which are derived from observations in adults. We aimed to characterise children and adolescents with heterozygous familial hypercholesterolaemia (HeFH) and understand current approaches to the identification and management of familial hypercholesterolaemia to inform future public health strategies. Methods: For this cross-sectional study, we assessed children and adolescents younger than 18 years with a clinical or genetic diagnosis of HeFH at the time of entry into the Familial Hypercholesterolaemia Studies Collaboration (FHSC) registry between Oct 1, 2015, and Jan 31, 2021. Data in the registry were collected from 55 regional or national registries in 48 countries. Diagnoses relying on self-reported history of familial hypercholesterolaemia and suspected secondary hypercholesterolaemia were excluded from the registry; people with untreated LDL cholesterol (LDL-C) of at least 13·0 mmol/L were excluded from this study. Data were assessed overall and by WHO region, World Bank country income status, age, diagnostic criteria, and index-case status. The main aim of this study was to assess the current identification and management of children and adolescents with familial hypercholesterolaemia. Findings: Of 63 093 individuals in the FHSC registry, 11 848 (18·8%) were children or adolescents younger than 18 years with HeFH and were included in this study; 5756 (50·2%) of 11 476 included individuals were female and 5720 (49·8%) were male. Sex data were missing for 372 (3·1%) of 11 848 individuals. Median age at registry entry was 9·6 years (IQR 5·8-13·2). 10 099 (89·9%) of 11 235 included individuals had a final genetically confirmed diagnosis of familial hypercholesterolaemia and 1136 (10·1%) had a clinical diagnosis. Genetically confirmed diagnosis data or clinical diagnosis data were missing for 613 (5·2%) of 11 848 individuals. Genetic diagnosis was more common in children and adolescents from high-income countries (9427 [92·4%] of 10 202) than in children and adolescents from non-high-income countries (199 [48·0%] of 415). 3414 (31·6%) of 10 804 children or adolescents were index cases. Familial-hypercholesterolaemia-related physical signs, cardiovascular risk factors, and cardiovascular disease were uncommon, but were more common in non-high-income countries. 7557 (72·4%) of 10 428 included children or adolescents were not taking lipid-lowering medication (LLM) and had a median LDL-C of 5·00 mmol/L (IQR 4·05-6·08). Compared with genetic diagnosis, the use of unadapted clinical criteria intended for use in adults and reliant on more extreme phenotypes could result in 50-75% of children and adolescents with familial hypercholesterolaemia not being identified. Interpretation: Clinical characteristics observed in adults with familial hypercholesterolaemia are uncommon in children and adolescents with familial hypercholesterolaemia, hence detection in this age group relies on measurement of LDL-C and genetic confirmation. Where genetic testing is unavailable, increased availability and use of LDL-C measurements in the first few years of life could help reduce the current gap between prevalence and detection, enabling increased use of combination LLM to reach recommended LDL-C targets early in life.
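
    As a concrete reading of the Methods' selection rules, the sketch below applies them to a hypothetical registry table (all column names are assumptions): include individuals younger than 18 years with a clinical or genetic HeFH diagnosis, and exclude those with untreated LDL-C of at least 13.0 mmol/L.

```python
# Sketch of the study's stated selection rules applied to a registry table.
# Column names and codings are assumptions, not the FHSC schema.
import pandas as pd

def select_study_sample(registry: pd.DataFrame) -> pd.DataFrame:
    young = registry["age_years"] < 18
    hefh = registry["diagnosis"].isin(["genetic_HeFH", "clinical_HeFH"])
    # Exclusion from this study: untreated LDL-C >= 13.0 mmol/L.
    untreated = ~registry["on_llm"].astype(bool)
    excluded = untreated & (registry["ldl_c_mmol_l"] >= 13.0)
    return registry[young & hefh & ~excluded]
```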

    Modeling functional relationships between morphogenetically active radiation and photosynthetic photon flux density in mango tree crown

    Light is a key factor in plant ecophysiological modeling because of its crucial effects on plant growth and development. However, solar light quantity and quality change with environmental factors such as sky condition and solar elevation. When passing through a tree crown, light is modified by its interaction with the phytoelements (leaves and axes). This leads to variability in light quantity and quality within the crown, with consequences for light-related processes such as photosynthesis and photomorphogenesis. We evaluated the effects of positional (depth within the crown) and environmental (sky condition, solar elevation) factors on light quantity and quality within the crown of the tropical evergreen mango tree. Functional relationships were modeled between morphogenetically active radiation variables that describe light quality [narrowband red (Rn), narrowband far-red (FRn), the ratio ζ = Rn:FRn, and UVA-blue (UVA-BL)] and light quantity [photosynthetic photon flux density (PPFD) and relative transmitted PPFD (TrPPFD)]. Light quantity and quality varied within the mango tree crown over a range as wide as that of a forest. This variability was structured by depth within the crown as well as by sky condition and solar elevation. Linear relationships linked Rn, FRn, and UVA-BL to PPFD, and non-linear relationships linked ζ to TrPPFD. These relationships were strong, accurate, and unbiased, and they were affected by positional and environmental factors. The results suggest that these relationships were shaped by the characteristics of incident solar light and/or by the interactions between light and phytoelements. Two consequences of interest emerge from this research: i) the modeled relationships make it possible to infer light quality, which is difficult and time-consuming to simulate, from light quantity modeling within a tree crown, and ii) sky condition and solar elevation should be considered to improve light modeling within a tree crown.
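
    The fitted relationships take two shapes: linear for Rn, FRn, and UVA-BL against PPFD, and non-linear for ζ against TrPPFD. The sketch below fits both; the saturating-exponential form for ζ is an assumed example of a non-linear shape, not the paper's published equation.

```python
# Fitting the two kinds of relationships described above.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def fit_linear(ppfd: np.ndarray, rn: np.ndarray):
    # Rn = a * PPFD + b (same pattern applies to FRn and UVA-BL)
    res = linregress(ppfd, rn)
    return res.slope, res.intercept, res.rvalue ** 2

def zeta_model(tr_ppfd, zmax, k):
    # Assumed saturating form: zeta rises with relative transmitted PPFD
    # and levels off near zmax.
    return zmax * (1.0 - np.exp(-k * tr_ppfd))

def fit_zeta(tr_ppfd: np.ndarray, zeta: np.ndarray):
    popt, _ = curve_fit(zeta_model, tr_ppfd, zeta, p0=[1.2, 5.0])
    return popt  # (zmax, k)
```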

    Effectiveness of bacteriophages in the sputum of cystic fibrosis patients.

    Bacteriophages have been shown to be effective for treating acute infections of the respiratory tract caused by antibiotic-resistant bacteria in animal models, but no evidence has yet been presented of their activity against pathogens in complex biological samples from chronically infected patients. We assessed the efficacy of a cocktail of ten bacteriophages infecting Pseudomonas aeruginosa following its addition to 58 sputum samples from cystic fibrosis (CF) patients collected at three different hospitals. Ten samples that did not contain P. aeruginosa were not analysed further. In the remaining 48 samples, the addition of bacteriophages led to a significant decrease in the levels of P. aeruginosa strains, as shown by comparison with controls, taking two variables (time and bacteriophages) into account (p = 0.024). In 45.8% of these samples, this decrease was accompanied by an increase in the number of bacteriophages. We also tested each of the ten bacteriophages individually against 20 colonies from each of these 48 samples and detected bacteriophage-susceptible bacteria in 64.6% of the samples. An analysis of the clinical data revealed no correlation between patient age, sex, duration of P. aeruginosa colonization, antibiotic treatment, FEV1 (forced expiratory volume in the first second) and the efficacy of bacteriophages. The demonstration that bacteriophages infect their bacterial hosts in the sputum environment, regardless of the clinical characteristics of the patients, represents a major step towards the development of bacteriophage therapy to treat chronic lung infections.
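
    The comparison "taking two variables (time and bacteriophages) into account" can be sketched as a model of bacterial load with a time-by-treatment interaction. The form below, a mixed model on log10 CFU/mL with a random intercept per sputum sample, is an assumed analysis, not necessarily the paper's exact test.

```python
# Sketch of a treatment-vs-control comparison accounting for both time and
# bacteriophage addition. Column names and the model form are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

def compare(df: pd.DataFrame) -> float:
    """df: one row per sample-timepoint-arm, with columns sample_id,
    hours (time since phage addition), phage (0 = control, 1 = cocktail),
    and log10_cfu (bacterial load)."""
    model = smf.mixedlm("log10_cfu ~ hours * phage", data=df,
                        groups=df["sample_id"])
    fit = model.fit()
    # The interaction term tests whether load declines faster with phages.
    return fit.pvalues["hours:phage"]
```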

    Cost-Effectiveness of Drug-Eluting Stents in Elderly Patients With Coronary Artery Disease: The SENIOR Trial

    Background: Elderly patients receive bare metal stents instead of drug-eluting stents (DES) to shorten the duration of dual antiplatelet therapy (DAPT). The SENIOR trial compared outcomes between these 2 types of stents combined with a short duration of DAPT. A significant decrease in the number of patients with at least 1 major adverse cardiac and cerebrovascular event (MACCE) was noted in the DES group. Objectives: The objective of this article was to perform an economic evaluation of the SENIOR trial. Methods: This evaluation was performed separately in 5 participating countries using pooled patient-level data from all study patients and country-specific unit costs and utility values. Costs, MACCEs, and quality-adjusted life-years (QALYs) were calculated in both arms at 1 year, and an incremental cost-effectiveness ratio was estimated. Uncertainty was explored by probabilistic bootstrapping. Results: A total of 1200 patients underwent randomization. The average total cost per patient was higher in the DES group. The number of MACCEs and average QALYs were not statistically different between the 2 groups. The 1-year incremental cost-effectiveness ratio for each country of reference ranged from €13 752 to €20 511/MACCE avoided and from €42 835 to €68 231/QALY gained. The scatter plots showed wide dispersion, reflecting large uncertainty around the results, but in each country studied, 90% of the bootstrap replications indicated a higher cost for greater effectiveness in the DES group. Assuming a willingness to pay of €50 000/QALY, there was between a 40% and 50% chance that the use of DES was cost-effective in 4 countries. Conclusion: The use of DES instead of bare metal stents, combined with a short duration of DAPT, in elderly patients resulted in higher costs for greater effectiveness in each of the 5 countries studied.
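
    The two quantities at the core of this evaluation are the incremental cost-effectiveness ratio (ICER = ΔCost/ΔQALY) and, via probabilistic bootstrapping, the probability that DES is cost-effective at a given willingness to pay. A minimal sketch over hypothetical patient-level arrays:

```python
# ICER and bootstrap cost-effectiveness probability at a willingness-to-pay
# threshold (EUR 50,000/QALY in the trial). Patient-level cost and QALY
# arrays are placeholders for illustration.
import numpy as np

rng = np.random.default_rng(0)

def icer(cost_des, cost_bms, qaly_des, qaly_bms) -> float:
    """Incremental cost per incremental QALY, DES vs bare metal stents."""
    return ((cost_des.mean() - cost_bms.mean())
            / (qaly_des.mean() - qaly_bms.mean()))

def prob_cost_effective(cost_des, cost_bms, qaly_des, qaly_bms,
                        wtp: float = 50_000.0, n_boot: int = 5_000) -> float:
    """Share of bootstrap replications in which DES is cost-effective,
    i.e. the incremental net monetary benefit is positive."""
    hits = 0
    for _ in range(n_boot):
        i = rng.integers(0, len(cost_des), len(cost_des))
        j = rng.integers(0, len(cost_bms), len(cost_bms))
        d_cost = cost_des[i].mean() - cost_bms[j].mean()
        d_qaly = qaly_des[i].mean() - qaly_bms[j].mean()
        hits += (wtp * d_qaly - d_cost) > 0
    return hits / n_boot
```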