110 research outputs found
Control of RelB during dendritic cell activation integrates canonical and noncanonical NF-κB pathways.
The NF-κB protein RelB controls dendritic cell (DC) maturation and may be targeted therapeutically to manipulate T cell responses in disease. Here we report that RelB promoted DC activation not as the expected RelB-p52 effector of the noncanonical NF-κB pathway, but as a RelB-p50 dimer regulated by canonical IκBs, IκBα and IκBɛ. IκB control of RelB minimized spontaneous maturation but enabled rapid pathogen-responsive maturation. Computational modeling of the NF-κB signaling module identified control points of this unexpected cell type-specific regulation. Fibroblasts that we engineered accordingly showed DC-like RelB control. Canonical pathway control of RelB regulated pathogen-responsive gene expression programs. This work illustrates the potential utility of systems analyses in guiding the development of combination therapeutics for modulating DC-dependent T cell responses
Contribution of citizen science towards international biodiversity monitoring
To meet collective obligations towards biodiversity conservation and monitoring, it is essential that the world's governments and non-governmental organisations as well as the research community tap all possible sources of data and information, including new, fast-growing sources such as citizen science (CS), in which volunteers participate in some or all aspects of environmental assessments. Through compilation of a database on CS and community-based monitoring (CBM, a subset of CS) programs, we assess where contributions from CS and CBM are significant and where opportunities for growth exist. We use the Essential Biodiversity Variable framework to describe the range of biodiversity data needed to track progress towards global biodiversity targets, and we assess strengths and gaps in geographical and taxonomic coverage. Our results show that existing CS and CBM data particularly provide large-scale data on species distribution and population abundance, species traits such as phenology, and ecosystem function variables such as primary and secondary productivity. Only birds, Lepidoptera and plants are monitored at scale. Most CS schemes are found in Europe, North America, South Africa, India, and Australia. We then explore what can be learned from successful CS/CBM programs that would facilitate the scaling up of current efforts, how existing strengths in data coverage can be better exploited, and the strategies that could maximise the synergies between CS/CBM and other approaches for monitoring biodiversity, in particular from remote sensing. More and better targeted funding will be needed, if CS/CBM programs are to contribute further to international biodiversity monitoring
Assessing the conservation value of waterbodies: the example of the Loire floodplain (France)
In recent decades, two of the main management tools used to stem biodiversity erosion have been biodiversity monitoring and the conservation of natural areas. However, socio-economic pressure means that it is not usually possible to preserve the entire landscape, and so the rational prioritisation of sites has become a crucial issue. In this context, and because floodplains are one of the most threatened ecosystems, we propose a statistical strategy for evaluating conservation value, and used it to prioritise 46 waterbodies in the Loire floodplain (France). We began by determining a synthetic conservation index of fish communities (Q) for each waterbody. This synthetic index includes a conservation status index, an origin index, a rarity index and a richness index. We divided the waterbodies into 6 clusters with distinct structures of the basic indices. One of these clusters, with high Q median value, indicated that 4 waterbodies are important for fish biodiversity conservation. Conversely, two clusters with low Q median values included 11 waterbodies where restoration is called for. The results picked out high connectivity levels and low abundance of aquatic vegetation as the two main environmental characteristics of waterbodies with high conservation value. In addition, assessing the biodiversity and conservation value of
territories using our multi-index approach, combined with an a posteriori hierarchical classification, offers two major advantages: (i) potential geographical extension and (ii) adaptation to multiple taxa.
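The synthetic conservation index Q described above combines a conservation status index, an origin index, a rarity index and a richness index per waterbody. A minimal sketch, assuming equal weighting and sub-indices pre-scaled to [0, 1] (the published formulation may differ, and the waterbody scores are hypothetical):

```python
def synthetic_conservation_index(status, origin, rarity, richness):
    """Combine four sub-indices (assumed pre-scaled to [0, 1]) into a
    single conservation value Q; equal weighting is an illustrative
    assumption, not the authors' formulation."""
    return (status + origin + rarity + richness) / 4.0

# Hypothetical waterbodies with (status, origin, rarity, richness) scores
waterbodies = {
    "WB01": (0.9, 0.8, 0.7, 0.9),
    "WB02": (0.2, 0.3, 0.1, 0.4),
    "WB03": (0.6, 0.5, 0.8, 0.7),
}

# Rank waterbodies by Q to prioritise sites for conservation
ranked = sorted(
    waterbodies,
    key=lambda w: synthetic_conservation_index(*waterbodies[w]),
    reverse=True,
)
print(ranked)  # → ['WB01', 'WB03', 'WB02']
```

Sites at the top of the ranking would be candidates for protection; those at the bottom, candidates for restoration, as in the clustering described above.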
Achieving global biodiversity goals by 2050 requires urgent and integrated actions
Governments are negotiating actions intended to halt biodiversity loss and put it on a path to recovery by 2050. Here, we show that bending the curve for biodiversity is possible, but only if actions are implemented urgently and in an integrated manner. Connecting these actions to biodiversity outcomes and tracking progress remain a challenge
Building essential biodiversity variables (EBVs) of species distribution and abundance at a global scale
Much biodiversity data is collected worldwide, but it remains challenging to assemble the scattered knowledge for assessing biodiversity status and trends. The concept of Essential Biodiversity Variables (EBVs) was introduced to structure biodiversity monitoring globally, and to harmonize and standardize biodiversity data from disparate sources to capture a minimum set of critical variables required to study, report and manage biodiversity change. Here, we assess the challenges of a 'Big Data' approach to building global EBV data products across taxa and spatiotemporal scales, focusing on species distribution and abundance. The majority of currently available data on species distributions derives from incidentally reported observations or from surveys where presence-only or presence-absence data are sampled repeatedly with standardized protocols. Most abundance data come from opportunistic population counts or from population time series using standardized protocols (e.g. repeated surveys of the same population from single or multiple sites). Enormous complexity exists in integrating these heterogeneous, multi-source data sets across space, time, taxa and different sampling methods. Integration of such data into global EBV data products requires correcting biases introduced by imperfect detection and varying sampling effort, dealing with different spatial resolution and extents, harmonizing measurement units from different data sources or sampling methods, applying statistical tools and models for spatial inter- or extrapolation, and quantifying sources of uncertainty and errors in data and models. To support the development of EBVs by the Group on Earth Observations Biodiversity Observation Network (GEO BON), we identify 11 key workflow steps that will operationalize the process of building EBV data products within and across research infrastructures worldwide. 
These workflow steps take multiple sequential activities into account, including identification and aggregation of various raw data sources, data quality control, taxonomic name matching and statistical modelling of integrated data. We illustrate these steps with concrete examples from existing citizen science and professional monitoring projects, including eBird, the Tropical Ecology Assessment and Monitoring network, the Living Planet Index and the Baltic Sea zooplankton monitoring. The identified workflow steps are applicable to both terrestrial and aquatic systems and a broad range of spatial, temporal and taxonomic scales. They depend on clear, findable and accessible metadata, and we provide an overview of current data and metadata standards. Several challenges remain to be solved for building global EBV data products: (i) developing tools and models for combining heterogeneous, multi-source data sets and filling data gaps in geographic, temporal and taxonomic coverage, (ii) integrating emerging methods and technologies for data collection such as citizen science, sensor networks, DNA-based techniques and satellite remote sensing, (iii) solving major technical issues related to data product structure, data storage, execution of workflows and the production process/cycle as well as approaching technical interoperability among research infrastructures, (iv) allowing semantic interoperability by developing and adopting standards and tools for capturing consistent data and metadata, and (v) ensuring legal interoperability by endorsing open data or data that are free from restrictions on use, modification and sharing. Addressing these challenges is critical for biodiversity research and for assessing progress towards conservation policy targets and sustainable development goals
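Three of the workflow steps described above (aggregation of raw records, data quality control, and taxonomic name matching) can be sketched minimally. The field names and the synonym table below are illustrative assumptions, not part of any GEO BON specification:

```python
# Hypothetical synonym lookup for taxonomic name matching
SYNONYMS = {"Parus caeruleus": "Cyanistes caeruleus"}

def harmonise(records):
    """Aggregate raw occurrence records, drop those failing a basic
    quality-control check (missing coordinates), and resolve
    taxonomic synonyms to an accepted name."""
    clean = []
    for rec in records:
        if rec.get("lat") is None or rec.get("lon") is None:
            continue  # quality control: reject incomplete records
        name = SYNONYMS.get(rec["species"], rec["species"])
        clean.append({**rec, "species": name})
    return clean

# Toy raw records from two hypothetical sources
raw = [
    {"species": "Parus caeruleus", "lat": 52.1, "lon": 5.3},
    {"species": "Cyanistes caeruleus", "lat": None, "lon": 4.9},
]
harmonised = harmonise(raw)
print(harmonised)  # one valid record, synonym resolved
```

A production workflow would of course rely on authoritative taxonomic backbones and richer metadata rather than a hard-coded lookup; the sketch only illustrates the sequencing of the steps.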
A Multiancestral Genome-Wide Exome Array Study of Alzheimer Disease, Frontotemporal Dementia, and Progressive Supranuclear Palsy
Importance Previous studies have indicated a heritable component of the etiology of neurodegenerative diseases such as Alzheimer disease (AD), frontotemporal dementia (FTD), and progressive supranuclear palsy (PSP). However, few have examined the contribution of low-frequency coding variants on a genome-wide level.
Objective To identify low-frequency coding variants that affect susceptibility to AD, FTD, and PSP.
Design, Setting, and Participants We used the Illumina HumanExome BeadChip array to genotype a large number of variants (most of which are low-frequency coding variants) in a cohort of patients with neurodegenerative disease (224 with AD, 168 with FTD, and 48 with PSP) and in 224 control individuals without dementia enrolled from 2005 to 2012 at multiple centers participating in the Genetic Investigation in Frontotemporal Dementia and Alzheimer’s Disease (GIFT) Study. An additional multiancestral replication cohort of 240 patients with AD and 240 controls without dementia was used to validate suggestive findings. Variant-level association testing and gene-based testing were performed.
Main Outcomes and Measures Statistical association of genetic variants with clinical diagnosis of AD, FTD, and PSP.
Results Genetic variants typed by the exome array explained 44%, 53%, and 57% of the total phenotypic variance of AD, FTD, and PSP, respectively. An association with the known AD gene ABCA7 was replicated in several ancestries (discovery P = .0049, European P = .041, African American P = .043, and Asian P = .027), suggesting that exonic variants within this gene modify AD susceptibility. In addition, 2 suggestive candidate genes, DYSF (P = 5.53 × 10⁻⁵) and PAXIP1 (P = 2.26 × 10⁻⁴), were highlighted in patients with AD and differentially expressed in AD brain. Corroborating evidence from other exome array studies and gene expression data points toward potential involvement of these genes in the pathogenesis of AD.
Conclusions and Relevance Low-frequency coding variants with intermediate effect size may account for a significant fraction of the genetic susceptibility to AD and FTD. Furthermore, we found evidence that coding variants in the known susceptibility gene ABCA7, as well as candidate genes DYSF and PAXIP1, confer risk for AD
CXCR2 Inhibition – a novel approach to treating CoronAry heart DiseAse (CICADA): study protocol for a randomised controlled trial
Abstract Background There is emerging evidence of the central role of neutrophils in both atherosclerotic plaque formation and rupture. Patients with lower neutrophil counts following acute coronary syndromes tend to have a greater coronary flow reserve, which is a strong predictor of long-term cardiovascular health. However, no data are yet available regarding the impact of neutrophil inhibition on cardiovascular clinical or surrogate endpoints. Therefore, the aim of this study is to investigate the effects of AZD5069, a cysteine-X-cysteine chemokine receptor 2 (CXCR2) inhibitor, on coronary flow reserve and coronary structure and function in patients with coronary artery disease. Methods/Design Ninety subjects with coronary artery disease undergoing percutaneous coronary intervention will be included in this investigator-driven, randomised, placebo-controlled, double-blind, phase IIa, single-centre study. Participants will be randomised to receive either AZD5069 (40 mg) administered orally twice daily or placebo for 24 weeks. Change in coronary flow reserve as determined by 13N-ammonia positron emission tomography-computed tomography will be the primary outcome. Change in the inflammatory component of coronary plaque structure and the backward expansion wave, an invasive coronary physiological measure of diastolic function, will be assessed as secondary outcomes. Discussion Cardiovascular surrogate parameters, such as coronary flow reserve, may provide insights into the potential mechanisms of the cardiovascular effects of CXCR2 inhibitors. Currently, ongoing trials do not specifically focus on neutrophil function as a target of intervention, and we therefore believe that our study will contribute to a better understanding of the role of neutrophil-mediated inflammation in coronary artery disease. Trial registration EudraCT 2016-000775-24. Registered on 22 July 2016. International Standard Randomised Controlled Trial Number ISRCTN48328178. Registered on 25 February 2016.
Large expert-curated database for benchmarking document similarity detection in biomedical literature search
Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency–Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
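Of the baseline methods mentioned, Term Frequency–Inverse Document Frequency scores a candidate article against a seed article by weighted term overlap. A minimal TF-IDF/cosine sketch (the toy corpus and whitespace tokenisation are illustrative assumptions, not the benchmark's actual pipeline):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute sparse TF-IDF weight vectors for each tokenised document."""
    n = len(docs)
    df = Counter()                          # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy corpus: a seed article plus two candidates (hypothetical titles)
docs = [
    "neutrophil inflammation coronary disease".split(),
    "neutrophil coronary plaque inflammation".split(),
    "zooplankton monitoring baltic sea".split(),
]
seed, cand1, cand2 = tfidf_vectors(docs)
print(cosine(seed, cand1) > cosine(seed, cand2))  # → True
```

Okapi BM25 follows the same retrieval pattern but replaces the raw term-frequency weight with a saturating, length-normalised one, which is one reason the three baselines recommend partly distinct article sets.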