
    Genotoxic mixtures and dissimilar action: Concepts for prediction and assessment

    Combinations of genotoxic agents have frequently been assessed without clear assumptions regarding their expected (additive) mixture effects, often leading to claims of synergisms that might in fact be compatible with additivity. We have shown earlier that the combined effects of chemicals that induce micronuclei (MN) in the cytokinesis-block micronucleus assay in Chinese hamster ovary-K1 cells by a similar mechanism were additive according to the concept of concentration addition (CA). Here, we extended these studies and investigated for the first time whether valid additivity expectations can be formulated for MN-inducing chemicals that operate through a variety of mechanisms, including aneugens and clastogens (DNA cross-linkers, topoisomerase II inhibitors, minor groove binders). We expected their effects to follow the additivity principles of independent action (IA). With two mixtures, one composed of various aneugens (colchicine, flubendazole, vinblastine sulphate, griseofulvin, paclitaxel) and another composed of aneugens and clastogens (flubendazole, doxorubicin, etoposide, melphalan and mitomycin C), we observed mixture effects that fell between the additivity predictions derived from CA and IA. We achieved better agreement between observation and prediction by grouping the chemicals into common assessment groups and using hybrid CA/IA prediction models. The combined effects of four dissimilarly acting compounds (flubendazole, paclitaxel, doxorubicin and melphalan) also fell between the CA and IA predictions. Two binary mixtures (flubendazole/paclitaxel and flubendazole/doxorubicin) showed effects in reasonable agreement with IA additivity. Our studies provide a systematic basis for the investigation of mixtures that affect endpoints of relevance to genotoxicity and show that their effects are largely additive. This work was funded by the UK Food Standards Agency.
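
    Concretely, the two additivity concepts make different computations: CA finds the effect level at which the scaled component doses sum to one, while IA multiplies the components' probabilities of non-response. The following sketch illustrates both predictions for a three-component mixture; the log-logistic dose-response functions and all parameter values are hypothetical placeholders, not data from the study.

        import numpy as np
        from scipy.optimize import brentq

        def effect(c, ec50, slope):
            # Log-logistic dose-response, effect scaled to the interval (0, 1).
            return 1.0 / (1.0 + (ec50 / c) ** slope)

        def inverse_effect(e, ec50, slope):
            # Concentration at which a single component alone produces effect e.
            return ec50 * (e / (1.0 - e)) ** (1.0 / slope)

        ec50s  = np.array([1.0, 4.0, 0.5])   # hypothetical EC50s
        slopes = np.array([1.5, 2.0, 1.2])   # hypothetical slopes
        fracs  = np.array([0.2, 0.5, 0.3])   # mixture ratio, sums to 1

        def ca_prediction(c_total):
            # CA: solve sum_i (p_i * c_total) / EC_{e,i} = 1 for the effect e.
            f = lambda e: np.sum(fracs * c_total / inverse_effect(e, ec50s, slopes)) - 1.0
            return brentq(f, 1e-9, 1.0 - 1e-9)

        def ia_prediction(c_total):
            # IA: E = 1 - prod_i (1 - E_i(p_i * c_total)).
            return 1.0 - np.prod(1.0 - effect(fracs * c_total, ec50s, slopes))

        print(ca_prediction(3.0), ia_prediction(3.0))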

    Differential expression analysis for sequence count data

    Motivation: High-throughput nucleotide sequencing provides quantitative readouts in assays for RNA expression (RNA-Seq), protein-DNA binding (ChIP-Seq) or cell counting (barcode sequencing). Statistical inference of differential signal in such data requires estimation of their variability throughout the dynamic range. When the number of replicates is small, error modelling is needed to achieve statistical power.

    Results: We propose an error model that uses the negative binomial distribution, with variance and mean linked by local regression, to model the null distribution of the count data. The method controls type-I error and provides good detection power.

    Availability: A free open-source R software package, DESeq, is available from the Bioconductor project and from http://www-huber.embl.de/users/anders/DESeq
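
    A toy sketch of the core modelling idea (not the DESeq algorithm itself, whose estimators are considerably more refined): treat counts as negative binomial and estimate the variance as a smooth, locally regressed function of the mean. All data below are simulated.

        import numpy as np
        from scipy.stats import nbinom
        from statsmodels.nonparametric.smoothers_lowess import lowess

        rng = np.random.default_rng(0)
        means = rng.gamma(2.0, 50.0, size=500)                   # hypothetical per-gene means
        counts = rng.negative_binomial(5, 5 / (5 + means[:, None]), size=(500, 4))

        m = counts.mean(axis=1)
        v = counts.var(axis=1, ddof=1)
        fit = lowess(v, m, frac=0.5, return_sorted=False)        # local regression: variance vs mean

        def nb_sf(k, mu, var):
            # Tail probability P(X >= k) for a negative binomial
            # parameterized by its mean and variance (requires var > mu).
            p = mu / var
            r = mu * p / (1.0 - p)
            return nbinom.sf(k - 1, r, p)

        # e.g. how surprising is a count of 400 for gene 0 under the null?
        print(nb_sf(400, m[0], max(fit[0], m[0] + 1e-6)))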

    Dispelling urban myths about default uncertainty factors in chemical risk assessment - Sufficient protection against mixture effects?

    Assessing the detrimental health effects of chemicals requires the extrapolation of experimental data in animals to human populations. This is achieved by applying a default uncertainty factor of 100 to doses not found to be associated with observable effects in laboratory animals. It is commonly assumed that the toxicokinetic and toxicodynamic sub-components of this default uncertainty factor represent worst-case scenarios and that the multiplication of those components yields conservative estimates of safe levels for humans. It is sometimes claimed that this conservatism also offers adequate protection from mixture effects. By analysing the evolution of uncertainty factors from a historical perspective, we show that the default factor and its sub-components are intended to represent adequate rather than worst-case scenarios. The intention of using assessment factors for mixture effects was abandoned thirty years ago. It is also often ignored that the conservatism (or otherwise) of uncertainty factors can only be considered in relation to a defined level of protection. A protection equivalent to an effect magnitude of 0.001-0.0001% over background incidence is generally considered acceptable. However, it is impossible to say whether this level of protection is in fact realised with the tolerable doses that are derived by employing uncertainty factors. Accordingly, it is difficult to assess whether uncertainty factors overestimate or underestimate the sensitivity differences in human populations. It is also often not appreciated that the outcome of probabilistic approaches to the multiplication of sub-factors depends on the choice of probability distributions. Therefore, the idea that default uncertainty factors are overly conservative worst-case scenarios which can both account for the lack of statistical power in animal experiments and protect against potential mixture effects is ill-founded. We contend that precautionary regulation should provide an incentive to generate better data, and we recommend adopting a pragmatic, but scientifically better founded, approach to mixture risk assessment. This work was funded by the Oak Foundation.
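
    The dependence on distributional choices is easy to demonstrate with a small Monte Carlo simulation; the split of the default factor into two sub-factors of 10 is conventional, but the distributions and their parameters below are illustrative assumptions, not values derived from the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000

        # A probabilistic treatment replaces each sub-factor of 10 with a
        # distribution; the product's upper percentiles then depend strongly
        # on which distributions are chosen (values here are illustrative).
        lognormal = rng.lognormal(np.log(10), 0.4, n) * rng.lognormal(np.log(10), 0.4, n)
        uniform   = rng.uniform(3, 17, n) * rng.uniform(3, 17, n)

        for name, product in [("lognormal", lognormal), ("uniform", uniform)]:
            print(name, np.percentile(product, 95).round(1))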

    A network-based target overlap score for characterizing drug combinations: High correlation with cancer clinical trial results

    Drug combinations are highly efficient in systemic treatment of complex multigene diseases such as cancer, diabetes, arthritis and hypertension. Most currently used combinations were found in empirical ways, which limits the speed of discovery for new and more effective combinations. Therefore, there is a substantial need for efficient and fast computational methods. Here, we present a principle that is based on the assumption that perturbations generated by multiple pharmaceutical agents propagate through an interaction network and can cause unexpected amplification at targets not immediately affected by the original drugs. To capture this phenomenon, we introduce a novel Target Overlap Score (TOS) that is defined for two pharmaceutical agents as the number of jointly perturbed targets divided by the number of all targets potentially affected by the two agents. We show that this measure is correlated with the known effects of beneficial and deleterious drug combinations taken from the DCDB, TTD and Drugs.com databases. We demonstrate the utility of TOS by correlating the score to the outcome of recent clinical trials evaluating trastuzumab, an effective anticancer agent utilized in combination with anthracycline- and taxane-based systemic chemotherapy in HER2-receptor (erb-b2 receptor tyrosine kinase 2) positive breast cancer.
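
    The score itself is simple set arithmetic. In the paper the two perturbed-target sets are obtained by propagating each drug's effect through an interaction network; the sketch below uses plain, hypothetical target sets only to show the computation.

        def target_overlap_score(targets_a: set, targets_b: set) -> float:
            # TOS = |targets perturbed by both agents| / |targets affected by either agent|.
            union = targets_a | targets_b
            return len(targets_a & targets_b) / len(union) if union else 0.0

        # Hypothetical example: two drugs with partially overlapping target sets.
        drug_a = {"EGFR", "ERBB2", "MAPK1"}
        drug_b = {"ERBB2", "MAPK1", "TOP2A", "AKT1"}
        print(target_overlap_score(drug_a, drug_b))  # 2 / 5 = 0.4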

    Systems-pharmacology dissection of a drug synergy in imatinib-resistant CML

    Occurrence of the BCR-ABL(T315I) gatekeeper mutation is among the most pressing challenges in the therapy of chronic myeloid leukemia (CML). Several BCR-ABL inhibitors have multiple targets and pleiotropic effects that could be exploited for their synergistic potential. Testing combinations of such kinase inhibitors identified a strong synergy between danusertib and bosutinib that exclusively affected CML cells harboring BCR-ABL(T315I). To elucidate the underlying mechanisms, we applied a systems-level approach comprising phosphoproteomics, transcriptomics and chemical proteomics. Data integration revealed that both compounds targeted Mapk pathways downstream of BCR-ABL, resulting in impaired activity of c-Myc. Using pharmacological validation, we showed that the relative contributions of danusertib and bosutinib could be mimicked individually by Mapk inhibitors and collectively by downregulation of c-Myc through Brd4 inhibition. Thus, integration of genome- and proteome-wide technologies enabled the elucidation of the mechanism by which a new drug synergy targets the dependency of BCR-ABL(T315I) CML cells on c-Myc through nonobvious off-targets.

    Novel IgG-degrading enzymes of the IgdE protease family link substrate specificity to host tropism of Streptococcus species

    We recently discovered an IgG-degrading enzyme of the endemic pig pathogen S. suis, designated IgdE, that is highly specific for porcine IgG. This protease is the founding member of a novel cysteine protease family assigned C113 in the MEROPS peptidase database. Bioinformatic analyses revealed putative members of the IgdE protease family in eight other Streptococcus species. The genes of the putative IgdE family proteases of S. agalactiae, S. porcinus, S. pseudoporcinus and S. equi subsp. zooepidemicus were cloned into expression vectors for production of recombinant protein. Recombinant proteins of all four IgdE family proteases were proteolytically active against IgG of the respective Streptococcus species' hosts, but not against IgG from other tested species or other classes of immunoglobulins, thereby linking the substrate specificity to the known host tropism. The novel IgdE family proteases of S. agalactiae, S. pseudoporcinus and S. equi showed IgG subtype specificity, i.e. IgdE from S. agalactiae and S. pseudoporcinus cleaved human IgG1, while IgdE from S. equi was specific for equine IgG7. The porcine IgG subtype specificities of the IgdE family proteases of S. porcinus and S. pseudoporcinus remain to be determined. Cleavage of porcine IgG by IgdE of S. pseudoporcinus is suggested to be an evolutionary remnant activity reflecting the ancestry of this human pathogen in the porcine pathogen S. porcinus. The IgG subtype specificity of these bacterial proteases indicates the special importance of the targeted IgG subtypes in counteracting infection or colonization, and opportunistic streptococci neutralize such antibodies through expression of IgdE family proteases as putative immune evasion factors. We suggest that IgdE family proteases might be valid vaccine targets against streptococci of both human and veterinary medical concern, and could also be of therapeutic as well as biotechnological use.

    Mapping Cumulative Environmental Risks: Examples from The EU NoMiracle Project

    We present examples of cumulative chemical risk mapping methods developed within the NoMiracle project. The examples illustrate the application of the concentration addition (CA) approach to pesticides at different scales, the integration in space of cumulative risks to individual organisms under the CA assumption, and two techniques to (1) integrate risks using data-driven, parametric statistical methods and (2) cluster together areas with similar occurrence of different risk factors. The examples are used to discuss some general issues, particularly the conventional nature of cumulative risk maps, and may provide some suggestions for the practice of cumulative risk mapping.
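
    Under the CA assumption, cumulative risk for a single map cell is commonly expressed as a sum of toxic units; a minimal sketch of that computation follows, with hypothetical concentrations and effect concentrations rather than NoMiracle data.

        import numpy as np

        # Concentration addition for one map cell: sum of toxic units,
        # TU = sum_i C_i / EC50_i. All values are hypothetical placeholders.
        concentrations = np.array([0.02, 0.10, 0.005])   # measured levels, ug/L
        ec50s          = np.array([1.0,  5.0,  0.1])     # per-substance EC50s, ug/L

        toxic_units = np.sum(concentrations / ec50s)
        print(toxic_units)   # > 1 would indicate the mixture exceeds the EC50 level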

    Modelling survival: exposure pattern, species sensitivity and uncertainty

    The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified the experimental data requirements that allow estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast the effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and used these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations, species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms, including humans, respond to stress.
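
    For orientation, the reduced stochastic-death variant of GUTS (GUTS-RED-SD) can be written in a few lines: scaled damage follows first-order kinetics toward the external concentration, and hazard accrues when damage exceeds a threshold. The sketch below uses hypothetical parameter values and a simple pulsed exposure; it is not the calibrated model from the study.

        import numpy as np

        # dominant rate constant, threshold, killing rate, background hazard
        # (all values hypothetical)
        kd, z, kk, hb = 0.5, 2.0, 0.3, 0.01

        def exposure(t):
            # Two 1-day pulses of a pesticide at concentration 5 (arbitrary units).
            return 5.0 if (0 <= t < 1) or (4 <= t < 5) else 0.0

        dt, T = 0.01, 10.0
        D, H = 0.0, 0.0                        # scaled damage, cumulative hazard
        survival = []
        for t in np.arange(0, T, dt):
            D += dt * kd * (exposure(t) - D)   # dD/dt = kd * (C(t) - D)
            H += dt * (kk * max(0.0, D - z) + hb)
            survival.append(np.exp(-H))        # S(t) = exp(-integral of hazard)

        print(f"predicted survival at day {T:.0f}: {survival[-1]:.3f}")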

    A subset of platinum-containing chemotherapeutic agents kills cells by inducing ribosome biogenesis stress

    Cisplatin and its platinum analogs, carboplatin and oxaliplatin, are among the most widely used cancer chemotherapeutics. Although cisplatin and carboplatin are used primarily in germ cell, breast and lung malignancies, oxaliplatin is used almost exclusively to treat colorectal and other gastrointestinal cancers. Here we utilize a unique, multi-platform genetic approach to study the mechanism of action of these clinically established platinum anti-cancer agents, as well as more recently developed cisplatin analogs. We show that oxaliplatin, unlike cisplatin and carboplatin, does not kill cells through the DNA-damage response. Rather, oxaliplatin kills cells by inducing ribosome biogenesis stress. This difference in mechanism explains the distinct clinical use of oxaliplatin relative to cisplatin, and it might enable mechanistically informed selection of platinum drugs for distinct malignancies. These data highlight the functional diversity of core components of front-line cancer therapy and the potential benefits of applying a mechanism-based rationale to the use of our current arsenal of anti-cancer drugs.

    Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems

    Computational modelling and simulation are increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. To ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), which accurately represents the biological system and documents the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that UML in isolation is not sufficient for creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define its stochastic and dynamic nature, is crucial for ensuring that conceptual models of complex dynamical biosystems developed using UML are fit for purpose and unambiguously define the functional requirements for the resultant computational model.