
    Exposure to Nanoparticles and Hormesis

    Nanoparticles are particles with lengths that range from 1 to 100 nm. They are increasingly being manufactured and used for commercial purposes because of their novel and unique physicochemical properties. Although nanotechnology-based products are generally thought to be at a pre-competitive stage, an increasing number of products and materials are becoming commercially available. Human exposure to nanoparticles is therefore inevitable as they become more widely used and, as a result, nanotoxicology research is now gaining attention. However, there are many uncertainties as to whether the unique properties of nanoparticles also pose occupational health risks. These uncertainties arise because of gaps in knowledge about the factors that are essential for predicting health risks, such as the routes of exposure, distribution, accumulation, excretion and dose-response relationships of the nanoparticles. In particular, uncertainty remains with regard to the nature of the dose-response curve at low-level exposures below the toxic threshold. In fact, some studies in the literature that investigated the biological effects of nanoparticles observed a hormetic dose-response. However, currently available data regarding this topic are extremely limited and fragmentary. It therefore seems clear that future studies need to focus on this issue by studying the potential adverse health effects caused by low-level exposures to nanoparticles.

    Modeling Effective Dosages in Hormetic Dose-Response Studies

    BACKGROUND: Two hormetic modifications of a monotonically decreasing log-logistic dose-response function are most often used to model stimulatory effects of low dosages of a toxicant in plant biology. As only one of these empirical models has so far been properly parameterized to allow inference about quantities of interest, this study contributes the parameterized functions for the second hormetic model and compares the estimates of effective dosages between both models based on 23 hormetic data sets. On this basis, the impact on effective dosage estimation was evaluated, especially in the case of a substantially inferior fit by one of the two models. METHODOLOGY/PRINCIPAL FINDINGS: The data sets evaluated described the hormetic responses of four different test plant species exposed to 15 different chemical stressors in two different experimental dose-response test designs. Of the 23 data sets, one could not be described by either model, 14 were better described by one of the two models, and eight were described equally well by both models. In cases of misspecification by either model, the differences between effective dosage estimates (0-1768%) greatly exceeded the differences observed when both models provided a satisfactory fit (0-26%). This suggests that the conclusions drawn may diverge considerably when an improper hormetic model is used, especially regarding the effective dosages that quantify hormesis. CONCLUSIONS/SIGNIFICANCE: The study showed that hormetic dose responses can take on many shapes and that this diversity cannot be captured by a single model without risking considerable misinterpretation. However, the two empirical models considered in this paper together provide a powerful means to model, prove, and now also quantify a wide range of hormetic responses through reparameterization. Despite this, they should not be applied uncritically, but only after statistical and graphical assessment of their adequacy.
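
    The abstract does not name the two hormetic models; in this literature they are usually the Brain-Cousens and Cedergreen-Ritz-Streibig functions, so the sketch below assumes those, with synthetic data and illustrative starting values. It fits both models and compares residual sums of squares, mirroring the paper's advice to assess adequacy before trusting effective-dosage estimates.

    ```python
    # Minimal sketch, not the authors' code: fit two assumed hormetic models
    # (Brain-Cousens and Cedergreen-Ritz-Streibig) and compare their fits.
    import numpy as np
    from scipy.optimize import curve_fit

    def brain_cousens(x, b, c, d, e, f):
        """Log-logistic decay with a linear hormesis term f*x in the numerator."""
        return c + (d - c + f * x) / (1.0 + np.exp(b * (np.log(x) - np.log(e))))

    def cedergreen(x, b, c, d, e, f):
        """Log-logistic decay with an exponential hormesis term (alpha fixed at 1)."""
        return c + (d - c + f * np.exp(-1.0 / x)) / (1.0 + np.exp(b * (np.log(x) - np.log(e))))

    rng = np.random.default_rng(0)
    dose = np.array([0.01, 0.1, 0.5, 1, 2, 4, 8, 16, 32])  # no zero dose (log scale)
    resp = np.array([100, 108, 115, 112, 95, 70, 40, 18, 8]) + rng.normal(0, 3, 9)

    for name, model in [("Brain-Cousens", brain_cousens),
                        ("Cedergreen-Ritz-Streibig", cedergreen)]:
        popt, _ = curve_fit(model, dose, resp, p0=[2, 5, 100, 3, 5], maxfev=10000)
        rss = np.sum((resp - model(dose, *popt)) ** 2)
        print(f"{name}: RSS = {rss:.1f}")
    ```

    As the paper stresses, effective-dosage estimates from the two models agree closely only when both fit well, so a residual and graphical check of this kind should precede any quantification of hormesis.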

    Dispelling urban myths about default uncertainty factors in chemical risk assessment - Sufficient protection against mixture effects?

    Assessing the detrimental health effects of chemicals requires the extrapolation of experimental data in animals to human populations. This is achieved by applying a default uncertainty factor of 100 to doses not found to be associated with observable effects in laboratory animals. It is commonly assumed that the toxicokinetic and toxicodynamic sub-components of this default uncertainty factor represent worst-case scenarios and that the multiplication of those components yields conservative estimates of safe levels for humans. It is sometimes claimed that this conservatism also offers adequate protection from mixture effects. By analysing the evolution of uncertainty factors from a historical perspective, we show that the default factor and its sub-components are intended to represent adequate rather than worst-case scenarios. The intention of using assessment factors for mixture effects was abandoned thirty years ago. It is also often ignored that the conservatism (or otherwise) of uncertainty factors can only be considered in relation to a defined level of protection. A protection equivalent to an effect magnitude of 0.001-0.0001% over background incidence is generally considered acceptable. However, it is impossible to say whether this level of protection is in fact realised with the tolerable doses that are derived by employing uncertainty factors. Accordingly, it is difficult to assess whether uncertainty factors overestimate or underestimate the sensitivity differences in human populations. It is also often not appreciated that the outcome of probabilistic approaches to the multiplication of sub-factors depends on the choice of probability distributions. Therefore, the idea that default uncertainty factors are overly conservative worst-case scenarios which can both account for the lack of statistical power in animal experiments and protect against potential mixture effects is ill-founded. We contend that precautionary regulation should provide an incentive to generate better data, and we recommend adopting a pragmatic but scientifically better-founded approach to mixture risk assessment.
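
    The point about probabilistic multiplication of sub-factors is easy to demonstrate numerically. Below is a minimal sketch (illustrative numbers, not the paper's analysis): the conventional WHO/IPCS split of each factor of 10 into toxicokinetic (4.0) and toxicodynamic (2.5) components is multiplied under two different distributional assumptions with similar central values, and the upper percentile of the product differs markedly.

    ```python
    # Minimal sketch: the combined uncertainty factor obtained by multiplying
    # sub-factors depends on the assumed probability distributions.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Lognormal sub-factors centred on the conventional 4.0 (TK) x 2.5 (TD) split.
    tk_logn = rng.lognormal(mean=np.log(4.0), sigma=0.5, size=n)
    td_logn = rng.lognormal(mean=np.log(2.5), sigma=0.5, size=n)
    # Uniform sub-factors with comparable central values but a flat shape.
    tk_unif = rng.uniform(1.0, 7.0, size=n)
    td_unif = rng.uniform(1.0, 4.0, size=n)

    for label, product in [("lognormal", tk_logn * td_logn),
                           ("uniform", tk_unif * td_unif)]:
        print(f"{label}: median = {np.median(product):.1f}, "
              f"95th percentile = {np.percentile(product, 95):.1f}")
    # The tail of the product, and hence any claim about conservatism, changes
    # with the distributional choice even though the central values are similar.
    ```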

    Lack of correlation of stem cell markers in breast cancer stem cells

    BACKGROUND: Various markers are used to identify the unique sub-population of breast cancer cells with stem cell properties. Whether these markers are expressed in all breast cancers, identify the same population of cells, or equate to therapeutic response is controversial. METHODS: We investigated the expression of multiple cancer stem cell markers in human breast cancer samples and cell lines in vitro and in vivo, comparing across and within samples and relating expression to growth and therapeutic response to doxorubicin, docetaxel and radiotherapy. RESULTS: CD24, CD44, ALDH and SOX2 expression, the ability to form mammospheres and side-population cells are variably present in human cancers and cell lines. Each marker identifies a unique rather than a common population of cancer cells. In vivo, cells expressing these markers are not specifically localized to the presumptive stem cell niche at the tumour/stroma interface. Repeated therapy does not consistently enrich cells expressing these markers, although ER-negative cells accumulate. CONCLUSIONS: Commonly employed methods identify different cancer cell sub-populations with no consistent therapeutic implications, rather than a single population of cells. Relating breast cancer stem cells to clinical parameters will require the identification of specific markers or panels for individual cancers.

    Detection of the pairwise kinematic Sunyaev-Zel'dovich effect with BOSS DR11 and the Atacama Cosmology Telescope

    We present a new measurement of the kinematic Sunyaev-Zel'dovich effect using data from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS). Using 600 square degrees of overlapping sky area, we evaluate the mean pairwise baryon momentum associated with the positions of 50,000 bright galaxies in the BOSS DR11 Large Scale Structure catalog. A non-zero signal arises from the large-scale motions of halos containing the sample galaxies. The data fit an analytical signal model well, with the optical depth to microwave photon scattering as a free parameter determining the overall signal amplitude. We estimate the covariance matrix of the mean pairwise momentum as a function of galaxy separation using microwave sky simulations, jackknife evaluation, and bootstrap estimates. The most conservative simulation-based errors give signal-to-noise estimates between 3.6 and 4.1 for varying galaxy luminosity cuts. We discuss how the other error determinations can lead to higher signal-to-noise values, and consider the impact of several possible systematic errors. Estimates of the optical depth from the average thermal Sunyaev-Zel'dovich signal at the sample galaxy positions are broadly consistent with those obtained from the mean pairwise momentum signal.
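
    As a minimal sketch of one of the three error estimates named above (an assumed form, not the ACT/BOSS pipeline), a leave-one-region-out jackknife covariance for the binned mean pairwise momentum can be computed as follows:

    ```python
    # Minimal sketch: jackknife covariance of the binned mean pairwise momentum.
    import numpy as np

    def jackknife_covariance(estimates):
        """estimates: (n_regions, n_bins) array; row k is the binned pairwise
        momentum recomputed with sky region k removed."""
        n = estimates.shape[0]
        delta = estimates - estimates.mean(axis=0)
        return (n - 1) / n * delta.T @ delta  # leave-one-out prefactor

    # Toy usage: 50 leave-one-out estimates over 8 separation bins.
    rng = np.random.default_rng(1)
    samples = rng.normal(size=(50, 8))
    cov = jackknife_covariance(samples)
    mean = samples.mean(axis=0)
    snr = np.sqrt(mean @ np.linalg.solve(cov, mean))  # chi-square style S/N
    print(f"signal-to-noise estimate: {snr:.2f}")
    ```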

    Cigarette smoke-induced pulmonary emphysema in scid-mice. Is the acquired immune system required?

    BACKGROUND: Chronic obstructive pulmonary disease is associated with a chronic inflammatory response of the host to chronic exposure to inhaled toxic gases and particles. Although inflammatory cells of both the innate and adaptive immune system infiltrate the lungs in pulmonary emphysema and form lymphoid follicles around the small airways, the exact role of the acquired immune system in the pathogenesis of emphysema is not known. METHODS: In this study, wild type Balb/c mice and immunodeficient scid mice – which lack functional B- and T-cells – were exposed to mainstream cigarette smoke (CS) for 5 weeks or 6 months. RESULTS: Subacute CS-exposure for 5 weeks significantly increased innate inflammatory cells (neutrophils, macrophages and dendritic cells) in the bronchoalveolar lavage (BAL) fluid of wild type mice and scid mice, which correlated with the CS-induced upregulation of the chemokines Monocyte Chemotactic Protein-1, Macrophage Inflammatory Protein-3α and KC (the mouse homologue of Interleukin-8). Chronic CS-exposure for 6 months significantly increased the number of neutrophils, macrophages, dendritic cells, and CD4+ and CD8+ T-lymphocytes in BAL fluid and lungs of wild type mice compared to air-exposed littermates, and augmented the size and number of peribronchial lymphoid follicles. In contrast, neither B-lymphocytes, nor T-lymphocytes, nor lymphoid follicles could be discerned in the lungs of air- or CS-exposed scid mice. Importantly, chronic CS-exposure induced pulmonary emphysema in both wild type animals and scid mice, as evidenced by a significant increase in the mean linear intercept and the destructive index of CS-exposed versus air-exposed animals. The CS-induced emphysema was associated with increased mRNA expression of matrix metalloproteinase-12 in the lungs and increased protein levels of Tumor Necrosis Factor-α in the BAL fluid of CS-exposed Balb/c and scid mice compared to air-exposed littermates. CONCLUSION: This study suggests that the adaptive immune system is not required per se to develop pulmonary emphysema in response to chronic CS-exposure, since emphysema can be induced in scid mice, which lack lymphoid follicles as well as functional B- and T-cells.
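
    For readers unfamiliar with the morphometry, the mean linear intercept (Lm) quoted above is conventionally the length of test lines overlaid on a lung section divided by the number of alveolar-wall intercepts, averaged over microscope fields; larger Lm means larger airspaces, i.e. more emphysema. A toy sketch of that arithmetic (the standard definition, not the authors' exact protocol; counts are made up):

    ```python
    # Minimal sketch: mean linear intercept Lm = test-line length / intercepts,
    # averaged over fields. Illustrative counts only.
    def mean_linear_intercept(line_length_um, intercepts_per_field):
        per_field = [line_length_um / n for n in intercepts_per_field]
        return sum(per_field) / len(per_field)

    print(mean_linear_intercept(1000.0, [25, 27, 24]))  # air-exposed: ~40 um
    print(mean_linear_intercept(1000.0, [18, 17, 19]))  # CS-exposed: ~56 um (emphysema)
    ```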

    Linking like with like: optimising connectivity between environmentally-similar habitats

    Habitat fragmentation is one of the greatest threats to biodiversity. To minimise the effect of fragmentation on biodiversity, connectivity between otherwise isolated habitats should be promoted. However, the identification of linkages favouring connectivity is not trivial. Firstly, they compete with other land uses, so they need to be cost-efficient. Secondly, linkages for one species might be barriers for others, so they should effectively account for distinct mobility requirements. Thirdly, detailed information on the auto-ecology of most species is lacking, so linkages need to be defined based on surrogates. To address these challenges we develop a framework that (a) identifies environmentally-similar habitats; (b) identifies environmental barriers (i.e., regions with a very distinct environment from the areas to be linked); and (c) determines cost-efficient linkages between environmentally-similar habitats, free from environmental barriers. The assumption is that species with similar ecological requirements occupy the same environments, so environmental similarity provides a rationale for the identification of the areas that need to be linked. A variant of the classical minimum Steiner tree problem in graphs is used to address (c). We present a heuristic for this problem that is capable of handling large datasets. To illustrate the framework, we identify linkages between environmentally-similar protected areas in the Iberian Peninsula. The Natura 2000 network is used as a positive 'attractor' of links, while the human footprint is used as a 'repellent' of links. We compare the outcomes of our approach with cost-efficient networks linking protected areas that disregard the effect of environmental barriers. As expected, the latter achieved a smaller area covered with linkages, but with barriers that can significantly reduce the permeability of the landscape for the dispersal of some species.
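
    The authors' heuristic itself is not given in the abstract. As a rough illustration of the underlying problem, the standard Steiner-tree approximation in networkx can link habitat patches over a cost surface in which environmental barriers are penalized; the grid landscape and weighting scheme below are assumptions for the sake of the example.

    ```python
    # Minimal sketch: cost-efficient linkages as an approximate Steiner tree,
    # with edge weights encoding land cost plus an environmental-barrier penalty.
    import networkx as nx
    from networkx.algorithms.approximation import steiner_tree

    G = nx.grid_2d_graph(20, 20)  # raster landscape as a graph
    # Toy barrier band with a gap at the top, so detours are possible.
    penalty = {n: 10.0 if 8 <= n[0] <= 11 and n[1] < 15 else 0.0 for n in G}
    for u, v in G.edges:
        G[u][v]["weight"] = 1.0 + 0.5 * (penalty[u] + penalty[v])

    habitats = [(0, 0), (19, 0), (10, 19)]  # environmentally-similar patches to link
    linkage = steiner_tree(G, habitats, weight="weight")
    print(f"linkage uses {linkage.number_of_edges()} corridor cells")
    ```

    Raising the barrier penalty makes the tree detour around the barrier band, trading corridor length for landscape permeability, which is the trade-off the abstract describes.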

    High-throughput screening of metal-porphyrin-like graphenes for selective capture of carbon dioxide

    Nanostructured materials, such as zeolites and metal-organic frameworks, have been considered for capturing CO2. However, their application has been limited largely because they exhibit poor selectivity for flue gases and low capture capacity under low pressures. We perform a high-throughput screening for selective CO2 capture from flue gases using first-principles thermodynamics. We find that elements with empty d orbitals selectively attract CO2 from gaseous mixtures under low CO2 pressures (~10^-3 bar) at 300 K and release it at ~450 K. CO2 binding to these elements involves hybridization of the metal d orbitals with the CO2 π orbitals, and such CO2-transition-metal complexes have been observed in experiments. This result allows us to perform high-throughput screening to discover novel promising CO2 capture materials with empty d orbitals (e.g., Sc- or V-porphyrin-like graphene) and predict their capture performance under various conditions. Moreover, these findings provide physical insights into selective CO2 capture and open a new path to explore CO2 capture materials.
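
    The screening criterion can be sketched as follows (a minimal illustration with made-up binding energies and chemical-potential values; the paper's DFT energies and thermodynamic data are not reproduced here): a site captures CO2 when the adsorption free energy is negative at flue-gas conditions (300 K, ~10^-3 bar CO2) and regenerates when it turns positive at elevated temperature.

    ```python
    # Minimal sketch: first-principles-thermodynamics style screen. A site binds
    # CO2 when dG(T, p) = dE_ads - dmu(T) - kB*T*ln(p/p0) < 0, and releases it
    # when dG > 0 at the regeneration temperature. All numbers are illustrative.
    import numpy as np

    KB = 8.617e-5  # Boltzmann constant, eV/K

    def delta_g(dE_ads, T, p_bar, dmu_T):
        # dE_ads: binding energy from DFT (< 0 means bound at 0 K)
        # dmu_T: temperature-dependent shift of the CO2 chemical potential (< 0)
        return dE_ads - dmu_T - KB * T * np.log(p_bar / 1.0)

    candidates = {"Sc": -0.95, "V": -0.90, "Fe": -0.35, "Ni": -0.20}  # hypothetical
    for metal, dE in candidates.items():
        captures = delta_g(dE, 300.0, 1e-3, dmu_T=-0.60) < 0  # flue-gas conditions
        releases = delta_g(dE, 450.0, 1e-3, dmu_T=-0.95) > 0  # regeneration
        print(f"{metal}-porphyrin-like site: captures={captures}, releases={releases}")
    ```

    With these illustrative numbers, only the strongly binding (empty d orbital) sites satisfy both conditions, reproducing the capture/release window the abstract describes.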

    Characteristic Evolution and Matching

    I review the development of numerical evolution codes for general relativity based upon the characteristic initial value problem. Progress in characteristic evolution is traced from the early stage of 1D feasibility studies to 2D axisymmetric codes that accurately simulate the oscillations and gravitational collapse of relativistic stars and to current 3D codes that provide pieces of a binary black hole spacetime. Cauchy codes have now been successful at simulating all aspects of the binary black hole problem inside an artificially constructed outer boundary. A prime application of characteristic evolution is to extend such simulations to null infinity where the waveform from the binary inspiral and merger can be unambiguously computed. This has now been accomplished by Cauchy-characteristic extraction, where data for the characteristic evolution is supplied by Cauchy data on an extraction worldtube inside the artificial outer boundary. The ultimate application of characteristic evolution is to eliminate the role of this outer boundary by constructing a global solution via Cauchy-characteristic matching. Progress in this direction is discussed.