    One SSD model, three HCp estimators: but which is better?

    The species sensitivity distribution (SSD) model is firmly embedded in the regulatory arena as a method to derive the so-called ‘predicted no-effect concentration’ for a defined species assemblage exposed to a toxic stressor. The REACH technical guidance document (TGD) (ECHA, Guidance on Information Requirements and Chemical Safety Assessment) states that the log-normal SSD “is a pragmatic choice”, an assumption that has become commonplace in the ecotoxicological risk assessment community. How best to fit a log-normal SSD for the purposes of hazard assessment is, however, less clear. The quantity on which intermediate (‘Level 2’ within the REACH TGD) risk assessments are based is the hazardous concentration to 5% of the defined species assemblage (the HC5). A standard approach is to estimate the median of the sampling distribution of the HC5; this estimator has well-understood statistical properties by construction. However, two alternative estimators, also based on a log-normal SSD, appear frequently in the risk assessment literature. These estimators are constructed by least-squares regression of the ordered, logarithmically transformed toxicity data on the corresponding plotting positions (cf. quantile plots). Standard hypothesis testing and diagnostics for this linear regression are inappropriate without further constraints (cf. generalised least squares), because the order statistics are neither independent nor homoscedastic. We evaluate which estimator, subject to the log-normality assumption, performs best. The problem reduces to the fundamental question of how to measure an estimator's performance. This can be done by (1) the ‘discrepancy’ between the estimator and the ‘true’ value, or (2) the ‘discrepancy’ between the true potentially affected fraction of species and the intended level. Evaluation of standard criteria (variance, bias, etc.) under perspective (1) indicates that the median estimator performs better for all reasonable sample sizes. For (2), the results concur on important scales of discrepancy, although performance is highly sensitive to the chosen criterion/scale and to sample size. We conclude that the median estimator is preferable, and that controversy could be overcome by risk assessors reporting full probabilistic distributions to risk managers within a Bayesian framework, in addition to summary statistics; the median estimator is known to be a special case of this framework.
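
    As a concrete illustration of the two estimator families contrasted above, the following is a minimal sketch assuming a log10-normal SSD. The median estimator uses the noncentral-t extrapolation constant (cf. Aldenberg and Jaworska); the regression estimator fits the ordered log-toxicities to normal scores of the plotting positions. The example data and the (i - 0.5)/n plotting-position rule are illustrative assumptions, not values from the paper.

    import numpy as np
    from scipy.stats import norm, nct

    def hc5_median(log_tox):
        """Median estimator: the 50% point of the HC5 sampling
        distribution, via the noncentral-t construction."""
        n = len(log_tox)
        xbar, s = np.mean(log_tox), np.std(log_tox, ddof=1)
        z05 = norm.ppf(0.05)  # -1.6449
        k = nct.ppf(0.5, df=n - 1, nc=-z05 * np.sqrt(n)) / np.sqrt(n)
        return 10 ** (xbar - k * s)

    def hc5_least_squares(log_tox):
        """Regression estimator: OLS fit of ordered log-toxicities
        against normal scores of the plotting positions. Note the
        caveat above: order statistics are correlated, so ordinary
        regression diagnostics do not apply."""
        n = len(log_tox)
        y = np.sort(log_tox)
        x = norm.ppf((np.arange(1, n + 1) - 0.5) / n)  # plotting positions
        sigma, mu = np.polyfit(x, y, 1)  # slope ~ sd, intercept ~ mean
        return 10 ** (mu + norm.ppf(0.05) * sigma)

    log_tox = np.log10([1.2, 3.4, 0.8, 5.6, 2.1, 9.7, 0.4, 4.2])  # made-up EC50s
    print(hc5_median(log_tox), hc5_least_squares(log_tox))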

    Species non-exchangeability for ecotoxicological risk assessment

    In aquatic chemical risk assessment, there is a wealth of statistical techniques for use in lower-tier risk assessment. In particular, we focus on estimation of the hazardous concentration to x% of an ecological community (HCx), a concept based on the Species Sensitivity Distribution (SSD). The SSD is typically assumed to act as a proxy distribution modelling the inter-species variation in the biological assemblage. Over time, a number of criticisms have been made of the SSD concept; we focus on one in particular: species non-exchangeability. The concept was first discussed in a semi-probabilistic setting in an opinion of the European Food Safety Authority (EFSA) Scientific Panel on Plant Protection Products and their Residues (EFSA Journal, 2005). We build on their findings to demonstrate, statistically, that the Rainbow trout (Oncorhynchus mykiss) is not exchangeable with other species. By this we mean that, a priori, before observing the species' toxicity value, we do not believe it to be a realisation from the same distribution as those of the other species in the assemblage. In fact, the Rainbow trout is typically more sensitive than the average fish species across a wide range of substances. In addition, we briefly demonstrate how to exploit historical databases of toxicity data featuring the Rainbow trout to quantify this non-exchangeability and thereby derive new estimators for the HCx.
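
    One simple way to quantify the non-exchangeability described above is sketched below, under assumptions not taken from the paper: each substance in a historical database gives a paired observation of the trout's log-toxicity against the assemblage mean, and a one-sample test asks whether the mean offset is zero. The database layout and values are hypothetical.

    import numpy as np
    from scipy.stats import ttest_1samp

    # {substance: (trout log10 EC50, [other species' log10 EC50s])}
    database = {
        "chem_A": (-0.9, [0.2, 0.5, -0.1, 0.8]),
        "chem_B": (0.1, [0.9, 1.3, 0.6]),
        "chem_C": (-1.5, [-0.4, 0.0, -0.8, 0.3]),
    }

    # Per-substance offset of the trout from the assemblage mean.
    offsets = np.array(
        [trout - np.mean(others) for trout, others in database.values()]
    )

    # Exchangeability would imply E[offset] = 0; test one-sided against
    # the alternative "trout is more sensitive" (offset < 0).
    t, p_two_sided = ttest_1samp(offsets, popmean=0.0)
    p_one_sided = p_two_sided / 2 if t < 0 else 1 - p_two_sided / 2
    print(f"mean offset {offsets.mean():.2f} log10 units, p = {p_one_sided:.3f}")

    A fitted non-zero offset of this kind is what would then be carried into an adjusted HCx estimator.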

    Flavour symmetry breaking and meson masses

    The axial-vector Ward-Takahashi identity is used to derive mass formulae for neutral pseudoscalar mesons. Flavour symmetry breaking entails non-ideal flavour content for these states. Given, in addition, that the \eta^\prime is not a Goldstone mode, exact chiral-limit relations are developed from the identity. They connect the dressed-quark propagator to the topological susceptibility. It is confirmed that in the chiral limit the \eta^\prime mass is proportional to the matrix element which connects this state to the vacuum via the topological susceptibility. The implications of the mass formulae are illustrated using an elementary dynamical model, which includes an Ansatz for that part of the Bethe-Salpeter kernel related to the non-Abelian anomaly. In addition to the current-quark masses, the model involves two parameters, one of which is a mass-scale. It is employed in an analysis of pseudoscalar- and vector-meson bound states. While the effects of SU(N_f=2) and SU(N_f=3) flavour symmetry breaking are emphasised, the five-flavour spectra are described. Despite its simplicity, the model is elucidative and phenomenologically efficacious; e.g., it predicts \eta-\eta^\prime mixing angles of ~ -15 degrees and \pi^0-\eta mixing angles of ~ 1 degree.
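
    For context (a standard relation from the literature, not quoted in the abstract), the proportionality between the chiral-limit \eta^\prime mass and the topological susceptibility is commonly summarised by the Witten-Veneziano relation,

        m_{\eta^\prime}^2 + m_\eta^2 - 2 m_K^2 = \frac{2 N_f}{f_\pi^2} \chi_{YM},

    where \chi_{YM} is the quenched (Yang-Mills) topological susceptibility; in the chiral limit the Goldstone-boson masses vanish and the \eta^\prime mass squared is carried entirely by the anomaly term.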

    Hierarchical Emulation: a method for modeling and comparing nested simulators

    Computer simulators often contain options to include extensions, leading to different versions of a particular simulator with slightly different input spaces. We develop hierarchical emulation, a method for emulating such simulators and for learning about the differences between versions of a simulator. In an example using data from an ocean carbon cycle model, hierarchical emulators outperformed standard emulators both in predictive accuracy and in coherence with the emulation model. The hierarchical emulator performed particularly well when a comparatively small amount of training data came from the extended simulator, which makes hierarchical emulation especially advantageous when the extended simulator is costly to run compared with the simpler version.
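
    A minimal sketch of one common construction behind this idea (the paper's exact formulation may differ): emulate the simple version from plentiful cheap runs, then emulate only the version-to-version discrepancy from the few expensive runs. The toy simulators, kernels, and use of scikit-learn below are illustrative assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    f_simple = lambda x: np.sin(3 * x)           # stand-in cheap version
    f_ext = lambda x: f_simple(x) + 0.3 * x**2   # stand-in extended version

    rng = np.random.default_rng(0)
    X_cheap = rng.uniform(0, 2, (40, 1))  # many runs of the simple version
    X_dear = rng.uniform(0, 2, (6, 1))    # few runs of the extended version

    kernel = ConstantKernel() * RBF()
    gp_base = GaussianProcessRegressor(kernel=kernel)
    gp_base.fit(X_cheap, f_simple(X_cheap).ravel())

    # Emulate only the difference between the two versions.
    resid = f_ext(X_dear).ravel() - gp_base.predict(X_dear)
    gp_delta = GaussianProcessRegressor(kernel=kernel).fit(X_dear, resid)

    X_new = np.linspace(0, 2, 5).reshape(-1, 1)
    prediction = gp_base.predict(X_new) + gp_delta.predict(X_new)

    The discrepancy term needs far less training data than a from-scratch emulator of the extended simulator, which mirrors the benefit reported above.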

    On the application of loss functions for determining hazardous concentrations

    The Hazardous Concentration to x% of an assemblage of biological species (the HCx) is the environmental concentration at which a randomly selected species from the assemblage has an x% probability of having its toxicological endpoint violated. Probabilistic methods for estimating the HCx appeal to the probabilistic concept of Species Sensitivity Distributions (SSDs), a statistical proxy description of inter-species variation within the assemblage. A commonly used estimator class, derived by Aldenberg and Jaworska (2000; Ecotoxicol Environ Saf 46: 1-18), appealed to classical sampling theory but also coincides with a Bayesian estimator. Two popular estimators from the class are the 50% and 95% (one-sided) underestimates of the HCx. However, whilst the choice of x can have ecological significance, the choice of confidence level remains arbitrary. We reduce the problem to a Bayesian decision-theoretic one and show that this estimator class is equivalent to Bayes rules under a class of (a)symmetric linear loss functions, parameterised by the relative cost of over-estimation to under-estimation. A loss function in this sense measures the ‘cost’, which need not be monetary, of over- and under-estimating the HCx. Bayes rules are estimators which minimise expected loss with respect to the posterior SSD, updated with the toxicity data. This potentially opens the way for high-stakes realism to be incorporated into risk assessments. We propose an alternative loss function, the Scaled LINear EXponential (LINEX), which is non-linearly asymmetric in a precautionary way, such that over-estimation and under-estimation are punished at exponential and linear rates respectively. We use this loss function to derive an alternative class of HCx estimators.
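
    The correspondence between confidence levels and loss asymmetry can be made concrete on posterior samples; the sketch below assumes, purely for illustration, a normal posterior for the log10 HCx. Under asymmetric linear loss the Bayes rule is the posterior quantile at c_u/(c_u + c_o), so the 50% and 95% underestimates correspond to over:under cost ratios of 1:1 and 19:1, while the LINEX Bayes rule has a closed form.

    import numpy as np

    rng = np.random.default_rng(1)
    hcx_post = rng.normal(loc=-1.0, scale=0.4, size=10_000)  # log10 HCx draws

    def bayes_linear(samples, c_over, c_under):
        """Bayes rule under asymmetric linear loss: the posterior
        quantile at c_under / (c_under + c_over)."""
        return np.quantile(samples, c_under / (c_under + c_over))

    def bayes_linex(samples, a):
        """Bayes rule under LINEX loss L(d) = b(exp(a d) - a d - 1),
        d = estimate - truth, a > 0: over-estimation is punished
        exponentially. Closed form: -(1/a) log E[exp(-a theta)]."""
        return -np.log(np.mean(np.exp(-a * samples))) / a

    print(bayes_linear(hcx_post, 1, 1))   # median estimator (1:1 costs)
    print(bayes_linear(hcx_post, 19, 1))  # 95% underestimate (19:1 costs)
    print(bayes_linex(hcx_post, a=2.0))   # precautionary LINEX estimate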

    Essence of the vacuum quark condensate

    We show that the chiral-limit vacuum quark condensate is qualitatively equivalent to the pseudoscalar meson leptonic decay constant, in the sense that both are obtained as the chiral-limit values of well-defined gauge-invariant hadron-to-vacuum transition amplitudes that possess a spectral representation in terms of the current-quark mass. Thus, whereas it might sometimes be convenient to imagine otherwise, neither is essentially a constant mass-scale that fills all spacetime. This means, in particular, that the quark condensate can be understood as a property of hadrons themselves, expressed, for example, in their Bethe-Salpeter or light-front wavefunctions.
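
    For context (a standard relation not quoted in the abstract), the condensate and the decay constant are already tied together at leading order in the current-quark mass by the Gell-Mann-Oakes-Renner relation,

        f_\pi^2 m_\pi^2 = -(m_u + m_d) \langle \bar{q} q \rangle + O(m_q^2),

    which is one way to see why the two chiral-limit quantities discussed here are naturally of the same character.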