
    An extension to GUM methodology: degrees-of-freedom calculations for correlated multidimensional estimates

    The Guide to the Expression of Uncertainty in Measurement advocates the use of an 'effective number of degrees of freedom' for the calculation of an interval of measurement uncertainty. However, it does not describe how this number is to be calculated when (i) the measurand is a vector quantity or (ii) the errors in the estimates of the quantities defining the measurand (the 'input quantities') are not incurred independently. An appropriate analysis for a vector-valued measurand has been described (Metrologia 39 (2002) 361-9), and a method for a one-dimensional measurand with dependent errors has also been given (Metrologia 44 (2007) 340-9). This paper builds on those analyses to present a method for the situation where the problem is multidimensional and involves correlated errors. The result is an explicit general procedure that reduces to simpler procedures where appropriate. The example studied is from the field of radio-frequency metrology, where measured quantities are often complex-valued and can be regarded as vectors of two elements.
    Comment: 30 pages with 2 embedded figures.
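    A minimal sketch of the baseline calculation that the paper generalizes: the scalar Welch-Satterthwaite formula for the effective number of degrees of freedom with independent input errors. This is standard GUM material, not the paper's multidimensional correlated procedure, and the function and variable names below are illustrative assumptions.

```python
import numpy as np

def welch_satterthwaite(sensitivities, std_uncertainties, dofs):
    """Combined standard uncertainty and effective degrees of freedom
    for a scalar measurand with independent input errors (GUM baseline)."""
    c = np.asarray(sensitivities, dtype=float)    # sensitivity coefficients dy/dx_i
    u = np.asarray(std_uncertainties, dtype=float)
    nu = np.asarray(dofs, dtype=float)
    contrib = (c * u) ** 2                        # per-input variance contributions
    u_c2 = contrib.sum()                          # combined variance of the estimate
    nu_eff = u_c2 ** 2 / np.sum(contrib ** 2 / nu)
    return np.sqrt(u_c2), nu_eff

# Example: two input quantities with sensitivities 1.0 and 0.5
u_c, nu_eff = welch_satterthwaite([1.0, 0.5], [0.02, 0.03], [9, 14])
print(u_c, nu_eff)
```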

    Reduced perplexity: Uncertainty measures without entropy

    Conference paper presented at Recent Advances in Info-Metrics, Washington, DC, 2014. Under review for a book chapter in "Recent innovations in info-metrics: a cross-disciplinary perspective on information and information processing" by Oxford University Press.
    A simple, intuitive approach to the assessment of probabilistic inferences is introduced. The Shannon information metrics are translated to the probability domain. The translation shows that the negative logarithmic score and the geometric mean are equivalent measures of the accuracy of a probabilistic inference. Thus there is both a quantitative reduction in perplexity, as good inference algorithms reduce the uncertainty, and a qualitative reduction, since summarizing the original set of inferences by their average, the geometric mean, increases clarity. Further insight is provided by showing that the Rényi and Tsallis entropy functions, translated to the probability domain, are both the weighted generalized mean of the distribution. The generalized mean of probabilistic inferences forms a Risk Profile of the performance: the arithmetic mean is used to measure the decisiveness, while the -2/3 mean is used to measure the robustness.
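    As an illustration of the translation to the probability domain, the sketch below (an assumption of this summary, not code from the paper) checks numerically that the exponential of the negative mean logarithmic score equals the geometric mean of the probabilities reported for the observed outcomes, and evaluates the generalized mean at the exponents mentioned above (1 for decisiveness, -2/3 for robustness).

```python
import numpy as np

def generalized_mean(p, r, w=None):
    """Weighted power mean of reported probabilities p with exponent r."""
    p = np.asarray(p, dtype=float)
    w = np.full(p.shape, 1.0 / p.size) if w is None else np.asarray(w, dtype=float)
    if r == 0:                                    # the r -> 0 limit is the geometric mean
        return float(np.exp(np.sum(w * np.log(p))))
    return float(np.sum(w * p ** r) ** (1.0 / r))

# Probabilities an inference algorithm assigned to the outcomes that occurred
p_correct = np.array([0.9, 0.6, 0.75, 0.3])

neg_log_score = -np.mean(np.log(p_correct))       # average logarithmic loss
geo_mean = generalized_mean(p_correct, 0)         # geometric mean of the same values
assert np.isclose(np.exp(-neg_log_score), geo_mean)

decisiveness = generalized_mean(p_correct, 1.0)   # arithmetic mean
robustness = generalized_mean(p_correct, -2/3)    # -2/3 power mean
print(geo_mean, decisiveness, robustness)
```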

    The κ-μ shadowed fading model with arbitrary intercluster correlation

    In this paper, we propose a generalization of the well-known κ-μ shadowed fading model. Based on the clustering of multipath waves as the baseline model, the novelty of this new distribution is the addition of an arbitrary correlation for the scattered components within each cluster. It also inherits the random fluctuation of the dominant component, which is assumed to be the same for all clusters. Thus, it unifies a wide variety of models: Rayleigh, Rician, Rician shadowed, Nakagami-m, κ-μ and κ-μ shadowed, as well as multivariate Rayleigh, Rician and Rician shadowed. The main statistics of the newly proposed model, i.e. the moment generating function, probability density function and cumulative distribution function, are given in terms of exponentials and powers, and some numerical results are provided in order to analyze the impact of the arbitrary intercluster correlation.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
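    The closed-form statistics with arbitrary intercluster correlation are the paper's contribution and are not reproduced here; the following Monte Carlo sketch only illustrates the baseline κ-μ shadowed construction (independent clusters, a single Nakagami-m fluctuation shared by all dominant components), with assumed parameter and function names.

```python
import numpy as np

rng = np.random.default_rng(0)

def kappa_mu_shadowed_power(kappa, mu, m, n_samples, mean_power=1.0):
    """Monte Carlo samples of received power for the baseline kappa-mu
    shadowed model: mu clusters of scattered Gaussian components plus
    dominant components scaled by one common Gamma-distributed factor."""
    sigma2 = mean_power / (2 * mu * (1 + kappa))   # scattered power per dimension
    d2 = mean_power * kappa / (1 + kappa)          # total power of the dominant components
    p = np.sqrt(d2 / (2 * mu))                     # dominant amplitude per dimension
    xi = np.sqrt(rng.gamma(shape=m, scale=1.0 / m, size=(n_samples, 1)))  # shared shadowing
    x = rng.normal(0.0, np.sqrt(sigma2), (n_samples, mu)) + xi * p
    y = rng.normal(0.0, np.sqrt(sigma2), (n_samples, mu)) + xi * p
    return np.sum(x ** 2 + y ** 2, axis=1)

w = kappa_mu_shadowed_power(kappa=2.0, mu=3, m=4.0, n_samples=100_000)
print(w.mean())   # should be close to mean_power = 1.0
```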

    AIC, Cp and estimators of loss for elliptically symmetric distributions

    In this article, we develop a modern perspective on Akaike's Information Criterion and Mallows' Cp for model selection. Despite the differences in their respective motivations, they are equivalent in the special case of Gaussian linear regression. In this case they are also equivalent to a third criterion, an unbiased estimator of the quadratic prediction loss, derived from loss estimation theory. Our first contribution is to provide an explicit link between loss estimation and model selection through a new oracle inequality. We then show that the form of the unbiased estimator of the quadratic prediction loss under a Gaussian assumption still holds under a more general distributional assumption, the family of spherically symmetric distributions. One of the features of our results is that our criterion does not rely on the specificity of the distribution, but only on its spherical symmetry. This family of laws also allows for some dependence between the observations, a case that is not often studied.
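    To make the Gaussian-case equivalence concrete, the sketch below (illustrative, not taken from the paper) fits candidate predictor subsets of a toy Gaussian linear regression and computes both the Gaussian AIC and Mallows' Cp, with the noise variance for Cp estimated from the full model, so their rankings of the candidates can be compared. All data and names are assumptions of this example.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Toy Gaussian linear regression: only the first two of five predictors matter
n, p = 100, 5
X = rng.normal(size=(n, p))
y = X @ np.array([2.0, -1.0, 0.0, 0.0, 0.0]) + rng.normal(size=n)

def rss(cols):
    """Residual sum of squares and parameter count of the least-squares fit on a subset."""
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta_hat, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta_hat
    return resid @ resid, Xs.shape[1]

rss_full, k_full = rss(tuple(range(p)))
sigma2_full = rss_full / (n - k_full)            # noise variance estimated from the full model

for size in (1, 2, 3):
    for cols in combinations(range(p), size):
        r, k = rss(cols)
        aic = n * np.log(r / n) + 2 * k          # Gaussian AIC up to an additive constant
        cp = r / sigma2_full - n + 2 * k         # Mallows' Cp
        print(cols, round(aic, 2), round(cp, 2))
```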