7,853 research outputs found

    On the additivity of preference aggregation methods

    The paper reviews some axioms of additivity concerning ranking methods used for generalized tournaments with possible missing values and multiple comparisons. It is shown that one of the most natural properties, called consistency, has strong links to independence of irrelevant comparisons, an axiom judged unfavourable when players have different opponents. Therefore, some directions for weakening consistency are suggested, and several ranking methods (the score, generalized row sum, and least squares methods, as well as fair bets and two of its variants, one of them entirely new) are analysed with respect to whether they satisfy the properties discussed. It turns out that least squares and generalized row sum with an appropriate parameter choice preserve the relative ranking of two objects if the ranking problems added have the same comparison structure.
    Comment: 24 pages, 9 figures
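    As an illustration of two of the methods named above, here is a minimal sketch. It assumes a generalized tournament encoded by a skew-symmetric results matrix R and a symmetric comparison-count matrix M (a hypothetical data format, not taken from the paper): the score method is the row sum of R, and a standard least squares rating solves a Laplacian system on the comparison multigraph. Generalized row sum and fair bets are omitted, since their exact parameterizations are specified in the paper itself.

```python
import numpy as np

# Illustrative sketch (not the paper's code): score and least squares ratings
# for a small generalized tournament. R[i, j] is the aggregate result of the
# comparisons between objects i and j (skew-symmetric), M[i, j] the number of
# comparisons between them; this encoding is an assumption for the example.

R = np.array([[ 0.0,  1.0, -1.0],
              [-1.0,  0.0,  2.0],
              [ 1.0, -2.0,  0.0]])
M = np.array([[0, 2, 1],
              [2, 0, 2],
              [1, 2, 0]])

# Score method: row sums of the results matrix.
score = R.sum(axis=1)

# Least squares: solve L q = score with the Laplacian of the comparison
# multigraph; the pseudoinverse handles the one-dimensional null space
# (ratings normalised to sum to zero on a connected comparison graph).
L = np.diag(M.sum(axis=1)) - M
q = np.linalg.pinv(L) @ score

print("score ratings:        ", score)
print("least squares ratings:", q)
```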

    Ranking authors using fractional counting of citations: an axiomatic approach

    This paper analyzes, from an axiomatic point of view, a recent proposal for counting citations: the value of a citation given by a paper is inversely proportional to the total number of papers it cites. This way of fractionally counting citations was suggested as a possible way to normalize citation counts between fields of research having different citation cultures. It belongs to the “citing-side” approach to normalization. We focus on the properties characterizing this way of counting citations when it comes to ranking authors. Our analysis is conducted within a formal framework that is more complex, but also more realistic, than the one usually adopted in most axiomatic analyses of this kind.
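    The citing-side rule described above lends itself to a short sketch. The data model below (papers with reference lists and author lists) and the choice to give every co-author the full fractional weight are illustrative assumptions, not the paper's formal framework, which treats such modelling choices axiomatically.

```python
from collections import defaultdict

# Illustrative sketch of citing-side fractional counting: each citation given
# by a paper carries weight 1 / (number of papers it cites), so citations from
# reference-heavy papers count less. Data below is hypothetical.

references = {          # citing paper -> papers it cites
    "p1": ["p3", "p4"],
    "p2": ["p3"],
}
authors = {             # cited paper -> its authors
    "p3": ["alice"],
    "p4": ["alice", "bob"],
}

fractional = defaultdict(float)
for citing, cited_list in references.items():
    weight = 1.0 / len(cited_list)          # citing-side normalisation
    for cited in cited_list:
        for author in authors.get(cited, []):
            fractional[author] += weight    # full weight to each co-author (an assumption)

# Rank authors by their fractional citation counts.
ranking = sorted(fractional.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)
```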

    Optimal Belief Approximation

    In Bayesian statistics, probability distributions express beliefs. However, for many problems the beliefs cannot be computed analytically, and approximations of beliefs are needed. We seek a loss function that quantifies how "embarrassing" it is to communicate a given approximation. We reproduce and discuss an old proof showing that there is only one ranking under the requirements that (1) the best-ranked approximation is the non-approximated belief and (2) the ranking judges approximations only by their predictions for actual outcomes. The loss function obtained in the derivation is equal to the Kullback-Leibler divergence when normalized. This loss function is frequently used in the literature. However, there seems to be confusion about the correct order in which its functional arguments, the approximated and non-approximated beliefs, should be used. The correct order ensures that the recipient of a communication is deprived of only the minimal amount of information. We hope that the elementary derivation settles the apparent confusion. For example, when approximating beliefs with Gaussian distributions, the optimal approximation is given by moment matching. This is in contrast to many suggested computational schemes.
    Comment: made improvements on the proof and the language
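    A minimal numerical illustration of the argument-order point and its moment-matching consequence: with the non-approximated belief p as the first argument, minimizing KL(p || q) over a Gaussian family q amounts to matching the mean and variance of p. The Monte Carlo estimator and the exponential stand-in for p below are assumptions for illustration, not code from the paper.

```python
import numpy as np

# Samples stand in for the non-approximated belief p; we approximate it with a
# Gaussian q and compare losses. KL(p || q) equals the cross-entropy -E_p[log q]
# up to the constant entropy of p, so the cross-entropy suffices for ranking.

rng = np.random.default_rng(0)
samples = rng.exponential(scale=2.0, size=100_000)   # hypothetical "true" belief p

# Moment matching: the Gaussian that minimises KL(p || q).
mu, sigma = samples.mean(), samples.std()

def cross_entropy_to_gaussian(x, mu, sigma):
    """Monte Carlo estimate of -E_p[log q] for a Gaussian q(mu, sigma)."""
    log_q = -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)
    return -log_q.mean()

# The moment-matched Gaussian scores a lower loss than any shifted alternative.
print(cross_entropy_to_gaussian(samples, mu, sigma))
print(cross_entropy_to_gaussian(samples, mu + 1.0, sigma))
```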