
    The International Volatility of Growth

    Growth in the world economy is not shared equally among all countries, with some growing faster, some slower, and some not at all. The cross-country distribution of growth is a useful tool for analysing the inequality of growth. The appropriately weighted first moment of this distribution is world growth, while the second moment measures cross-country volatility. This paper introduces a methodology for examining the cross-country distribution of growth and the components of its volatility. Using data from the Penn World Table, we find that countries within geographic regions are seeing a harmonisation of growth, but that between regions there is increasing dispersion.
    Keywords: Growth, Cross-Country Distribution, Volatility
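
    As a rough illustration of the two moments the abstract refers to, the sketch below computes a weighted world growth rate and the corresponding cross-country volatility. The country growth rates and weights are made-up placeholders, not figures from the Penn World Table, and the weighting scheme is only an assumption.

```python
import numpy as np

# Hypothetical growth rates g_i for a handful of countries and weights w_i
# (e.g. shares of world GDP). All numbers are illustrative only.
growth = np.array([0.042, 0.018, -0.005, 0.031, 0.060])  # annual growth rates
weights = np.array([0.30, 0.25, 0.15, 0.20, 0.10])        # assumed to sum to 1

# Weighted first moment of the cross-country distribution: world growth.
world_growth = np.sum(weights * growth)

# Weighted second central moment: cross-country volatility of growth.
cross_country_variance = np.sum(weights * (growth - world_growth) ** 2)
cross_country_volatility = np.sqrt(cross_country_variance)

print(f"world growth: {world_growth:.4f}")
print(f"cross-country volatility: {cross_country_volatility:.4f}")
```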

    Probabilistic Perspectives on Collecting Human Uncertainty in Predictive Data Mining

    In many areas of data mining, data is collected from human beings. In this contribution, we ask how people actually respond to ordinal scales. The main problem observed is that users tend to be volatile in their choices, i.e. complex cognitions do not always lead to the same decision, but to distributions of possible decision outputs. This human uncertainty can have a considerable impact on common data mining approaches, and so the question of how to effectively model this so-called human uncertainty arises naturally. Our contribution introduces two different approaches for modelling the human uncertainty of user responses. In doing so, we develop techniques to measure this uncertainty at the level of user inputs as well as at the level of user cognition. Supported by comprehensive user experiments and large-scale simulations, we systematically compare both methodologies along with their implications for personalisation approaches. Our findings demonstrate that a significant share of users submit something quite different (action) from what they actually have in mind (cognition). Moreover, we show that statistically sound evidence for algorithm assessment becomes hard to obtain, especially when explicit rankings are to be built.
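
    To make the idea of response distributions concrete, the sketch below treats repeated ratings of the same item on a five-point ordinal scale as draws from a categorical distribution and summarises their spread. The data and the summary measures (mode, variance, entropy) are illustrative assumptions, not the models proposed in the paper.

```python
import numpy as np

# Hypothetical repeated answers of one user to the same item on a 1..5 scale.
responses = np.array([4, 3, 4, 5, 4, 3, 4])
scale = np.arange(1, 6)

# Empirical response distribution for this user/item pair.
counts = np.array([(responses == s).sum() for s in scale])
probs = counts / counts.sum()

# Point summary a system would normally record (the observed "action") ...
modal_response = scale[np.argmax(probs)]
# ... versus spread measures that capture the user's uncertainty.
mean_response = np.sum(probs * scale)
variance = np.sum(probs * (scale - mean_response) ** 2)
entropy = -np.sum(probs[probs > 0] * np.log2(probs[probs > 0]))

print(f"response distribution: {dict(zip(scale, probs.round(2)))}")
print(f"mode: {modal_response}, variance: {variance:.2f}, entropy: {entropy:.2f} bits")
```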