Composite indicators of country performance are regularly used in benchmarking exercises or as policy-relevant interdisciplinary information tools in fields such as the economy, health, education, or the environment. They are calculated as weighted combinations of selected indicators via underlying models of the relevant policy domains. Yet composite indicators can equally often stir controversy over the unavoidable subjectivity inherent in their construction. It is therefore important that their sensitivity to methodological assumptions be adequately tested, to ensure that their methodology is sound and not susceptible to bias or significant sensitivity arising from data treatment, data quality [1,2], aggregation, or weighting [3,4]. In this presentation we use a combination of uncertainty and sensitivity analysis, coded in the R programming language, to study how variation in country scores derives from different sources of uncertainty in the assumptions underlying a composite indicator measuring the Knowledge Economy in the European Union. We focus on four major sources of uncertainty: (i) variation in the indicators' values due to imputation of missing data [5,6], (ii) selection of weights, (iii) aggregation method (linear or geometric), and (iv) exclusion of one indicator at a time.
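The Monte Carlo logic behind such an uncertainty analysis can be sketched as follows. The original code is in R; this is an illustrative Python sketch with hypothetical data, not the authors' implementation. In each run it samples random weights, a random aggregation method (linear or geometric), and a randomly excluded indicator, then summarizes the resulting spread of country scores:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normalized indicator matrix: rows = countries, columns = indicators.
# In a real study these would be the (possibly imputed) Knowledge Economy indicators.
X = rng.uniform(0.1, 1.0, size=(5, 4))

def composite_score(X, w, method):
    """Weighted aggregation of indicators: linear sum or geometric mean."""
    if method == "linear":
        return X @ w
    # Weighted geometric aggregation: product of indicators raised to their weights.
    return np.prod(X ** w, axis=1)

n_runs = 1000
n_countries, n_indicators = X.shape
scores = np.empty((n_runs, n_countries))

for i in range(n_runs):
    # (ii) random weights from a Dirichlet distribution (non-negative, sum to 1)
    w = rng.dirichlet(np.ones(n_indicators))
    # (iii) random choice of aggregation method
    method = rng.choice(["linear", "geometric"])
    # (iv) exclude one indicator at a time, renormalizing the remaining weights
    drop = rng.integers(n_indicators)
    keep = np.delete(np.arange(n_indicators), drop)
    w_kept = w[keep] / w[keep].sum()
    scores[i] = composite_score(X[:, keep], w_kept, method)

# Uncertainty summary per country: median score and a 90% interval
med = np.median(scores, axis=0)
lo, hi = np.percentile(scores, [5, 95], axis=0)
```

The width of each country's interval (`hi - lo`) indicates how sensitive its score, and hence its ranking, is to the modelling assumptions; a variance-based sensitivity analysis would then apportion that spread among the individual uncertainty sources.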