Learning is difficult to anticipate when it happens instantaneously, e.g. in the context of innovations [2]. However, even when learning is anticipated to happen continuously, it is difficult to grasp, e.g. when it occurs outside well-defined lab conditions because adequate monitoring has not been put in place.
Our study is retrospective. It focuses on the emissions of greenhouse gases (GHGs) that countries (Parties) have reported under the Kyoto Protocol (KP) to the United Nations Framework Convention on Climate Change (UNFCCC). Discussions range widely on (i) whether the KP is considered a failure [6] or a success [5]; and (ii) whether international climate policy should transition from a centralized model of governance to a 'hybrid' decentralized approach that combines country-level mitigation pledges with common principles for accounting and monitoring [1].
Emissions of GHGs - in the following we refer to CO2 emissions from burning fossil fuels at the country level, particularly in the case of Austria - provide a perfect means to study learning in a globally relevant context. We are not aware of a comparable data treasure of global relevance. Our mode of grasping learning is novel; that is, it may have been referred to in general terms but, to the best of our knowledge, has not been quantified so far. (That is, we consider the KP potentially a success story and advocate for the hybrid decentralized approach.)
Learning requires 'measuring' differences or deviations. Here we follow Marland et al. [3] who discuss this issue in the context of emissions accounting:
'Many of the countries and organizations that make estimates of CO2 emissions provide annual updates in which they add another year of data to the time series and revise the estimates for earlier years. Revisions may reflect revised or more complete energy data and ... more complete and detailed understanding of the emissions processes and emissions coefficients. In short, we expect revisions to reflect learning and a convergence toward more complete and accurate estimates.'
The UNFCCC requires exactly this to be done. Each year, UNFCCC signatory countries are obliged to provide an annual inventory of emissions (and removals) of specified GHGs from five sectors (energy; industrial processes and product use; agriculture; land use, land-use change and forestry; and waste) and to revisit the emissions (and removals) for all previous years, back to the country-specified base years (or periods). These data are made available by means of a database [4].
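The reporting scheme just described implies a particular data structure: each annual submission contributes a full time series back to the base year, so the estimates stack up into a matrix indexed by (reporting year, inventory year). The following minimal Python sketch illustrates that structure only; the years, the two-year reporting lag, and the synthetic numbers are our illustrative assumptions, not the UNFCCC data.

```python
# Sketch of how annual resubmissions form a revision matrix.
# All numbers below are synthetic and purely illustrative.
import numpy as np

base_year, first_report, last_report = 1990, 2003, 2006
inventory_years = np.arange(base_year, last_report)      # years being estimated
report_years = np.arange(first_report, last_report + 1)  # submission vintages

# revisions[i, j] = estimate of inventory_years[j] made in report_years[i];
# NaN where that inventory year is not yet covered by the submission.
revisions = np.full((report_years.size, inventory_years.size), np.nan)
rng = np.random.default_rng(1)
for i, ry in enumerate(report_years):
    # assume a submission covers inventory years up to two years before it
    n = (inventory_years <= ry - 2).sum()
    revisions[i, :n] = (60.0 + 0.4 * (inventory_years[:n] - base_year)
                        + rng.normal(0.0, 0.5, n))

# Reading down a column tracks how the estimate for one inventory year
# is revised from submission to submission.
print(revisions[:, 0])   # successive estimates of the base year 1990
```

Reading across a row gives one vintage of the full time series; reading down a column gives the revision history of a single year, which is the object our analysis of learning operates on.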
The time series of revised emission estimates reflect learning, but they are 'contaminated' by (i) structural change (e.g., when a coal-fired power plant is substituted by a gas-fired power plant); (ii) changes in consumption; and, rare but possible, (iii) methodological changes in surveying emission-related activities. De-trending the time series of revised emission estimates allows this contamination to be isolated by country, for which we provide three approaches: (I) a parametric approach employing a polynomial trend; (II) a non-parametric approach employing smoothing splines; and (III) an approach in which the most recent estimate is used as the trend. That is, after de-trending we are left, for each year, with a set of revisions that reflect 'pure' (uncontaminated) learning, which is expected to be independent of the year under consideration (i.e., identical from year to year).
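The three de-trending approaches can be sketched as follows in Python (NumPy/SciPy). This is a minimal illustration under our own assumptions - the series is synthetic, and the polynomial degree and spline smoothness are hypothetical choices, not the study's settings.

```python
# Illustrative sketch (not the study's code) of three de-trending approaches
# applied to a synthetic country-level emission series, in Mt CO2.
import numpy as np
from scipy.interpolate import UnivariateSpline

years = np.arange(1990, 2011)                         # inventory years
rng = np.random.default_rng(0)
trend = 60.0 + 0.4 * (years - 1990)                   # hypothetical trend
estimates = trend + rng.normal(0.0, 0.5, years.size)  # one revision vintage

# (I) Parametric: fit a low-order polynomial and subtract it.
coeffs = np.polyfit(years, estimates, deg=2)
resid_poly = estimates - np.polyval(coeffs, years)

# (II) Non-parametric: smoothing spline; s controls the smoothness.
spline = UnivariateSpline(years, estimates, s=0.25 * years.size)
resid_spline = estimates - spline(years)

# (III) Most recent estimate as trend: deviations of an earlier revision
# vintage from the latest one (here `estimates` plays the latest vintage).
earlier = estimates + rng.normal(0.0, 0.3, years.size)  # hypothetical earlier vintage
resid_latest = earlier - estimates

for name, r in [("poly", resid_poly), ("spline", resid_spline),
                ("latest", resid_latest)]:
    print(f"{name}: mean residual {r.mean():+.3f}, sd {r.std():.3f}")
```

In each case the residuals, rather than the levels, are what carry the learning signal: they strip out structural change and consumption trends and leave the year-to-year revisions to be compared across reporting rounds.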
However, we are confronted with two non-negligible problems (P): (P.1) the problem of small numbers - the remaining differences in emissions are small (before and after de-trending); and (P.2) the problem of non-monotonic learning - our knowledge of emission-generating activities and emission factors may not become more accurate from revision to revision.