Improving the output quality of official statistics based on machine learning algorithms
National statistical institutes are currently investigating how to improve the
output quality of official statistics based on machine learning algorithms. A
key obstacle is concept drift, i.e., when the joint distribution of independent
variables and a dependent (categorical) variable changes over time. Under
concept drift, a statistical model requires regular updating to prevent it from
becoming biased. However, updating a model requires additional data, which are
not always available. The literature offers a variety of bias correction
methods as a promising solution. In this paper, we compare two popular
correction methods: the misclassification estimator and the calibration
estimator. For prior probability shift (a specific type of concept drift), we
investigate the two correction methods theoretically as well as experimentally.
Our theoretical results are expressions for the bias and variance of both
methods. As an experimental result, we present a decision boundary (as a function
of (a) model accuracy, (b) class distribution and (c) test set size) for the
relative performance of the two methods. Close inspection of the results
provides deep insight into the effect of prior probability shift on output
quality, leading to practical recommendations on the use of machine learning
algorithms in official statistics.

Comment: 19 pages, 3 figures, submitted to the Journal of Official Statistics
on 14 December 202
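
The two correction methods named in the abstract are standard in the quantification literature. Below is a minimal numpy sketch of how they are commonly computed from a test-set confusion matrix; the matrix orientation, function names, and the small two-class example are illustrative assumptions and need not match the exact formulation used in the paper.

```python
import numpy as np

def misclassification_estimator(confusion, predicted_counts):
    """Misclassification (matrix inversion) estimator: solve P * alpha = q,
    where P[i, j] = P(predicted = i | true = j) is estimated from the test set
    and q holds the observed predicted-class shares."""
    # confusion[i, j] = number of test items with predicted class i and true class j
    p_pred_given_true = confusion / confusion.sum(axis=0, keepdims=True)  # columns sum to 1
    q = predicted_counts / predicted_counts.sum()
    return np.linalg.solve(p_pred_given_true, q)

def calibration_estimator(confusion, predicted_counts):
    """Calibration estimator: weight calibration probabilities
    P(true = j | predicted = i), estimated from the test set,
    by the observed predicted-class shares."""
    p_true_given_pred = confusion / confusion.sum(axis=1, keepdims=True)  # rows sum to 1
    q = predicted_counts / predicted_counts.sum()
    return p_true_given_pred.T @ q

# Hypothetical two-class example (rows = predicted class, columns = true class)
confusion = np.array([[80., 10.],
                      [20., 90.]])
predicted = np.array([600., 400.])  # predicted class counts in the target population
print(misclassification_estimator(confusion, predicted))  # ~ [0.714, 0.286]
print(calibration_estimator(confusion, predicted))        # ~ [0.606, 0.394]
```

Under prior probability shift the two estimators generally disagree, as in the toy example above; which one has lower error depends on factors such as model accuracy, class distribution, and test set size, which is what the decision boundary in the paper characterises.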