Data valuation is a powerful framework for providing statistical insights
into which data are beneficial or detrimental to model training. Many
Shapley-based data valuation methods have shown promising results in various
downstream tasks; however, they are well known to be computationally
challenging because they require training a large number of models. As a result,
applying them to large datasets has been considered infeasible. To address this
issue, we propose Data-OOB, a new data valuation method for a bagging model
that utilizes the out-of-bag estimate. The proposed method is computationally
efficient and can scale to millions of data points by reusing trained weak learners.
Specifically, Data-OOB takes less than 2.25 hours on a single CPU processor
when there are 10^6 samples to evaluate and the input dimension is 100.
Furthermore, Data-OOB has a solid theoretical interpretation: when two
different points are compared, it identifies the same important data point as
the infinitesimal jackknife influence function. We conduct comprehensive
experiments using 12 classification datasets, each with thousands of samples.
We demonstrate that the proposed method significantly
outperforms existing state-of-the-art data valuation methods in identifying
mislabeled data and finding a set of helpful (or harmful) data points,
highlighting the potential for applying data values in real-world applications.
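
For concreteness, below is a minimal sketch of the out-of-bag valuation idea described above: each data point is scored by the average correctness of the weak learners whose bootstrap samples did not include it. The function name data_oob_values, the choice of decision trees as weak learners, and all defaults are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of out-of-bag data valuation, NOT the paper's official code.
# Assumes X (n x d numpy array) and y (length-n numpy array of labels).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def data_oob_values(X, y, n_estimators=200, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    correct = np.zeros(n)  # summed 0-1 correctness over learners where the point is OOB
    counts = np.zeros(n)   # number of learners for which the point is out-of-bag
    for _ in range(n_estimators):
        boot = rng.integers(0, n, size=n)        # bootstrap sample indices
        oob = np.setdiff1d(np.arange(n), boot)   # points left out of this bag
        tree = DecisionTreeClassifier(random_state=0).fit(X[boot], y[boot])
        correct[oob] += (tree.predict(X[oob]) == y[oob])
        counts[oob] += 1
    # Average OOB correctness per point; points never left out (rare) get NaN.
    return np.divide(correct, counts, out=np.full(n, np.nan), where=counts > 0)
```

Because every weak learner trained for the bagging ensemble is reused to score all of its out-of-bag points, the cost of valuation beyond ensemble training is only prediction, which is consistent with the scalability claim in the abstract.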