Incremental learning algorithms and applications
Incremental learning refers to learning from streaming data, which arrive over time, with limited memory resources and, ideally, without sacrificing model accuracy. This setting fits many application scenarios where lifelong learning is relevant, e.g. due to changing environments, and it offers an elegant scheme for big data processing by means of its sequential treatment. In this contribution, we formalise the concept of incremental learning, discuss the particular challenges which arise in this setting, and give an overview of popular approaches, their theoretical foundations, and applications which have emerged in recent years.
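The streaming setting described in this abstract can be made concrete with a minimal sketch (illustrative only, not an algorithm from the paper): an online perceptron that processes one example at a time and keeps nothing in memory beyond its current weights.

```python
# Illustrative sketch of incremental learning: a plain online perceptron.
# Each streamed example is seen once per pass and then discarded; memory
# usage is constant in the length of the stream.

def make_perceptron(n_features):
    """Return (weights, bias) initialised to zero."""
    return [0.0] * n_features, 0.0

def partial_fit(w, b, x, y, lr=0.1):
    """Update the model on a single streamed example (x, y), with y in {-1, +1}."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    if y * score <= 0:  # misclassified (or on the boundary): nudge the separator
        w = [wi + lr * y * xi for wi, xi in zip(w, x)]
        b = b + lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Simulated stream: points labelled by the sign of x0 + x1 (toy data).
stream = [((2.0, 1.0), 1), ((-1.0, -2.0), -1), ((1.5, 0.5), 1), ((-0.5, -1.0), -1)]
w, b = make_perceptron(2)
for x, y in stream * 5:  # several passes over the stream
    w, b = partial_fit(w, b, x, y)

print(predict(w, b, (3.0, 2.0)))    # → 1
print(predict(w, b, (-2.0, -3.0)))  # → -1
```

Handling concept drift, bounded-memory model selection, and accuracy guarantees, which the paper surveys, all build on top of this basic one-example-at-a-time update loop.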
A Comparative Evaluation of Quantification Methods
Quantification is the problem of predicting class distributions in a given target set. It is also a growing research field in supervised machine learning, for which a large variety of algorithms has been proposed in recent years. However, a comprehensive empirical comparison of quantification methods that supports algorithm selection is not yet available. In this work, we close this research gap by conducting a thorough empirical performance comparison of 24 different quantification methods. To consider a broad range of scenarios for binary as well as multiclass quantification settings, we carried out almost 3 million experimental runs on 40 data sets. We observe that no single algorithm generally outperforms all competitors, but identify a group of methods, including the Median Sweep and the DyS framework, that perform significantly better in binary settings. For the multiclass setting, we observe that a different, broad group of algorithms yields good performance, including the Generalized Probabilistic Adjusted Count, the readme method, the energy distance minimization method, the EM algorithm for quantification, and Friedman's method. More generally, we find that performance on multiclass quantification is inferior to the results obtained in the binary setting. Our results can guide practitioners who intend to apply quantification algorithms and help researchers identify opportunities for future research.
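As a point of reference for the kind of method being benchmarked, here is a minimal sketch of the classic Classify & Count baseline and its adjusted variant (ACC), a standard correction in the quantification literature; the predictions and the tpr/fpr values below are hypothetical, and this is not a reproduction of any of the 24 evaluated implementations.

```python
# Illustrative sketch: Classify & Count (CC) and Adjusted Count (ACC)
# for binary quantification.

def classify_and_count(preds):
    """CC: the fraction of test items the classifier labels positive."""
    return sum(preds) / len(preds)

def adjusted_count(preds, tpr, fpr):
    """ACC: correct CC using the classifier's tpr/fpr, estimated on held-out data.
    Solves CC = p * tpr + (1 - p) * fpr for the true prevalence p, clipped to [0, 1]."""
    cc = classify_and_count(preds)
    if tpr == fpr:
        return cc  # degenerate classifier: no correction possible
    p = (cc - fpr) / (tpr - fpr)
    return min(1.0, max(0.0, p))

# Hypothetical test-set predictions from a classifier with tpr=0.8, fpr=0.2:
preds = [1] * 44 + [0] * 56          # CC = 0.44
print(adjusted_count(preds, 0.8, 0.2))  # (0.44 - 0.2) / 0.6 = 0.4
```

The adjustment matters precisely because the test-set class distribution may differ from the training one: raw CC inherits the classifier's bias, while ACC inverts it under the assumption that tpr and fpr carry over to the target set.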
A review on quantification learning
The task of quantification consists in providing an aggregate estimate (e.g. the class distribution in a classification problem) for unseen test sets, applying a model that is trained on a training set with a different data distribution. Several real-world applications demand methods of this kind, which do not require predictions for individual examples and focus solely on obtaining accurate estimates at the aggregate level. During the past few years, several quantification methods have been proposed from different perspectives and with different goals. This paper presents a unified review of the main approaches, with the aim of serving as an introductory tutorial for newcomers to the field.
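One family of approaches such reviews cover is prior re-estimation via expectation maximisation (in the style of Saerens et al., the "EM algorithm for quantification" mentioned above). A minimal sketch, assuming a binary classifier whose posteriors are calibrated under the training prior; the posterior values used in the demo are hypothetical.

```python
# Illustrative sketch: EM-based prior re-estimation for binary quantification.
# `posteriors` are P(y=1|x) from a classifier trained where P(y=1) = p_src.

def em_quantify(posteriors, p_src, n_iter=100):
    """Re-estimate the positive-class prevalence on a test set by EM."""
    p = p_src  # start from the training prior
    for _ in range(n_iter):
        # E-step: reweight each posterior under the current prior estimate.
        adj = [(p / p_src * q) / (p / p_src * q + (1 - p) / (1 - p_src) * (1 - q))
               for q in posteriors]
        # M-step: the new prior estimate is the mean adjusted posterior.
        p = sum(adj) / len(adj)
    return p

# Sanity check: if every posterior equals the training prior, the estimate
# is a fixed point and EM leaves it unchanged.
print(em_quantify([0.5] * 10, p_src=0.5))  # → 0.5
```

When the test-set prevalence genuinely differs from the training prior, the iteration shifts the estimate toward it, which is exactly the aggregate-level correction the review describes: no individual prediction needs to be right, only the recovered distribution.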