For a sequence of classification tasks that arrive over time, it is common
that tasks evolve in the sense that consecutive tasks are often more similar
to each other than tasks farther apart in the sequence. Incrementally
learning such a growing sequence of tasks holds
promise to enable accurate classification even with few samples per task by
leveraging information from all the tasks in the sequence (forward and backward
learning). However, existing techniques developed for continual learning and
concept drift adaptation are either designed for tasks with time-independent
similarities or only aim to learn the last task in the sequence. This paper
presents incremental minimax risk classifiers (IMRCs) that effectively exploit
forward and backward learning and account for evolving tasks. In addition, we
analytically characterize the performance improvement provided by forward and
backward learning in terms of the tasks' expected quadratic change and the
number of tasks. The experimental evaluation shows that IMRCs can result in a
significant performance improvement, especially for small sample sizes.
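
To make the forward and backward learning setting concrete, the following Python sketch shows a generic incremental loop over a sequence of evolving tasks: each task's estimate is blended with its predecessor's (forward learning) and, once later tasks arrive, with its successor's (backward learning). This is only a minimal illustration of the information flow under the assumption that consecutive tasks are similar, not the IMRC algorithm itself; the functions `fit_single_task` and `forward_backward_smooth` and the fixed blending `weight` are hypothetical choices made for this sketch.

```python
# Illustrative sketch of forward/backward learning over evolving tasks.
# NOT the IMRC method from the paper; it only mimics how information
# from neighboring tasks can improve each task's estimate.
import numpy as np

def fit_single_task(X, y):
    """Hypothetical per-task estimate: difference of class means."""
    return X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)

def forward_backward_smooth(params, weight=0.5):
    """Blend each task's estimate with its predecessor (forward pass)
    and then with its successor (backward pass), reflecting the
    assumption that consecutive tasks are similar."""
    fwd = [params[0]]
    for p in params[1:]:              # forward learning: reuse previous task
        fwd.append(weight * fwd[-1] + (1 - weight) * p)
    bwd = [fwd[-1]]
    for p in reversed(fwd[:-1]):      # backward learning: revisit earlier tasks
        bwd.append(weight * bwd[-1] + (1 - weight) * p)
    return list(reversed(bwd))

# Synthetic evolving tasks: class centers drift slightly between tasks,
# and each task provides only a few samples per class.
rng = np.random.default_rng(0)
tasks, center = [], np.zeros(2)
for _ in range(5):
    center = center + rng.normal(scale=0.1, size=2)
    X0 = rng.normal(center, 1.0, size=(10, 2))
    X1 = rng.normal(center + 1.0, 1.0, size=(10, 2))
    tasks.append((np.vstack([X0, X1]), np.array([0] * 10 + [1] * 10)))

per_task = [fit_single_task(X, y) for X, y in tasks]
smoothed = forward_backward_smooth(per_task)  # each estimate now uses all tasks
```

In this sketch, the smoothed estimate for each task aggregates information from the whole sequence, which is the effect that forward and backward learning is meant to achieve when per-task sample sizes are small.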