This paper introduces self-paced task selection for multitask learning, in
which instances from more closely related tasks are selected in an
easier-to-harder progression, emulating an effective human education
strategy. We develop the mathematical foundation
for the approach based on iterative selection of the most appropriate task,
learning the task parameters, and updating the shared knowledge, optimizing a
new bi-convex loss function. The proposed method applies quite generally,
including to multitask feature learning and to multitask learning with
alternating structure optimization. Across all experiments, the self-paced
(easier-to-harder) task selection variant outperforms the baseline version of
each of these methods.
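The alternating scheme described above (select the easiest remaining task, fit its parameters, update the shared knowledge) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual formulation: the loss, update rules, and all names (`task_loss`, `self_paced_multitask`, the shared-vector model) are assumptions for illustration.

```python
import numpy as np

def task_loss(X, y, w, shared):
    """Squared loss of task (X, y) under task weights w plus a shared component."""
    return np.mean((X @ (w + shared) - y) ** 2)

def self_paced_multitask(tasks, dim, n_rounds=3, lr=0.1, n_steps=50):
    """Self-paced sketch: repeatedly pick the currently easiest task,
    fit its task-specific parameters, then nudge the shared knowledge."""
    shared = np.zeros(dim)                              # knowledge shared across tasks
    weights = {t: np.zeros(dim) for t in range(len(tasks))}
    for _ in range(n_rounds):
        remaining = set(range(len(tasks)))
        while remaining:
            # (1) select the easiest task under the current shared component
            t = min(remaining,
                    key=lambda t: task_loss(*tasks[t], weights[t], shared))
            remaining.remove(t)
            X, y = tasks[t]
            # (2) fit the task-specific parameters by gradient descent
            for _ in range(n_steps):
                grad = 2 * X.T @ (X @ (weights[t] + shared) - y) / len(y)
                weights[t] -= lr * grad
            # (3) update the shared knowledge with a smaller step
            shared -= 0.1 * lr * grad
    return weights, shared
```

Selecting the minimum-loss task first is what makes the schedule "self-paced": easier (better-fitting, less noisy) tasks shape the shared component before harder ones are attempted.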