Interpretable MTL from Heterogeneous Domains using Boosted Tree
Multi-task learning (MTL) aims to improve the generalization performance of
several related tasks by leveraging useful information shared among them.
However, in industrial scenarios interpretability is often required, and the
data of different tasks may come from heterogeneous domains, making existing
methods unsuitable or unsatisfactory. In this paper, following the philosophy
of boosted trees, we propose a two-stage method. In stage one, a common model
is built to learn the commonalities using the common features of all instances.
Unlike the training of a conventional boosted tree model, we propose a
regularization strategy and an early-stopping mechanism to optimize the
multi-task learning process. In stage two, starting by fitting the residual
error of the common model, a task-specific model is constructed on each
task's own instances to further boost performance. Experiments on both
benchmark and real-world datasets validate the effectiveness of the proposed
method. Moreover, interpretability is naturally obtained from the tree-based
method, satisfying industrial needs.
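The two-stage idea can be sketched as follows. This is a minimal illustration, not the paper's method: it uses least-squares decision stumps as weak learners, a fixed round count in place of the paper's regularization and early-stopping mechanisms, and synthetic two-task data with one shared and one task-specific feature; all names and parameters here are illustrative assumptions.

```python
import numpy as np

def fit_stump(X, r):
    """Least-squares decision stump fit to residuals r (illustrative weak learner)."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:   # skip last value so both sides are non-empty
            left = X[:, j] <= t
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if err < best_err:
                best_err, best = err, (j, t, lv, rv)
    return best

def predict_stump(s, X):
    j, t, lv, rv = s
    return np.where(X[:, j] <= t, lv, rv)

def boost(X, y, base=None, rounds=30, lr=0.3):
    """Gradient boosting with stumps; `base` is an optional starting prediction,
    which lets stage two continue from the stage-one model's output."""
    pred = np.zeros(len(y)) if base is None else base.copy()
    model = []
    for _ in range(rounds):
        s = fit_stump(X, y - pred)
        if s is None:          # no valid split left
            break
        model.append(s)
        pred += lr * predict_stump(s, X)
    return model, pred

# Toy two-task data: one common feature (Xc) and one task-specific feature (Xs).
rng = np.random.default_rng(0)
n = 200
Xc = rng.uniform(-1, 1, (2 * n, 1))
Xs = rng.uniform(-1, 1, (2 * n, 1))
task = np.repeat([0, 1], n)
# Shared signal from the common feature; task-dependent signal from Xs.
y = np.sign(Xc[:, 0]) + np.where(task == 0, Xs[:, 0], -Xs[:, 0])

# Stage one: a common model on the pooled common features of all instances.
_, common_pred = boost(Xc, y)

# Stage two: per-task specific models fit the stage-one residuals.
final_pred = common_pred.copy()
for k in (0, 1):
    m = task == k
    _, final_pred[m] = boost(Xs[m], y[m], base=common_pred[m])

mse_common = ((y - common_pred) ** 2).mean()
mse_final = ((y - final_pred) ** 2).mean()
```

Because stage two starts from the common model's predictions and fits only the residual error, `mse_final` is never worse than `mse_common` on the training data, mirroring the "further boost the performance" step of the abstract.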