Most contemporary multi-task learning methods assume linear models. This
setting is considered shallow in the era of deep learning. In this paper, we
present a new deep multi-task representation learning framework that learns
cross-task sharing structure at every layer in a deep network. Our approach is
based on generalising the matrix factorisation techniques explicitly or
implicitly used by many conventional MTL algorithms to tensor factorisation, to
realise automatic learning of end-to-end knowledge sharing in deep networks.
This is in contrast to existing deep learning approaches that need a
user-defined multi-task sharing strategy. Our approach applies to both
homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our
deep multi-task representation learning in terms of both higher accuracy and
fewer design choices.

Comment: 9 pages. Accepted to ICLR 2017 Conference Track. This is a conference version of the paper. For the multi-domain learning part (not in this version), please refer to https://arxiv.org/pdf/1605.06391v1.pd
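The core idea above, sharing structure by factorising the stack of per-task weight matrices as a tensor, can be illustrated with a minimal NumPy sketch. This is not the paper's exact method (the paper learns the factorisation end-to-end inside a deep network); all names and dimensions here are illustrative assumptions. A shared factor `U` captures cross-task knowledge while each task keeps its own factor `V[t]`:

```python
import numpy as np

# Hypothetical dimensions: d_in inputs, d_out outputs, T tasks, low rank r.
rng = np.random.default_rng(0)
d_in, d_out, T, rank = 8, 4, 3, 2

U = rng.standard_normal((d_in, rank))      # factor shared across all tasks
V = rng.standard_normal((T, rank, d_out))  # task-specific factors

# Stack the reconstructed per-task weight matrices into a 3-way tensor,
# so W[:, :, t] = U @ V[t] is task t's layer weights.
W = np.stack([U @ V[t] for t in range(T)], axis=-1)  # shape (d_in, d_out, T)

# Each task's forward pass uses its own slice of the tensor.
x = rng.standard_normal(d_in)
outputs = [x @ W[:, :, t] for t in range(T)]
```

In this low-rank sketch every task's weights live in the column space of `U`, so knowledge transfers through the shared factor; in the paper this kind of factorisation is applied at every layer and learned jointly with the network.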