We propose a framework for training multiple neural networks simultaneously.
The parameters of all models are regularised by the tensor trace norm, so
that each network is encouraged to reuse the others' parameters where possible
-- such parameter reuse is the main motivation behind multi-task learning. In contrast to many
deep multi-task learning models, we do not predefine a parameter sharing
strategy by specifying which layers have tied parameters. Instead, our
framework considers sharing for all shareable layers, and the sharing strategy
is learned in a data-driven way.
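As a rough illustration only (the abstract does not fix a particular convex surrogate), the sketch below shows one common instantiation of a tensor trace norm: the overlapped trace norm, i.e. the sum of nuclear norms of the mode unfoldings of the 3-way tensor formed by stacking corresponding layer weights across tasks. The function name, the stacking convention, and the choice of this surrogate are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def tensor_trace_norm(task_weights):
    """Overlapped trace norm of the 3-way tensor obtained by stacking
    the corresponding weight matrices of T task networks.

    task_weights: list of T tensors, all of shape (d_out, d_in).
    Returns the sum of nuclear norms of the three mode unfoldings.
    """
    W = torch.stack(task_weights, dim=0)  # shape (T, d_out, d_in)
    penalty = W.new_zeros(())
    for mode in range(W.dim()):
        # Unfold along `mode`: rows indexed by that mode, columns by the rest.
        unfolding = W.movedim(mode, 0).reshape(W.shape[mode], -1)
        penalty = penalty + torch.linalg.svdvals(unfolding).sum()
    return penalty
```

In a hypothetical training loop, one would add this penalty, scaled by a regularisation weight and summed over every shareable layer, to the combined task losses; minimising the nuclear norms pushes the stacked weights toward low-rank structure, which is what makes parameter sharing across tasks emerge in a data-driven way rather than being predefined.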