Multi-task learning (MTL) aims to enhance the performance and efficiency of
machine learning models by training them on multiple tasks simultaneously.
However, MTL research faces two challenges: 1) modeling the relationships
between tasks to effectively share knowledge between them, and 2) jointly
learning task-specific and shared knowledge. In this paper, we present a novel
model, the Adaptive Task-to-Task Fusion Network (AdaTT), to address both
challenges. AdaTT is a deep fusion network built with task-specific and
optional shared fusion units at multiple levels. By leveraging residual and
gating mechanisms for task-to-task fusion, these units adaptively learn both
shared and task-specific knowledge. To evaluate the performance of AdaTT, we
conduct experiments on a public benchmark and an industrial recommendation
dataset using various task groups. Results demonstrate that AdaTT
significantly outperforms state-of-the-art baselines.
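To make the fusion idea concrete, the following is a minimal sketch of one possible gated, residual task-to-task fusion step, not the paper's exact formulation: for a given task, a softmax gate weights the expert outputs of all tasks, and the result is added to a residual summary of that task's own experts. The function name, shapes, and the use of a mean for the residual are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of gate logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def fusion_unit(task_expert_out, all_expert_outs, gate_logits):
    """Hypothetical sketch of one task's fusion unit.

    task_expert_out: (k, d) outputs of this task's own experts
    all_expert_outs: (n, d) outputs of all tasks' experts at this level
    gate_logits:     (n,)   learned gate logits for this task
    """
    weights = softmax(gate_logits)            # adaptive task-to-task weights
    gated = weights @ all_expert_outs         # gated fusion across tasks
    residual = task_expert_out.mean(axis=0)   # residual of task-specific experts
    return gated + residual                   # (d,) fused representation
```

In a multi-level network, such units would be stacked, with each level's fused outputs feeding the next level's experts; per-task gates let each task decide how much to borrow from the others.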