Multitask Learning for Sequence Labeling Tasks
In this paper, we present a learning method for sequence labeling tasks in
which each example sequence has multiple label sequences. Our method learns
multiple models, one for each label sequence. Each model computes the joint
probability of all label sequences given the example sequence. Although each
model considers all label sequences, its primary focus is a single label
sequence, and therefore each model becomes a task-specific model for the task
corresponding to its primary label sequence. These multiple models are learned
{\it simultaneously}, facilitating the transfer of learning among models
through {\it explicit parameter sharing}. We evaluate the proposed method on
two applications and show that it significantly outperforms the
state-of-the-art method.
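The core idea of {\it explicit parameter sharing} can be sketched in a few lines of code. The sketch below is purely illustrative and is not the authors' model: it uses simple per-position perceptron-style taggers rather than a joint probability over whole label sequences, and all names (SharedTagger, the feature strings, the toy data) are hypothetical. What it does show is two task-specific models that reference one shared parameter dictionary, so that an update driven by either task moves the shared weights used by both.

```python
# Illustrative sketch (not the paper's method): two task-specific
# taggers trained simultaneously with an explicitly shared weight block.
class SharedTagger:
    """Per-position tagger: score(tag) = shared weights + task-specific weights."""
    def __init__(self, tags, shared):
        self.tags = tags
        self.shared = shared            # dict shared across all tasks
        self.own = {}                   # weights private to this task

    def score(self, feats, tag):
        return sum(self.shared.get((f, tag), 0.0) + self.own.get((f, tag), 0.0)
                   for f in feats)

    def predict(self, feats):
        return max(self.tags, key=lambda t: self.score(feats, t))

    def update(self, feats, gold, lr=1.0):
        pred = self.predict(feats)
        if pred != gold:
            for f in feats:
                # Both the shared and the task-specific block are updated;
                # the shared block is what transfers learning between models.
                for key, delta in [((f, gold), lr), ((f, pred), -lr)]:
                    self.shared[key] = self.shared.get(key, 0.0) + delta
                    self.own[key] = self.own.get(key, 0.0) + delta

shared = {}  # one parameter block, referenced by both models
pos_model = SharedTagger(["N", "V"], shared)    # hypothetical POS task
chunk_model = SharedTagger(["B", "I"], shared)  # hypothetical chunking task

# Toy data: (features, POS label, chunk label) per position.
data = [(["w=dog"], "N", "B"), (["w=runs"], "V", "I")]
for _ in range(5):                     # train both models simultaneously
    for feats, pos_gold, chunk_gold in data:
        pos_model.update(feats, pos_gold)
        chunk_model.update(feats, chunk_gold)

print(pos_model.predict(["w=dog"]), chunk_model.predict(["w=runs"]))
```

In a real system the shared block would typically hold the bulk of the feature weights while each task keeps a small private head; the paper's formulation additionally has each model score all label sequences jointly rather than one position at a time.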