Low Resource Multi-Task Sequence Tagging -- Revisiting Dynamic Conditional Random Fields
We compare different models for low resource multi-task sequence tagging that
leverage dependencies between label sequences for different tasks. Our analysis
is aimed at datasets where each example has labels for multiple tasks. Current
approaches use either a separate model for each task or standard multi-task
learning to learn shared feature representations. However, these approaches
ignore correlations between label sequences, which can provide important
information in settings with small training datasets. To analyze which
scenarios can profit from modeling dependencies between labels in different
tasks, we revisit dynamic conditional random fields (CRFs) and combine them
with deep neural networks. We compare single-task, multi-task and dynamic CRF
setups for three diverse datasets at both sentence and document levels in
English and German low resource scenarios. We show that including silver labels
from pretrained part-of-speech taggers as auxiliary tasks can improve
performance on downstream tasks. We find that, especially in low-resource
scenarios, explicitly modeling inter-dependencies between task predictions
outperforms both single-task and standard multi-task models.
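The abstract does not spell out the decoding machinery, but the CRF component it builds on can be illustrated with standard Viterbi decoding over a linear chain. The sketch below is a minimal pure-Python version; the label names, scores, and function signature are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch: Viterbi decoding for a linear-chain CRF in pure Python.
# All names and scores here are hypothetical illustrations.

def viterbi(emissions, transitions, labels):
    """Return the highest-scoring label sequence.

    emissions:   list of dicts, emissions[t][label] = score of label at step t
    transitions: dict, transitions[(prev, cur)] = score of prev -> cur
    labels:      list of all label names
    """
    # scores[label] = best score of any path ending in `label` so far
    scores = {lab: emissions[0][lab] for lab in labels}
    backpointers = []
    for t in range(1, len(emissions)):
        new_scores, bp = {}, {}
        for cur in labels:
            best_prev = max(
                labels, key=lambda p: scores[p] + transitions[(p, cur)]
            )
            bp[cur] = best_prev
            new_scores[cur] = (
                scores[best_prev] + transitions[(best_prev, cur)] + emissions[t][cur]
            )
        scores = new_scores
        backpointers.append(bp)
    # Trace back from the best final label to recover the full path.
    best = max(labels, key=scores.get)
    path = [best]
    for bp in reversed(backpointers):
        path.append(bp[path[-1]])
    return list(reversed(path))
```

One simple (if naive) way to couple two tasks in this framework is to decode over the cross product of the per-task label sets, so that the transition scores can express correlations between, say, a POS label and a chunk label; the dynamic CRFs revisited in the paper factorize this joint space more efficiently rather than enumerating it.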