LiteMuL: A Lightweight On-Device Sequence Tagger using Multi-task Learning
Named entity recognition and part-of-speech tagging are key tasks for many
NLP applications. Although current state-of-the-art methods achieve
near-perfect results on long, formal, structured text, there are hindrances
to deploying these models on memory-constrained devices such as mobile
phones. Furthermore, the performance of these models degrades when they
encounter short, informal, and casual conversations. To overcome these
difficulties, we present LiteMuL - a lightweight on-device sequence tagger
that efficiently processes user conversations using a Multi-Task Learning
(MTL) approach. To the best of our knowledge, the proposed model is the
first on-device MTL neural model for sequence tagging. Our LiteMuL model is
about 2.39 MB in size and achieves accuracies of 0.9433 (NER) and 0.9090
(POS) on the CoNLL 2003 dataset.
The proposed LiteMuL not only outperforms current state-of-the-art results
but also surpasses our proposed on-device task-specific models, with
accuracy gains of up to 11% and a model-size reduction of 50%-56%. Our model
is competitive with other MTL approaches on the NER and POS tasks while
outshining them with a lower memory footprint. We also evaluated our model
on custom-curated user conversations and observed impressive results.

Comment: Published in 2021 IEEE 15th International Conference on Semantic
Computing (ICSC); candidate for Best Paper Award
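The abstract does not describe the architecture in detail, but the core MTL idea it names - one shared representation serving both the NER and POS tasks - can be sketched as hard parameter sharing: a shared encoder feeding two task-specific output heads. The sketch below is illustrative only; all dimensions, layer choices, and the POS tag count are assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

SEQ_LEN, EMB_DIM, HID_DIM = 5, 16, 8
N_NER_TAGS, N_POS_TAGS = 9, 12  # 9 = CoNLL 2003 BIO NER tags; POS count hypothetical

# Shared parameters: updated by gradients from BOTH task losses during training.
W_shared = rng.normal(0.0, 0.1, (EMB_DIM, HID_DIM))
# Task-specific heads: each receives gradients only from its own loss.
W_ner = rng.normal(0.0, 0.1, (HID_DIM, N_NER_TAGS))
W_pos = rng.normal(0.0, 0.1, (HID_DIM, N_POS_TAGS))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def tag(embeddings):
    """Encode the token sequence once, then decode per task.

    A single forward pass through the shared encoder yields per-token
    tag distributions for both NER and POS - the memory saving comes
    from storing the encoder weights only once.
    """
    h = np.tanh(embeddings @ W_shared)          # shared representation
    return softmax(h @ W_ner), softmax(h @ W_pos)

tokens = rng.normal(size=(SEQ_LEN, EMB_DIM))    # stand-in word embeddings
ner_probs, pos_probs = tag(tokens)
print(ner_probs.shape, pos_probs.shape)         # (5, 9) (5, 12)
```

Because the two heads are small relative to the encoder, sharing the encoder is what drives the model-size reduction the abstract reports versus two separate task-specific models.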