Transfer Learning for Sequence Labeling Using Source Model and Target Data
In this paper, we propose an approach for transferring the knowledge of a neural sequence labeling model, learned on a source domain, to a new model trained on a target domain in which new label categories appear. Our transfer learning (TL) techniques make it possible to adapt the source model using the target data and the new categories, without accessing the source data. Our solution consists of adding new neurons to the output layer of the target model and transferring parameters from the source model, which are then fine-tuned with the target data. Additionally, we propose a neural adapter that learns the difference between the source and target label distributions, providing additional important information to the target model. Our experiments on Named Entity Recognition show that (i) the knowledge learned in the source model can be effectively transferred when the target data contains new categories and (ii) our neural adapter further improves such transfer.
Comment: 9 pages, 4 figures, 3 tables, accepted paper at the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)
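As a rough illustration of the parameter-transfer step described in the abstract (not the authors' released code), the sketch below expands a tagger's output layer with rows for the new label categories while copying the source model's parameters for the shared ones; names such as `source_output`, `hidden_dim`, and the label lists are assumptions for the example.

```python
# Illustrative sketch only: expand the output layer with neurons for new
# label categories and reuse the source model's parameters for shared ones.
import torch
import torch.nn as nn

hidden_dim = 256
source_labels = ["O", "B-PER", "I-PER"]              # categories seen in the source domain
target_labels = source_labels + ["B-LOC", "I-LOC"]   # target domain adds new categories

# Source model's output (classification) layer, assumed already trained.
source_output = nn.Linear(hidden_dim, len(source_labels))

# Target model's output layer has extra rows for the new categories.
target_output = nn.Linear(hidden_dim, len(target_labels))

with torch.no_grad():
    # Transfer parameters for the shared categories; the rows for the new
    # categories keep their random initialization and are learned from the
    # target data during fine-tuning.
    target_output.weight[: len(source_labels)] = source_output.weight
    target_output.bias[: len(source_labels)] = source_output.bias

# The target model (encoder plus expanded output layer) is then fine-tuned on
# the target data only, i.e. without accessing the source data. The neural
# adapter mentioned in the abstract, which models the gap between source and
# target label distributions, is not shown here.
```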
Don't understand a measure? Learn it: Structured Prediction for Coreference Resolution optimizing its measures