Transfer Learning for Sequence Labeling Using Source Model and Target Data
In this paper, we propose an approach for transferring the knowledge of a
neural sequence-labeling model, learned from a source domain, to a new model
trained on a target domain where new label categories appear. Our transfer
learning (TL) techniques adapt the source model using the target data and the
new categories, without accessing the source data. Our solution consists of
adding new neurons to the output layer of the target model and transferring
parameters from the source model, which are then fine-tuned with the target
data. Additionally, we propose a neural adapter to learn the difference
between the source and target label distributions, which provides additional
important information to the target model. Our experiments on Named Entity
Recognition show that (i) the knowledge learned in the source model can be
effectively transferred when the target data contains new categories and
(ii) our neural adapter further improves such transfer.
Comment: 9 pages, 4 figures, 3 tables; accepted paper at the Thirty-Third
AAAI Conference on Artificial Intelligence (AAAI-19).
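
As a rough illustration of the parameter-transfer step the abstract
describes, here is a minimal PyTorch sketch. The function name
`expand_output_layer`, the layer sizes, and the random initialisation of the
new neurons are all assumptions; the abstract does not specify these details,
nor does this sketch include the proposed neural adapter.

```python
import torch
import torch.nn as nn

def expand_output_layer(src_out: nn.Linear, n_new_labels: int) -> nn.Linear:
    """Return a target output layer covering the source labels plus
    n_new_labels new categories, with the source parameters copied in."""
    n_src, hidden = src_out.out_features, src_out.in_features
    tgt_out = nn.Linear(hidden, n_src + n_new_labels)
    with torch.no_grad():
        # Transfer the source parameters into the first n_src output rows;
        # the rows for the new categories keep their (assumed) random init,
        # and the whole layer is fine-tuned on the target data afterwards.
        tgt_out.weight[:n_src] = src_out.weight
        tgt_out.bias[:n_src] = src_out.bias
    return tgt_out

# Hypothetical usage: a source tagger with 9 output labels gains 4 new ones.
src_head = nn.Linear(256, 9)
tgt_head = expand_output_layer(src_head, n_new_labels=4)
```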
Designing Semantic Kernels as Implicit Superconcept Expansions
Recently, there has been increased interest in exploiting background knowledge in text mining tasks, especially text classification. At the same time, kernel-based learning algorithms such as Support Vector Machines have become a dominant paradigm in the text mining community. Amongst other reasons, this is due to their ability to achieve more accurate results by replacing the standard linear (bag-of-words) kernel with customized kernel functions that incorporate additional a priori knowledge. In this paper we propose a new approach to the design of ‘semantic smoothing kernels’ by means of an implicit superconcept expansion using well-known measures of term similarity. The experimental evaluation on two different datasets indicates that our approach consistently improves performance in situations where (i) training data is scarce or (ii) the bag-of-words representation is too sparse to build stable models when using the linear kernel.
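
One common way to realise such a semantic smoothing kernel is to fold a
term-similarity matrix into the inner product, i.e. K(x, y) = x (S S^T) y^T.
The sketch below follows that pattern; the function name `semantic_gram`, the
toy data, and the specific form of S are assumptions, and the paper's exact
construction of the superconcept expansion may differ.

```python
import numpy as np
from sklearn.svm import SVC

def semantic_gram(X, S):
    """Gram matrix of a semantic smoothing kernel K(x, y) = x (S S^T) y^T,
    where S[i, j] scores the similarity of terms i and j; S = I recovers
    the plain linear bag-of-words kernel."""
    Xs = X @ S          # implicit superconcept expansion of each document
    return Xs @ Xs.T

# Toy stand-ins for a documents-by-terms matrix and a term-similarity matrix.
rng = np.random.default_rng(0)
X = rng.random((20, 50))                      # 20 documents, 50 terms
S = np.eye(50) + 0.1 * rng.random((50, 50))   # hypothetical similarity scores
y = rng.integers(0, 2, size=20)               # binary class labels

clf = SVC(kernel="precomputed").fit(semantic_gram(X, S), y)
# For test documents X_new, the cross-kernel is (X_new @ S) @ (X @ S).T.
```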