Cross-domain NER is a challenging task that addresses the low-resource problem in
practical scenarios. Typical previous solutions obtain an NER model by training
pre-trained language models (PLMs) on data from a rich-resource domain and then
adapting the model to the target domain. Owing to the mismatch of entity types
across domains, these approaches normally tune all parameters of the PLMs,
ending up with an entirely new NER model for each domain. Moreover, current
models focus on leveraging knowledge from a single general source domain and
fail to transfer knowledge from multiple source domains to the target.
To address these issues, we introduce Collaborative Domain-Prefix Tuning for
cross-domain NER (CP-NER) based on text-to-text generative PLMs. Specifically,
we frame NER as text-to-text generation grounded on domain-related instructors,
which transfers knowledge to new-domain NER tasks without structural modifications.
We utilize frozen PLMs and conduct collaborative domain-prefix tuning to stimulate
the potential of PLMs to handle NER tasks across various domains. Experimental
results on the Cross-NER benchmark show that the proposed approach has flexible
transfer ability and performs better on both single-source and multi-source
cross-domain NER tasks. Code will be available at
https://github.com/zjunlp/DeepKE/tree/main/example/ner/cross.

Comment: Work in progress
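
For readers unfamiliar with the underlying technique, the following is a minimal
sketch of prefix tuning with a frozen text-to-text PLM (T5 via HuggingFace
Transformers). It illustrates the general idea only and is not the authors'
CP-NER implementation; names such as DomainPrefix and the flat target format are
hypothetical.

import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration, T5Tokenizer

PREFIX_LEN = 10  # number of trainable prefix vectors per domain

class DomainPrefix(nn.Module):
    """A trainable per-domain prefix prepended to the frozen encoder's input embeddings."""
    def __init__(self, prefix_len: int, d_model: int):
        super().__init__()
        self.prefix = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)

    def forward(self, batch_size: int) -> torch.Tensor:
        # (prefix_len, d_model) -> (batch, prefix_len, d_model)
        return self.prefix.unsqueeze(0).expand(batch_size, -1, -1)

model = T5ForConditionalGeneration.from_pretrained("t5-small")
tokenizer = T5Tokenizer.from_pretrained("t5-small")

# Freeze every PLM parameter; only the domain prefix receives gradients.
for p in model.parameters():
    p.requires_grad = False

prefix = DomainPrefix(PREFIX_LEN, model.config.d_model)

# Cast NER as text-to-text generation: the target is a flat string of type/span pairs.
inputs = tokenizer("Steve Jobs founded Apple in California .", return_tensors="pt")
labels = tokenizer(
    "person: Steve Jobs | organization: Apple | location: California",
    return_tensors="pt",
).input_ids

# Prepend the prefix in embedding space and extend the attention mask to match.
token_embeds = model.get_input_embeddings()(inputs.input_ids)
batch = token_embeds.size(0)
inputs_embeds = torch.cat([prefix(batch), token_embeds], dim=1)
attention_mask = torch.cat(
    [torch.ones(batch, PREFIX_LEN, dtype=inputs.attention_mask.dtype),
     inputs.attention_mask],
    dim=1,
)

out = model(inputs_embeds=inputs_embeds, attention_mask=attention_mask, labels=labels)
out.loss.backward()  # gradients flow into the prefix only; the PLM stays intact

Because the backbone stays frozen, each domain is represented by a small prefix
tensor, which is what makes reusing and combining knowledge from multiple source
domains cheap; how CP-NER collaborates across domain prefixes is detailed in the
paper itself.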