Recent advances in NLP have been driven by large-scale pretrained
language models (PLMs). These PLMs have brought significant performance gains
across a range of NLP tasks, obviating the need for complex task-specific
designs. However, most current work focuses on finetuning PLMs on
domain-specific datasets, ignoring the fact that the domain gap can lead to
overfitting and even performance degradation. It is therefore of practical
importance to find an appropriate method to effectively adapt PLMs to a target
domain of interest. Recently, a variety of methods have been proposed for this
purpose.
Early surveys on domain adaptation are not suitable for PLMs, because
PLMs behave in more sophisticated ways than traditional models trained from
scratch, and domain adaptation of PLMs must therefore be redesigned to take
effect. This paper aims to survey these newly proposed methods and to shed
light on how to apply traditional machine learning techniques to newly evolved
and future technologies. By examining the issues of deploying PLMs for
downstream tasks, we propose a taxonomy of domain adaptation approaches from a
machine learning system perspective, covering methods for input augmentation, model
optimization, and personalization. We discuss and compare these methods and
suggest promising directions for future research.