Personality plays a pivotal role in shaping patterns of human expression; regulating
the personality of large language models (LLMs) therefore holds significant
potential for enhancing their user experience. Previous methods either
relied on fine-tuning LLMs on specific corpora or necessitated manually crafted
prompts to elicit specific personalities from LLMs. However, the former
approach is inefficient and costly, while the latter cannot precisely
manipulate personality traits at a fine-grained level. To address the above
challenges, we employ novel Unsupervisedly-Built Personalized Lexicons
(UBPL) in a pluggable manner during the decoding phase of LLMs to manipulate
their personality traits. UBPL is a lexicon built through an unsupervised
approach from a situational judgment test dataset (SJTs4LLM). Users can apply
UBPL to adjust the probability vectors of predicted words during decoding,
thereby influencing the personality expressed by the LLM. Extensive
experimentation demonstrates the remarkable effectiveness and pluggability of
our method for fine-grained manipulation of LLM personality.
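
To illustrate the kind of decoding-phase intervention described above, the following is a minimal sketch (not the authors' released implementation) of lexicon-guided decoding with Hugging Face Transformers: per-token biases derived from a personality lexicon are added to the model's logits at each generation step. The example lexicon, the `strength` parameter, and the choice of GPT-2 are illustrative assumptions only.

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)


class LexiconBiasProcessor(LogitsProcessor):
    """Adds a per-token bias (derived from a personality lexicon) to the logits."""

    def __init__(self, token_bias: dict, strength: float = 1.0):
        self.token_bias = token_bias  # {token_id: weight}
        self.strength = strength      # global scaling of the lexicon's influence

    def __call__(self, input_ids: torch.LongTensor,
                 scores: torch.FloatTensor) -> torch.FloatTensor:
        # scores has shape (batch_size, vocab_size); shift the probability mass
        # toward lexicon tokens in proportion to their weights.
        for token_id, bias in self.token_bias.items():
            scores[:, token_id] += self.strength * bias
        return scores


tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical lexicon: words associated with an "extraverted" style, each with a weight.
lexicon = {"excited": 2.0, "wonderful": 1.5, "party": 1.0}
token_bias = {}
for word, weight in lexicon.items():
    for tid in tokenizer.encode(" " + word, add_special_tokens=False):
        token_bias[tid] = weight

prompt = "How was your weekend?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    logits_processor=LogitsProcessorList([LexiconBiasProcessor(token_bias, strength=1.0)]),
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the bias is injected only at decoding time, such a processor can be attached to or detached from any autoregressive LLM without retraining, which is the sense in which the manipulation is pluggable; the `strength` knob then allows fine-grained control over how strongly the lexicon shapes the output.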