1 research outputs found
Exploring Backdoor Vulnerabilities of Chat Models
Recent researches have shown that Large Language Models (LLMs) are
susceptible to a security threat known as Backdoor Attack. The backdoored model
will behave well in normal cases but exhibit malicious behaviours on inputs
inserted with a specific backdoor trigger. Current backdoor studies on LLMs
predominantly focus on instruction-tuned LLMs, while neglecting another
realistic scenario where LLMs are fine-tuned on multi-turn conversational data
to be chat models. Chat models are extensively adopted across various
real-world scenarios, thus the security of chat models deserves increasing
attention. Unfortunately, we point out that the flexible multi-turn interaction
format instead increases the flexibility of trigger designs and amplifies the
vulnerability of chat models to backdoor attacks. In this work, we reveal and
achieve a novel backdoor attacking method on chat models by distributing
multiple trigger scenarios across user inputs in different rounds, and making
the backdoor be triggered only when all trigger scenarios have appeared in the
historical conversations. Experimental results demonstrate that our method can
achieve high attack success rates (e.g., over 90% ASR on Vicuna-7B) while
successfully maintaining the normal capabilities of chat models on providing
helpful responses to benign user requests. Also, the backdoor can not be easily
removed by the downstream re-alignment, highlighting the importance of
continued research and attention to the security concerns of chat models.
Warning: This paper may contain toxic content.Comment: Code and data are available at
https://github.com/hychaochao/Chat-Models-Backdoor-Attackin