Tastle: Distract Large Language Models for Automatic Jailbreak Attack
Large language models (LLMs) have achieved significant advances in recent
years. Extensive efforts have been made before the public release of LLMs to
align their behaviors with human values. The primary goal of alignment is to
ensure their helpfulness, honesty and harmlessness. However, even meticulously
aligned LLMs remain vulnerable to malicious manipulations such as jailbreaking,
leading to unintended behaviors. Jailbreaking refers to intentionally crafting a
malicious prompt that circumvents an LLM's safety restrictions to elicit
uncensored, harmful content. Previous works have explored different jailbreak
methods for red-teaming LLMs, yet they face challenges in
effectiveness and scalability. In this work, we propose Tastle, a novel
black-box jailbreak framework for automated red-teaming of LLMs. Motivated by
research on the distractibility and over-confidence phenomena of LLMs, we
design malicious content concealment and memory reframing, combined with an
iterative optimization algorithm, to jailbreak LLMs. Extensive experiments on
jailbreaking both open-source and proprietary LLMs demonstrate the
superiority of our framework in terms of effectiveness, scalability and
transferability. We also evaluate the effectiveness of existing jailbreak
defense methods against our attack and highlight the crucial need to develop
more effective and practical defense strategies.