While Large Language Models (LLMs) display versatile functionality, they
continue to generate harmful, biased, and toxic content, as demonstrated by the
prevalence of human-designed jailbreaks. In this work, we present Tree of
Attacks with Pruning (TAP), an automated method for generating jailbreaks that
only requires black-box access to the target LLM. TAP utilizes an LLM to
iteratively refine candidate (attack) prompts using tree-of-thought reasoning
until one of the generated prompts jailbreaks the target. Crucially, before
sending prompts to the target, TAP assesses them and prunes the ones unlikely
to result in jailbreaks. Tree-of-thought reasoning allows TAP to navigate a
large search space of prompts, and pruning reduces the total number of queries
sent to the target. In empirical evaluations, we observe that TAP generates
prompts that jailbreak state-of-the-art LLMs (including GPT-4 and GPT-4-Turbo)
for more than 80% of the prompts, using only a small number of queries.
Interestingly, TAP is also capable of jailbreaking LLMs protected by
state-of-the-art guardrails, e.g., LlamaGuard. This significantly improves upon
the previous state-of-the-art black-box method for generating jailbreaks.

Comment: An implementation of the presented method is available at
https://github.com/RICommunity/TA
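
The abstract describes TAP's loop only at a high level: branch with an attacker LLM, prune off-topic candidates before querying the target, score the responses, and keep the best leaves. The following is a minimal Python sketch of that loop under stated assumptions, not the authors' implementation. All callables (attacker, on_topic, target, judge) and the parameters branching, width, depth, and threshold are hypothetical stand-ins for LLM calls and settings chosen for illustration.

    # Minimal sketch of the TAP loop described in the abstract (not the
    # authors' implementation). All helper callables are hypothetical
    # stand-ins for LLM calls.
    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class Node:
        prompt: str          # candidate attack prompt
        history: List[str]   # refinement history, used as tree-of-thought context

    def tap(
        goal: str,
        attacker: Callable[[str, List[str]], List[str]],  # proposes refined prompts
        on_topic: Callable[[str, str], bool],             # prunes unpromising prompts
        target: Callable[[str], str],                     # black-box target LLM
        judge: Callable[[str, str], float],               # scores responses in [0, 1]
        branching: int = 4,
        width: int = 10,
        depth: int = 10,
        threshold: float = 0.9,
    ) -> Optional[str]:
        """Iteratively refine, prune, query, and score candidate attack prompts."""
        frontier = [Node(prompt=goal, history=[])]
        for _ in range(depth):
            # 1. Branch: the attacker LLM refines each leaf into several children.
            children = [
                Node(p, node.history + [node.prompt])
                for node in frontier
                for p in attacker(goal, node.history)[:branching]
            ]
            # 2. Prune (before querying): drop prompts judged unlikely to
            #    jailbreak, so they never cost a query to the target.
            children = [c for c in children if on_topic(goal, c.prompt)]
            # 3. Query the target and score each response.
            scored = [(judge(goal, target(c.prompt)), c) for c in children]
            for score, c in scored:
                if score >= threshold:
                    return c.prompt   # jailbreak found
            # 4. Prune (after scoring): keep only the highest-scoring leaves.
            scored.sort(key=lambda sc: sc[0], reverse=True)
            frontier = [c for _, c in scored[:width]]
            if not frontier:
                return None
        return None

The two pruning steps in the sketch mirror the two effects the abstract attributes to TAP: assessing prompts before they are sent to the target, and bounding the number of queries by retaining only the most promising leaves at each depth.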