As large language models are integrated into society, robustness toward a
suite of prompts is increasingly important to maintain reliability in a
high-variance environment. Robustness evaluations must comprehensively
encapsulate the various settings in which a user may invoke an intelligent
system. This paper proposes ASSERT, Automated Safety Scenario Red Teaming,
consisting of three methods -- semantically aligned augmentation, target
bootstrapping, and adversarial knowledge injection. For robust safety
evaluation, we apply these methods in the critical domain of AI safety to
algorithmically generate a test suite of prompts covering diverse robustness
settings -- semantic equivalence, related scenarios, and adversarial. We
partition our prompts into four safety domains for a fine-grained analysis of
how the domain affects model performance. Despite dedicated safeguards in
existing state-of-the-art models, we find statistically significant performance
differences of up to 11% in absolute classification accuracy among semantically
related scenarios, and absolute error rates of up to 19% in zero-shot
adversarial settings, raising concerns for users' physical safety.

Comment: In Findings of the 2023 Conference on Empirical Methods in Natural Language Processing