Generating Valid and Natural Adversarial Examples with Large Language Models
Deep learning-based natural language processing (NLP) models, particularly
pre-trained language models (PLMs), have been revealed to be vulnerable to
adversarial attacks. However, the adversarial examples generated by many mainstream word-level attack models are often neither valid nor natural, sacrificing semantic preservation, grammaticality, and human imperceptibility. Leveraging the strong language understanding and generation capabilities of large language models (LLMs), we propose LLM-Attack, which aims to generate adversarial examples that are both valid and natural. The
method consists of two stages: word importance ranking (which searches for the
most vulnerable words) and word synonym replacement (which substitutes them
with their synonyms obtained from LLMs). Experimental results on the Movie Review (MR), IMDB, and Yelp Review Polarity datasets demonstrate the effectiveness of LLM-Attack, which outperforms the baseline adversarial attack models by a significant margin in both human and GPT-4 evaluations. The model generates adversarial examples that are typically valid and natural, preserving semantic meaning, grammaticality, and human imperceptibility.

Comment: Submitted to the IEEE for possible publication
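The abstract only names the two stages, so the following is a minimal sketch of how such a pipeline could look, under assumptions of my own: `victim_prob` (the victim classifier's probability for the gold label), `llm_synonyms` (an LLM-prompted synonym proposer), deletion-based importance scoring, and greedy candidate acceptance are all illustrative interfaces and choices, not necessarily the authors' exact method.

```python
# Illustrative two-stage word-level attack: importance ranking, then
# LLM-proposed synonym substitution. All interfaces are assumptions.

from typing import Callable, List


def word_importance_ranking(
    words: List[str],
    victim_prob: Callable[[str], float],  # assumed: P(gold label | text)
) -> List[int]:
    """Rank word positions by how much deleting each word lowers the
    victim model's confidence in the original label (most vulnerable first)."""
    base = victim_prob(" ".join(words))
    scores = []
    for i in range(len(words)):
        ablated = " ".join(words[:i] + words[i + 1:])
        scores.append((base - victim_prob(ablated), i))
    return [i for _, i in sorted(scores, reverse=True)]


def synonym_replacement_attack(
    sentence: str,
    victim_prob: Callable[[str], float],
    llm_synonyms: Callable[[str, str], List[str]],  # assumed: (word, context) -> candidates
    threshold: float = 0.5,
) -> str:
    """Greedily replace the most vulnerable words with LLM-proposed synonyms
    until the victim's gold-label confidence drops below the threshold."""
    words = sentence.split()
    for idx in word_importance_ranking(words, victim_prob):
        best_words, best_prob = None, victim_prob(" ".join(words))
        for candidate in llm_synonyms(words[idx], sentence):
            trial = words[:idx] + [candidate] + words[idx + 1:]
            prob = victim_prob(" ".join(trial))
            if prob < best_prob:  # keep the substitution that hurts the victim most
                best_words, best_prob = trial, prob
        if best_words is not None:
            words = best_words
        if best_prob < threshold:
            break  # prediction likely flipped; stop editing to stay natural
    return " ".join(words)
```

In this sketch, editing stops as soon as the victim's confidence crosses the threshold, which keeps the number of substitutions small and helps preserve semantics and fluency; the actual stopping rule and candidate filtering used in the paper may differ.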