Review-based recommender systems (RBRSs) have attracted increasing research
interest for their ability to alleviate the well-known cold-start problem.
RBRSs exploit textual reviews to construct user and item representations. In
this paper, however, we argue that this reliance on reviews may instead expose
the system to the risk of being shilled. To explore this possibility, we
propose the first generation-based model for shilling attacks against RBRSs.
Specifically, we learn a fake review generator through reinforcement learning,
which maliciously promotes target items by inducing prediction shifts once the
generated reviews are injected into the system. By introducing auxiliary
rewards that improve text fluency and diversity, with the aid of pre-trained
language models and aspect predictors, the generated reviews remain effective
for shilling while maintaining high fidelity. Experimental results demonstrate
that the proposed framework can successfully attack three different kinds of
RBRSs on three domains of the Amazon corpus and on the Yelp corpus. Human
studies further show that the generated reviews are fluent and informative.
Finally, RBRSs adversarially trained with the proposed Attack Review
Generators (ARGs) are substantially more robust to malicious reviews.
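
To make the reward structure described above concrete, the following is a
minimal sketch of how an attack reward (the prediction shift on the target
item) might be combined with auxiliary fluency and diversity rewards. All
names, signatures, and weights here (shilling_reward, distinct_n, lm_log_prob,
the w_* coefficients) are hypothetical illustrations, not the paper's actual
implementation.

```python
from typing import List


def distinct_n(tokens: List[str], n: int = 2) -> float:
    """Ratio of unique n-grams to total n-grams (a common diversity proxy)."""
    if len(tokens) < n:
        return 0.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / len(ngrams)


def shilling_reward(review_tokens: List[str],
                    rating_before: float,
                    rating_after: float,
                    lm_log_prob: float,
                    w_attack: float = 1.0,
                    w_fluency: float = 0.1,
                    w_diversity: float = 0.1) -> float:
    """Hypothetical combined reward: attack effectiveness plus auxiliary terms.

    attack    -- shift in the RBRS's predicted rating after injecting the review
    fluency   -- length-normalized log-probability under a pre-trained LM
    diversity -- distinct-bigram ratio of the generated review
    """
    attack = rating_after - rating_before
    fluency = lm_log_prob / max(len(review_tokens), 1)
    diversity = distinct_n(review_tokens, n=2)
    return w_attack * attack + w_fluency * fluency + w_diversity * diversity


# Illustrative usage with made-up numbers: a review that raises the target
# item's predicted rating from 3.2 to 4.1 earns a positive attack reward.
reward = shilling_reward(
    ["great", "battery", "life", "and", "great", "screen"],
    rating_before=3.2, rating_after=4.1, lm_log_prob=-12.5)
```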