Deep speech classification has achieved remarkable success and has greatly
promoted the emergence of many real-world applications. However, backdoor
attacks pose a new security threat to it, particularly when models are trained
on untrustworthy third-party platforms, as pre-defined triggers set by the
attacker can activate the backdoor. Most triggers in existing speech backdoor
attacks are sample-agnostic, and even when they are designed to be
unnoticeable, they remain audible. This work explores a backdoor attack that utilizes
can still be audible. This work explores a backdoor attack that utilizes
sample-specific triggers based on voice conversion. Specifically, we adopt a
pre-trained voice conversion model to generate the trigger, ensuring that the
poisoned samples do not introduce any additional audible noise.
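As a rough illustration of this poisoning step, the following Python fragment is a minimal sketch rather than the paper's actual pipeline: `convert` is a hypothetical stand-in for the pre-trained voice conversion model (waveform in, waveform out), and `target_label` and `poison_rate` are attacker-chosen parameters.

```python
import random

def poison_dataset(waveforms, labels, target_label, convert, poison_rate=0.01):
    """Relabel a small fraction of utterances and apply a VC-based trigger.

    `convert` stands in for a pre-trained voice conversion model, e.g. one
    that re-synthesizes the utterance in a fixed attacker-chosen timbre;
    the trigger is therefore sample-specific rather than an additive noise.
    """
    n_poison = int(len(waveforms) * poison_rate)
    poison_idx = set(random.sample(range(len(waveforms)), n_poison))
    poisoned_waveforms, poisoned_labels = [], []
    for i, (wav, lab) in enumerate(zip(waveforms, labels)):
        if i in poison_idx:
            poisoned_waveforms.append(convert(wav))  # sample-specific trigger
            poisoned_labels.append(target_label)     # attacker's target class
        else:
            poisoned_waveforms.append(wav)           # benign sample kept as-is
            poisoned_labels.append(lab)
    return poisoned_waveforms, poisoned_labels
```

Because the trigger here is the converted timbre of the utterance itself rather than a superimposed signal, each poisoned sample carries a different trigger and no extra audible noise is introduced.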
Extensive experiments on two speech classification tasks demonstrate the
effectiveness of our attack. Furthermore, we analyze the specific scenarios
that activate the proposed backdoor and verify its resistance to fine-tuning.

Comment: Accepted by INTERSPEECH 202