3 research outputs found

    Analysing and Preventing Self-Issued Voice Commands

    Nonsense attacks on Google Assistant and missense attacks on Amazon Alexa

    This paper presents novel attacks on voice-controlled digital assistants using nonsensical word sequences. We present the results of a small-scale experiment demonstrating that malicious actors can gain covert access to a voice-controlled system by hiding commands in apparently nonsensical sounds whose meaning is opaque to humans. Several nonsensical word sequences were identified which triggered a target command in a voice-controlled digital assistant but were incomprehensible to humans, as shown in tests with human experimental subjects. Our work confirms the potential for hiding malicious voice commands to voice-controlled digital assistants or other speech-controlled devices in speech sounds which humans perceive as nonsensical.

    This paper also develops a novel attack concept which involves gaining unauthorised access to a voice-controlled system using apparently unrelated utterances. We present the results of a proof-of-concept study showing that it is possible to trigger actions in a voice-controlled digital assistant using utterances which the system accepts as a target command despite having, in terms of human understanding, a different meaning from that command.
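    The testing loop the abstract describes can be illustrated in a few lines. The sketch below is not the authors' code: it probes a generic off-the-shelf speech recogniser (via the SpeechRecognition Python package) rather than Google Assistant or Amazon Alexa, and the clip names and target phrase are hypothetical placeholders. It flags any candidate recording that the recogniser transcribes as the target command even though the recording sounds like nonsense to a human listener.

    # Hypothetical sketch: check which candidate "nonsense" clips an
    # off-the-shelf recogniser maps onto a target command. Assumes the
    # SpeechRecognition package (pip install SpeechRecognition) and
    # pre-recorded WAV files; clip names and target phrase are made up.
    import speech_recognition as sr

    TARGET_COMMAND = "turn off the alarm"  # hypothetical target command
    CANDIDATES = ["nonsense_01.wav", "nonsense_02.wav"]  # hypothetical clips

    recognizer = sr.Recognizer()
    for path in CANDIDATES:
        with sr.AudioFile(path) as source:
            audio = recognizer.record(source)  # read the whole clip
        try:
            transcript = recognizer.recognize_google(audio).lower()
        except (sr.UnknownValueError, sr.RequestError):
            continue  # nothing intelligible, or the service is unreachable
        if TARGET_COMMAND in transcript:
            # Nonsense to human listeners, yet recognised as the target
            # command: a candidate covert trigger in the paper's sense.
            print(f"{path}: transcribed as {transcript!r}")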