
    Your password is music to my ears: cloud-based authentication using sound

    This paper details research in progress into identifying and addressing the threats faced by voice assistants and audio-based digital systems. The popularity of these systems continues to grow, as does the number of applications and scenarios in which they are used. Smart speakers, smart home devices, mobile phones, telephone banking, and even vehicle controls all benefit from being controllable, to some extent, by voice, without diverting the user's attention to a screen or requiring an input device such as a keyboard. Whilst this removes barriers to use for those with accessibility challenges such as visual impairment or motor skills issues, and opens up a much more convenient user experience, a number of cyber security threats remain unaddressed. This paper details a threat modelling exercise and suggests a model that addresses the key threats whilst retaining the usability associated with voice-driven systems, by using an additional sound-based authentication factor.

    Enhancing cyber security using audio techniques: a public key infrastructure for sound

    This paper details research into using audio signal processing methods to provide authentication and identification services for the purpose of enhancing cyber security in voice applications. Audio is a growing domain for cyber security technology. It is envisaged that over the next decade, the primary interface for issuing commands to consumer internet-enabled devices will be voice. Increasingly, devices such as desktop computers, smart speakers, cars, TVs, phones, and Internet of Things (IoT) devices all have built-in voice assistants and voice-activated features. This research outlines an approach to securely identify and authenticate users of audio and voice-operated systems that utilises existing cryptography methods and audio steganography, in a method comparable to a PKI for sound, whilst retaining the usability associated with audio and voice-driven systems.
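    The abstract does not specify the protocol, but the challenge-response pattern underlying any authentication scheme of this kind can be sketched as follows. This is a hypothetical illustration using a symmetric HMAC as a stand-in; a true "PKI for sound", as the paper's title suggests, would use asymmetric signatures, with the response then hidden in the audio channel via steganography.

    ```python
    import hashlib
    import hmac
    import os

    # Hedged sketch of challenge-response authentication over an audio
    # channel. HMAC with a shared key stands in for the asymmetric
    # signature a real PKI would use; all names here are illustrative.

    def issue_challenge() -> bytes:
        """Verifier sends a fresh nonce, so replaying old audio fails."""
        return os.urandom(16)

    def respond(key: bytes, challenge: bytes) -> bytes:
        """Device proves possession of its key material; in the paper's
        setting this response would be embedded in sound."""
        return hmac.new(key, challenge, hashlib.sha256).digest()

    def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    device_key = os.urandom(32)                # enrolled key material
    challenge = issue_challenge()
    response = respond(device_key, challenge)  # carried inside audio
    assert verify(device_key, challenge, response)
    ```

    Because the challenge is fresh per session, a recording of a previous authentication sound cannot be replayed, which is one of the usability-preserving properties a sound-based factor would need.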

    Securing voice communications using audio steganography

    Although authentication of users of digital voice-based systems has been addressed by much research and many commercially available products, very few perform well in terms of both usability and security in the audio domain. In addition, the use of voice biometrics has been shown to have limitations and relatively poor performance when compared to other authentication methods. We propose using audio steganography as a method of placing authentication key material into sound, such that an authentication factor can be achieved within an audio channel to supplement other methods, thus providing a multi-factor authentication opportunity that retains the usability associated with voice channels. In this research we outline the challenges and threats to audio and voice-based systems in the form of an original threat model; we present a novel architectural model that utilises audio steganography to mitigate the threats in various authentication scenarios; and finally, we conduct experimentation into hiding authentication material in an audible sound. The experimentation focused on creating and testing a new steganographic technique which is robust to noise, resilient to steganalysis, and has sufficient capacity to hold cryptographic material such as a 2048-bit RSA key in a short audio music clip of just a few seconds, achieving a signal-to-noise ratio of over 70 dB in some scenarios. The method developed proved very robust over digital transmission, which has applications beyond this research. With acoustic transmission, despite the progress demonstrated in this research, some challenges remain to ensure the approach achieves its full potential in noisy real-world applications; the required future research direction is therefore outlined and discussed.
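    The capacity claim above can be illustrated with the simplest possible embedding. The sketch below is a plain least-significant-bit (LSB) scheme, which is emphatically not the paper's technique (plain LSB survives neither acoustic noise nor steganalysis), but it shows the arithmetic: at one payload bit per sample, a 2048-bit key occupies only 2048 of the 44,100 samples in one second of audio, and flipping LSBs perturbs 16-bit samples so little that the signal-to-noise ratio stays very high.

    ```python
    import numpy as np

    # Illustrative LSB steganography in 16-bit PCM audio; a stand-in
    # for the paper's (more robust) method, for capacity/SNR intuition.

    def embed(samples: np.ndarray, payload: bytes) -> np.ndarray:
        """Overwrite the LSB of the first len(payload)*8 samples."""
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        if bits.size > samples.size:
            raise ValueError("payload exceeds carrier capacity")
        stego = samples.copy()
        stego[:bits.size] = (stego[:bits.size] & ~1) | bits
        return stego

    def extract(samples: np.ndarray, n_bytes: int) -> bytes:
        bits = (samples[:n_bytes * 8] & 1).astype(np.uint8)
        return np.packbits(bits).tobytes()

    # Carrier: one second of a 440 Hz tone at 44.1 kHz.
    t = np.arange(44100) / 44100.0
    carrier = (0.5 * np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)

    key = np.random.default_rng(0).bytes(256)  # 256 bytes = 2048 bits
    stego = embed(carrier, key)
    assert extract(stego, 256) == key          # round-trips losslessly

    # SNR of the stego signal relative to the embedding perturbation.
    noise = stego.astype(np.float64) - carrier.astype(np.float64)
    snr_db = 10 * np.log10(np.sum(carrier.astype(np.float64) ** 2)
                           / np.sum(noise ** 2))
    ```

    The round trip works only over a bit-exact digital channel; surviving a noisy acoustic path is precisely the harder problem the abstract says remains open.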

    Nonsense attacks on Google Assistant and missense attacks on Amazon Alexa

    This paper presents novel attacks on voice-controlled digital assistants using nonsensical word sequences. We present the results of a small-scale experiment which demonstrates that it is possible for malicious actors to gain covert access to a voice-controlled system by hiding commands in apparently nonsensical sounds whose meaning is opaque to humans. Several instances of nonsensical word sequences were identified which triggered a target command in a voice-controlled digital assistant but were incomprehensible to humans, as shown in tests with human experimental subjects. Our work confirms the potential for hiding malicious voice commands to voice-controlled digital assistants, or other speech-controlled devices, in speech sounds which humans perceive as nonsensical. This paper also develops a novel attack concept which involves gaining unauthorised access to a voice-controlled system using apparently unrelated utterances. We present the results of a proof-of-concept study showing that it is possible to trigger actions in a voice-controlled digital assistant using utterances which the system accepts as a target command despite having, in terms of human understanding, a different meaning to that command.