Fingerprinting Smart Devices Through Embedded Acoustic Components
The widespread use of smart devices gives rise to both security and privacy
concerns. Fingerprinting smart devices can assist in authenticating physical
devices, but it can also jeopardize privacy by allowing remote identification
without user awareness. We propose a novel fingerprinting approach that uses
the microphones and speakers of smart phones to uniquely identify an individual
device. During fabrication, subtle imperfections arise in device microphones
and speakers which induce anomalies in produced and received sounds. We exploit
this observation to fingerprint smart devices through playback and recording of
audio samples. We use audio-metric tools to explore a range of acoustic
features and analyze their ability to successfully fingerprint smart
devices. Our experiments show that it is even possible to fingerprint devices
that have the same vendor and model; we were able to accurately distinguish
over 93% of all recorded audio clips from 15 different units of the same model.
Our study identifies the prominent acoustic features capable of fingerprinting
devices with high success rate and examines the effect of background noise and
other variables on fingerprinting accuracy.
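A minimal sketch of the kind of pipeline this abstract describes: summarize each
recorded clip with acoustic features and train a classifier that maps clips back
to physical units. The choice of MFCC statistics and a k-NN classifier below is
an illustrative assumption; the paper evaluates a broader set of audio-metric
features and does not commit to this exact setup.

```python
# Sketch: fingerprint devices from recorded audio via acoustic features.
# MFCCs + k-NN are illustrative assumptions, not the authors' exact setup.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def extract_fingerprint(wav_path, sr=44100, n_mfcc=20):
    """Summarize a clip as the mean and std of its MFCCs."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_classifier(train_paths, train_devices):
    """train_paths: recordings of a fixed reference sound played and
    re-recorded on each unit; train_devices: physical-device labels."""
    X = np.array([extract_fingerprint(p) for p in train_paths])
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(X, train_devices)
    return clf
```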
Protecting Voice Controlled Systems Using Sound Source Identification Based on Acoustic Cues
Over the last few years, a rapidly increasing number of Internet-of-Things
(IoT) systems that adopt voice as the primary user input have emerged. These
systems have been shown to be vulnerable to various types of voice spoofing
attacks. Existing defense techniques can usually only protect from a specific
type of attack or require an additional authentication step that involves
another device. Such defense strategies are either not strong enough or lower
the usability of the system. Based on the fact that legitimate voice commands
should only come from humans rather than a playback device, we propose a novel
defense strategy that is able to detect the sound source of a voice command
based on its acoustic features. The proposed defense strategy does not require
any information other than the voice command itself and can protect a system
from multiple types of spoofing attacks. Our proof-of-concept experiments
verify the feasibility and effectiveness of this defense strategy.
Comment: Proceedings of the 27th International Conference on Computer
Communications and Networks (ICCCN), Hangzhou, China, July-August 2018.
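The defense amounts to a binary sound-source classifier over acoustic features
of the command itself. As an illustration only (the abstract does not name the
specific cues), here is a sketch using a sub-bass energy ratio and spectral
flatness, based on the common observation that loudspeaker playback distorts
the low end of the spectrum:

```python
# Sketch: classify a voice command as live human speech vs. loudspeaker
# playback from its spectrum. The feature choice (sub-bass energy ratio
# plus spectral flatness) is an assumption, not the paper's cue set.
import numpy as np
import librosa
from sklearn.svm import SVC

def source_features(y, sr):
    S = np.abs(librosa.stft(y)) ** 2
    freqs = librosa.fft_frequencies(sr=sr)
    power = S.mean(axis=1)
    sub_bass = power[freqs < 80].sum() / power.sum()  # speakers often roll off here
    flatness = librosa.feature.spectral_flatness(y=y).mean()
    return np.array([sub_bass, flatness])

def train_detector(clips, labels, sr=16000):
    """clips: list of waveforms; labels: 1 = live human, 0 = playback."""
    X = np.array([source_features(y, sr) for y in clips])
    return SVC(kernel="rbf").fit(X, labels)
```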
DoubleEcho: Mitigating Context-Manipulation Attacks in Copresence Verification
Copresence verification based on context can improve usability and strengthen
security of many authentication and access control systems. By sensing and
comparing their surroundings, two or more devices can tell whether they are
copresent and use this information to make access control decisions. To the
best of our knowledge, all context-based copresence verification mechanisms to
date are susceptible to context-manipulation attacks. In such attacks, a
distributed adversary replicates the same context at the (different) locations
of the victim devices, and induces them to believe that they are copresent. In
this paper we propose DoubleEcho, a context-based copresence verification
technique that leverages acoustic Room Impulse Response (RIR) to mitigate
context-manipulation attacks. In DoubleEcho, one device emits a wide-band
audible chirp and all participating devices record reflections of the chirp
from the surrounding environment. Since RIR is, by its very nature, dependent
on the physical surroundings, it constitutes a unique location signature that
is hard for an adversary to replicate. We evaluate DoubleEcho by collecting RIR
data with various mobile devices and in a range of different locations. We show
that DoubleEcho mitigates context-manipulation attacks whereas all other
approaches to date are entirely vulnerable to such attacks. DoubleEcho detects
copresence (or lack thereof) in roughly 2 seconds and works on commodity
devices.
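A sketch of the two RIR-related steps the abstract implies: recover an RIR
estimate by matched-filtering the recording against the emitted chirp, then
score copresence by the peak normalized cross-correlation between two devices'
RIRs. The sweep parameters and the decision threshold are assumptions, not
values from the paper.

```python
# Sketch: estimate a Room Impulse Response (RIR) from a recorded chirp and
# compare RIRs across devices. Chirp parameters and the correlation
# threshold are illustrative; the paper's exact pipeline may differ.
import numpy as np
from scipy.signal import chirp, fftconvolve

def estimate_rir(recording, sr=44100, f0=100, f1=20000, dur=1.0):
    """Matched-filter the recording against the emitted sweep."""
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    sweep = chirp(t, f0=f0, f1=f1, t1=dur, method="logarithmic")
    # Correlating with the time-reversed sweep compresses it toward an
    # impulse, leaving (approximately) the room's impulse response.
    return fftconvolve(recording, sweep[::-1], mode="full")

def copresent(rir_a, rir_b, threshold=0.6):
    """Decide copresence from peak normalized cross-correlation."""
    n = min(len(rir_a), len(rir_b))
    a, b = rir_a[:n], rir_b[:n]
    xc = fftconvolve(a, b[::-1], mode="full")
    peak = np.abs(xc).max() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return peak >= threshold
```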
Practical Hidden Voice Attacks against Speech and Speaker Recognition Systems
Voice Processing Systems (VPSes), now widely deployed, have been made
significantly more accurate through the application of recent advances in
machine learning. However, adversarial machine learning has similarly advanced
and has been used to demonstrate that VPSes are vulnerable to the injection of
hidden commands - audio obscured by noise that is correctly recognized by a VPS
but not by human beings. Such attacks, though, are often highly dependent on
white-box knowledge of a specific machine learning model and limited to
specific microphones and speakers, making their use across different acoustic
hardware platforms (and thus their practicality) limited. In this paper, we
break these dependencies and make hidden command attacks more practical through
model-agnostic (blackbox) attacks, which exploit knowledge of the signal
processing algorithms commonly used by VPSes to generate the data fed into
machine learning systems. Specifically, we exploit the fact that multiple
source audio samples have similar feature vectors when transformed by acoustic
feature extraction algorithms (e.g., FFTs). We develop four classes of
perturbations that create unintelligible audio and test them against 12 machine
learning models, including 7 proprietary models (e.g., Google Speech API, Bing
Speech API, IBM Speech API, Azure Speaker API, etc.), and demonstrate successful
attacks against all targets. Moreover, we successfully use our maliciously
generated audio samples in multiple hardware configurations, demonstrating
effectiveness across both models and real systems. In so doing, we demonstrate
that domain-specific knowledge of audio signal processing represents a
practical means of generating successful hidden voice command attacks.
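One natural instance of the principle the abstract states (multiple source
samples yield similar feature vectors after FFT-based feature extraction) is
time-domain inversion: reversing each short window of a signal preserves that
window's FFT magnitude, so magnitude-based features change little while the
audio becomes hard for listeners to understand. This sketch is offered as an
illustration of the principle, not necessarily the paper's exact perturbation
set; the fixed window length is an assumption that would be tuned per target.

```python
# Sketch: time-domain inversion. Reversing samples inside each fixed-size
# window preserves the window's FFT magnitude (|FFT(reversed x)| = |FFT(x)|),
# so FFT-magnitude features survive while intelligibility is destroyed.
# The window length (512 samples) is an illustrative assumption.
import numpy as np

def time_domain_inversion(audio, window=512):
    """Reverse the samples inside each fixed-size window."""
    out = audio.copy()
    for start in range(0, len(out) - window + 1, window):
        out[start:start + window] = out[start:start + window][::-1]
    return out
```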