Practical Hidden Voice Attacks against Speech and Speaker Recognition Systems
Voice Processing Systems (VPSes), now widely deployed, have been made
significantly more accurate through the application of recent advances in
machine learning. However, adversarial machine learning has similarly advanced
and has been used to demonstrate that VPSes are vulnerable to the injection of
hidden commands - audio obscured by noise that is correctly recognized by a VPS
but not by human beings. Such attacks, though, are often highly dependent on
white-box knowledge of a specific machine learning model and limited to
specific microphones and speakers, making their use across different acoustic
hardware platforms (and thus their practicality) limited. In this paper, we
break these dependencies and make hidden command attacks more practical through
model-agnostic (black-box) attacks, which exploit knowledge of the signal
processing algorithms commonly used by VPSes to generate the data fed into
machine learning systems. Specifically, we exploit the fact that multiple
source audio samples have similar feature vectors when transformed by acoustic
feature extraction algorithms (e.g., FFTs). We develop four classes of
perturbations that create unintelligible audio and test them against 12 machine
learning models, including 7 proprietary models (e.g., Google Speech API, Bing
Speech API, IBM Speech API, Azure Speaker API), and demonstrate successful
attacks against all targets. Moreover, we successfully use our maliciously
generated audio samples in multiple hardware configurations, demonstrating
effectiveness across both models and real systems. In so doing, we demonstrate
that domain-specific knowledge of audio signal processing represents a
practical means of generating successful hidden voice command attacks.
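As an illustrative sketch of the core observation, the snippet below uses NumPy with synthetic noise standing in for recorded speech: time-reversing short frames of a signal renders speech unintelligible to a listener, yet leaves the per-frame FFT magnitudes (the input to common acoustic feature pipelines) mathematically unchanged. The frame length and signal here are hypothetical choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_magnitudes(signal, frame_len=256):
    # Split into non-overlapping frames and take FFT magnitudes,
    # mimicking the first stage of acoustic feature extraction.
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.abs(np.fft.rfft(frames, axis=1))

# Stand-in "audio" (hypothetical; a real attack would use recorded speech).
audio = rng.standard_normal(4096)

# Time-reverse each frame: unintelligible as audio, but for a real signal
# the DFT of a reversed frame has the same magnitude spectrum.
frame_len = 256
n = len(audio) // frame_len * frame_len
perturbed = audio[:n].reshape(-1, frame_len)[:, ::-1].reshape(-1)

orig = frame_magnitudes(audio, frame_len)
pert = frame_magnitudes(perturbed, frame_len)
print(np.allclose(orig, pert))  # True: the features survive the perturbation
```

A feature extractor that discards phase therefore cannot distinguish the two signals, which is what makes such model-agnostic perturbations transferable.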
Feature Learning from Spectrograms for Assessment of Personality Traits
Several methods have recently been proposed to analyze speech and
automatically infer the personality of the speaker. These methods often rely on
prosodic and other hand-crafted speech processing features extracted with
off-the-shelf toolboxes. To achieve high accuracy, numerous features are
typically extracted using complex and highly parameterized algorithms. In this
paper, a new method based on feature learning and spectrogram analysis is
proposed to simplify the feature extraction process while maintaining a high
level of accuracy. The proposed method learns a dictionary of discriminant
features from patches extracted in the spectrogram representations of training
speech segments. Each speech segment is then encoded using the dictionary, and
the resulting feature set is used to perform classification of personality
traits. Experiments indicate that the proposed method achieves state-of-the-art
results with a significant reduction in complexity when compared to the most
recent reference methods. The number of features and the difficulties linked to
the feature extraction process are greatly reduced, as only one type of
descriptor is used, for which the 6 parameters can be tuned automatically. In
contrast, the simplest reference method uses 4 types of descriptors to which 6
functionals are applied, resulting in over 20 parameters to be tuned.
Comment: 12 pages, 3 figures
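A minimal sketch of the patch-based dictionary idea, using scikit-learn's MiniBatchDictionaryLearning on a synthetic spectrogram; all shapes and hyperparameters below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(1)

# Stand-in for a log-spectrogram (frequency bins x time frames); real use
# would compute this from training speech segments.
spectrogram = rng.standard_normal((64, 200))

# Sample small 2-D patches from the spectrogram and flatten them to vectors.
patches = extract_patches_2d(spectrogram, (8, 8), max_patches=500, random_state=0)
X = patches.reshape(len(patches), -1)

# Learn a small dictionary of patch atoms (hypothetical hyperparameters;
# the paper's few parameters are tuned automatically).
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = dico.fit_transform(X)

# A speech segment would then be encoded by pooling its patch codes into
# one fixed-length feature vector for the personality classifier.
segment_feature = np.abs(codes).mean(axis=0)
print(segment_feature.shape)  # (32,)
```

Pooling sparse codes over all patches of a segment is one common encoding choice; the paper's exact encoding may differ.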
A dataset of continuous affect annotations and physiological signals for emotion analysis
From a computational viewpoint, emotions continue to be intriguingly hard to
understand. In research, direct, real-time inspection in realistic settings is
not possible. Discrete, indirect, post-hoc recordings are therefore the norm.
As a result, proper emotion assessment remains a problematic issue. The
Continuously Annotated Signals of Emotion (CASE) dataset provides a solution as
it focuses on real-time continuous annotation of emotions, as experienced by
the participants while watching various videos. For this purpose, a novel,
intuitive joystick-based annotation interface was developed that allows
simultaneous reporting of valence and arousal, two dimensions that are otherwise
often annotated independently. In parallel, eight high-quality, synchronized physiological
recordings (1000 Hz, 16-bit ADC) were made of ECG, BVP, EMG (3x), GSR (or EDA),
respiration and skin temperature. The dataset consists of the physiological and
annotation data from 30 participants, 15 male and 15 female, who watched
several validated video-stimuli. The validity of the emotion induction, as
exemplified by the annotation and physiological data, is also presented.
Comment: Dataset available at:
https://rmc.dlr.de/download/CASE_dataset/CASE_dataset.zi
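A minimal sketch of how such continuous annotations can be aligned with the 1000 Hz physiological recordings by interpolation; the 20 Hz annotation rate and the toy ECG trace below are assumptions for illustration, not the dataset's actual format.

```python
import numpy as np

# Physiology sampled at 1000 Hz; joystick valence annotations at a slower,
# hypothetical 20 Hz rate (consult the CASE documentation for real rates).
fs_physio = 1000
t_physio = np.arange(0, 10, 1 / fs_physio)   # 10 s of samples
ecg = np.sin(2 * np.pi * 1.2 * t_physio)     # toy ECG stand-in

rng = np.random.default_rng(2)
t_annot = np.arange(0, 10, 1 / 20)
valence = np.clip(np.cumsum(rng.normal(0, 0.05, len(t_annot))), -1, 1)

# Upsample the continuous annotation onto the physiological time base so
# every 1 kHz sample carries an emotion label.
valence_1khz = np.interp(t_physio, t_annot, valence)

print(len(valence_1khz) == len(ecg))  # True
```

Linear interpolation is a simple choice here; sample-and-hold or smoothing splines are equally defensible depending on the analysis.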
Extended pipeline for content-based feature engineering in music genre recognition
We present a feature engineering pipeline for the construction of musical
signal characteristics, to be used for the design of a supervised model for
musical genre identification. The key idea is to extend the traditional
two-step process of extraction and classification with additional stand-alone
phases which are no longer organized in a waterfall scheme. The whole system is
realized by traversing backtrack arrows and cycles between the various stages. In
order to give a compact and effective representation of the features, the
standard early temporal integration is combined with other selection and
extraction phases: on the one hand, the selection of the most meaningful
characteristics based on information gain, and on the other hand, the inclusion
of the nonlinear correlation between this subset of features, determined by an
autoencoder. The results of the experiments conducted on the GTZAN dataset
reveal a noticeable contribution of this methodology to the model's performance
in the classification task.
Comment: ICASSP 201
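The information-gain selection step can be sketched with scikit-learn's mutual_info_classif on toy data; the shapes, class count, and top-k cutoff below are illustrative assumptions, and the subsequent autoencoder stage is omitted.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(3)

# Toy feature matrix standing in for temporally integrated audio
# descriptors; 10 classes mirrors GTZAN's 10 genres.
n_tracks, n_features = 200, 40
X = rng.standard_normal((n_tracks, n_features))
y = rng.integers(0, 10, n_tracks)

# Make a handful of features genuinely informative about the genre label.
X[:, :5] += y[:, None] * 0.5

# Information-gain-style selection: rank features by mutual information
# with the class label and keep the top subset.
mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[::-1][:10]
X_selected = X[:, top]

# The selected subset would then feed an autoencoder whose bottleneck
# captures nonlinear correlations among these features (not shown).
print(X_selected.shape)  # (200, 10)
```

In the pipeline described above, the autoencoder's bottleneck activations would be concatenated with (or replace) the selected descriptors before classification.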