Frame Theory for Signal Processing in Psychoacoustics
This review chapter aims to strengthen the link between frame theory and
signal processing tasks in psychoacoustics. On the one hand, the basic
concepts of frame theory are presented, with some proofs provided to explain
those concepts in detail; the goal is to show hearing scientists how this
mathematical theory can be relevant to their research. In particular, we
focus on frame theory from a filter bank perspective, which is probably the
most relevant viewpoint for audio signal processing. On the other hand, basic
psychoacoustic concepts are presented to encourage mathematicians to apply
their knowledge in this field.
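As a concrete illustration of the frame condition A·‖x‖² ≤ Σ_k |⟨x, f_k⟩|² ≤ B·‖x‖² that such a chapter builds on, the following sketch (our own example, not taken from the chapter) numerically checks that three unit vectors at 120° angles in R² form a tight frame with bound 3/2:

```python
import numpy as np

# Three unit vectors at 120-degree angles in R^2 (the "Mercedes-Benz"
# frame) form a tight frame: sum_k |<x, f_k>|^2 = (3/2) * ||x||^2 for all x.
angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
frame = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # shape (3, 2)

x = np.random.default_rng(0).standard_normal(2)   # arbitrary test vector
energy = np.sum((frame @ x) ** 2)                 # sum of squared frame coefficients
print(np.isclose(energy, 1.5 * np.sum(x ** 2)))   # True: tight frame, A = B = 3/2
```

Because the frame is tight, analysis followed by (scaled) synthesis reconstructs any signal exactly, which is the property filter-bank formulations exploit.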
Practical Hidden Voice Attacks against Speech and Speaker Recognition Systems
Voice Processing Systems (VPSes), now widely deployed, have been made
significantly more accurate through the application of recent advances in
machine learning. However, adversarial machine learning has similarly advanced
and has been used to demonstrate that VPSes are vulnerable to the injection of
hidden commands - audio obscured by noise that is correctly recognized by a VPS
but not by human beings. Such attacks, though, are often highly dependent on
white-box knowledge of a specific machine learning model and limited to
specific microphones and speakers, making their use across different acoustic
hardware platforms (and thus their practicality) limited. In this paper, we
break these dependencies and make hidden command attacks more practical through
model-agnostic (black-box) attacks, which exploit knowledge of the signal
processing algorithms commonly used by VPSes to generate the data fed into
machine learning systems. Specifically, we exploit the fact that multiple
source audio samples have similar feature vectors when transformed by acoustic
feature extraction algorithms (e.g., FFTs). We develop four classes of
perturbations that create unintelligible audio and test them against 12 machine
learning models, including 7 proprietary models (e.g., Google Speech API, Bing
Speech API, IBM Speech API, Azure Speaker API, etc.), and demonstrate successful
attacks against all targets. Moreover, we successfully use our maliciously
generated audio samples in multiple hardware configurations, demonstrating
effectiveness across both models and real systems. In so doing, we demonstrate
that domain-specific knowledge of audio signal processing represents a
practical means of generating successful hidden voice command attacks.
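The core observation, that distinct time-domain signals can map to near-identical feature vectors, can be demonstrated with a minimal sketch (our own illustration, not the paper's perturbation method): for any real signal, reversing it in time leaves the FFT magnitude spectrum unchanged, so a purely magnitude-based feature extractor cannot distinguish the two.

```python
import numpy as np

# Two perceptually different signals with identical FFT-magnitude features.
rng = np.random.default_rng(42)
x = rng.standard_normal(512)          # stand-in for one audio frame
x_rev = x[::-1]                       # same samples, reversed in time

mag = np.abs(np.fft.rfft(x))
mag_rev = np.abs(np.fft.rfft(x_rev))
print(np.allclose(mag, mag_rev))      # True: the feature vectors coincide
```

This works because, for a real signal, time reversal conjugates the DFT coefficients (up to a phase factor), and conjugation does not change magnitudes.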
Score extraction using MPEG-4 T/F partial encoding
This paper describes preliminary work on the development of an MPEG-4 audio transcoder between the time/frequency (T/F) and structured audio (SA) formats. Our approach does not go from the T/F format through waveform data and back to SA, but instead extracts the score information at an intermediate stage. For this intermediate form we have chosen the input of the filterbank and block switching tool, which consists of frequency data. This data is the result of windowing the signal and applying the modified discrete cosine transform (MDCT). The size of the window to be used is determined on a frame-by-frame basis by a psychoacoustic analysis of the data. In this paper we show that this approach is feasible by developing a system which extracts the score information from the filterbank and block switching tool output in an MPEG-4 T/F encoder, adapting and fine-tuning some existing processing techniques.
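The MDCT mentioned above maps a windowed frame of 2N samples to N frequency coefficients. A direct O(N²) evaluation of the standard MDCT definition can be sketched as follows (an illustrative implementation, not the encoder's optimized one):

```python
import numpy as np

def mdct(frame):
    """MDCT of one windowed frame of 2N samples.

    Direct evaluation of
        X[k] = sum_n x[n] * cos(pi/N * (n + 0.5 + N/2) * (k + 0.5)),
    k = 0..N-1, the transform applied after windowing in the filterbank tool.
    """
    two_n = len(frame)
    n_half = two_n // 2
    n = np.arange(two_n)
    k = np.arange(n_half)
    basis = np.cos(np.pi / n_half * (n[None, :] + 0.5 + n_half / 2)
                   * (k[:, None] + 0.5))
    return basis @ frame

# A sine window, commonly used for long MDCT blocks:
N2 = 16
window = np.sin(np.pi / N2 * (np.arange(N2) + 0.5))
coeffs = mdct(window * np.ones(N2))
print(coeffs.shape)   # (8,): 2N time samples yield N frequency coefficients
```

The 50% overlap between consecutive frames, combined with this 2N-to-N mapping, is what gives the MDCT its critically sampled, lapped structure.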
Towards Emotion Recognition: A Persistent Entropy Application
Emotion recognition and classification is a very active area of research. In
this paper, we present a first approach to emotion classification using
persistent entropy and support vector machines. A topology-based model is
applied to obtain a single real number from each raw signal. These data are
used as input to a support vector machine to classify signals into 8 different
emotions (calm, happy, sad, angry, fearful, disgust, and surprised).
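The single real number extracted per signal is the persistent entropy of its persistence barcode: the Shannon entropy of the normalized bar lifetimes. A minimal sketch of that computation (the barcode below is a made-up example):

```python
import numpy as np

def persistent_entropy(intervals):
    """Persistent entropy of a persistence barcode.

    intervals: iterable of (birth, death) pairs with death > birth.
    Returns -sum(p_i * log(p_i)), where p_i = lifetime_i / total lifetime.
    """
    lifetimes = np.array([d - b for b, d in intervals], dtype=float)
    p = lifetimes / lifetimes.sum()
    return float(-np.sum(p * np.log(p)))

# Hypothetical barcode: three bars of equal length give maximal entropy log(3).
print(persistent_entropy([(0.0, 1.0), (0.5, 1.5), (2.0, 3.0)]))  # ~1.0986
```

Because it summarizes the whole barcode in one stable scalar, persistent entropy is a natural fixed-length feature to feed into a support vector machine.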
Bandwidth extension of narrowband speech
Recently, 4G mobile phone systems have been designed to process wideband
speech signals, whose sampling frequency is 16 kHz. However, most of the
mobile and fixed telephone networks, as well as current 3G mobile phones,
still process narrowband speech signals, whose sampling frequency is 8 kHz.
In the near future, all these systems will have to coexist. Therefore, a
wideband speech signal (with a bandwidth of up to 7.2 kHz) must sometimes be
estimated from an available narrowband one (whose frequency band is 300-3400
Hz). In this work, different audio bandwidth extension techniques have been
implemented and evaluated. First, a simple non-model-based algorithm (an
interpolation algorithm) was implemented. Second, a model-based algorithm
(linear mapping) was designed and evaluated against the first. Several CMOS
(Comparison Mean Opinion Score) [6] listening tests show that the linear
mapping algorithm clearly outperforms the interpolation one, with results
very close to those obtained for the original wideband speech signal.
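A classic non-model-based way to repopulate the missing 4-8 kHz band, shown here as a generic illustration (a stand-in for the paper's interpolation algorithm, not its exact implementation), is spectral folding: doubling the sampling rate by zero insertion, which images the narrowband spectrum into the high band.

```python
import numpy as np

def spectral_folding(narrowband):
    """Double the sampling rate by zero insertion; spectral images of the
    0-4 kHz band then populate 4-8 kHz ("folding")."""
    upsampled = np.zeros(2 * len(narrowband))
    upsampled[::2] = narrowband
    return upsampled

fs_nb = 8000
t = np.arange(fs_nb) / fs_nb                 # one second of signal
nb = np.sin(2 * np.pi * 1000 * t)            # 1 kHz tone sampled at 8 kHz
wb = spectral_folding(nb)                    # now nominally at 16 kHz

spectrum = np.abs(np.fft.rfft(wb))           # 1 Hz per bin for this length
peaks = sorted(np.argsort(spectrum)[-2:].tolist())
print(peaks)                                 # [1000, 7000]: tone plus its mirror image
```

The folded high band has the wrong spectral envelope for speech, which is exactly why model-based approaches such as linear mapping tend to sound better in listening tests.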
Sparsity and cosparsity for audio declipping: a flexible non-convex approach
This work investigates the empirical performance of the sparse synthesis
versus sparse analysis regularization for the ill-posed inverse problem of
audio declipping. We develop a versatile non-convex heuristic which can be
readily used with both data models. Based on this algorithm, we report that,
in most cases, the two models perform similarly in terms of signal
enhancement. However, the analysis version is shown to be amenable to
real-time audio processing when certain analysis operators are considered.
Both versions outperform state-of-the-art methods in the field, especially
for severely saturated signals.
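The declipping problem setup can be sketched generically (this illustrates the standard consistency constraints, not the paper's algorithm): hard clipping preserves "reliable" samples and saturates the rest, so any reconstruction must equal the observation on the reliable set and lie at or beyond the clipping level, with matching sign, on the clipped set.

```python
import numpy as np

rng = np.random.default_rng(1)
clean = np.cumsum(rng.standard_normal(256)) / 8   # smooth-ish test signal
theta = 0.8 * np.max(np.abs(clean))               # clipping level
clipped = np.clip(clean, -theta, theta)           # hard-clipped observation

reliable = np.abs(clipped) < theta                # samples left untouched
print(f"{(~reliable).mean():.0%} of samples saturated")

# Consistency constraints any declipped estimate must satisfy
# (the clean signal trivially does):
assert np.all(clipped[reliable] == clean[reliable])
assert np.all(np.abs(clean[~reliable]) >= theta)
```

Sparse synthesis and sparse analysis regularizers differ in how they pick one signal from this feasible set: the former assumes sparse coefficients in a dictionary, the latter a sparse output of an analysis operator.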
Statistical Spectral Parameter Estimation of Acoustic Signals with Applications to Byzantine Music
Digitized acoustical signals of Byzantine music performed by Iakovos Nafpliotis are used to extract the fundamental frequency of each note of the diatonic scale. These empirical results are then contrasted with theoretical suggestions and previous empirical findings. Several parametric and non-parametric spectral parameter estimation methods are implemented. These include: (1) the phase vocoder method, (2) the McAulay-Quatieri method, (3) the Levinson-Durbin algorithm, (4) YIN, (5) the Quinn & Fernandes estimator, (6) the Pisarenko frequency estimator, (7) the MUltiple SIgnal Characterization (MUSIC) algorithm, (8) the periodogram method, (9) the Quinn & Fernandes filtered periodogram, (10) the Rife & Vincent estimator, and (11) the Fourier transform. Algorithm performance was very precise. The psychophysical aspect of human pitch discrimination is also explored. The results of eight (8) psychoacoustical experiments were used to determine the aural just noticeable difference (jnd) in pitch and to deduce patterns used to customize acceptable performable pitch deviation to the application at hand. These customizations [Acceptable Performance Difference (a new measure of frequency differential acceptability), Perceptual Confidence Intervals (a new concept of confidence intervals based on psychophysical experiment rather than on statistics of performance data), and one based purely on music-theoretical asymphony] are proposed, discussed, and used in the interpretation of results. The results suggest that Nafpliotis' intervals are closer to just intonation than Byzantine theory (with minor exceptions), something not generally found in Thrasivoulos Stanitsas' data. Nafpliotis' perfect fifth is identical to the just intonation fifth, even though he overstretches his octave by fifteen (15) cents. His perfect fourth is also more just, as opposed to Stanitsas' fourth, which deviates in the opposite direction. Stanitsas' tendency to exaggerate the major third interval A4-F4 is still seen in Nafpliotis, but curbed.
This is the only noteworthy departure from just intonation, with Nafpliotis being exactly Chrysanthian (the most exaggerated theoretical suggestion of all) and Stanitsas overstretching it even more than Nafpliotis and Chrysanth. Nafpliotis ascends in the second tetrachord more robustly diatonically than Stanitsas. The results are reported and interpreted within the framework of Acceptable Performance Differences.
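The interval measurements above are expressed in cents, the standard logarithmic unit with 1200 cents per octave. A short worked example (our own arithmetic, not data from the study) shows how the cited values arise:

```python
import math

def cents(f_high, f_low):
    """Interval size in cents between two frequencies (1200 cents = 1 octave)."""
    return 1200 * math.log2(f_high / f_low)

# The just-intonation perfect fifth has frequency ratio 3:2:
print(round(cents(3, 2), 2))                  # 701.96 cents

# An octave overstretched by 15 cents corresponds to ratio 2 * 2**(15/1200):
print(round(cents(2 * 2 ** (15 / 1200), 1), 1))  # 1215.0 cents
```

Since typical just-noticeable differences in pitch are on the order of a few cents, a 15-cent octave stretch is comfortably above the perceptual threshold, which is why measures like the Acceptable Performance Difference are calibrated against psychoacoustic experiments.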