Deep Learning for Audio Signal Processing
Given the recent surge in developments of deep learning, this article
provides a review of the state-of-the-art deep learning techniques for audio
signal processing. Speech, music, and environmental sound processing are
considered side-by-side, in order to point out similarities and differences
between the domains, highlighting general methods, problems, key references,
and potential for cross-fertilization between areas. The dominant feature
representations (in particular, log-mel spectra and raw waveform) and deep
learning models are reviewed, including convolutional neural networks, variants
of the long short-term memory architecture, as well as more audio-specific
neural network models. Subsequently, prominent deep learning application areas
are covered, i.e. audio recognition (automatic speech recognition, music
information retrieval, environmental sound detection, localization and
tracking) and synthesis and transformation (source separation, audio
enhancement, generative models for speech, sound, and music synthesis).
Finally, key issues and future questions regarding deep learning applied to
audio signal processing are identified.
Comment: 15 pages, 2 pdf figures
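As a rough illustration of the log-mel feature representation the review highlights, the sketch below computes a log-mel spectrogram from a raw waveform using only NumPy. The frame length, hop size, and mel-band count are illustrative choices, not values taken from the article:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(wave, sr=16000, n_fft=512, hop=160, n_mels=40):
    # Frame the signal and apply a Hann window
    n_frames = 1 + (len(wave) - n_fft) // hop
    frames = np.stack([wave[i * hop : i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hanning(n_fft)
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2  # (n_frames, n_fft//2 + 1)
    # Triangular mel filterbank: band edges equally spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    mel_power = power @ fbank.T
    return np.log(mel_power + 1e-10)  # small floor avoids log(0)
```

With these settings a one-second 16 kHz signal yields a (97, 40) array: 97 frames of 40 log-mel bands, the kind of time-frequency input typically fed to the convolutional and recurrent models the review surveys.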
SEGAN: Speech Enhancement Generative Adversarial Network
Current speech enhancement techniques operate on the spectral domain and/or
exploit some higher-level feature. The majority of them tackle a limited number
of noise conditions and rely on first-order statistics. To circumvent these
issues, deep networks are being increasingly used, thanks to their ability to
learn complex functions from large example sets. In this work, we propose the
use of generative adversarial networks for speech enhancement. In contrast to
current techniques, we operate at the waveform level, training the model
end-to-end, and incorporate 28 speakers and 40 different noise conditions into
the same model, such that model parameters are shared across them. We evaluate
the proposed model using an independent, unseen test set with two speakers and
20 alternative noise conditions. The enhanced samples confirm the viability of
the proposed model, and both objective and subjective evaluations confirm its
effectiveness. With that, we open the exploration of generative
architectures for speech enhancement, which may progressively incorporate
further speech-centric design choices to improve their performance.
Comment: 5 pages, 4 figures, accepted in INTERSPEECH 201
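The adversarial objective behind this kind of waveform-level enhancement can be sketched numerically. The snippet below implements least-squares GAN losses with an added L1 term between enhanced and clean waveforms, in the spirit of SEGAN's training setup; the function names and the weighting `lam` are illustrative assumptions, not taken from the paper's code:

```python
import numpy as np

def d_loss(d_real, d_fake):
    # Least-squares discriminator loss: push scores on real (clean) samples
    # toward 1 and scores on generator outputs toward 0.
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def g_loss(d_fake, enhanced, clean, lam=100.0):
    # Least-squares generator loss: push discriminator scores on enhanced
    # samples toward 1, plus an L1 term pulling the enhanced waveform
    # toward the clean reference.
    adv = 0.5 * np.mean((d_fake - 1.0) ** 2)
    l1 = np.mean(np.abs(enhanced - clean))
    return adv + lam * l1
```

The L1 term is what ties the generator to the clean target; the adversarial term alone would only require the output to be plausible speech, not the enhancement of this particular utterance.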
Single channel speech enhancement by colored spectrograms
Speech enhancement concerns the processes required to remove unwanted
background sounds from the target speech to improve its quality and
intelligibility. In this paper, a novel approach for single-channel speech
enhancement is presented, using colored spectrograms. We propose the use of a
deep neural network (DNN) architecture adapted from the pix2pix generative
adversarial network (GAN) and train it over colored spectrograms of speech to
denoise them. After denoising, the colors of spectrograms are translated to
magnitudes of short-time Fourier transform (STFT) using a shallow regression
neural network. These estimated STFT magnitudes are then combined with the
noisy phases to obtain the enhanced speech. The results show an improvement of
almost 0.84 points in the perceptual evaluation of speech quality (PESQ) and 1%
in the short-term objective intelligibility (STOI) over the unprocessed noisy
data. The gain in quality and intelligibility over the unprocessed signal is
nearly equal to that achieved by the baseline methods used for comparison, but
at a much lower computational cost. Specifically, the proposed solution
matches the PESQ score of a similar baseline model trained on grayscale
spectrograms, which achieves the highest PESQ score, at almost 10 times lower
computational cost, and falls only 1% short in STOI of another baseline based
on a convolutional neural network-GAN (CNN-GAN), which produces the most
intelligible speech, at 28 times lower computational cost.
Comment: 18 pages, 6 figures, 5 tables
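The final reconstruction step described above, combining estimated STFT magnitudes with the noisy phase and inverting the transform, can be sketched as follows. The window length and hop size are illustrative, and the overlap-add inverse is a minimal NumPy implementation rather than the paper's pipeline:

```python
import numpy as np

def stft(x, n_fft=512, hop=128):
    # Hann-windowed short-time Fourier transform, one row per frame
    win = np.hanning(n_fft)
    n = 1 + (len(x) - n_fft) // hop
    return np.stack([np.fft.rfft(x[i * hop : i * hop + n_fft] * win)
                     for i in range(n)])

def istft(X, n_fft=512, hop=128):
    # Windowed overlap-add inverse, normalized by the accumulated window energy
    win = np.hanning(n_fft)
    out = np.zeros((X.shape[0] - 1) * hop + n_fft)
    norm = np.zeros_like(out)
    for i, frame in enumerate(X):
        out[i * hop : i * hop + n_fft] += np.fft.irfft(frame, n_fft) * win
        norm[i * hop : i * hop + n_fft] += win ** 2
    return out / np.maximum(norm, 1e-8)

def reconstruct(est_mag, noisy_stft, n_fft=512, hop=128):
    # Pair the estimated magnitudes with the phase of the noisy signal
    phase = np.angle(noisy_stft)
    return istft(est_mag * np.exp(1j * phase), n_fft, hop)
```

Reusing the noisy phase is a common shortcut in magnitude-domain enhancement: phase is hard to estimate, and at moderate noise levels the noisy phase is close enough that most of the perceptual gain comes from the corrected magnitudes.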