Deep Learning for Audio Signal Processing
Given the recent surge in developments of deep learning, this article
provides a review of the state-of-the-art deep learning techniques for audio
signal processing. Speech, music, and environmental sound processing are
considered side-by-side, in order to point out similarities and differences
between the domains, highlighting general methods, problems, key references,
and potential for cross-fertilization between areas. The dominant feature
representations (in particular, log-mel spectra and raw waveform) and deep
learning models are reviewed, including convolutional neural networks, variants
of the long short-term memory architecture, as well as more audio-specific
neural network models. Subsequently, prominent deep learning application areas
are covered, i.e. audio recognition (automatic speech recognition, music
information retrieval, environmental sound detection, localization and
tracking) and synthesis and transformation (source separation, audio
enhancement, generative models for speech, sound, and music synthesis).
Finally, key issues and future questions regarding deep learning applied to
audio signal processing are identified.
Comment: 15 pages, 2 PDF figures
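Among the feature representations the review highlights, the log-mel spectrogram is simple to sketch end to end: frame the waveform, take the power spectrum of each frame, pool it through a triangular mel filterbank, and apply log compression. A minimal NumPy version follows; the frame sizes and mel-band count are illustrative defaults, not values prescribed by the article:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(wave, sr=16000, n_fft=512, hop=256, n_mels=40):
    # Frame the signal and apply a Hann window to each frame.
    n_frames = 1 + (len(wave) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([wave[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # Power spectrum of each frame (real FFT -> n_fft//2 + 1 bins).
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Triangular mel filterbank: band edges equally spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        if center > left:
            fb[m - 1, left:center] = (np.arange(left, center) - left) / (center - left)
        if right > center:
            fb[m - 1, center:right] = (right - np.arange(center, right)) / (right - center)
    # Log compression; the small floor avoids log(0) on silent frames.
    return np.log(power @ fb.T + 1e-10)

# Example: one second of a 440 Hz tone at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
feat = log_mel_spectrogram(np.sin(2 * np.pi * 440 * t), sr=sr)
print(feat.shape)  # (61, 40): 61 frames x 40 mel bands
```

Libraries such as librosa provide equivalent functionality with more options (windowing, padding, dB scaling); this sketch only shows the structure of the computation.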
Optical Music Recognition with Convolutional Sequence-to-Sequence Models
Optical Music Recognition (OMR) is an important technology within Music
Information Retrieval. Deep learning models show promising results on OMR
tasks, but symbol-level annotated data sets of sufficient size to train such
models are not available and difficult to develop. We present a deep learning
architecture called a Convolutional Sequence-to-Sequence model to both move
towards an end-to-end trainable OMR pipeline, and apply a learning process that
trains on full sentences of sheet music instead of individually labeled
symbols. The model is trained and evaluated on a human-generated data set, with
various image augmentations based on real-world scenarios. This data set is the
first publicly available set in OMR research with sufficient size to train and
evaluate deep learning models. With the introduced augmentations, a pitch
recognition accuracy of 81% and a duration accuracy of 94% are achieved,
resulting in a note-level accuracy of 80%. Finally, the model is compared to
commercially available methods, showing large improvements over these
applications.
Comment: ISMIR 201
A Feature Learning Siamese Model for Intelligent Control of the Dynamic Range Compressor
In this paper, a siamese DNN model is proposed to learn the characteristics
of the audio dynamic range compressor (DRC). This facilitates an intelligent
control system that uses audio examples to configure the DRC, a widely used
non-linear audio signal conditioning technique in the areas of music
production, speech communication and broadcasting. Several alternative siamese
DNN architectures are proposed to learn feature embeddings that can
characterise subtle effects due to dynamic range compression. These models are
compared with each other as well as handcrafted features proposed in previous
work. An evaluation of the relations between the DNN hyperparameters and the
DRC parameters is also provided. The best model produces a universal feature
embedding capable of predicting multiple DRC parameters simultaneously, a
significant improvement over our previous research.
The feature embedding shows better performance than handcrafted audio features
when predicting DRC parameters for both mono-instrument audio loops and
polyphonic music pieces.
Comment: 8 pages, accepted at IJCNN 201
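The siamese setup described above can be caricatured in a few lines: two inputs pass through one shared encoder, and the resulting embeddings are compared by a distance. A minimal NumPy sketch with random, untrained weights follows; all layer sizes are illustrative, and the paper's actual architecture and training objective are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Both branches of a siamese model share the SAME weights, so the two
# inputs land in a common embedding space and can be compared directly.
W1 = rng.standard_normal((64, 32)) * 0.1
W2 = rng.standard_normal((32, 8)) * 0.1

def encode(x):
    """Tiny two-layer encoder: 64-dim input features -> 8-dim embedding."""
    h = np.maximum(x @ W1, 0.0)  # ReLU
    return h @ W2

def siamese_distance(a, b):
    """Euclidean distance between the embeddings of the two branches."""
    return float(np.linalg.norm(encode(a) - encode(b)))

x = rng.standard_normal(64)             # e.g. features of a reference clip
y = x + 0.01 * rng.standard_normal(64)  # a lightly processed version of it
print(siamese_distance(x, x))  # 0.0 for identical inputs
print(siamese_distance(x, y))  # small, since y is close to x
```

In training, a contrastive or triplet loss would pull embeddings of similarly compressed examples together and push dissimilar ones apart; only the shared-weight comparison structure is shown here.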
A Fully Convolutional Deep Auditory Model for Musical Chord Recognition
Chord recognition systems depend on robust feature extraction pipelines.
While these pipelines are traditionally hand-crafted, recent advances in
end-to-end machine learning have begun to inspire researchers to explore
data-driven methods for such tasks. In this paper, we present a chord
recognition system that uses a fully convolutional deep auditory model for
feature extraction. The extracted features are processed by a Conditional
Random Field that decodes the final chord sequence. Both processing stages are
trained automatically and do not require expert knowledge for optimising
parameters. We show that the learned auditory system extracts musically
interpretable features, and that the proposed chord recognition system achieves
results on par with or better than state-of-the-art algorithms.
Comment: In Proceedings of the 2016 IEEE 26th International Workshop on
Machine Learning for Signal Processing (MLSP), Vietri sul Mare, Italy
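The CRF decoding stage in the chord-recognition abstract corresponds to MAP inference over a linear chain, typically done with the Viterbi algorithm: per-frame (unary) chord scores are combined with pairwise transition scores, and the best path is recovered by backtracking. A small self-contained sketch; the score values are toy numbers, not the paper's learned potentials:

```python
import numpy as np

def viterbi_decode(unary, transition):
    """Most likely label sequence for a linear-chain CRF.

    unary: (T, K) per-frame scores for K chord classes.
    transition: (K, K) pairwise scores, transition[prev, cur].
    """
    T, K = unary.shape
    score = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transition  # (K, K): all prev -> cur pairs
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + unary[t]
    # Backtrack from the best final state.
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: 3 chord classes; self-transitions are favoured, which
# smooths over the weak, noisy second frame.
unary = np.array([[5., 0., 0.],
                  [0., 1., 0.],   # ambiguous frame
                  [4., 0., 0.],
                  [0., 0., 6.]])
transition = np.full((3, 3), -1.0)
np.fill_diagonal(transition, 1.0)
print(viterbi_decode(unary, transition))  # [0, 0, 0, 2]
```

Note how the decoder keeps class 0 at the ambiguous second frame (the transition penalty outweighs the weak unary evidence for class 1) and switches only when the final frame strongly supports class 2.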