Deep Learning for Audio Signal Processing
Given the recent surge in developments of deep learning, this article
provides a review of the state-of-the-art deep learning techniques for audio
signal processing. Speech, music, and environmental sound processing are
considered side-by-side, in order to point out similarities and differences
between the domains, highlighting general methods, problems, key references,
and potential for cross-fertilization between areas. The dominant feature
representations (in particular, log-mel spectra and raw waveform) and deep
learning models are reviewed, including convolutional neural networks, variants
of the long short-term memory architecture, as well as more audio-specific
neural network models. Subsequently, prominent deep learning application areas
are covered, i.e. audio recognition (automatic speech recognition, music
information retrieval, environmental sound detection, localization and
tracking) and synthesis and transformation (source separation, audio
enhancement, generative models for speech, sound, and music synthesis).
Finally, key issues and future questions regarding deep learning applied to
audio signal processing are identified.
Comment: 15 pages, 2 PDF figures
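The review names log-mel spectra as a dominant input representation for these models. As a brief, hedged illustration (the article prescribes no particular toolkit, file, or hyperparameters; the values below are illustrative assumptions), a log-mel feature matrix of the kind typically fed to CNNs or LSTM variants can be computed with librosa as follows:

    # Illustrative sketch only: compute log-mel spectrogram features.
    # File name, sample rate, and frame parameters are assumptions.
    import librosa
    import numpy as np

    # Load mono audio at an assumed 16 kHz sample rate.
    y, sr = librosa.load("example.wav", sr=16000)

    # Mel-scaled power spectrogram: 25 ms windows, 10 ms hop, 64 mel bands
    # (common choices; not fixed by the article).
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=400, hop_length=160, n_mels=64
    )

    # Log compression yields the log-mel spectra used as network input.
    log_mel = librosa.power_to_db(mel, ref=np.max)
    print(log_mel.shape)  # (n_mels, n_frames)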
SingNet: A Real-time Singing Voice Beat and Downbeat Tracking System
Singing voice beat and downbeat tracking has several applications in
automatic music production, analysis, and manipulation. Among them, some require
real-time processing, such as live performance processing and
auto-accompaniment for singing inputs. This task is challenging owing to the
non-trivial rhythmic and harmonic patterns in singing signals. Real-time
processing introduces further constraints, such as the inaccessibility of future
data and the impossibility of correcting earlier results that prove
inconsistent with later ones. In this paper, we introduce the first system
that tracks the beats and downbeats of singing voices in real-time.
Specifically, we propose a novel dynamic particle filtering approach that
incorporates offline historical data to correct the online inference by using a
variable number of particles. We evaluate the performance on two datasets:
GTZAN with the separated vocal tracks, and an in-house dataset with the
original vocal stems. Experimental results demonstrate that our proposed
approach outperforms the baseline by 3-5%.
Comment: Accepted for the 2023 International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2023)
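The abstract only names the approach at a high level. As a rough, hypothetical sketch of frame-by-frame particle filtering for online beat tracking (this is not the paper's dynamic particle filter; every parameter and variable below is an illustrative assumption), the core propagate/reweight/resample loop might look like:

    # Generic bootstrap particle filter for online beat phase/tempo tracking.
    # NOT the paper's method; illustrates the general idea only.
    import numpy as np

    rng = np.random.default_rng(0)
    n_particles = 256        # the paper adapts this number dynamically
    hop_sec = 0.01           # assumed 10 ms analysis hop

    # State per particle: beat period (seconds) and phase within the period.
    period = rng.uniform(0.3, 1.0, n_particles)
    phase = rng.uniform(0.0, 1.0, n_particles) * period
    weights = np.full(n_particles, 1.0 / n_particles)

    def step(onset_strength):
        """Advance particles by one frame and reweight by onset evidence."""
        global period, phase, weights
        # Propagate: small random tempo drift, then advance the phase.
        period = np.clip(period + rng.normal(0.0, 0.005, n_particles), 0.25, 1.2)
        phase = (phase + hop_sec) % period
        # Particles whose phase is near a beat expect high onset strength.
        near_beat = np.exp(-(np.minimum(phase, period - phase) / 0.03) ** 2)
        weights = weights * (1e-3 + onset_strength * near_beat)
        weights = weights / weights.sum()
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            period, phase = period[idx], phase[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
        # Report a beat when the weighted mean distance to a beat is small.
        return float(np.average(np.minimum(phase, period - phase), weights=weights)) < 0.02

Feeding per-frame onset strengths (e.g. from a spectral-flux novelty function) into step() yields a per-frame beat decision; a real system like the one described would additionally distinguish downbeats, exploit historical data, and vary the particle count online.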