6,777 research outputs found
Deep Learning for Audio Signal Processing
Given the recent surge in developments of deep learning, this article
provides a review of the state-of-the-art deep learning techniques for audio
signal processing. Speech, music, and environmental sound processing are
considered side-by-side, in order to point out similarities and differences
between the domains, highlighting general methods, problems, key references,
and potential for cross-fertilization between areas. The dominant feature
representations (in particular, log-mel spectra and raw waveform) and deep
learning models are reviewed, including convolutional neural networks, variants
of the long short-term memory architecture, as well as more audio-specific
neural network models. Subsequently, prominent deep learning application areas
are covered, i.e. audio recognition (automatic speech recognition, music
information retrieval, environmental sound detection, localization and
tracking) and synthesis and transformation (source separation, audio
enhancement, generative models for speech, sound, and music synthesis).
Finally, key issues and future questions regarding deep learning applied to
audio signal processing are identified.
Comment: 15 pages, 2 pdf figures
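The review above names log-mel spectra as a dominant feature representation. As a rough illustration of how such features are computed, here is a minimal numpy sketch; the HTK mel formula, 40 bands, and the 16 kHz / 512-point STFT settings are illustrative assumptions, not parameters from the article:

```python
import numpy as np

def hz_to_mel(f):
    # HTK mel formula: mel = 2595 * log10(1 + f / 700)
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
    """Frame the signal, take the power STFT, apply a mel filterbank, take log."""
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([signal[i * hop: i * hop + n_fft] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2    # (n_frames, n_fft//2+1)
    # Triangular mel filterbank, equally spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return np.log(spec @ fbank.T + 1e-10)              # (n_frames, n_mels)

x = np.random.randn(16000)        # 1 s of noise at 16 kHz
feat = log_mel_spectrogram(x)
print(feat.shape)                 # (61, 40): frames x mel bands
```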
Efficient Data Utilization and Weakly Supervised Learning Techniques for Audio Event Detection
Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, February 2020. Advisor: Nam Soo Kim.
Conventional audio event detection (AED) models are based on supervised approaches. For supervised approaches, strongly labeled data is required. However, collecting large-scale strongly labeled data of audio events is challenging due to the diversity of audio event types and labeling difficulties. In this thesis, we propose data-efficient and weakly supervised techniques for AED.
In the first approach, a data-efficient AED system is proposed. In the proposed system, data augmentation is performed to deal with the data sparsity problem and generate polyphonic event examples. An exemplar-based noise reduction algorithm is proposed for feature enhancement. For polyphonic event detection, a multi-labeled deep neural network (DNN) classifier is employed. An adaptive thresholding algorithm is applied as a post-processing method for robust event detection in noisy conditions. From the experimental results, the proposed algorithm has shown promising performance for AED on a low-resource dataset.
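The adaptive thresholding mentioned above could look roughly like the following numpy sketch. The median-tracking rule and the `alpha` and `floor` parameters are hypothetical stand-ins, since the abstract does not spell out the thesis's actual algorithm:

```python
import numpy as np

def adaptive_threshold_decisions(frame_probs, alpha=1.2, floor=0.3):
    """Per-class adaptive thresholding: each class's decision threshold tracks
    its median activation, so a globally elevated (noisy) score track needs a
    proportionally stronger peak before a detection is declared."""
    thr = np.maximum(alpha * np.median(frame_probs, axis=0), floor)
    return frame_probs > thr            # boolean (frames, classes) decisions

rng = np.random.default_rng(4)
probs = np.clip(rng.normal(0.2, 0.1, size=(100, 4)), 0, 1)  # background scores
probs[40:50, 2] += 0.6                  # a burst of activity in class 2
probs = np.clip(probs, 0, 1)
det = adaptive_threshold_decisions(probs, alpha=1.5)
print(det[:, 2].sum())                  # frames flagged for class 2
```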
In the second approach, a convolutional neural network (CNN)-based audio tagging system is proposed. The proposed model consists of a local detector and a global classifier. The local detector detects local audio words that contain distinct characteristics of events, and the global classifier summarizes the information to predict audio events on the recording. From the experimental results, we have found that the proposed model outperforms conventional artificial neural network models.
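A minimal sketch of the local-detector/global-classifier idea, assuming a single-layer 1-D convolutional detector and mean pooling as the global summarizer; the real model's architecture and weights are not given in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights: n_events 1-D conv filters spanning all 40 mel bands.
n_mels, n_events, kernel = 40, 5, 3
W = rng.standard_normal((n_events, kernel, n_mels)) * 0.1
b = np.zeros(n_events)

def local_detector(feat):
    """Per-frame event scores ("local audio words") over a log-mel clip."""
    T = feat.shape[0] - kernel + 1
    return np.stack([
        sigmoid(np.tensordot(feat[t:t + kernel], W, axes=([0, 1], [1, 2])) + b)
        for t in range(T)
    ])                                          # shape (T, n_events)

def global_classifier(scores):
    """Summarize local scores into clip-level event probabilities."""
    return scores.mean(axis=0)                  # mean pooling over time

feat = rng.standard_normal((61, n_mels))        # one clip of log-mel frames
clip_probs = global_classifier(local_detector(feat))
print(clip_probs.shape)                         # one probability per event
```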
In the final approach, we propose a weakly supervised AED model. The proposed model takes advantage of the strengthened feature propagation of DenseNet and the channel-wise relationship modeling of SENet. Also, the correlations among segments in audio recordings are represented by a recurrent neural network (RNN) and a conditional random field (CRF). The RNN exploits contextual information, and CRF post-processing helps refine segment-level predictions. We evaluate the proposed method and compare its performance with a CNN-based baseline approach. A number of experiments show that the proposed method is effective for both audio tagging and weakly supervised AED.
1 Introduction 1
2 Audio Event Detection 5
2.1 Data-Efficient Audio Event Detection 6
2.2 Audio Tagging 7
2.3 Weakly Supervised Audio Event Detection 9
2.4 Metrics 10
3 Data-Efficient Techniques for Audio Event Detection 17
3.1 Introduction 17
3.2 DNN-Based AED System 18
3.2.1 Data Augmentation 20
3.2.2 Exemplar-Based Approach for Noise Reduction 21
3.2.3 DNN Classifier 22
3.2.4 Post-Processing 23
3.3 Experiments 24
3.4 Summary 27
4 Audio Tagging using Local Detector and Global Classifier 29
4.1 Introduction 29
4.2 CNN-Based Audio Tagging Model 31
4.2.1 Local Detector and Global Classifier 32
4.2.2 Temporal Localization of Events 34
4.3 Experiments 34
4.3.1 Dataset and Feature 34
4.3.2 Model Training 35
4.3.3 Results 36
4.4 Summary 39
5 Deep Convolutional Neural Network with Structured Prediction for Weakly Supervised Audio Event Detection 41
5.1 Introduction 41
5.2 CNN with Structured Prediction for Weakly Supervised AED 46
5.2.1 DenseNet 47
5.2.2 Squeeze-and-Excitation 48
5.2.3 Global Pooling for Aggregation 49
5.2.4 Structured Prediction for Accurate Event Localization 50
5.3 Experiments 53
5.3.1 Dataset 53
5.3.2 Feature Extraction 54
5.3.3 DSNet and DSNet-RNN Structures 54
5.3.4 Baseline CNN Structure 56
5.3.5 Training and Evaluation 57
5.3.6 Metrics 57
5.3.7 Results and Discussion 58
5.3.8 Comparison with the DCASE 2017 task 4 Results 61
5.4 Summary 62
6 Conclusions 65
Bibliography 67
Abstract (in Korean) 77
Acknowledgments 79
Deep CNN Framework for Audio Event Recognition using Weakly Labeled Web Data
The development of audio event recognition models requires labeled training
data, which are generally hard to obtain. One promising source of recordings of
audio events is the large amount of multimedia data on the web. In particular,
if the audio content analysis must itself be performed on web audio, it is
important to train the recognizers themselves from such data. Training from
these web data, however, poses several challenges, the most important being the
availability of labels: labels, if any, that may be obtained for the data are
generally "weak", and not of the kind conventionally required for training
detectors or classifiers. We propose that learning algorithms that can exploit
weak labels offer an effective method to learn from web data. We then propose a
robust and efficient deep convolutional neural network (CNN) based framework to
learn audio event recognizers from weakly labeled data. The proposed method can
train from and analyze recordings of variable length in an efficient manner and
outperforms a network trained with strongly labeled web data by a
considerable margin.
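The key property claimed above, training on and analyzing recordings of variable length, comes from collapsing per-segment scores into one fixed-size recording-level prediction. A toy numpy sketch with a random stand-in for the trained segment scorer (the class count and segment length are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

N_CLASSES, SEG = 3, 10       # hypothetical event classes, segment length (frames)

def segment_scores(features):
    # Placeholder per-segment scorer; in the paper this is a trained CNN.
    n_seg = features.shape[0] // SEG
    return rng.random((n_seg, N_CLASSES))

def recording_prediction(features):
    """Max-pool segment scores so recordings of any length yield one
    fixed-size vector of event probabilities -- the property that lets a
    weak-label CNN train on variable-length web recordings."""
    return segment_scores(features).max(axis=0)

short = rng.standard_normal((50, 40))     # 5 segments of log-mel frames
long_ = rng.standard_normal((400, 40))    # 40 segments
p_short, p_long = recording_prediction(short), recording_prediction(long_)
print(p_short.shape, p_long.shape)        # both (3,): length-invariant output
```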
Data-Efficient Weakly Supervised Learning for Low-Resource Audio Event Detection Using Deep Learning
5 pages, 2 figures. arXiv admin note: substantial text overlap with arXiv:1807.03697
We propose a method to perform audio event detection under the common constraint that only limited training data are available. In training a deep learning system to perform audio event detection, two practical problems arise. Firstly, most datasets are "weakly labelled", having only a list of events present in each recording without any temporal information for training. Secondly, deep neural networks need a very large amount of labelled training data to achieve good performance, yet in practice it is difficult to collect enough samples for most classes of interest. In this paper, we propose data-efficient training of a stacked convolutional and recurrent neural network. This neural network is trained in a multiple-instance learning setting, for which we introduce a new loss function that leads to improved training compared to the usual approaches for weakly supervised learning. We successfully test our approach on two low-resource datasets that lack temporal labels.
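The usual multiple-instance baseline that such a loss improves upon is binary cross-entropy on a max-pooled bag probability. Here is a sketch of that baseline (not the paper's new loss, which is not specified in the abstract):

```python
import numpy as np

def mil_loss(frame_probs, bag_label):
    """Binary cross-entropy on the max-pooled bag probability: the standard
    multiple-instance baseline for weak labels, where only the clip-level
    bag_label is known, not which frame contains the event."""
    p_bag = np.clip(frame_probs.max(), 1e-7, 1 - 1e-7)
    return -(bag_label * np.log(p_bag) + (1 - bag_label) * np.log(1 - p_bag))

frame_probs = np.array([0.1, 0.2, 0.9, 0.3])   # one frame fires strongly
print(mil_loss(frame_probs, 1))   # small: the peak explains a positive bag
print(mil_loss(frame_probs, 0))   # large: the peak contradicts a negative bag
```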
Knowledge Transfer from Weakly Labeled Audio using Convolutional Neural Network for Sound Events and Scenes
In this work we propose approaches to effectively transfer knowledge from
weakly labeled web audio data. We first describe a convolutional neural network
(CNN) based framework for sound event detection and classification using weakly
labeled audio data. Our model trains efficiently from audio recordings of
variable length; hence, it is well suited for transfer learning. We then propose
methods to learn representations using this model which can be effectively used
for solving the target task. We study both transductive and inductive transfer
learning tasks, showing the effectiveness of our methods for both domain and
task adaptation. We show that the representations learned with the proposed
CNN model generalize well enough to reach human-level accuracy on the ESC-50
sound events dataset and set state-of-the-art results on this dataset. We
further use them for the acoustic scene classification task and again show
that our proposed approaches are well suited for this task. We also show that
our methods are helpful in capturing semantic meanings and relations.
Moreover, in this process we also set state-of-the-art results on the AudioSet
dataset, relying on the balanced training set.
Comment: ICASSP 201
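Transfer of the learned representations, as described above, amounts to freezing the pretrained network and fitting a small head on its embeddings. A toy numpy sketch, with a random projection standing in for the pretrained CNN and synthetic data in place of a real target task:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical frozen "pretrained" embedding: a fixed random projection + ReLU
# standing in for the weak-label CNN's penultimate layer.
W_emb = rng.standard_normal((40, 16)) * 0.1

def embed(clip_feat):
    return np.maximum(clip_feat.mean(axis=0) @ W_emb, 0.0)

# Fit a tiny logistic-regression head on the frozen embeddings (toy data).
X = np.stack([embed(rng.standard_normal((61, 40))) for _ in range(32)])
y = rng.integers(0, 2, size=32).astype(float)
w, b = np.zeros(16), 0.0
for _ in range(200):                      # plain batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()
print(X.shape, p.shape)                   # embeddings and head predictions
```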
Sample Mixed-Based Data Augmentation for Domestic Audio Tagging
Audio tagging has attracted increasing attention over the last decade and has
various potential applications in many fields. The objective of audio tagging
is to predict the labels of an audio clip. Recently, deep learning methods have
been applied to audio tagging and have achieved state-of-the-art performance.
However, due to the limited size of audio tagging data such as the DCASE data,
the trained models tend to overfit, which leads to poor generalization on new
data. Previous data augmentation methods such as pitch shifting, time
stretching and adding background noise do not show much improvement in audio
tagging. In this paper, we explore sample-mixed data augmentation for the
domestic audio tagging task, including mixup, SamplePairing and extrapolation.
We apply a convolutional recurrent neural network (CRNN) with an attention
module, using log-scaled mel spectra, as a baseline system. In our
experiments, we achieve a state-of-the-art equal error rate (EER) of 0.10 on
the DCASE 2016 task 4 dataset with the mixup approach, outperforming the
baseline system without data augmentation.
Comment: submitted to the workshop of Detection and Classification of Acoustic
Scenes and Events 2018 (DCASE 2018), 19-20 November 2018, Surrey, U
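Of the three sample-mixing schemes listed, mixup is the simplest to sketch: blend two clips and their label vectors with a Beta-distributed weight (alpha=0.2 is a common choice, not necessarily the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(3)

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Blend two training examples and their (multi-hot) label vectors with a
    Beta(alpha, alpha)-distributed mixing weight."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

spec_a, spec_b = rng.random((61, 40)), rng.random((61, 40))  # two log-mel clips
y_a = np.array([1.0, 0.0])        # e.g. only the first tag active
y_b = np.array([0.0, 1.0])        # e.g. only the second tag active
x_mix, y_mix = mixup(spec_a, y_a, spec_b, y_b)
print(x_mix.shape, y_mix)         # blended spectrogram, soft two-class labels
```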
Strategies for Searching Video Content with Text Queries or Video Examples
The large number of user-generated videos uploaded to the Internet every day
has led to many commercial video search engines, which mainly rely on text
metadata for search. However, metadata is often lacking for user-generated
videos, so these videos are unsearchable by current search engines.
Therefore, content-based video retrieval (CBVR) tackles this metadata-scarcity
problem by directly analyzing the visual and audio streams of each video. CBVR
encompasses multiple research topics, including low-level feature design,
feature fusion, semantic detector training and video search/reranking. We
present novel strategies in these topics to enhance CBVR in both accuracy and
speed under different query inputs, including pure textual queries and query by
video examples. Our proposed strategies have been incorporated into our
submission for the TRECVID 2014 Multimedia Event Detection evaluation, where
our system outperformed other submissions in both text queries and video
example queries, thus demonstrating the effectiveness of our proposed
approaches