Deep Convolutional Neural Networks as Generic Feature Extractors
Recognizing objects in natural images is an intricate problem involving multiple conflicting objectives. Deep convolutional neural networks, trained on large datasets, achieve convincing results and are currently the state-of-the-art approach for this task. However, the long time needed to train such deep networks is a major drawback. We tackled this problem by reusing a previously trained network. For this purpose, we first trained a deep convolutional network on the ILSVRC2012 dataset. We then kept the learned convolution kernels fixed and retrained only the classification part on different datasets. Using this approach, we achieved an accuracy of 67.68% on CIFAR-100, compared to the previous state-of-the-art result of 65.43%. Furthermore, our findings indicate that convolutional networks are able to learn generic feature extractors that can be used for different tasks.
Comment: 4 pages, accepted version for publication in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), July 2015, Killarney, Ireland
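The reuse strategy described above can be illustrated with a minimal PyTorch sketch: the convolutional layers of an ImageNet-pretrained network are frozen and only the classifier head is retrained, here for the 100 CIFAR-100 classes. The torchvision ResNet, layer names, and hyperparameters are illustrative assumptions; the paper trains its own network on ILSVRC2012.

# Illustrative sketch (not the paper's network): freeze the pretrained
# convolution kernels and retrain only the classification part.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Keep the learned convolution kernels fixed.
for param in model.parameters():
    param.requires_grad = False

# Replace and retrain only the classifier head, e.g. for CIFAR-100.
model.fc = nn.Linear(model.fc.in_features, 100)

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()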
Comparing Time and Frequency Domain for Audio Event Recognition Using Deep Learning
Recognizing acoustic events is an intricate problem for a machine and an emerging field of research. Deep neural networks achieve convincing results and are currently the state-of-the-art approach for many tasks. One advantage is their implicit feature learning, as opposed to explicit feature extraction from the input signal. In this work, we analyzed whether more discriminative features can be learned from either the time-domain or the frequency-domain representation of the audio signal. For this purpose, we trained multiple deep networks with different architectures on the Freiburg-106 and ESC-10 datasets. Our results show that feature learning from the frequency domain is superior to the time domain. Moreover, additionally using convolution and pooling layers to exploit local structures of the audio signal significantly improves the recognition performance and achieves state-of-the-art results.
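The two input representations compared above can be sketched as follows. This is an illustrative snippet, not the paper's pipeline; the torchaudio transform, file name, and frame/mel parameters are assumptions.

# Illustrative sketch: the raw time-domain waveform versus a log
# frequency-domain (mel) spectrogram of the same signal.
import torch
import torchaudio

waveform, sample_rate = torchaudio.load("example.wav")  # time-domain input

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate, n_fft=1024, hop_length=512, n_mels=64
)
spectrogram = torch.log1p(mel(waveform))                # frequency-domain input

# Either tensor can be fed to a deep network; convolution and pooling layers
# over the spectrogram exploit its local time-frequency structure, which the
# paper reports as the stronger representation.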
Robust Audio Event Recognition with 1-Max Pooling Convolutional Neural Networks
We present in this paper a simple yet efficient convolutional neural network (CNN) architecture for robust audio event recognition. In contrast to deep CNN architectures with multiple convolutional and pooling layers topped with multiple fully connected layers, the proposed network consists of only three layers: a convolutional, a pooling, and a softmax layer. Two further features distinguish it from the deep architectures that have been proposed for the task: varying-size convolutional filters at the convolutional layer and a 1-max pooling scheme at the pooling layer. Intuitively, the network tends to select the most discriminative features from the whole audio signal for recognition. Our proposed CNN not only shows state-of-the-art performance on the standard task of robust audio event recognition but also outperforms other deep architectures by up to 4.5% in recognition accuracy, which is equivalent to a 76.3% relative error reduction.
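A minimal sketch of such a network, assuming a PyTorch implementation: parallel convolutional filters of varying sizes, 1-max pooling over time, and a softmax output. The filter sizes, filter count, and class dimensions are illustrative, not the values used in the paper.

# Illustrative three-layer network: varying-size filters, 1-max pooling, softmax.
import torch
import torch.nn as nn

class OneMaxPoolingCNN(nn.Module):
    def __init__(self, n_classes, n_filters=100, filter_sizes=(4, 8, 16)):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(1, n_filters, kernel_size=k) for k in filter_sizes]
        )
        self.out = nn.Linear(n_filters * len(filter_sizes), n_classes)

    def forward(self, x):  # x: (batch, 1, signal_length)
        # 1-max pooling: keep a single maximum activation per filter.
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        features = torch.cat(pooled, dim=1)
        return torch.log_softmax(self.out(features), dim=1)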
Audio Phrases for Audio Event Recognition
The bag-of-audio-words approach has been widely used for audio event recognition. In these models, a local feature of an audio signal is matched to a code word according to a learned codebook. The signal is then represented by the frequencies of the matched code words over the whole signal. We present in this paper an improved model based on the idea of audio phrases, which are sequences of multiple audio words. By using audio phrases, we are able to capture the relationships between otherwise isolated audio words and produce more semantic descriptors. Furthermore, we propose an efficient approach to learn a compact codebook in a discriminative manner to deal with the high dimensionality of bag-of-audio-phrases representations. Experiments on the Freiburg-106 dataset show that our proposed bag-of-audio-phrases descriptor outperforms not only the baselines but also the state-of-the-art results on the dataset.
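The bag-of-audio-phrases descriptor might be sketched as follows, assuming k-means code words and bigram phrases. The codebook size, phrase length, and helper names are illustrative, and the discriminative codebook learning mentioned above is not reproduced here.

# Illustrative sketch: quantize frame features against a codebook, group
# consecutive code words into audio phrases, and build a phrase histogram.
import numpy as np
from sklearn.cluster import KMeans

def learn_codebook(all_frame_features, n_words=128):
    return KMeans(n_clusters=n_words, n_init=10).fit(all_frame_features)

def bag_of_audio_phrases(frame_features, codebook, phrase_len=2):
    words = codebook.predict(frame_features)        # one code word per frame
    n_words = codebook.n_clusters
    hist = np.zeros(n_words ** phrase_len)
    for i in range(len(words) - phrase_len + 1):
        phrase = words[i:i + phrase_len]            # sequence of audio words
        index = sum(w * n_words ** j for j, w in enumerate(phrase))
        hist[index] += 1
    return hist / max(hist.sum(), 1)                # normalized descriptor

Note that the descriptor length grows as n_words ** phrase_len, which is exactly the high dimensionality that the compact, discriminatively learned codebook is meant to address.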
Representing Nonspeech Audio Signals through Speech Classification Models
The human auditory system is very well matched to both human speech and environmental sounds. Therefore, the question arises whether human speech material may provide useful information for training systems that analyze nonspeech audio signals, such as in a recognition task. To find out how similar nonspeech signals are to speech, we measure the closeness between target nonspeech signals and different basis speech categories via a speech classification model. The speech similarities are then employed as a descriptor to represent the target signal. We further show that a better descriptor can be obtained by learning to organize the speech categories hierarchically in a tree structure. We conduct experiments for the audio event analysis application by using speech words from the TIMIT dataset to learn descriptors for the audio events of the Freiburg-106 dataset. Our results on the event recognition task outperform those achieved by the best previously reported system, even though only a simple linear classifier is used. Furthermore, integrating the learned descriptors as an additional source leads to further improved performance.
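A rough sketch of the descriptor idea, assuming frame-level features and scikit-learn classifiers: a model trained on speech-word categories scores a nonspeech signal, and the resulting class posteriors serve as its descriptor. The feature extraction, classifier choices, and function names are illustrative, and the hierarchical tree organization of speech categories is omitted.

# Illustrative sketch: speech-similarity descriptors for nonspeech signals.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

def train_speech_model(speech_features, word_labels):
    # Classifier over basis speech categories (e.g. TIMIT words).
    return LogisticRegression(max_iter=1000).fit(speech_features, word_labels)

def speech_similarity_descriptor(speech_model, nonspeech_features):
    # Posterior over speech categories, averaged across the frames of a signal.
    return speech_model.predict_proba(nonspeech_features).mean(axis=0)

def train_event_recognizer(descriptors, event_labels):
    # A simple linear classifier over the learned descriptors.
    return LinearSVC().fit(descriptors, event_labels)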