DCASE 2018 Challenge Surrey Cross-Task convolutional neural network baseline
The Detection and Classification of Acoustic Scenes and Events (DCASE)
consists of five audio classification and sound event detection tasks: 1)
Acoustic scene classification, 2) General-purpose audio tagging of Freesound,
3) Bird audio detection, 4) Weakly-labeled semi-supervised sound event
detection and 5) Multi-channel audio classification. In this paper, we create a
cross-task baseline system for all five tasks based on a convolutional neural
network (CNN): a "CNN Baseline" system. We implemented CNNs with 4 layers and 8
layers originating from AlexNet and VGG from computer vision. We investigated
how the performance varies from task to task with the same configuration of
neural networks. Experiments show that the deeper 8-layer CNN performs better
than the 4-layer CNN on all tasks except Task 1. Using the 8-layer CNN, we
achieve an accuracy of 0.680 on Task 1, an accuracy of 0.895 and a mean average
precision (MAP) of 0.928 on Task 2, an accuracy of 0.751 and an area under the
curve (AUC) of 0.854 on Task 3, a sound event detection F1 score of 20.8% on
Task 4, and an F1 score of 87.75% on Task 5. We released the Python source code
of the baseline systems under the MIT license for further research.
Comment: Accepted by DCASE 2018 Workshop. 4 pages. Source code available.
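The baseline above operates on spectral features fed to a CNN. As a hedged illustration of the usual front-end for such systems (the paper's exact configuration, sample rate, and FFT parameters are not given in this abstract, so all constants below are assumptions), here is a minimal log-mel spectrogram extractor in plain numpy:

```python
import numpy as np

def mel_filterbank(sr, n_fft, n_mels, fmin=0.0, fmax=None):
    """Triangular mel filterbank matrix of shape (n_mels, n_fft // 2 + 1)."""
    fmax = fmax or sr / 2.0
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):          # rising slope
            fb[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling slope
            fb[m - 1, k] = (right - k) / max(right - center, 1)
    return fb

def log_mel_spectrogram(x, sr=32000, n_fft=1024, hop=512, n_mels=64):
    """Frame the signal, take the power spectrum, project to mel, take log."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # (n_frames, n_fft//2+1)
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T  # (n_frames, n_mels)
    return np.log(mel + 1e-10)

# One second of noise at an assumed 32 kHz rate -> 61 frames of 64 mel bands.
x = np.random.default_rng(0).standard_normal(32000)
feat = log_mel_spectrogram(x)
print(feat.shape)  # (61, 64)
```

The resulting (frames, bands) matrix is what a 4- or 8-layer CNN of the AlexNet/VGG family would consume as a single-channel image.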
Robust sound event detection in bioacoustic sensor networks
Bioacoustic sensors, sometimes known as autonomous recording units (ARUs),
can record sounds of wildlife over long periods of time in scalable and
minimally invasive ways. Deriving per-species abundance estimates from these
sensors requires detection, classification, and quantification of animal
vocalizations as individual acoustic events. Yet, variability in ambient noise,
both over time and across sensors, hinders the reliability of current automated
systems for sound event detection (SED), such as convolutional neural networks
(CNN) in the time-frequency domain. In this article, we develop, benchmark, and
combine several machine listening techniques to improve the generalizability of
SED models across heterogeneous acoustic environments. As a case study, we
consider the problem of detecting avian flight calls from a ten-hour recording
of nocturnal bird migration, recorded by a network of six ARUs in the presence
of heterogeneous background noise. Starting from a CNN yielding
state-of-the-art accuracy on this task, we introduce two noise adaptation
techniques, respectively integrating short-term (60 milliseconds) and long-term
(30 minutes) context. First, we apply per-channel energy normalization (PCEN)
in the time-frequency domain, which applies short-term automatic gain control
to every subband in the mel-frequency spectrogram. Second, we replace the
last dense layer in the network by a context-adaptive neural network (CA-NN)
layer. Combining them yields state-of-the-art results that are unmatched by
artificial data augmentation alone. We release a pre-trained version of our
best performing system under the name of BirdVoxDetect, a ready-to-use detector
of avian flight calls in field recordings.
Comment: 32 pages, in English. Submitted to PLOS ONE journal in February 2019;
revised August 2019; published October 2019.
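The short-term adaptation technique above, PCEN, replaces the static log compression with a per-subband automatic gain control followed by root compression. A minimal numpy sketch follows; the smoothing, gain, and compression constants are common defaults, not necessarily the values used in the paper:

```python
import numpy as np

def pcen(E, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-channel energy normalization of a mel spectrogram E (bands, frames).

    A first-order IIR filter tracks a smoothed energy M per subband; dividing
    by M**alpha acts as short-term automatic gain control, and the
    (. + delta)**r - delta**r stage applies root compression.
    """
    M = np.zeros_like(E)
    M[:, 0] = E[:, 0]
    for t in range(1, E.shape[1]):
        M[:, t] = (1 - s) * M[:, t - 1] + s * E[:, t]
    return (E / (eps + M) ** alpha + delta) ** r - delta ** r

# Toy mel spectrogram: 64 bands, 100 frames of random energy.
rng = np.random.default_rng(1)
E = rng.uniform(0.0, 10.0, size=(64, 100))
P = pcen(E)
print(P.shape)  # (64, 100)
```

Because the gain term adapts per subband over a short window (on the order of tens of milliseconds, matching the 60 ms context mentioned above), PCEN suppresses stationary background noise that varies from sensor to sensor.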
Mosquito Detection with Neural Networks: The Buzz of Deep Learning
Many real-world time-series analysis problems are characterised by scarce
data. Solutions typically rely on hand-crafted features extracted from the time
or frequency domain allied with classification or regression engines which
condition on this (often low-dimensional) feature vector. The huge advances
enjoyed by many application domains in recent years have been fuelled by the
use of deep learning architectures trained on large data sets. This paper
presents an application of deep learning for acoustic event detection in a
challenging, data-scarce, real-world problem. Our candidate challenge is to
accurately detect the presence of a mosquito from its acoustic signature. We
develop convolutional neural networks (CNNs) operating on wavelet
transformations of audio recordings. Furthermore, we interrogate the network's
predictive power by visualising statistics of network-excitatory samples. These
visualisations offer a deep insight into the relative informativeness of
components in the detection problem. We include comparisons with conventional
classifiers, conditioned on both hand-tuned and generic features, to stress the
strength of automatic deep feature learning. Detection is achieved with
performance metrics significantly surpassing those of existing algorithmic
methods, as well as marginally exceeding those attained by individual human
experts.
Comment: For data and software related to this paper, see
http://humbug.ac.uk/kiskin2017/. Submitted as a conference paper to ECML 2017.
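The CNNs above operate on wavelet transformations of the audio rather than on a Fourier spectrogram. As a hedged sketch of that front-end (the paper's wavelet family and scales are not specified in this abstract, so the Morlet wavelet and scale grid below are assumptions), a continuous wavelet scalogram can be computed by direct convolution:

```python
import numpy as np

def morlet(n, w=5.0, s=1.0):
    """Complex Morlet wavelet sampled at n points for scale s."""
    t = np.arange(-n // 2, n // 2) / s
    return np.exp(1j * w * t) * np.exp(-0.5 * t ** 2) / (s ** 0.5)

def scalogram(x, scales, n=256):
    """Magnitude of the convolution of x with a Morlet wavelet per scale."""
    return np.stack([np.abs(np.convolve(x, morlet(n, s=s), mode='same'))
                     for s in scales])

# A 600 Hz tone at an assumed 8 kHz rate, loosely standing in for the
# fundamental of a mosquito's wingbeat signature.
sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 600 * t)
S = scalogram(x, scales=np.arange(1, 33))
print(S.shape)  # (32, 8000)
```

The (scales, samples) magnitude matrix plays the same role as a spectrogram image and is what the CNN would take as input.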
Stacked Convolutional and Recurrent Neural Networks for Bird Audio Detection
This paper studies the detection of bird calls in audio segments using
stacked convolutional and recurrent neural networks. Data augmentation by
blocks mixing and domain adaptation using a novel method of test mixing are
proposed and evaluated in regard to making the method robust to unseen data.
The contributions of two kinds of acoustic features (dominant frequency and log
mel-band energy) and their combinations are studied in the context of bird
audio detection. Our best AUC over five cross-validation folds of the
development data is 95.5%, and 88.1% on the unseen evaluation data.
Comment: Accepted for European Signal Processing Conference 2017.
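The blocks-mixing augmentation mentioned above can be read as recombining time blocks of training spectrograms. The paper's exact recipe is not given in this abstract, so the following is a hedged numpy sketch under one plausible reading: split two same-shaped spectrograms into equal-width time blocks and mix each block of the first with a randomly drawn block of the second:

```python
import numpy as np

def blocks_mixing(spec_a, spec_b, n_blocks=4, rng=None):
    """Augment spec_a by mixing in randomly chosen time blocks of spec_b.

    Both inputs have shape (bands, frames) with equal frame counts. Each of
    the n_blocks equal-width time blocks of spec_a is averaged with a
    randomly selected block of spec_b, producing a new training example.
    """
    rng = rng or np.random.default_rng()
    out = spec_a.copy()
    width = spec_a.shape[1] // n_blocks
    for i in range(n_blocks):
        j = rng.integers(n_blocks)                 # random donor block
        a = slice(i * width, (i + 1) * width)
        b = slice(j * width, (j + 1) * width)
        out[:, a] = 0.5 * (spec_a[:, a] + spec_b[:, b])
    return out

rng = np.random.default_rng(0)
A = rng.random((40, 500))   # toy spectrograms: 40 bands, 500 frames
B = rng.random((40, 500))
mixed = blocks_mixing(A, B, rng=rng)
print(mixed.shape)  # (40, 500)
```

Mixing blocks across recordings exposes the detector to bird calls over unfamiliar backgrounds, which is the robustness-to-unseen-data goal the abstract describes.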