142 research outputs found

    Deep neural networks for automated detection of marine mammal species

    The authors thank the Bureau of Ocean Energy Management for funding the MARU deployments, Excelerate Energy Inc. for funding the Autobuoy deployment, and Michael J. Weise of the US Office of Naval Research for support (N000141712867). Deep neural networks have advanced the field of detection and classification and allowed for effective identification of signals in challenging data sets. Numerous time-critical conservation needs may benefit from these methods. We developed and empirically studied a variety of deep neural networks to detect the vocalizations of endangered North Atlantic right whales (Eubalaena glacialis). We compared the performance of these deep architectures to that of traditional detection algorithms for the primary vocalization produced by this species, the upcall. We show that deep-learning architectures are capable of producing false-positive rates that are orders of magnitude lower than those of alternative algorithms while substantially increasing the ability to detect calls. We demonstrate that a deep neural network trained with recordings from a single geographic region, recorded over a span of days, is capable of generalizing well to data from multiple years and across the species’ range, and that the low false-positive rate makes the output of the algorithm amenable to quality-control verification. The deep neural networks we developed are relatively easy to implement with existing software and may provide new insights applicable to the conservation of endangered species. Publisher PDF. Peer reviewed.
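The abstract above does not specify the network architecture. As a rough illustration of the spectrogram-plus-CNN detection idea it describes, the following NumPy sketch computes a spectrogram of a synthetic upcall-like upward chirp and scores it with a single hand-set convolutional filter; the filter, window sizes, and scoring are illustrative stand-ins, not the paper's trained model.

```python
import numpy as np

def spectrogram(x, n_fft=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed STFT, shape (freq, time)."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T

def conv2d(img, kern):
    """Valid-mode single-channel 2-D cross-correlation."""
    kh, kw = kern.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

# Synthetic "upcall": an upward frequency sweep (100 -> 200 Hz) at 1 kHz sampling.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
upcall = np.sin(2 * np.pi * (100 * t + 50 * t ** 2))

spec = spectrogram(upcall)
kern = np.ones((3, 3)) / 9.0                          # stand-in for a learned filter
feat = np.maximum(conv2d(np.log1p(spec), kern), 0.0)  # convolution + ReLU
score = 1.0 / (1.0 + np.exp(-(feat.mean() - 1.0)))    # global pooling + sigmoid
```

In a real detector the filter weights would be learned from labelled upcall/noise examples, and the pooled score would be thresholded to trigger a detection.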

    Mosquito Detection with Neural Networks: The Buzz of Deep Learning

    Many real-world time-series analysis problems are characterised by scarce data. Solutions typically rely on hand-crafted features extracted from the time or frequency domain, allied with classification or regression engines which condition on this (often low-dimensional) feature vector. The huge advances enjoyed by many application domains in recent years have been fuelled by the use of deep learning architectures trained on large data sets. This paper presents an application of deep learning for acoustic event detection in a challenging, data-scarce, real-world problem. Our candidate challenge is to accurately detect the presence of a mosquito from its acoustic signature. We develop convolutional neural networks (CNNs) operating on wavelet transformations of audio recordings. Furthermore, we interrogate the network's predictive power by visualising statistics of network-excitatory samples. These visualisations offer a deep insight into the relative informativeness of components in the detection problem. We include comparisons with conventional classifiers, conditioned on both hand-tuned and generic features, to stress the strength of automatic deep feature learning. Detection is achieved with performance metrics significantly surpassing those of existing algorithmic methods, as well as marginally exceeding those attained by individual human experts. Comment: For data and software related to this paper, see http://humbug.ac.uk/kiskin2017/. Submitted as a conference paper to ECML 201
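The wavelet representation the paper builds on can be sketched as a small Morlet filterbank whose per-frequency magnitudes form the image a CNN would consume. The 600 Hz "wingbeat" tone, the analysis band, and the wavelet width below are assumptions for illustration, not values from the paper.

```python
import numpy as np

def morlet(freq, fs, width=6.0):
    """Complex Morlet wavelet centred at `freq` Hz (width is an illustrative choice)."""
    sigma = width / (2 * np.pi * freq)
    t = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t ** 2 / (2 * sigma ** 2))

def scalogram(x, fs, freqs):
    """|CWT| magnitudes: one row of wavelet-filtered energy per analysis frequency."""
    return np.stack([np.abs(np.convolve(x, morlet(f, fs), mode="same"))
                     for f in freqs])

fs = 8000
t = np.arange(0, 0.25, 1 / fs)
# Synthetic "mosquito" signal: a 600 Hz tone plus background noise (illustrative).
buzz = (np.sin(2 * np.pi * 600 * t)
        + 0.1 * np.random.default_rng(0).standard_normal(t.size))

freqs = np.arange(200, 1001, 100)   # 200..1000 Hz analysis band
sc = scalogram(buzz, fs, freqs)
peak_band = freqs[np.argmax(sc.mean(axis=1))]   # band with most energy
```

A CNN classifier would take `sc` (or log-compressed patches of it) as its input image; here the energy peak alone already localises the tone's frequency band.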

    Silbido profundo: an open source package for the use of deep learning to detect odontocete whistles

    The authors wish to thank Dr. Michael Weise of the Office of Naval Research (N00014-17-1-2867, N00014-17-1-2567) for supporting this project. We also thank Anu Kumar and Mandy Shoemaker of U.S. Navy Living Marine Resources for supporting development of the data management tools used in this work (N3943020C2202). This work presents an open-source MATLAB software package for exploiting recent advances in extracting tonal signals from large acoustic data sets. A whistle extraction algorithm published by Li, Liu, Palmer, Fleishman, Gillespie, Nosal, Shiu, Klinck, Cholewiak, Helble, and Roch [(2020). Proceedings of the International Joint Conference on Neural Networks, July 19–24, Glasgow, Scotland, p. 10] is incorporated into silbido, an established software package for extraction of cetacean tonal calls. The precision and recall of the new system were over 96% and nearly 80%, respectively, when applied to a whistle extraction task on a challenging two-species subset of a conference-benchmark data set. A second data set was examined to assess whether the algorithm generalized to data collected across different recording devices and locations. These data included 487 h of weakly labeled, towed-array data collected in the Pacific Ocean on two National Oceanic and Atmospheric Administration (NOAA) cruises. Labels for these data consisted of regions of toothed whale presence for at least 15 species, based on visual and acoustic observations and not limited to whistles. Although the lack of per-whistle annotations prevented measurement of precision and recall, there was strong concurrence between automatic detections and the NOAA annotations, suggesting that the algorithm generalizes well to new data. Publisher PDF. Peer reviewed.
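Precision and recall figures like those above are computed by matching detections against annotations. A minimal greedy matcher is sketched below; the time-tolerance value and first-match rule are assumptions for illustration, not the paper's scoring protocol.

```python
def precision_recall(detections, truth, tol=0.5):
    """Greedily match each detection (time in s) to the first unmatched
    ground-truth event within `tol` seconds; matches count as true positives."""
    matched = set()
    tp = 0
    for d in detections:
        for i, g in enumerate(truth):
            if i not in matched and abs(d - g) <= tol:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall

# Hypothetical example: 3 detections against 4 annotated whistles.
p, r = precision_recall([1.0, 2.1, 5.0], [1.1, 2.0, 3.5, 8.0])
```

Greedy first-match scoring is order-dependent; benchmark evaluations often use optimal (e.g. bipartite) matching instead, which this sketch deliberately omits for brevity.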

    LifeCLEF 2016: Multimedia Life Species Identification Challenges

    Using multimedia identification tools is considered one of the most promising solutions to help bridge the taxonomic gap and build accurate knowledge of the identity, geographic distribution, and evolution of living species. Large and structured communities of nature observers (e.g., iSpot, Xeno-canto, Tela Botanica) as well as large-scale monitoring equipment have started to produce outstanding collections of multimedia records. Unfortunately, the performance of state-of-the-art analysis techniques on such data is still not well understood and is far from meeting real-world requirements. The LifeCLEF lab proposes to evaluate these challenges through three tasks related to multimedia information retrieval and fine-grained classification problems in three domains. Each task is based on large volumes of real-world data, and the measured challenges are defined in collaboration with biologists and environmental stakeholders to reflect realistic usage scenarios. For each task, we report the methodology and data sets, as well as the results and the main outcomes.

    Robust detection of North Atlantic right whales using deep learning methods

    This thesis begins by assessing the current state of marine mammal detection, specifically investigating currently used detection platforms and approaches to detection. The recent development of autonomous platforms creates a need for automated processing of hydrophone recordings and for suitable methods to detect marine mammals from their acoustic vocalisations. Although passive acoustic monitoring is not a novel topic, the detection of marine mammals from their vocalisations using machine learning is still in its infancy. Specifically, detection of the highly endangered North Atlantic right whale (Eubalaena glacialis) is investigated. A large variety of machine learning algorithms are developed and applied to the detection of North Atlantic right whale (NARW) vocalisations, with a comparison of methods presented to discover which provides the highest detection accuracy. Convolutional neural networks are found to outperform other machine learning methods and provide the highest detection accuracy when given spectrograms of acoustic recordings. Next, tests investigate the use of both audio- and image-based enhancement methods for improving detection accuracy in noisy conditions. Log spectrogram features and log histogram-equalisation features both achieve comparable detection accuracy when tested in clean (noise-free) and noisy conditions. Further work provides an investigation into deep learning denoising approaches, applying both denoising autoencoders and denoising convolutional neural networks to noisy NARW vocalisations. After initial parameter and architecture testing, a full evaluation is presented to compare the denoising autoencoder and the denoising convolutional neural network. Additional tests cover a range of simulated real-world noise conditions with a variety of signal-to-noise ratios (SNRs), evaluating denoising performance in multiple scenarios.
Analysis of results found that the denoising autoencoder (DAE) outperformed other methods and increased accuracy in all conditions when the underlying classifier was retrained on the denoised signal. Tests evaluating the benefit of augmenting training data found that augmentation improved performance for both the denoising autoencoder and the convolutional neural network, increasing detection accuracy across a range of noise types. Furthermore, evaluation on a naturally noisy condition showed an increase in detection accuracy when using a denoising autoencoder with augmented training and a convolutional neural network classifier. This configuration was also timed and deemed capable of running many times faster than real time, making it likely suitable for deployment on board an autonomous system.
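The denoising-autoencoder idea in this thesis (train a network to map noisy inputs back to clean targets, then classify the denoised output) can be sketched at toy scale. The NumPy example below trains a one-hidden-layer autoencoder on 1-D sinusoid segments; the data, sizes, and learning rate are all illustrative assumptions, not the thesis architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: clean 16-sample sinusoid segments; inputs are noisy copies of them.
t = np.linspace(0, 2 * np.pi, 16, endpoint=False)
clean = np.stack([np.sin(t + p) for p in rng.uniform(0, 2 * np.pi, 256)])
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

# One-hidden-layer denoising autoencoder: 16 -> 8 -> 16, tanh hidden units.
W1 = 0.1 * rng.standard_normal((16, 8)); b1 = np.zeros(8)
W2 = 0.1 * rng.standard_normal((8, 16)); b2 = np.zeros(16)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

mse0 = np.mean((forward(noisy)[1] - clean) ** 2)   # error before training
for _ in range(500):                               # plain batch gradient descent
    h, out = forward(noisy)
    err = (out - clean) / len(noisy)               # dL/dout for squared error
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)               # backprop through tanh
    gW1 = noisy.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
mse1 = np.mean((forward(noisy)[1] - clean) ** 2)   # error after training
```

In the thesis pipeline the analogous network operates on NARW spectrograms, and the downstream CNN classifier is retrained on the autoencoder's denoised output.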