33 research outputs found

    ORCA-SPOT: An Automatic Killer Whale Sound Detection Toolkit Using Deep Learning

    Large bioacoustic archives of wild animals are an important source for identifying recurring communication patterns, which can then be related to recurring behavioral patterns to advance the current understanding of intra-specific communication in non-human animals. A main challenge is that most large-scale bioacoustic archives contain only a small percentage of animal vocalizations and a large amount of environmental noise, which makes it extremely difficult to manually retrieve sufficient vocalizations for further analysis – particularly important for species with advanced social systems and complex vocalizations. In this study, deep neural networks were trained on 11,509 killer whale (Orcinus orca) signals and 34,848 noise segments. The resulting toolkit, ORCA-SPOT, was tested on a large-scale bioacoustic repository – the Orchive – comprising roughly 19,000 hours of killer whale underwater recordings. An automated segmentation of the entire Orchive (about 2.2 years of audio) took approximately 8 days. It achieved a time-based precision, or positive predictive value (PPV), of 93.2% and an area under the curve (AUC) of 0.9523. This approach enables automated annotation of large bioacoustic databases to extract killer whale sounds, which are essential for the subsequent identification of significant communication patterns. The code will be publicly available in October 2019 to support the application of deep learning to bioacoustic research. ORCA-SPOT can be adapted to other animal species.
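
    The ORCA-SPOT code itself is not reproduced here; as a rough illustration of the approach the abstract describes (a deep network scoring short audio segments as killer whale signal versus noise), the following PyTorch sketch shows a minimal binary segment classifier. The architecture, input shape, and hyperparameters are assumptions for illustration only.

```python
# Minimal sketch (not the published ORCA-SPOT code): a small CNN scores
# fixed-length log-mel spectrogram segments as "killer whale" vs. "noise".
# Input shape, layer sizes, and training details are illustrative assumptions.
import torch
import torch.nn as nn

class SegmentClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # single logit: call vs. noise

    def forward(self, spec):
        # spec: (batch, 1, mel_bins, time_frames)
        return self.classifier(self.features(spec).flatten(1)).squeeze(1)

model = SegmentClassifier()
loss_fn = nn.BCEWithLogitsLoss()            # binary call-vs-noise objective
segments = torch.randn(8, 1, 128, 256)      # placeholder spectrogram segments
labels = torch.randint(0, 2, (8,)).float()  # 1 = killer whale signal, 0 = noise
loss = loss_fn(model(segments), labels)
```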

    A Methodology Based on Bioacoustic Information for Automatic Identification of Reptiles and Anurans

    Human activity is nowadays considered one of the main risk factors for the survival of reptiles and amphibians, and the presence of these animals is a good biological indicator of environmental quality. Because of their behavior and size, most of these species are difficult to detect in their natural habitat with imaging devices. The use of bioacoustic information to identify animal species is, however, an efficient way to sample populations and monitor the conservation of these animals in large and remote areas where environmental conditions and visibility are limited. In this chapter, a novel methodology for the identification of different reptile and anuran species based on the fusion of Mel and Linear Frequency Cepstral Coefficients (MFCC and LFCC) is presented. The proposed methodology has been validated on public databases, and experimental results yielded an accuracy above 95%, showing the efficiency of the proposal.
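
    As a hedged illustration of the feature fusion described above, the following sketch concatenates per-frame MFCC and LFCC features with torchaudio. The sample rate, frame settings, and coefficient counts are assumptions, not the chapter's exact configuration.

```python
# Hedged sketch of the MFCC + LFCC fusion idea using torchaudio. The sample
# rate, frame settings, and coefficient counts are assumptions, not the
# chapter's exact configuration; the waveform is a random placeholder.
import torch
import torchaudio

sample_rate = 44100
waveform = torch.randn(1, sample_rate)  # placeholder one-second recording

mfcc = torchaudio.transforms.MFCC(
    sample_rate=sample_rate, n_mfcc=13,
    melkwargs={"n_fft": 1024, "hop_length": 512, "n_mels": 40},
)
lfcc = torchaudio.transforms.LFCC(
    sample_rate=sample_rate, n_lfcc=13,
    speckwargs={"n_fft": 1024, "hop_length": 512},
)

# Each transform yields (channels, n_coefficients, frames); fuse by
# concatenating the coefficients of every frame.
fused = torch.cat([mfcc(waveform), lfcc(waveform)], dim=1)  # (1, 26, frames)
frame_features = fused.squeeze(0).T  # (frames, 26), ready for a classifier
```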

    Multi-Label Classifier Chains for Bird Sound

    Bird sound data collected with unattended microphones for automatic surveys, or with mobile devices for citizen science, typically contain multiple simultaneously vocalizing birds of different species. However, few works have considered the multi-label structure of birdsong. We propose to use an ensemble of classifier chains combined with a histogram-of-segments representation for multi-label classification of birdsong. The proposed method is compared with binary relevance and with three multi-instance multi-label (MIML) learning algorithms from prior work (which focus more on structure in the sound and less on structure in the label sets). Experiments on two real-world birdsong datasets show that the proposed method usually outperforms binary relevance (using the same features and base classifier), and is better in some cases and worse in others compared with the MIML algorithms. Comment: 6 pages, 1 figure, submitted to the ICML 2013 workshop on bioacoustics.
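
    The sketch below illustrates the comparison described in the abstract, an ensemble of classifier chains versus binary relevance, using scikit-learn. Synthetic multi-label data stands in for the histogram-of-segments birdsong features, and the base classifier and ensemble size are assumptions.

```python
# Sketch of the comparison described above with scikit-learn: an ensemble of
# classifier chains versus binary relevance. Synthetic multi-label data stands
# in for the histogram-of-segments features; logistic regression and the
# ensemble size of 10 are assumptions.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain, MultiOutputClassifier

X, Y = make_multilabel_classification(n_samples=500, n_features=40,
                                      n_classes=10, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Binary relevance: one independent classifier per species label.
br = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)

# Ensemble of chains, each with a random label order; average the predicted
# label probabilities over the ensemble.
chains = [
    ClassifierChain(LogisticRegression(max_iter=1000),
                    order="random", random_state=i).fit(X_tr, Y_tr)
    for i in range(10)
]
chain_probs = np.mean([c.predict_proba(X_te) for c in chains], axis=0)

print("binary relevance micro-F1:", f1_score(Y_te, br.predict(X_te), average="micro"))
print("chain ensemble   micro-F1:", f1_score(Y_te, chain_probs >= 0.5, average="micro"))
```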

    Deep neural networks for automated detection of marine mammal species

    The authors thank the Bureau of Ocean Energy Management for the funding of MARU deployments, Excelerate Energy Inc. for the funding of the Autobuoy deployment, and Michael J. Weise of the US Office of Naval Research for support (N000141712867). Deep neural networks have advanced the field of detection and classification and allowed for effective identification of signals in challenging data sets. Numerous time-critical conservation needs may benefit from these methods. We developed and empirically studied a variety of deep neural networks to detect the vocalizations of endangered North Atlantic right whales (Eubalaena glacialis). We compared the performance of these deep architectures to that of traditional detection algorithms for the primary vocalization produced by this species, the upcall. We show that deep-learning architectures are capable of producing false-positive rates that are orders of magnitude lower than those of alternative algorithms while substantially increasing the ability to detect calls. We demonstrate that a deep neural network trained with recordings from a single geographic region recorded over a span of days is capable of generalizing well to data from multiple years and across the species' range, and that the low false-positive rate makes the output of the algorithm amenable to quality control for verification. The deep neural networks we developed are relatively easy to implement with existing software, and may provide new insights applicable to the conservation of endangered species.
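
    As a rough illustration of the kind of detector comparison reported above, the sketch below computes each detector's false-positive rate at a matched recall from per-clip scores. The scores and labels are synthetic stand-ins, not the paper's detectors or data.

```python
# Illustrative sketch only: compare two detectors by their false-positive rate
# at a matched recall, given per-clip scores and labels. The scores below are
# synthetic stand-ins, not the paper's detectors or data.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=2000)                       # 1 = upcall present
deep_scores = labels + rng.normal(scale=0.4, size=2000)      # stand-in deep-network scores
baseline_scores = labels + rng.normal(scale=0.9, size=2000)  # stand-in baseline detector scores

def fpr_at_recall(y_true, scores, target_recall=0.8):
    """False-positive rate at the first operating point reaching the target recall."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    return fpr[np.searchsorted(tpr, target_recall)]

print("deep network FPR @ 80% recall:", fpr_at_recall(labels, deep_scores))
print("baseline     FPR @ 80% recall:", fpr_at_recall(labels, baseline_scores))
```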

    Acoustic classification of multiple simultaneous bird species: A multi-instance multi-label approach

    Although field-collected recordings typically contain multiple simultaneously vocalizing birds of different species, acoustic species classification in this setting has received little study so far. This work formulates the problem of classifying the set of species present in an audio recording using the multi-instance multi-label (MIML) framework for machine learning, and proposes a MIML bag generator for audio, i.e., an algorithm which transforms an input audio signal into a bag-of-instances representation suitable for use with MIML classifiers. The proposed representation uses a 2D time-frequency segmentation of the audio signal, which can separate bird sounds that overlap in time. Experiments using audio data containing 13 species, collected with unattended omnidirectional microphones in the H. J. Andrews Experimental Forest, demonstrate that the proposed methods achieve high accuracy (96.1% true positives/negatives). Automated detection of bird species occurrence using MIML has many potential applications, particularly in long-term monitoring of remote sites, species distribution modeling, and conservation planning.
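
    The following sketch illustrates the bag-generator idea in the abstract: threshold a spectrogram, treat each connected 2D time-frequency region as one instance, and describe it with a few summary features. The thresholding rule and the per-instance features are illustrative assumptions, not the paper's exact segmentation algorithm.

```python
# Hedged sketch of a bag-of-instances generator: threshold a dB spectrogram,
# treat each connected 2D time-frequency region as one instance, and describe
# it with a few summary features. The threshold and the per-instance features
# are illustrative assumptions, not the paper's segmentation algorithm.
import numpy as np
from scipy import ndimage

def spectrogram_to_bag(spec_db, threshold_db=6.0):
    """spec_db: (freq_bins, time_frames) spectrogram in dB. Returns one row per instance."""
    mask = spec_db > (np.median(spec_db) + threshold_db)  # energy above background
    labeled, n_segments = ndimage.label(mask)             # connected 2D segments
    instances = []
    for seg_id in range(1, n_segments + 1):
        freqs, times = np.where(labeled == seg_id)
        energy = spec_db[freqs, times]
        instances.append([
            freqs.mean(),           # centre frequency (bin index)
            np.ptp(freqs) + 1,      # bandwidth in bins
            np.ptp(times) + 1,      # duration in frames
            energy.mean(),          # mean energy
            energy.max(),           # peak energy
        ])
    return np.array(instances)

demo_spec = 20 * np.log10(np.abs(np.random.randn(256, 400)) + 1e-3)  # placeholder
bag = spectrogram_to_bag(demo_spec)  # bag of instances for a MIML classifier
```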

    Recognition of Multiple Bird Species based on Penalised Maximum Likelihood and HMM-based Modelling of Individual Elements


    Methods for the automatic recording of bird calls and songs in field ornithology

    This review presents the current state of knowledge on automated methods for the acoustic recording of bird calls and songs. Acoustic long-term recordings can serve as the basis for an automated bird census. We address the question of whether sound recordings are suitable for qualitative as well as quantitative analysis of bird populations. Special attention is devoted to autonomous recording methods and to the evaluation of long-term recordings using acoustic pattern recognition algorithms. Realistic scenarios for the use of automated methods in field ornithology include the investigation of nocturnal bird migration, the census of nocturnally active breeding bird species, and data collection in the core areas of nature reserves.

    Bird species recognition using unsupervised modeling of individual vocalization elements
