16 research outputs found

    Listening forward: approaching marine biodiversity assessments using acoustic methods

    © The Author(s), 2020. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Mooney, T. A., Di Iorio, L., Lammers, M., Lin, T., Nedelec, S. L., Parsons, M., Radford, C., Urban, E., & Stanley, J. Listening forward: approaching marine biodiversity assessments using acoustic methods. Royal Society Open Science, 7(8), (2020): 201287, doi:10.1098/rsos.201287.

    Ecosystems and the communities they support are changing at alarmingly rapid rates. Tracking species diversity is vital to managing these stressed habitats. Yet quantifying and monitoring biodiversity are often challenging, especially in ocean habitats. Given that many animals make sounds, that these cues travel efficiently under water and that emerging technologies are increasingly cost-effective, passive acoustics (a long-standing ocean observation method) is now a potential means of quantifying and monitoring marine biodiversity. Properly applying acoustics for biodiversity assessments is vital. Our goal here is to provide a timely consideration of emerging methods that use passive acoustics to measure marine biodiversity. We summarize the brief history of using passive acoustics to assess marine biodiversity and community structure, critically assess the challenges faced, and outline recommended practices and considerations for acoustic biodiversity measurements. We focus on temperate and tropical seas, where much of the acoustic biodiversity work has been conducted. Overall, we suggest a cautious approach to applying current acoustic indices to assess marine biodiversity. Key needs are preliminary data and sampling sufficient to capture the patterns and variability of a habitat. Yet with new analytical tools, including source separation and supervised machine learning, marine acoustic diversity assessment methods hold substantial promise.

    Funding for the development of this article was provided by a collaboration of the Urban Coast Institute (Monmouth University, NJ, USA), the Program for the Human Environment (The Rockefeller University, New York, USA) and the Scientific Committee on Oceanic Research. Partial support was provided to T.A.M. from National Science Foundation grant OCE-1536782.
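
    To make the notion of an acoustic index concrete, below is a minimal sketch of one widely used example, the Acoustic Complexity Index (ACI, Pieretti et al. 2011), computed from a spectrogram with NumPy and SciPy. This is not code from the article; the file name and parameter values are illustrative assumptions only.

        # Minimal sketch: Acoustic Complexity Index (ACI) over one recording.
        # Assumes a mono WAV file; path and parameters are illustrative only.
        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import spectrogram

        def acoustic_complexity_index(samples, rate, nperseg=512):
            """ACI: summed relative intensity change between adjacent
            time frames, accumulated per frequency bin."""
            _, _, sxx = spectrogram(samples.astype(float), fs=rate, nperseg=nperseg)
            # |I_t - I_{t+1}| summed over time, normalized by each bin's total intensity
            diff = np.abs(np.diff(sxx, axis=1)).sum(axis=1)
            total = sxx.sum(axis=1) + 1e-12   # avoid division by zero in silent bins
            return float((diff / total).sum())

        if __name__ == "__main__":
            rate, samples = wavfile.read("reef_recording.wav")  # hypothetical file
            if samples.ndim > 1:              # fold stereo down to mono
                samples = samples.mean(axis=1)
            print(f"ACI = {acoustic_complexity_index(samples, rate):.2f}")

    Indices of this kind summarize a soundscape in a single number, which is exactly why the article urges caution: the same value can arise from very different community compositions.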

    Automatic detection and classification of bird sounds in low-resource wildlife audio datasets

    PhD thesis. There are many potential applications of automatic detection and classification of bird species from their sounds (e.g. ecological research, biodiversity monitoring, archival). However, acquiring adequately labelled large-scale and longitudinal data remains a major challenge, especially for species-rich remote areas and for taxa whose identification requires expert input. So far, monitoring of avian populations has been performed via manual surveying, sometimes even with the help of volunteers owing to the challenging scale of the data. In recent decades, an increasing number of ecological audio datasets have been tagged to indicate the presence or absence of specific bird species. However, automated detection and identification of species vocalisations is a challenging task. Animal vocalisations are highly diverse, both in their basic syllable types and in the ways syllables are combined. Noise is present in most habitats, and many bird communities contain multiple species with potentially overlapping vocalisations. In recent years, machine learning has grown strongly, owing to increased dataset sizes and computational power and to advances in deep learning methods that can learn to make predictions in extremely nonlinear problem settings. However, in training a deep learning system to perform automatic detection and audio tagging of wildlife bird sound scenes, two problems often arise. First, even with the increased number of audio datasets, most publicly available datasets are weakly labelled, providing only a list of events present in each recording without any temporal information for training. Second, in practice it is difficult to collect enough samples for most classes of interest. These problems are particularly pressing for wildlife audio but also occur in many other scenarios. In this thesis, we investigate and propose methods for audio event detection and classification in wildlife bird sound scenes and other low-resource audio datasets, including methods based on image processing and deep learning. We extend deep learning methods for weakly labelled data in a multi-instance learning and multi-task learning setting. We evaluate these methods for simultaneously detecting and classifying large numbers of sound types in audio recorded in the wild and in other low-resource audio datasets.
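
    As a toy illustration of the multi-instance view of weakly labelled audio, the sketch below (plain NumPy; all names and numbers are hypothetical, not from the thesis) pools frame-level detection probabilities into a single clip-level tag, so that training targets only need to say whether a species occurs anywhere in a recording.

        # Toy sketch of multi-instance pooling for weakly labelled audio tagging.
        # A clip is a "bag" of frames; only the bag-level label is known.
        import numpy as np

        def clip_probability(frame_probs, pooling="max"):
            """Aggregate per-frame presence probabilities into one clip-level
            probability. 'max' assumes one confident frame suffices;
            'noisy-or' softly combines evidence across all frames."""
            frame_probs = np.asarray(frame_probs)
            if pooling == "max":
                return float(frame_probs.max())
            if pooling == "noisy-or":
                return float(1.0 - np.prod(1.0 - frame_probs))
            raise ValueError(f"unknown pooling: {pooling}")

        # Hypothetical detector output: 10 frames, species briefly present
        # around frames 6-7 and otherwise absent.
        frames = [0.02, 0.05, 0.03, 0.04, 0.10, 0.20, 0.85, 0.90, 0.15, 0.05]
        print(clip_probability(frames, "max"))                  # 0.90
        print(round(clip_probability(frames, "noisy-or"), 3))

    In a trained system the per-frame scores would come from a neural network; the pooling step is what lets clip-level tags supervise frame-level predictions despite the missing temporal annotations.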

    Data mining in large audio collections of dolphin signals

    PhD dissertation. The study of dolphin cognition involves intensive research on animal vocalizations recorded in the field. In this dissertation I address the automated analysis of audible dolphin communication. I propose a system called the signal imager that automatically discovers patterns in dolphin signals. These patterns are invariant to frequency shifts and time-warping transformations. The discovery algorithm is based on feature learning and unsupervised time series segmentation using hidden Markov models. Researchers can inspect the patterns visually and interactively run comparative statistics between the distributions of dolphin signals in different behavioral contexts. The statistics required for the comparison describe dolphin communication as a combination of the following models: a bag-of-words model, an n-gram model and an algorithm that learns a set of regular expressions. Furthermore, the system can use the patterns to automatically tag dolphin signals with behavior annotations. My results indicate that the signal imager provides meaningful patterns to the marine biologist and that the comparative statistics are aligned with the biologists' domain knowledge.
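
    To illustrate the kind of comparative statistics described above, here is a small sketch (hypothetical data and names, not the signal imager's actual code) that builds bag-of-words and bigram counts over discrete pattern sequences from two behavioral contexts.

        # Sketch: bag-of-words and n-gram statistics over discovered pattern labels.
        # Pattern IDs ('a', 'b', ...) stand in for units found by the HMM segmenter.
        from collections import Counter

        def bag_of_words(sequences):
            """Count individual pattern occurrences across all sequences."""
            return Counter(p for seq in sequences for p in seq)

        def ngrams(sequences, n=2):
            """Count length-n subsequences, capturing local ordering of patterns."""
            return Counter(
                tuple(seq[i:i + n])
                for seq in sequences
                for i in range(len(seq) - n + 1)
            )

        # Hypothetical pattern sequences from two behavioral contexts.
        foraging = [["a", "b", "a", "c"], ["a", "b", "b"]]
        socializing = [["c", "c", "d"], ["d", "c", "a"]]

        print(bag_of_words(foraging))      # Counter({'a': 3, 'b': 3, 'c': 1})
        print(ngrams(socializing, n=2))    # bigram counts per context

    Comparing such distributions between contexts (e.g. with a chi-squared test) is one simple way to check whether signal usage differs with behavior.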

    Unsupervised Bioacoustic Segmentation by Hierarchical Dirichlet Process Hidden Markov Model

    Bioacoustics is a powerful tool for monitoring biodiversity. In this paper we investigate an automatic segmentation model for real-world bioacoustic scenes, in order to infer hidden states referred to as song units. The number of these acoustic units is often unknown, unlike in human speech recognition, so we propose a bioacoustic segmentation based on the Hierarchical Dirichlet Process hidden Markov model (HDP-HMM), a Bayesian non-parametric (BNP) model, to tackle this challenging problem. Our approach focuses on unsupervised learning from bioacoustic sequences: it simultaneously finds the structure of the hidden song units and automatically infers the unknown number of hidden states. We investigate two real bioacoustic scenes: whale songs and multi-species bird songs. We learn the models using Markov chain Monte Carlo (MCMC) sampling on Mel-frequency cepstral coefficients (MFCCs). Our results, scored by a bioacoustics expert, show that the model generates correct song unit segmentations. This study offers new insights for the unsupervised analysis of complex soundscapes and illustrates the potential of chunking non-human animal signals into structured units. This can yield new representations of the calls of a target species, and also a structuring of inter-species calls. It gives experts a tractable approach for efficient bioacoustic research, as requested in [3].
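
    As a rough stand-in for the paper's MCMC-sampled HDP-HMM (which few off-the-shelf libraries implement), the sketch below fits a variational Dirichlet-process Gaussian mixture to MFCC frames with scikit-learn and librosa. It infers an effective number of units by pruning unused components, but unlike the HDP-HMM it ignores temporal dependencies between frames. The file path and parameters are illustrative assumptions.

        # Sketch: infer song units from MFCC frames with a Dirichlet-process
        # Gaussian mixture (a non-parametric stand-in for the HDP-HMM; no
        # Markov dynamics, so frames are treated as exchangeable).
        import librosa
        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        y, sr = librosa.load("whale_song.wav", sr=None)       # hypothetical recording
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # frames x coefficients

        dpgmm = BayesianGaussianMixture(
            n_components=20,                                  # upper bound on units
            weight_concentration_prior_type="dirichlet_process",
            covariance_type="diag",
            max_iter=500,
        )
        labels = dpgmm.fit_predict(mfcc)                      # per-frame unit labels

        # Components with negligible posterior weight are effectively pruned,
        # giving an inferred number of song units rather than a fixed one.
        used = np.unique(labels)
        print(f"inferred {len(used)} song units out of {dpgmm.n_components} allowed")

    Adding the Markov dynamics back (so that unit assignments depend on the previous frame) is precisely what the HDP-HMM contributes over a plain DP mixture.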
