68 research outputs found

    Automatic bird species identification employing an unsupervised discovery of vocalisation units

    An automatic analysis of bird vocalisations, for identifying bird species and studying their behaviour and means of communication, is important for a better understanding of the environment we live in and in the context of environmental protection. The high variability of vocalisations between individuals makes species identification challenging for bird surveyors. A reliable system for automatically identifying bird species from their vocalisations would therefore be of great interest to professionals and amateurs alike. Part of this thesis provides a biological survey of the scientific theories on bird vocalisation and the corresponding singing behaviours. Another part aims to discover the set of element patterns produced by each bird species in a large corpus of natural field recordings. The thesis also develops an automatic system for identifying bird species from recordings. Two HMM-based recognition systems are presented. In evaluations, the proposed element-based HMM system obtained a recognition accuracy of over 93% using 3 seconds of detected signal, and a reduction in recognition error rate of over 39% compared to a baseline HMM system of the same complexity.
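The decision rule in such HMM-based systems, scoring a feature sequence against one trained model per species and picking the highest likelihood, can be illustrated with a minimal sketch. The toy 2-state models, transition matrix and one-dimensional features below are invented for illustration and are not the thesis's actual parameters:

```python
import numpy as np

def forward_loglik(obs, log_pi, log_A, means, var):
    """Log-likelihood of a 1-D feature sequence under a Gaussian-emission HMM."""
    # Per-frame Gaussian log-emission probabilities, shape (T, n_states)
    log_B = -0.5 * ((obs[:, None] - means) ** 2 / var + np.log(2 * np.pi * var))
    alpha = log_pi + log_B[0]  # forward variable at t = 0, in the log domain
    for t in range(1, len(obs)):
        # Sum over predecessor states without leaving the log domain
        alpha = log_B[t] + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return float(np.logaddexp.reduce(alpha))

# Two toy 2-state species models with well-separated emission means
log_pi = np.log(np.array([0.5, 0.5]))
log_A = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
models = {
    "species A": dict(means=np.array([0.0, 1.0]), var=1.0),
    "species B": dict(means=np.array([5.0, 6.0]), var=1.0),
}

obs = np.array([0.1, 0.0, 1.2, 0.9])  # feature track close to species A's states
scores = {name: forward_loglik(obs, log_pi, log_A, **m) for name, m in models.items()}
best = max(scores, key=scores.get)
```

In a real system the emissions would be multivariate (e.g. cepstral vectors) and the models trained per element or syllable, but the classify-by-maximum-likelihood structure is the same.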

    Automated classification of humpback whale (Megaptera novaeangliae) songs using hidden Markov models

    Humpback whale songs have been widely investigated in the past few decades. This study proposes a new approach for classifying the calls detected in the songs using hidden Markov models (HMMs). HMMs have been used once before for this task, but in an unsupervised algorithm, with promising results. Here, HMMs were trained and two models were employed to classify the calls into their component units and subunits. The results show that classification of humpback whale songs from one year to another is possible even with limited training. The classification is fully automated apart from the labelling of the training set and the input of the initial HMM prototype models. Two different models of the song structure are considered: one based on song units and one based on subunits. The latter is shown to achieve better recognition results, with a reduced need for updating when applied to a variety of recordings from different years and different geographic locations.

    Acoustic classification of Australian frogs for ecosystem survey

    Novel bioacoustic signal processing techniques have been developed to classify frog vocalisations in both trophy and field recordings. The research helps ecologists monitor frog community activity and species richness over the long term. Its two major contributions are the construction of novel feature descriptors in the cepstral domain, and the design of novel classification systems for multiple simultaneously vocalising frog species.
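Cepstral-domain descriptors of the kind this contribution mentions can be sketched in a few lines of numpy. The real cepstrum below (inverse FFT of the log magnitude spectrum) and the synthetic pulse-train "call" are illustrative assumptions, not the thesis's actual feature set:

```python
import numpy as np

def real_cepstrum(frame):
    """Real cepstrum of one frame: inverse FFT of the log magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(frame))
    return np.fft.irfft(np.log(spectrum + 1e-10), n=len(frame))

# Synthetic periodic "call" frame: an impulse train with a 50-sample period
frame = np.zeros(512)
frame[::50] = 1.0

ceps = real_cepstrum(frame)
features = ceps[1:20]  # low-order coefficients as a compact spectral-envelope descriptor
```

Low-order cepstral coefficients summarise the spectral envelope, which is why descriptors in this domain separate species with different call timbres.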

    Bird species recognition using unsupervised modeling of individual vocalization elements


    New horizons for female birdsong : evolution, culture and analysis tools : a thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Ecology at Massey University, Auckland, New Zealand

    Published papers appear in Appendices 7.1 and 7.2, respectively under CC BY 4.0 and CC BY licences: Webb, W. H., Brunton, D. H., Aguirre, J. D., Thomas, D. B., Valcu, M., & Dale, J. (2016). Female song occurs in songbirds with more elaborate female coloration and reduced sexual dichromatism. Frontiers in Ecology and Evolution, 4(22). https://doi.org/10.3389/fevo.2016.00022; Fukuzawa, Y., Webb, W., Pawley, M., Roper, M., Marsland, S., Brunton, D., & Gilman, A. (2020). Koe: Web-based software to classify acoustic units and analyse sequence structure in animal vocalisations. Methods in Ecology and Evolution, 11(3). https://doi.org/10.1111/2041-210X.13336
    As a result of male-centric, northern-hemisphere-biased sexual selection theory, elaborate female traits in songbirds have largely been overlooked as unusual or non-functional by-products of male evolution. However, recent research has revealed that female song is present in most surveyed songbirds and was in fact the ancestral condition of the clade. Additionally, a high proportion of songbird species have colourful females, and both song and showy colours have demonstrated female-specific functions in a growing number of species. We have much to learn about the evolution and functions of elaborate female traits in general, and female song in particular. This thesis extends the horizons of female birdsong research in three ways: (1) by revealing the broad-scale evolutionary relationship between female song and plumage elaboration across the songbirds, (2) by developing new accessible tools for the measurement and analysis of song complexity, and (3) by showing, through a detailed field study on a large natural metapopulation, how vocal culture operates differentially in males and females. First, to understand the drivers of elaborate female traits, I tested the evolutionary relationship between female song presence and plumage colouration across the songbirds.
I found strong support for a positive evolutionary correlation between the traits, with female song more prevalent amongst species with elaborated female plumage. These results suggest that, contrary to the idea of a trade-off between showy traits, female plumage colouration and female song likely evolved together under similar selection pressures, and that their respective functions are reinforcing. Second, I introduce new bioacoustics software, Koe, designed to meet the need for detailed classification and analysis of song complexity. The program enables visualisation, segmentation, rapid classification and analysis of song structure. I demonstrate Koe with a case study of New Zealand bellbird (Anthornis melanura) song, showcasing its capabilities for large-scale bioacoustics research and its application to female song. Third, I conducted one of the first detailed field-based analyses of female song culture, studying an archipelago metapopulation of New Zealand bellbirds. Comparing male and female sectors of each population, I found equal syllable diversity, largely separate repertoires, and contrasting patterns of sharing between sites, revealing female dialects and pronounced sex differences in cultural evolution. By combining broad-scale evolutionary approaches, novel song analysis tools, and a detailed field study, this thesis demonstrates that female song can be as much an elaborate signal as male song. I describe how future work can build on these findings to expand understanding of elaborate female traits.

    Automatic detection and classification of bird sounds in low-resource wildlife audio datasets

    There are many potential applications of automatic detection and classification of bird species from their sounds (e.g. ecological research, biodiversity monitoring, archival). However, acquiring adequately labelled large-scale and longitudinal data remains a major challenge, especially for species-rich remote areas and for taxa that require expert input for identification. So far, monitoring of avian populations has been performed via manual surveying, sometimes even with the help of volunteers due to the challenging scale of the data. In recent decades there has been a growing number of ecological audio datasets with tags indicating the presence or absence of specific bird species. However, automated detection and identification of species vocalizations is a challenging task. Animal vocalisations are highly diverse, both in the types of basic syllables and in the way they are combined. Noise is present in most habitats, and many bird communities contain multiple bird species whose vocalisations can overlap. In recent years, machine learning has experienced strong growth, due to increased dataset sizes and computational power and to advances in deep learning methods that can learn to make predictions in extremely nonlinear problem settings. However, in training a deep learning system to perform automatic detection and audio tagging of wildlife bird sound scenes, two problems often arise. Firstly, even with the increased number of audio datasets, most publicly available datasets are weakly labelled, having only a list of events present in each recording, without any temporal information for training. Secondly, in practice it is difficult to collect enough samples for most classes of interest. These problems are particularly pressing for wildlife audio but also occur in many other scenarios.
In this thesis, we investigate and propose methods to perform audio event detection and classification on wildlife bird sound scenes and other low-resource audio datasets, including methods based on image processing and deep learning. We extend deep learning methods for weakly labelled data to a multi-instance learning and multi-task learning setting. We evaluate these methods on simultaneously detecting and classifying large numbers of sound types in audio recorded in the wild and in other low-resource audio datasets.
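The weakly labelled, multi-instance setting described above, where only recording-level tags are available, is often handled by pooling frame-level predictions up to recording level before computing a loss against the tags. A minimal sketch with max pooling follows; the pooling choice, toy scores and loss are generic assumptions, not this thesis's specific method:

```python
import numpy as np

def recording_predictions(frame_scores):
    """Max-pool frame-level class probabilities into one score per recording."""
    return frame_scores.max(axis=0)

def weak_label_loss(recording_scores, tags, eps=1e-7):
    """Binary cross-entropy against recording-level tags (no temporal labels needed)."""
    p = np.clip(recording_scores, eps, 1.0 - eps)
    return float(-np.mean(tags * np.log(p) + (1.0 - tags) * np.log(1.0 - p)))

# 5 frames x 3 species: only species 1 fires strongly, and only in one frame
frame_scores = np.array([
    [0.1, 0.2, 0.0],
    [0.0, 0.9, 0.1],
    [0.2, 0.1, 0.0],
    [0.1, 0.3, 0.2],
    [0.0, 0.2, 0.1],
])
tags = np.array([0.0, 1.0, 0.0])  # weak labels: only species 1 present

rec = recording_predictions(frame_scores)  # one score per species for the recording
loss = weak_label_loss(rec, tags)
```

Max pooling encodes the multi-instance assumption directly: a recording is positive for a class if at least one of its frames is, which is exactly what a presence/absence tag asserts.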

    Algorithmic Analysis of Complex Audio Scenes

    In this thesis, we examine the problem of algorithmic analysis of complex audio scenes, with a special emphasis on natural audio scenes. One of the driving goals behind this work is to develop tools for monitoring the presence of animals in areas of interest based on their vocalisations. This task, which often occurs in the evaluation of nature conservation measures, leads to a number of subproblems in audio scene analysis. In order to develop and evaluate pattern recognition algorithms for animal sounds, a representative collection of such sounds is necessary. Building such a collection is beyond the scope of a single researcher, and we therefore use data from the Animal Sound Archive of the Humboldt University of Berlin. Although a large portion of well-annotated recordings from this archive has been available in digital form, little infrastructure for searching and sharing this data existed. We describe a distributed infrastructure, developed in this context, for collaboratively searching, sharing and annotating animal sound collections. Although searching animal sound databases by metadata gives good results for many applications, annotating all occurrences of a specific sound is beyond the scope of human annotators. Moreover, finding vocalisations similar to a given example is not feasible using only metadata. We therefore propose an algorithm for content-based similarity search in animal sound databases. Based on principles of image processing, we develop suitable features for describing animal sounds. We enhance a concept for content-based multimedia retrieval with a ranking scheme that makes it an efficient tool for similarity search. One of the main sources of complexity in natural audio scenes, and the most difficult problem for pattern recognition, is the large number of sound sources that are active at the same time. We therefore examine methods for source separation based on microphone arrays.
In particular, we propose an algorithm for extracting simpler components from complex audio scenes based on a sound complexity measure. Finally, we introduce pattern recognition algorithms for the vocalisations of a number of bird species. Some of these species are interesting for reasons of nature conservation, while one serves as a prototype for songbirds with strongly structured songs.

    Classification and ranking of environmental recordings to facilitate efficient bird surveys

    This thesis contributes novel computer-assisted techniques for facilitating bird species surveys from large numbers of environmental audio recordings. These techniques support both manual and automated recognition of bird species by removing irrelevant audio data and prioritising relevant data for efficient species detection. The work also represents a significant step towards using automated techniques to help experts and the general public explore and gain a better understanding of vocal species.

    Automatic recognition of bird species by their sounds

    Bird sounds are divided by their function into songs and calls, which are further divided into the hierarchical levels of phrases, syllables and elements. The syllable is shown to be a suitable unit for the recognition of bird species. The diversity of syllable types that birds are able to produce is large; the main focus of this thesis is sounds that are defined as inharmonic. The automatic recognition system for bird species used in this thesis consists of syllable segmentation, feature generation, classifier design and classifier evaluation phases. All recognition experiments are based on a parametric representation of syllables using a total of 19 low-level acoustic signal parameters. Experiments were carried out with six species that regularly produce inharmonic sounds. The results show that features related to the frequency band and content of the sounds provide good discrimination ability for these sounds.
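The syllable segmentation phase mentioned in the abstract is commonly implemented as energy thresholding relative to the loudest frame. The sketch below is one generic way to do it; the frame sizes, threshold and synthetic signal are illustrative assumptions, not this thesis's actual segmenter:

```python
import numpy as np

def segment_syllables(signal, frame_len=256, hop=128, thresh_db=-20.0):
    """Return (start_frame, end_frame) pairs where energy exceeds a relative threshold."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    energy = np.array([np.sum(signal[i * hop:i * hop + frame_len] ** 2)
                       for i in range(n_frames)])
    # Frame energy in dB relative to the loudest frame
    energy_db = 10.0 * np.log10(energy / energy.max() + 1e-12)
    active = energy_db > thresh_db
    # Collapse runs of consecutive active frames into syllable segments
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments

# Synthetic recording: silence, a tone burst (samples 3000-5000), then silence
sig = np.zeros(8000)
t = np.arange(2000)
sig[3000:5000] = 0.5 * np.sin(2 * np.pi * 0.1 * t)

syllables = segment_syllables(sig)  # one detected segment spanning the burst
```

Each detected segment would then be passed to the feature generation phase (here, the 19 low-level acoustic parameters) for classification.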

    ORCA-SPOT: An Automatic Killer Whale Sound Detection Toolkit Using Deep Learning

    Large bioacoustic archives of wild animals are an important source for identifying reappearing communication patterns, which can then be related to recurring behavioral patterns to advance the current understanding of intra-specific communication in non-human animals. A main challenge remains that most large-scale bioacoustic archives contain only a small percentage of animal vocalizations and a large amount of environmental noise, which makes it extremely difficult to manually retrieve sufficient vocalizations for further analysis; this is particularly important for species with advanced social systems and complex vocalizations. In this study, deep neural networks were trained on 11,509 killer whale (Orcinus orca) signals and 34,848 noise segments. The resulting toolkit, ORCA-SPOT, was tested on a large-scale bioacoustic repository, the Orchive, comprising roughly 19,000 hours of killer whale underwater recordings. An automated segmentation of the entire Orchive (about 2.2 years of audio) took approximately 8 days. It achieved a time-based precision, or positive predictive value (PPV), of 93.2% and an area under the curve (AUC) of 0.9523. This approach enables an automated annotation procedure for large bioacoustic databases to extract killer whale sounds, which are essential for the subsequent identification of significant communication patterns. The code will be publicly available in October 2019 to support the application of deep learning to bioacoustic research. ORCA-SPOT can be adapted to other animal species.
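The two reported metrics, time-based precision (PPV) and AUC, can be computed as below. The detection counts and scores are toy data for illustration, not the Orchive evaluation itself; the AUC implementation uses the rank-based (Mann-Whitney) formulation and assumes no tied scores:

```python
import numpy as np

def precision(tp, fp):
    """Positive predictive value: fraction of detections that are true hits."""
    return tp / (tp + fp)

def auc_rank(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic (no threshold sweep)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # ascending ranks, 1-based
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

# Toy detector outputs: higher score = more confident "killer whale call"
scores = np.array([0.9, 0.8, 0.7, 0.3, 0.2, 0.1])
labels = np.array([1, 1, 0, 1, 0, 0])  # ground truth per segment

ppv = precision(tp=932, fp=68)  # hypothetical counts giving 0.932, i.e. 93.2%
auc = auc_rank(scores, labels)
```

PPV depends on the chosen detection threshold, whereas AUC summarises ranking quality over all thresholds, which is why the study reports both.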