
    Spatial Patterns of Primate Electrocutions in Diani, Kenya


    Testing a global standard for quantifying species recovery and assessing conservation impact

    Recognizing the imperative to evaluate species recovery and conservation impact, in 2012 the International Union for Conservation of Nature (IUCN) called for development of a "Green List of Species" (now the IUCN Green Status of Species). A draft Green Status framework for assessing species' progress toward recovery, published in 2018, proposed 2 separate but interlinked components: a standardized method (i.e., measurement against benchmarks of species' viability, functionality, and preimpact distribution) to determine current species recovery status (herein species recovery score) and application of that method to estimate past and potential future impacts of conservation based on 4 metrics (conservation legacy, conservation dependence, conservation gain, and recovery potential). We tested the framework with 181 species representing diverse taxa, life histories, biomes, and IUCN Red List categories (extinction risk). Based on the observed distribution of species' recovery scores, we propose the following species recovery categories: fully recovered, slightly depleted, moderately depleted, largely depleted, critically depleted, extinct in the wild, and indeterminate. Fifty-nine percent of tested species were considered largely or critically depleted. Although there was a negative relationship between extinction risk and species recovery score, variation was considerable. Some species in lower risk categories were assessed as farther from recovery than those at higher risk. This emphasizes that species recovery is conceptually different from extinction risk and reinforces the utility of the IUCN Green Status of Species to more fully understand species conservation status. Although extinction risk did not predict conservation legacy, conservation dependence, or conservation gain, it was positively correlated with recovery potential. Only 1.7% of tested species were categorized as zero across all 4 of these conservation impact metrics, indicating that conservation has played, or will play, a role in improving or maintaining species status for the vast majority of these species. Based on our results, we devised an updated assessment framework that introduces the option of using a dynamic baseline to assess future impacts of conservation over the short term, avoiding the misleading results generated in a small number of cases, and redefines the short term as 10 years to better align with conservation planning. These changes are reflected in the IUCN Green Status of Species Standard.
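    The abstract does not spell out the scoring arithmetic, but a recovery score of this kind can be illustrated with a minimal Python sketch. The state weights below (Absent = 0, Present = 1, Viable = 2, Functional = 3, summed across spatial units and expressed as a percentage of the all-Functional maximum) follow the published Green Status framework but should be treated as assumptions here; the species and unit states in the example are invented.

        # Hedged sketch of a Green Status-style species recovery score.
        # State weights per spatial unit; assumed from the published framework.
        STATE_WEIGHTS = {"absent": 0, "present": 1, "viable": 2, "functional": 3}

        def recovery_score(unit_states):
            """Sum of unit states as a percentage of the all-functional maximum."""
            total = sum(STATE_WEIGHTS[s] for s in unit_states)
            return 100.0 * total / (3 * len(unit_states))

        # Invented example: 5 spatial units in varying states of recovery.
        states = ["functional", "functional", "viable", "present", "absent"]
        print(recovery_score(states))  # 60.0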

    Howler monkey convolutional neural network (CNN) classification model

    Machine learning model trained to classify howler monkey vocalisations from acoustic data. The model was trained in OpenSoundscape v0.7.0 in Python.
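    As a rough sketch of how such a saved model might be applied to new recordings in OpenSoundscape 0.7.x (the model file name and audio paths are hypothetical, and the three-value return of predict() follows the 0.7.x documentation; later releases changed both the module path and the signature, so check the installed version):

        # Sketch: apply a saved OpenSoundscape 0.7.x CNN to new audio files.
        # Note: in OpenSoundscape 0.9+ the import moved to opensoundscape.ml.cnn.
        from opensoundscape.torch.models.cnn import load_model

        model = load_model("howler_monkey.model")  # hypothetical file name

        audio_files = ["recordings/site1_0001.wav", "recordings/site1_0002.wav"]

        # In 0.7.x, predict() returns per-clip scores, optional binary
        # predictions, and any samples that failed preprocessing.
        scores, preds, unsafe_samples = model.predict(
            audio_files,
            binary_preds="single_target",
        )
        print(scores.head())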

    Automated detection of gunshots in tropical forests using convolutional neural networks

    Unsustainable hunting is one of the leading drivers of global biodiversity loss, yet very few direct measures exist due to the difficulty of monitoring this cryptic activity. Where guns are commonly used for hunting, such as in the tropical forests of the Americas and Africa, acoustic detection can potentially provide a solution to this monitoring challenge. The emergence of low-cost autonomous recording units (ARUs) brings within reach the ability to monitor hunting pressure over wide spatial and temporal scales. However, ARUs produce immense amounts of data, and long-term, large-scale monitoring is not possible without efficient automated sound classification techniques. We tested the effectiveness of a sequential two-stage detection pipeline for detecting gunshots in acoustic data collected in the tropical forests of Belize. The pipeline involved an on-board detection algorithm, developed and tested in a prior study, followed by a spectrogram-based convolutional neural network (CNN), developed in this study. As gunshots are rare events, we focussed on developing a classification pipeline that maximises recall at the cost of increased false positives, with the aim of using the classifier to assist human annotation of files. We trained the CNN on annotated data collected across two study sites in Belize, comprising 597 gunshots and 28,195 background sounds. Predictions on the annotated validation dataset, comprising 150 gunshots and 7,044 background sounds collected from the same sites, yielded a recall of 0.95 and a precision of 0.85. The combined recall of the two-stage pipeline was estimated at 0.80. We subsequently applied the CNN to an unannotated dataset of over 160,000 files collected in a spatially distinct study site to test for generalisability and precision under a more realistic monitoring scenario. Our model generalised to this dataset, classifying gunshots with a precision of 0.57 and an estimated recall of 0.80, and producing a substantially more manageable dataset for human verification. Using a classifier-guided listening approach such as ours can make wide-scale monitoring of threats such as hunting a feasible option for conservation management.
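    A minimal sketch of the classifier-guided listening step described above: keep every file whose CNN gunshot score clears a deliberately low threshold and rank the survivors for human verification. The scores, file names, and threshold value below are all hypothetical.

        # Rank files by CNN gunshot score and keep everything above a low
        # threshold, trading precision for recall; humans verify the rest.
        CANDIDATE_THRESHOLD = 0.2  # low on purpose: favour recall

        def files_for_review(scores, threshold=CANDIDATE_THRESHOLD):
            """Return candidate files for human verification, highest score first."""
            candidates = {f: s for f, s in scores.items() if s >= threshold}
            return sorted(candidates, key=candidates.get, reverse=True)

        # Hypothetical per-file scores from the CNN.
        scores = {"0A1B2C3D.wav": 0.97, "0A1B2C4F.wav": 0.08, "0A1B2D00.wav": 0.41}
        print(files_for_review(scores))  # ['0A1B2C3D.wav', '0A1B2D00.wav']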

    Tropical forest gunshot classification training audio dataset

    DATA SOURCE LOCATION: Data were collected at tropical forest sites in central Belize: Tapir Mountain Nature Reserve (TMNR) and the adjoining Pook's Hill Reserve in Cayo District, Belize [17.150, -88.860], and Manatee Forest Reserve (MFR) and surrounding protected areas in Belize District, Belize [17.260, -88.490].
    FOLDERS: The folders contain audio files recorded between 2017 and 2021. The 'Training data' folder and the 'Validation data' folder contain two temporally distinct datasets, which can be used for model training and validation. The training folder consists of 80% of the total dataset, and the validation folder comprises the remaining 20%. Within each of these folders are two folders labelled 'Gunshot' and 'Background'.
    FILES: The folders contain 749 gunshot files and over 35,000 background files. The files are in Waveform Audio File Format (WAV) and are each 4.09 seconds long. The first 8 alphanumeric characters of each file name correspond to the UNIX hexadecimal timestamp of the time of recording. Some files contain additional alphanumeric characters after these initial 8 characters, which were used as unique identifiers during processing and do not convey any additional information.
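    The hexadecimal timestamps in the file names can be decoded with a few lines of Python; the example file name below is invented.

        # Decode the first 8 hex characters of a file name as a UNIX timestamp.
        from datetime import datetime, timezone

        def recording_time(filename):
            """Parse the leading 8 hex characters as seconds since the epoch."""
            return datetime.fromtimestamp(int(filename[:8], 16), tz=timezone.utc)

        print(recording_time("5F3A2B10.wav"))  # 2020-08-17 07:00:32+00:00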
