    Use of specialized questioning techniques to detect decline in giraffe meat consumption

    Biodiversity conservation depends on influencing human behaviors, but when an activity is illegal or otherwise sensitive (e.g. taboo within a particular society), respondents can be hesitant to admit engaging in it. We applied Specialized Questioning Techniques (SQTs) to estimate and compare the prevalence of giraffe meat consumption in Laikipia and Samburu Counties, northern Kenya, between 2017 and 2019, using direct questioning and two SQTs: the Randomized Response Technique (RRT) and the Unmatched Count Technique (UCT). Comparisons between the two samples (2017 and 2019) yielded significant differences across all three methods, with clearly divergent confidence intervals between years. This consistent disparity across methods suggests a true reduction in giraffe meat consumption in our study area from 2017 to 2019. A key change in the study area between the two periods was the introduction of a community-based giraffe conservation program. Its primary activities, including ecological monitoring, community outreach and education, and collaboration with wildlife security teams, align with those of other conservation programs that have demonstrated reduced poaching pressure. This study demonstrates an application of SQTs to detect a decline in giraffe meat consumption, providing an alternative to self-reported data for monitoring sensitive behaviors related to direct exploitation and illegal use of wildlife.
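    The two SQTs named in the abstract can be illustrated with a short sketch. In a forced-response RRT design, each respondent answers truthfully with a known probability and is otherwise forced to a fixed answer, so the true prevalence can be backed out of the observed "yes" rate; in the UCT, prevalence is estimated as the difference in mean item counts between a control list and a list containing the sensitive item. The parameter values and simulation below are illustrative assumptions for exposition, not the study's actual survey design.

    ```python
    import random

    def rrt_estimate(observed_yes_rate, p_truth, p_forced_yes):
        """Forced-response RRT: recover the true prevalence from the
        observed 'yes' rate. Respondents answer truthfully with
        probability p_truth, are forced to say 'yes' with probability
        p_forced_yes, and are forced to say 'no' otherwise."""
        return (observed_yes_rate - p_forced_yes) / p_truth

    def uct_estimate(mean_treatment, mean_control):
        """UCT: prevalence is the difference in mean item counts between
        the list with the sensitive item and the control list."""
        return mean_treatment - mean_control

    # Simulate 10,000 RRT respondents with a true prevalence of 10%.
    random.seed(42)
    true_prevalence, p_truth, p_forced_yes = 0.10, 0.75, 0.125
    answers = []
    for _ in range(10_000):
        r = random.random()
        if r < p_truth:                    # answer the sensitive question truthfully
            answers.append(random.random() < true_prevalence)
        elif r < p_truth + p_forced_yes:   # forced "yes"
            answers.append(True)
        else:                              # forced "no"
            answers.append(False)

    observed = sum(answers) / len(answers)
    estimate = rrt_estimate(observed, p_truth, p_forced_yes)
    ```

    Because no individual "yes" reveals whether the respondent actually engaged in the behavior, both designs give plausible deniability while still yielding a population-level estimate.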

    Camera settings and biome influence the accuracy of citizen science approaches to camera trap image classification

    Scientists are increasingly using volunteer efforts of citizen scientists to classify images captured by motion-activated trail cameras. The rising popularity of citizen science reflects its potential to engage the public in conservation science and to accelerate processing of the large volume of images generated by trail cameras. While image classification accuracy by citizen scientists can vary across species, the influence of other factors on accuracy is poorly understood. Inaccuracy diminishes the value of citizen-science-derived data and prompts the need for specific best-practice protocols to decrease error. We compared accuracy between three programs that use crowdsourced citizen scientists to process images online: Snapshot Serengeti, Wildwatch Kenya, and AmazonCam Tambopata. We hypothesized that habitat type and camera settings would influence accuracy. To evaluate these factors, each photograph was circulated to multiple volunteers, and all volunteer classifications were aggregated to a single best answer per photograph using a plurality algorithm. A subset of these images then underwent expert review for comparison against the citizen scientist results. Classification errors were categorized by the nature of the error (e.g., false species or false empty) and the reason for the false classification (e.g., misidentification). Our results show that Snapshot Serengeti had the highest accuracy (97.9%), followed by AmazonCam Tambopata (93.5%), then Wildwatch Kenya (83.4%). Error type was influenced by habitat, with false empty images more prevalent in open grassy habitat (27%) than in woodlands (10%). For medium-to-large animal surveys across all habitat types, our results suggest that to significantly improve accuracy in crowdsourced projects, researchers should use a trail camera set-up protocol with a burst of three consecutive photographs and a short field of view, and should determine camera sensitivity settings through in situ testing. Accuracy comparisons such as this study can improve the reliability of future citizen science projects and, in turn, encourage increased use of such data.
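    The aggregation step described above, combining multiple volunteer classifications into a single best answer with a plurality algorithm, can be sketched as follows. The image IDs, labels, and helper names are illustrative assumptions, not the published pipeline.

    ```python
    from collections import Counter

    def plurality_answer(classifications):
        """Aggregate volunteer labels for one photograph to a single best
        answer: the most frequently chosen label (the plurality) wins."""
        label, _count = Counter(classifications).most_common(1)[0]
        return label

    def accuracy(aggregated, expert):
        """Fraction of photographs whose aggregated label matches expert review."""
        matches = sum(aggregated[img] == truth for img, truth in expert.items())
        return matches / len(expert)

    # Hypothetical volunteer classifications per photograph.
    volunteer_labels = {
        "img_001": ["zebra", "zebra", "wildebeest", "zebra"],
        "img_002": ["empty", "empty", "giraffe"],
    }
    aggregated = {img: plurality_answer(labels)
                  for img, labels in volunteer_labels.items()}

    # Expert review of the same subset, used to score the aggregation.
    expert_review = {"img_001": "zebra", "img_002": "empty"}
    score = accuracy(aggregated, expert_review)
    ```

    Circulating each photograph to several volunteers makes the plurality answer robust to individual misidentifications, which is why per-program accuracy can remain high even when single-volunteer accuracy varies.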