
    ClassifyMe: A Field-Scouting Software for the Identification of Wildlife in Camera Trap Images

    We present ClassifyMe, a software tool for the automated identification of animal species in camera trap images. ClassifyMe is intended to be used by ecologists both in the field and in the office. Users download a pre-trained model specific to their location of interest and then upload images from a camera trap to a laptop or workstation. ClassifyMe identifies animals and other objects (e.g., vehicles) in the images, provides a report file with the most likely species detections, and automatically sorts the images into sub-folders corresponding to these species categories. False triggers (no visible object present) are also filtered and sorted. Importantly, ClassifyMe operates on the user's local machine (their own laptop or workstation) rather than via an internet connection. This gives users access to state-of-the-art camera trap computer vision software in situ, rather than only in the office. The software also incurs minimal cost for the end-user, as there is no need for expensive data uploads to cloud services. Furthermore, processing images locally on the user's device keeps the data under their control and resolves privacy issues surrounding the transfer of, and third-party access to, their datasets.
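
    The workflow described above (classify locally, write a report, sort images into species sub-folders) can be illustrated with a minimal sketch. The folder names and the predict() placeholder below are hypothetical illustrations, not ClassifyMe's actual interface.

```python
# Minimal sketch of a local classify-report-sort workflow like the one described
# above. The folders and the predict() function are hypothetical stand-ins; they
# are not ClassifyMe's actual API.
import csv
import shutil
from pathlib import Path

IMAGE_DIR = Path("camera_trap_images")   # hypothetical input folder
OUTPUT_DIR = Path("sorted_images")       # species sub-folders are created here
REPORT_CSV = Path("detections.csv")

def predict(image_path):
    """Placeholder for a locally stored, pre-trained detector.

    Returns (label, confidence); label is 'false_trigger' when nothing is detected.
    """
    raise NotImplementedError("plug in a local model here")

def run():
    with REPORT_CSV.open("w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["image", "label", "confidence"])
        for image_path in sorted(IMAGE_DIR.glob("*.jpg")):
            label, confidence = predict(image_path)
            writer.writerow([image_path.name, label, f"{confidence:.3f}"])
            # Sort each image into a sub-folder named after its most likely class,
            # including a 'false_trigger' folder for empty frames.
            dest = OUTPUT_DIR / label
            dest.mkdir(parents=True, exist_ok=True)
            shutil.copy2(image_path, dest / image_path.name)

if __name__ == "__main__":
    run()
```

    Because every step runs against the local disk, no image ever needs to leave the user's machine, which is the cost and privacy point the abstract makes.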

    The iWildCam 2018 Challenge Dataset

    Camera traps are a valuable tool for studying biodiversity, but research using these data is limited by the speed of human annotation. With the vast amounts of data now available, it is imperative that we develop automatic solutions for annotating camera trap data so that this research can scale. A promising approach is based on deep networks trained on human-annotated images. We provide a challenge dataset to explore whether such solutions generalize to novel locations, since systems that are trained once and can then be deployed to operate automatically in new locations would be most useful. Comment: Challenge hosted at the fifth Fine-Grained Visual Categorization Workshop (FGVC5) at CVPR 2018.
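
    The evaluation setting described above (train on one set of locations, test on locations never seen during training) can be expressed as a location-disjoint split. The sketch below is illustrative only; the metadata layout it assumes (an "images" list with a "location" field) follows common camera-trap JSON conventions and is not necessarily the challenge's official schema.

```python
# Sketch of a location-disjoint train/test split, the generalization setting the
# challenge targets. The annotation layout (an "images" list whose entries carry
# a "location" field) is an assumption for illustration.
import json
import random

def split_by_location(annotation_file, test_fraction=0.2, seed=0):
    with open(annotation_file) as fh:
        images = json.load(fh)["images"]
    locations = sorted({img["location"] for img in images})
    random.Random(seed).shuffle(locations)
    n_test = max(1, int(len(locations) * test_fraction))
    test_locations = set(locations[:n_test])
    train = [img for img in images if img["location"] not in test_locations]
    test = [img for img in images if img["location"] in test_locations]
    return train, test
```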

    Monitoring wild animal communities with arrays of motion sensitive camera traps

    Studying animal movement and distribution is of critical importance for addressing environmental challenges including invasive species, infectious diseases, and climate and land-use change. Motion-sensitive camera traps offer a visual sensor that records the presence of a broad range of species, providing location-specific information on movement and behavior. Modern digital camera traps that record video present new analytical opportunities, but also new data management challenges. This paper describes our experience with a terrestrial animal monitoring system at Barro Colorado Island, Panama. Our camera network captured the spatio-temporal dynamics of terrestrial bird and mammal activity at the site - data relevant to immediate science questions and long-term conservation issues. We believe that the experience gained and the lessons learned during our year-long deployment and testing of the camera traps, as well as the solutions we developed, are applicable to broader sensor network applications and valuable for the advancement of sensor network research. We suggest that the continued development of these hardware, software, and analytical tools, in concert, offers an exciting sensor-network solution for monitoring animal populations that could realistically scale over larger areas and time spans.
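
    One simple view of the spatio-temporal dynamics mentioned above is a per-species count of detections by camera and hour of day. The sketch below assumes a flat CSV of detection records with species, camera_id, and timestamp columns; this is an illustrative format, not the monitoring system's actual schema.

```python
# Sketch of summarising camera-trap detections into per-species activity by
# camera and hour of day - one simple view of the spatio-temporal dynamics
# described above. The CSV columns (species, camera_id, timestamp) are an
# assumption for illustration.
import csv
from collections import Counter
from datetime import datetime

def activity_counts(detections_csv):
    counts = Counter()
    with open(detections_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            hour = datetime.fromisoformat(row["timestamp"]).hour
            counts[(row["species"], row["camera_id"], hour)] += 1
    return counts  # {(species, camera_id, hour_of_day): number_of_detections}
```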

    The iWildCam 2019 Challenge Dataset

    Camera traps (or wild cams) enable the automatic collection of large quantities of image data. Biologists all over the world use camera traps to monitor the biodiversity and population density of animal species. The computer vision community has been making strides towards automating species classification in camera traps, but as we try to expand the scope of these models from the specific regions where we have collected training data to different areas, we are faced with an interesting problem: how do you classify a species in a new region that you may not have seen in previous training data? To tackle this problem, we have prepared a dataset and challenge where the training data and test data are from different regions, namely the American Southwest and the American Northwest. We use the Caltech Camera Traps dataset, collected in the American Southwest, as training data. We add a new dataset from the American Northwest, curated from data provided by the Idaho Department of Fish and Game (IDFG), as our test dataset. The test data has some class overlap with the training data - some species are found in both datasets - but there are species seen during training that are not seen during testing, and vice versa. To help fill the gaps in the training species, we allow competitors to utilize transfer learning from two alternate domains: human-curated images from iNaturalist and synthetic images from Microsoft's TrapCam-AirSim simulation environment.
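
    The transfer learning the challenge permits (starting from models trained on auxiliary domains such as iNaturalist) is commonly implemented by fine-tuning a pre-trained backbone on the camera-trap classes. The sketch below uses an ImageNet-pretrained ResNet-50 from torchvision as a generic stand-in; the number of classes, the choice of checkpoint, and the freezing schedule are assumptions, not the challenge baseline.

```python
# Sketch of a transfer-learning setup: re-use a pre-trained backbone and fine-tune
# a new classification head on camera-trap species. The backbone, class count,
# and schedule below are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_SPECIES = 23  # hypothetical number of target classes

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)  # new classification head

# Freeze the backbone and train only the new head first (a common schedule).
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()
```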

    WiseEye: next generation expandable and programmable camera trap platform for wildlife research

    Funding: The work was supported by the RCUK Digital Economy programme to the dot.rural Digital Economy Hub; award reference: EP/G066051/1. The work of S. Newey and RJI was part funded by the Scottish Government's Rural and Environment Science and Analytical Services (RESAS). Details published as an Open Source Toolkit in PLOS Journals at: http://dx.doi.org/10.1371/journal.pone.0169758

    Automatic Recognition of Mammal Genera on Camera-Trap Images using Multi-Layer Robust Principal Component Analysis and Mixture Neural Networks

    The segmentation and classification of animals in camera-trap images is a difficult task because of the conditions under which the images are taken. This work presents a method for classifying and segmenting mammal genera in camera-trap images. Our method uses Multi-Layer Robust Principal Component Analysis (RPCA) for segmentation, Convolutional Neural Networks (CNNs) for extracting features, the Least Absolute Shrinkage and Selection Operator (LASSO) for selecting features, and Artificial Neural Networks (ANNs) or Support Vector Machines (SVMs) for classifying the mammal genera present in the Colombian forest. We evaluated our method on camera-trap images from the Alexander von Humboldt Biological Resources Research Institute. We obtained an accuracy of 92.65% when classifying 8 mammal genera and a False Positive (FP) class using automatically segmented images. On the other hand, we reached an accuracy of 90.32% when classifying 10 mammal genera using ground-truth images only. Unlike almost all previous works, we address both animal segmentation and genus classification in camera-trap recognition. This method is a step toward fully automatic detection of animals in camera-trap images.
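
    The last two stages of the pipeline (feature selection followed by an SVM) can be sketched with scikit-learn, assuming the CNN feature vectors X and genus labels y have already been extracted. An L1-penalised logistic regression stands in for the LASSO selector so that multi-class labels can be used directly; this is a substitution for illustration, not the paper's exact implementation, and the RPCA and CNN stages are not reproduced here.

```python
# Sketch of LASSO-style (L1) feature selection followed by an SVM classifier,
# applied to pre-computed CNN feature vectors. Hyperparameters are illustrative.
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_classifier():
    # L1-penalised logistic regression plays the role of the LASSO selector for
    # multi-class genus labels; SVC is one of the two classifiers (ANN or SVM)
    # the paper evaluates.
    selector = SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
    )
    return make_pipeline(StandardScaler(), selector, SVC(kernel="rbf", C=10.0))

# Usage (X_train, y_train, X_test, y_test are assumed to exist):
# clf = build_classifier()
# clf.fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)
```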

    Factors affecting the identification of individual mountain bongo antelope

    The recognition of individuals forms the basis of many endangered-species monitoring protocols. This process typically relies on manual recognition techniques. This study aimed to quantify the error rates inherent in the manual technique and to identify visual traits that aid identification, using the critically endangered mountain bongo, Tragelaphus eurycerus isaaci, as a model system. Identification accuracy was assessed with a matching task that required same/different decisions for side-by-side pairings of individual bongos. Error rates were lowest when only the flanks of bongos were shown, suggesting that the inclusion of other visual traits confounded accuracy. Accuracy was also higher for photographs of captive animals than for camera-trap images, and for observers experienced in working with mountain bongos than for those unfamiliar with the sub-species. These results suggest that the removal of non-essential morphological traits from photographs of bongos, the use of high-quality images, and relevant expertise all help increase identification accuracy. Finally, given the rise of automated identification and the use of citizen science - both of which our results suggest are applicable in the context of the mountain bongo - this study provides a framework for assessing their accuracy in individual as well as species identification.
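
    The error rate for the same/different matching task can be tallied directly from the observers' decisions. The record format below is a hypothetical illustration, not the study's actual data layout.

```python
# Sketch of tallying an error rate for a same/different matching task. Each
# trial records whether the two photographs truly showed the same individual
# and whether the observer judged them to be the same; this format is assumed.
def matching_error_rate(trials):
    """trials: list of (same_individual: bool, observer_said_same: bool)."""
    errors = sum(1 for truth, response in trials if truth != response)
    return errors / len(trials) if trials else 0.0

# Example: two correct decisions and one false match -> error rate of 1/3.
print(matching_error_rate([(True, True), (False, False), (False, True)]))
```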