
    Testing the consistency of wildlife data types before combining them: the case of camera traps and telemetry.

Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap detection probabilities for marked animals to independent space use from telemetry relocations, using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy, by estimating how camera detection probabilities relate to nearby telemetry relocations, and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m of camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured, owing to larger home ranges and higher movement rates. Although methods that combine data types (e.g., spatially explicit capture-recapture) make simplifying assumptions about home range shapes, it is reasonable to conclude that, in our case, camera trap data do reflect space use in a manner consistent with telemetry data. However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions, making the case for integrating other sources of space-use data.
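A minimal sketch of the kind of binomial GLM described above, with entirely hypothetical data: detection probability is modelled from the count of nearby telemetry relocations plus sex and season covariates. The column names (detected, relocs_500m, sex, season) and simulated values are illustrative stand-ins, not the authors' data; the study's GLMMs would additionally include a random intercept per individual fisher.

```python
# Hedged sketch: hypothetical data, not the authors' dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # hypothetical camera-station/occasion records
relocs = rng.poisson(3, size=n)                  # telemetry fixes within 500 m
sex = rng.choice(["F", "M"], size=n)
season = rng.choice(["summer", "fall_winter"], size=n)

# Simulate detections so that more nearby relocations raise detection odds.
logit = -1.5 + 0.4 * relocs + 0.5 * (sex == "F")
detected = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"detected": detected, "relocs_500m": relocs,
                   "sex": sex, "season": season})

# Binomial GLM: detection probability vs. nearby relocations and covariates.
model = smf.glm("detected ~ relocs_500m + sex + season",
                data=df, family=sm.families.Binomial()).fit()
print(model.summary())
print("AIC:", model.aic)
```

Model comparison by AIC, as in the abstract, would then pit this covariate model against one driven solely by telemetry utilization density.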

    Automatic Recognition of Mammal Genera on Camera-Trap Images using Multi-Layer Robust Principal Component Analysis and Mixture Neural Networks

Because of the conditions under which the images are taken, segmenting and classifying animals in camera-trap images is a difficult task. This work presents a method for classifying and segmenting mammal genera from camera-trap images. Our method uses Multi-Layer Robust Principal Component Analysis (RPCA) for segmentation, Convolutional Neural Networks (CNNs) for extracting features, the Least Absolute Shrinkage and Selection Operator (LASSO) for selecting features, and Artificial Neural Networks (ANNs) or Support Vector Machines (SVMs) for classifying the mammal genera present in the Colombian forest. We evaluated our method on camera-trap images from the Alexander von Humboldt Biological Resources Research Institute. We obtained an accuracy of 92.65% when classifying 8 mammal genera plus a False Positive (FP) class using automatically segmented images, and 90.32% when classifying 10 mammal genera using ground-truth images only. Unlike almost all previous work, we address both animal segmentation and genus classification in camera-trap recognition. This method shows a new approach toward fully automatic detection of animals in camera-trap images.
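A minimal sketch of the feature-selection and classification stage of a pipeline like the one described, assuming CNN feature vectors have already been extracted from segmented images (the Multi-Layer RPCA and CNN steps are upstream and not shown). Synthetic features stand in for the real ones, and an L1-penalised logistic regression plays the LASSO role for feature selection ahead of the SVM.

```python
# Hedged sketch: synthetic features stand in for real CNN embeddings.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for CNN features: 9 classes = 8 genera + a false-positive class.
X, y = make_classification(n_samples=300, n_features=256, n_informative=20,
                           n_classes=9, n_clusters_per_class=1, random_state=0)

pipeline = make_pipeline(
    StandardScaler(),
    # The L1 penalty drives uninformative coefficients to zero;
    # SelectFromModel keeps only the features with non-zero weight.
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
    SVC(kernel="rbf"),  # final genus classifier (an ANN could be swapped in)
)
print("mean CV accuracy:", cross_val_score(pipeline, X, y, cv=5).mean())
```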

    Revealing kleptoparasitic and predatory tendencies in an African mammal community using camera traps: a comparison of spatiotemporal approaches

Camera trap data are increasingly being used to characterise relationships between the spatiotemporal activity patterns of sympatric mammal species, often with a view to inferring inter-specific interactions. In this context, we attempted to characterise the kleptoparasitic and predatory tendencies of spotted hyaenas Crocuta crocuta and lions Panthera leo from photographic data collected across 54 camera trap stations and two dry seasons in Tanzania's Ruaha National Park. We applied four different methods of quantifying spatiotemporal associations: one strictly temporal approach (activity pattern overlap), one strictly spatial approach (co-occupancy modelling), and two spatiotemporal approaches (co-detection modelling and temporal spacing at shared camera trap sites). We expected a kleptoparasitic relationship between spotted hyaenas and lions to result in a positive spatiotemporal association, and further hypothesised that the associations between lions and their preferred prey in Ruaha, the giraffe Giraffa camelopardalis and the zebra Equus quagga, would be stronger than those observed with non-preferred prey species (the impala Aepyceros melampus and the dik-dik Madoqua kirkii). Only approaches incorporating both the temporal and spatial components of camera trap data yielded significant associative patterns. These were particularly sensitive to the temporal resolution chosen to define species detections (i.e. occasion length), and revealed a significant positive association between lion and spotted hyaena detections, as well as a tendency for the two species to follow each other at camera trap sites, during the dry season of 2013 but not that of 2014. In both seasons, the observed spatiotemporal associations between lions and each of the four herbivore species considered provided no convincing or consistent indication of predatory preferences. Our study suggests that, when making inferences about inter-specific interactions from camera trap data, due regard should be given to the potential behavioural and methodological processes underlying observed spatiotemporal patterns.
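As a sketch of the strictly temporal approach mentioned above, the Python below fits circular (von Mises) kernel densities to two species' detection times on the 24 h clock and computes the coefficient of overlap Δ, the area under the pointwise minimum of the two densities. Detection times here are simulated placeholders, and the κ values are arbitrary; this mirrors the estimator implemented in the R package 'overlap', not the authors' exact analysis.

```python
# Hedged sketch: simulated detection times, illustrative bandwidths.
import numpy as np
from scipy.stats import vonmises

def activity_density(times_rad, grid, kappa=3.0):
    """Von Mises kernel density estimate on the circular day (radians)."""
    dens = np.zeros_like(grid)
    for t in times_rad:
        dens += vonmises.pdf(grid, kappa, loc=t)
    return dens / len(times_rad)

rng = np.random.default_rng(1)
# Simulated, mostly nocturnal detection times (0 to 2*pi = 00:00 to 24:00).
lion_times = rng.vonmises(1.6 * np.pi, 2.0, size=80) % (2 * np.pi)
hyaena_times = rng.vonmises(1.4 * np.pi, 2.0, size=80) % (2 * np.pi)

grid = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
f1 = activity_density(lion_times, grid)
f2 = activity_density(hyaena_times, grid)

# Riemann estimate of Delta = integral over the circle of min(f1, f2).
delta = np.minimum(f1, f2).mean() * 2.0 * np.pi
print(f"estimated activity overlap Delta = {delta:.2f}")
```

Note that Δ is purely temporal: two species can overlap heavily in time of day yet never co-occur in space, which is why the abstract stresses approaches that combine both components.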

    WiseEye: next generation expandable and programmable camera trap platform for wildlife research

Funding: This work was supported by the RCUK Digital Economy programme through the dot.rural Digital Economy Hub (award reference EP/G066051/1). The work of S. Newey and RJI was part-funded by the Scottish Government's Rural and Environment Science and Analytical Services (RESAS). Details are published as an Open Source Toolkit in PLOS ONE: http://dx.doi.org/10.1371/journal.pone.0169758. Peer reviewed. Publisher PDF.

    Position clamping in a holographic counterpropagating optical trap

Optical traps consisting of two counterpropagating, divergent beams of light allow relatively high forces to be exerted along the optical axis by turning off one beam; however, the axial stiffness of the trap is generally low due to the lower numerical apertures typically used. Using a high-speed spatial light modulator and a CMOS camera, we demonstrate 3D servocontrol of a trapped particle, increasing the stiffness from 0.004 to 1.5 μN m⁻¹. This is achieved in the "macro-tweezers" geometry [Thalhammer, J. Opt. 13, 044024 (2011); Pitzek, Opt. Express 17, 19414 (2009)], which has a much larger field of view and working distance than single-beam tweezers due to its lower numerical aperture requirements. Using a 10×, 0.2 NA objective, active feedback produces a trap with an effective stiffness similar to that of a conventional single-beam gradient trap, of order 1 μN m⁻¹ in 3D. Our control loop has a round-trip latency of 10 ms, leading to a resonance at 20 Hz. This bandwidth is sufficient to reduce the position fluctuations of a 10 μm bead due to Brownian motion by two orders of magnitude. This approach extends trivially to multiple particles, and we show three simultaneously position-clamped beads.
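To make the feedback argument concrete, here is a minimal simulation sketch (not the authors' code) of a 1D overdamped bead in a weak trap with thermal kicks, where a proportional servo force is computed from a position measured one loop latency (10 ms) earlier. All parameter values are illustrative; the two stiffnesses simply echo the 0.004 and 1.5 μN m⁻¹ figures quoted above.

```python
# Hedged sketch: illustrative parameters, simple Euler-Maruyama integration.
import numpy as np

kT = 4.1e-21                      # thermal energy at room temperature (J)
eta = 1e-3                        # viscosity of water (Pa s)
radius = 5e-6                     # 10 um bead (m)
gamma = 6 * np.pi * eta * radius  # Stokes drag coefficient (kg/s)
k_open = 0.004e-6                 # open-loop axial stiffness (N/m)
k_fb = 1.5e-6                     # target closed-loop stiffness (N/m)
dt = 5e-5                         # integration time step (s)
delay = int(10e-3 / dt)           # 10 ms round-trip latency, in steps

def simulate(n_steps, feedback, seed=2):
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    kicks = rng.normal(size=n_steps) * np.sqrt(2 * kT * dt / gamma)
    for i in range(1, n_steps):
        force = -k_open * x[i - 1]
        if feedback and i > delay:
            # Servo force based on the position measured one latency ago.
            force -= (k_fb - k_open) * x[i - 1 - delay]
        x[i] = x[i - 1] + force * dt / gamma + kicks[i]
    return x

n = 400_000  # 20 s of simulated time
print("open-loop   rms (nm): %.0f" % (simulate(n, False).std() * 1e9))
print("closed-loop rms (nm): %.0f" % (simulate(n, True).std() * 1e9))
```

With these numbers the latency is short compared with the closed-loop relaxation time γ/k_fb ≈ 60 ms, so the clamp is stable; raising the gain pushes the loop toward the ~20 Hz resonance reported above.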