Sherlock - A flexible, low-resource tool for processing camera-trapping images
1. The use of camera traps to study wildlife has increased markedly in the last two decades. Camera surveys typically produce large data sets which require processing to isolate images containing the species of interest. This is time-consuming and costly, particularly when false triggers produce many empty images. Computer vision technology can assist with data processing, but existing artificial intelligence algorithms are limited by the requirement for a training data set, which can itself be challenging to acquire. Furthermore, deep-learning methods often require powerful hardware and proficient coding skills.
2. We present Sherlock, a novel algorithm that can reduce the time required to process camera trap data by removing a large number of unwanted images. The code is adaptable, simple to use and requires minimal processing power.
3. We tested Sherlock on 240,596 camera trap images collected from 46 cameras placed in a range of habitats on farms in Cornwall, United Kingdom, and set the parameters to find European badgers (Meles meles). The algorithm correctly classified 91.9% of badger images and removed 49.3% of the unwanted 'empty' images. When testing model parameters, we found that faster processing times were achieved by reducing both the number of sampled pixels and 'bouncing' attempts (the number of paths explored to identify a disturbance), with minimal implications for model sensitivity and specificity. When Sherlock was tested on two sites which contained no livestock in their images, its performance greatly improved and it removed 92.3% of the empty images.
4. Although further refinements may improve its performance, Sherlock is currently an accessible, simple and useful tool for processing camera trap data.
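The trade-off described in point 3, where fewer sampled pixels and fewer 'bouncing' attempts give faster runs at some cost to detection, can be illustrated with a minimal sketch. This is not the published Sherlock code: the background comparison, thresholds, step sizes and function names below are all illustrative assumptions.

```python
import numpy as np

def looks_disturbed(img, background, n_pixels=500, n_bounces=10,
                    diff_threshold=40, min_path_hits=6, rng=None):
    """Illustrative re-creation of the sampling idea (not the published
    Sherlock code): sample random pixels, and when one differs strongly
    from a background image, 'bounce' along short random paths to check
    whether the disturbance is a contiguous patch rather than noise.
    Assumes img and background are HxWx3 uint8 arrays."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    diff = np.abs(img.astype(int) - background.astype(int)).mean(axis=-1)
    ys = rng.integers(0, h, n_pixels)
    xs = rng.integers(0, w, n_pixels)
    for y, x in zip(ys, xs):
        if diff[y, x] < diff_threshold:
            continue  # sampled pixel matches background; move on
        # Explore a few random paths from the disturbed pixel; if most
        # steps along a path also differ from the background, the patch
        # is large enough to flag the image as a potential animal.
        for _ in range(n_bounces):
            py, px, hits = y, x, 0
            for _ in range(min_path_hits * 2):
                py = int(np.clip(py + rng.integers(-5, 6), 0, h - 1))
                px = int(np.clip(px + rng.integers(-5, 6), 0, w - 1))
                hits += diff[py, px] >= diff_threshold
            if hits >= min_path_hits:
                return True
    return False
```

In this toy version the work per image scales roughly linearly with `n_pixels` and `n_bounces`, which is consistent with the abstract's observation that both can be reduced for faster processing with little loss of sensitivity or specificity.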
Farmer‐led badger vaccination in Cornwall: Epidemiological patterns and social perspectives
In the United Kingdom, the management of bovine tuberculosis (bTB) challenges the coexistence of people and wildlife. Control of this cattle disease is hindered by transmission of its causative agent, Mycobacterium bovis, between cattle and badgers (Meles meles). Badger culling has formed an element of bTB control policy for decades, but current government policy envisions expanding badger vaccination. Farming leaders are sceptical, citing concerns that badger vaccination would be impractical and potentially ineffective. We report on a 4-year badger vaccination initiative in an 11 km² area which, atypically, was initiated by local farmers, delivered by scientists and conservationists, and co-funded by all three. Participating landholders cited controversies around culling and a desire to support neighbours as their primary reasons for adopting vaccination. The number of badgers vaccinated per km² (5.6 km⁻² in 2019) exceeded the number culled on nearby land (2.9 km⁻² in 2019), and the estimated proportion vaccinated (74%, 95% confidence interval [CI] 40%–137%) exceeded the 30% threshold predicted by models to be necessary to control M. bovis. Farmers were content with how vaccination was delivered and felt that it built trust with wildlife professionals. The percentage of badgers testing positive for M. bovis declined from 16.0% (95% CI 4.5%–36.1%) at the start of vaccination to 0% (95% CI 0%–9.7%) in the final year. With neither replication nor unvaccinated controls, this small-scale case study does not demonstrate a causal link between badger vaccination and bTB epidemiology, but it does suggest that larger-scale evaluation of badger vaccination would be warranted. Farmers reported that their enthusiasm for badger vaccination had increased after participating for 4 years. They considered vaccination to have been effective and good value for money, and wished to continue with it. Synthesis and applications: Although small-scale, this case study suggests that badger vaccination can be a technically effective and socially acceptable component of bTB control. A wider rollout of badger vaccination is more likely if it is led by the farming community, rather than by conservationists or government, and is combined with scientific monitoring.
Sherlock Camera Trap Dataset
## Data

Base dataset containing the camera trap images used for the 'small' and 'paired' tests of Sherlock, and the Sherlock code.

Camera trap images used for the larger 'main' test can be found in the following datasets:

- https://zenodo.org/uploads/10023354
- https://zenodo.org/uploads/10026321
- https://zenodo.org/uploads/10036325
- https://zenodo.org/uploads/10039718
- https://zenodo.org/uploads/10044242
- https://zenodo.org/uploads/10047672
- https://zenodo.org/uploads/10048102
- https://zenodo.org/uploads/10048378

## [Sherlock](https://github.com/mpenn114/Sherlock)

This repository contains a code package, Sherlock, which provides an easy-to-use tool for processing camera-trapping images.

Its aim is to remove false positive images: principally, images where the camera has been triggered by a very small disturbance, such as a plant blowing in the wind. If images containing a specific species are being targeted, it also allows the user to easily customise parameters to set the colour of the desired animals. This can help to filter out images containing other animals, and thus increase the accuracy of the code.

This code is intended to be usable by someone with no prior coding experience. To help with this, a guide to installing Python (the language in which Sherlock is written) is provided at the end of this readme.

### Input Format

This code can process JPG images (NB: it should be possible to edit the code to accept any image format). These images should be stored in folders/subfolders, with a single "master folder" containing all of these folders. Examples of acceptable folder structures are below:

- [Master Folder] -> Images (that is, an image would have the path .../MasterFolder/0001.JPG)
- [Master Folder] -> [Location Folders] -> Images (e.g. .../MasterFolder/Aberystwyth/0001.JPG)
- [Master Folder] -> [Location Folders] -> [Sub-location Folders] -> Images (e.g. .../MasterFolder/Aberystwyth/Constitution Hill/0001.JPG)
- [Master Folder] -> [Location Folders] -> [Sub-location Folders] -> [Camera Number] -> Images (e.g. .../MasterFolder/Aberystwyth/Constitution Hill/Camera1/0001.JPG)

In all of these cases, it is only necessary to specify the master folder (a sketch of traversing such a structure is given at the end of this section). Note that it is also possible to have a mixture of these cases (e.g. some locations may not have sub-locations).

It is important that the JPG images in each image folder are named consecutively as 0001.JPG, 0002.JPG, ... (the number of leading zeros is not important).
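As a rough illustration of how such a nested layout can be traversed, consider the sketch below. This is not the Sherlock code itself; the function name and the case-sensitive "*.JPG" pattern are assumptions.

```python
from pathlib import Path

def collect_image_folders(master_folder):
    """Group JPG images by their containing folder, at any depth
    below the master folder."""
    folders = {}
    for img in Path(master_folder).rglob("*.JPG"):
        folders.setdefault(img.parent, []).append(img)
    # Sort each folder numerically, so 2.JPG sorts before 0010.JPG
    # regardless of leading zeros (file stems are assumed numeric).
    for imgs in folders.values():
        imgs.sort(key=lambda p: int(p.stem))
    return folders

# e.g. collect_image_folders("MasterFolder") maps each folder such as
# .../MasterFolder/Aberystwyth/Constitution Hill to its ordered images.
```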
### Output Format

The code can produce different kinds of outputs, depending on the needs of the user.

The primary output is, for each folder of images, a folder of CSV files labelled "CSV_Outputs[Runcode]". These files are "Potential_Animals" (a list of all images that the code believes may contain animals); "Unlikely_Animals" (a list of all images that the code believes do not contain an animal); "Errors" (a list of images that could not be processed); "Close_to_Animals" (a list of images whose neighbours, in terms of image number and time, were identified as potential animals); "Overall_Animals" (the combination of the lists in "Potential_Animals" and "Close_to_Animals"); and "False_Negatives" (if the code is in "testing mode", explained below, a list of all the false negatives).

It is also possible to have the code write any potential animal images into a new folder, called "PotentialAnimals[Runcode]", in which copies of the images identified as animals, along with those close to them, are written with red boxes indicating the locations where animals were thought to be. Note that this does not edit the original images in any way. This output can be turned off if desired (as these images will be reasonably large files).

Finally, the code can be put into "testing mode". This is done by changing one of the parameters (explained at the start of the code file), and it compares the results of the code with human-labelled results. A list of image numbers containing animals should be created, called "Animaldata.csv", and a list of image numbers not containing animals should be created, called "nonAnimaldata.csv". These should be saved in the same folder as the images; their inclusion allows the code to create CSV outputs that compare the two sets of results. The code can also compare results according to a number of different characteristics of the image, such as time and location, provided that the format matches that of the example CSV included in this GitHub repository.
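For intuition, a comparison of that kind can be sketched as follows. This is an illustration rather than Sherlock's actual testing-mode code: the column name "image" and the output path are hypothetical.

```python
import pandas as pd

# Hypothetical column name and output path, for illustration only.
truth = set(pd.read_csv("Animaldata.csv")["image"])
empty = set(pd.read_csv("nonAnimaldata.csv")["image"])
flagged = set(pd.read_csv("CSV_OutputsRun1/Overall_Animals.csv")["image"])

tp = len(truth & flagged)   # animal images correctly flagged
fn = len(truth - flagged)   # animal images missed (false negatives)
tn = len(empty - flagged)   # empty images correctly removed
fp = len(empty & flagged)   # empty images wrongly flagged

print(f"Sensitivity: {tp / (tp + fn):.1%}")
print(f"Empty images removed: {tn / (tn + fp):.1%}")
```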
### Installing Python, Anaconda and Jupyter Lab

Anaconda (and thus Python) can be installed by visiting:

- Windows: https://docs.anaconda.com/anaconda/install/windows/
- Mac: https://docs.anaconda.com/anaconda/install/mac-os/
- Linux: https://docs.anaconda.com/anaconda/install/linux/

Once Anaconda has been installed, you should be able to find "Anaconda Prompt"; open it to get a command window. Type

`conda install -c conda-forge jupyterlab`

into this window and press enter to install Jupyter Lab.

### Opening Jupyter Lab

To open Jupyter Lab, open Anaconda Prompt, type in

`jupyter lab`

and then press enter. It should open in your web browser (note: you do not need an Internet connection to do this, or to run any of this code, except for the section immediately following).

### Opening Sherlock

You can copy the code for Sherlock onto your computer by opening a new notebook (by clicking the "Python 3" button below "Notebook" in the right-hand window). There should then be a textbox with your cursor inside it. If you have Git installed on your computer (it can be installed from https://github.com/git-guides/install-git), you can copy the code for Sherlock by typing

`!git clone https://github.com/mpenn114/Sherlock`

into this textbox and then pressing the run button (in the row of buttons above the textbox; it looks like a "play" button). You should then be able to see a folder called Sherlock on the left-hand side of the screen. Double-click on this folder to open it, and then double-click on the file "Sherlock.ipynb" to open the notebook for Sherlock. It should appear in the right-hand window.

Otherwise, you can download the code from this repository as a zip file. After extracting it (right-click on the zip file and click "Extract All"), copy the notebook "Sherlock.ipynb" to the folder that Python opens with when you start Jupyter Lab. On Windows, this will generally be "C:/Users/[Your username]".

The notebook contains all the information needed to run the code at the top. The actual code is below this initial text and, once you are happy with the inputs and parameters, you can run it by clicking somewhere on it and then pressing the "run" button.

Note: If you are using an old Mac operating system (iOS 13 or earlier) then you may get an error when running the code. This can be fixed by removing the line

`!pip install opencv-python`

from the code and replacing it with the two lines

`!pip uninstall opencv-python -y`

`!pip install opencv-python==4.4.0.46`

The code should then run without errors.

If you are using a camera with an unusual metadata format, you may need to use the Sherlock_legacy.ipynb file instead. This detects manually whether an image was taken during the day or at night, rather than using the image metadata. Do feel free to log this as an issue if the main version doesn't work and we will seek to add a fix!
Voluntary saccadic eye movements in humans studied with a double-cue paradigm
In the classic double-step paradigm, subjects are required to make a saccade to a visual target that is briefly presented at one location and then shifted to a new location before the subject has responded. The saccades in this situation are "reflexive" in that they are made in response to the appearance of the target itself. In the present experiments we adapted the double-step paradigm to study "voluntary" saccades. For this, several identical targets were always visible and subjects were given a cue indicating that they should make a saccade to one of them. This cue was then changed to indicate another of the targets before the subject had responded: the double-cue (DC) paradigm. The saccadic eye movements in our DC paradigm had many features in common with those in the double-step paradigm, and we show that apparent differences can be attributed to the spatio-temporal arrangements of the cues/targets rather than to any intrinsic differences in the programming of these two kinds of eye movements. For example, a feature of our DC paradigm that is not seen in the usual double-step paradigm is that the second cue could cause transient delays of the initial saccade, and these delays still occurred when the second cue was reflexive, provided that it was at the fovea (as in our DC paradigm) and not in the periphery (as in the usual double-step paradigm). Thus, the critical factor for the delay was the retinal (foveal) location of the second cue/target, not whether it was cognitive or reflexive, and we argue that the second cue/target here acts as a distractor. We conclude that the DC paradigm can be used to study the programming of voluntary saccades in the same way that the double-step paradigm can be used to study reflexive saccades.
Hydrothermal vent fields and chemosynthetic biota on the world's deepest seafloor spreading centre
The Mid-Cayman spreading centre is an ultraslow-spreading ridge in the Caribbean Sea. Its extreme depth and geographic isolation from other mid-ocean ridges offer insights into the effects of pressure on hydrothermal venting, and the biogeography of vent fauna. Here we report the discovery of two hydrothermal vent fields on the Mid-Cayman spreading centre. The Von Damm Vent Field is located on the upper slopes of an oceanic core complex at a depth of 2,300 m. High-temperature venting in this off-axis setting suggests that the global incidence of vent fields may be underestimated. At a depth of 4,960 m on the Mid-Cayman spreading centre axis, the Beebe Vent Field emits copper-enriched fluids and a buoyant plume that rises 1,100 m, consistent with >400 °C venting from the world's deepest known hydrothermal system. At both sites, a new morphospecies of alvinocaridid shrimp dominates faunal assemblages, which exhibit similarities to those of Mid-Atlantic vents.