    Improving the accessibility and transferability of machine learning algorithms for identification of animals in camera trap images: MLWIC2

    Motion-activated wildlife cameras (or “camera traps”) are frequently used to remotely and noninvasively observe animals. The vast number of images collected from camera trap projects has prompted some biologists to employ machine learning algorithms to automatically recognize species in these images, or at least to filter out images that do not contain animals. These approaches are often limited by model transferability, as a model trained to recognize species from one location might not work as well for the same species in different locations. Furthermore, these methods often require advanced computational skills, making them inaccessible to many biologists. We used 3 million camera trap images from 18 studies in 10 states across the United States of America to train two deep neural networks: one that recognizes 58 species, the “species model,” and one that determines if an image is empty or if it contains an animal, the “empty-animal model.” Our species model and empty-animal model had accuracies of 96.8% and 97.3%, respectively. Furthermore, the models performed well on some out-of-sample datasets: the species model had 91% accuracy on species from Canada (accuracy range 36%–91% across all out-of-sample datasets), and the empty-animal model achieved an accuracy of 91%–94% on out-of-sample datasets from different continents. Our software addresses some of the limitations of using machine learning to classify images from camera traps. By including many species from several locations, our species model is potentially applicable to many camera trap studies in North America. We also found that our empty-animal model can facilitate removal of images without animals globally. We provide the trained models in an R package (MLWIC2: Machine Learning for Wildlife Image Classification in R), which contains Shiny applications that allow scientists with minimal programming experience to use the trained models and to train new models in six neural network architectures with varying depths.
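    MLWIC2 itself is used from R (or its bundled Shiny applications), but the empty-animal filtering step the abstract describes is easy to picture in code. Below is a minimal, language-neutral sketch in Python of that idea: a binary CNN scores each image, and images classified as empty are set aside. The checkpoint name, class order, and threshold are illustrative assumptions, not MLWIC2's actual interface.

```python
# Sketch of empty-vs-animal filtering for camera trap images.
# "empty_animal.pt" and the class order (0 = empty, 1 = animal)
# are illustrative assumptions, not MLWIC2's actual interface.
from pathlib import Path

import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# A small CNN with a two-way head: empty vs. animal.
model = models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("empty_animal.pt", map_location="cpu"))
model.eval()

def contains_animal(image_path: Path, threshold: float = 0.5) -> bool:
    """Return True when the classifier scores the image as containing an animal."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        prob_animal = torch.softmax(model(x), dim=1)[0, 1].item()
    return prob_animal >= threshold

# Keep only images flagged as containing animals; the rest can be reviewed or discarded.
keepers = [p for p in Path("camera_trap_images").glob("*.jpg") if contains_animal(p)]
```

    Raising the threshold trades missed animals for a cleaner filtered set, a choice that matters when rare species are the target of the study.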

    Case Reports 1. A Late Presentation of Loeys-Dietz Syndrome: Beware of TGFβ Receptor Mutations in Benign Joint Hypermobility

    Background: Thoracic aortic aneurysms (TAA) and dissections are not uncommon causes of sudden death in young adults. Loeys-Dietz syndrome (LDS) is a rare, recently described, autosomal dominant connective tissue disease characterized by aggressive arterial aneurysms, resulting from mutations in the transforming growth factor beta (TGFβ) receptor genes TGFBR1 and TGFBR2. Mean age at death is 26.1 years, most often due to aortic dissection. We report an unusually late presentation of LDS, diagnosed following elective surgery in a female with a long history of joint hypermobility.

    Methods: A 51-year-old Caucasian lady complained of chest pain and headache following a dural leak from spinal anaesthesia for an elective ankle arthroscopy. CT scan and echocardiography demonstrated a dilated aortic root and significant aortic regurgitation. MRA demonstrated aortic tortuosity, an infrarenal aortic aneurysm and aneurysms in the left renal and right internal mammary arteries. She underwent aortic root repair and aortic valve replacement. She had a background of long-standing joint pains secondary to hypermobility, easy bruising, unusual fracture susceptibility and mild bronchiectasis. She had one healthy child at age 32, after which she suffered a uterine prolapse. Examination revealed mild Marfanoid features. Uvula, skin and ophthalmological examinations were normal.

    Results: Fibrillin-1 testing for Marfan syndrome (MFS) was negative. Detection of a c.1270G > C (p.Gly424Arg) TGFBR2 mutation confirmed the diagnosis of LDS. Losartan was started for vascular protection.

    Conclusions: LDS is a severe inherited vasculopathy that usually presents in childhood. It is characterized by aortic root dilatation and ascending aneurysms, and carries a higher risk of aortic dissection than MFS. Clinical features overlap with MFS and Ehlers-Danlos syndrome type IV, but differentiating dysmorphic features include ocular hypertelorism, bifid uvula and cleft palate. Echocardiography and MRA or CT scanning from head to pelvis are recommended to establish the extent of vascular involvement. Management involves early surgical intervention, including early valve-sparing aortic root replacement, genetic counselling and close monitoring in pregnancy. Despite LDS being caused by loss-of-function mutations in either TGFβ receptor, paradoxical activation of TGFβ signalling is seen, suggesting that TGFβ antagonism may confer disease-modifying effects similar to those observed in MFS. TGFβ antagonism can be achieved with angiotensin antagonists such as losartan, which is able to delay aortic aneurysm development in preclinical models and in patients with MFS. Our case emphasizes the importance of timely recognition of vasculopathy syndromes in patients with hypermobility and the need for early surgical intervention. It also highlights their heterogeneity and the potential for late presentation.

    Disclosures: The authors have declared no conflicts of interest.

    Machine learning to classify animal species in camera trap images: Applications in ecology

    1. Motion-activated cameras (“camera traps”) are increasingly used in ecological and management studies for remotely observing wildlife and are amongst the most powerful tools for wildlife research. However, studies involving camera traps result in millions of images that need to be analysed, typically by visually inspecting each image, in order to extract data that can be used in ecological analyses.
    2. We trained machine learning models using convolutional neural networks with the ResNet-18 architecture and 3,367,383 images to automatically classify wildlife species from camera trap images obtained from five states across the United States (a minimal fine-tuning sketch in this spirit follows the abstract). We tested our model on an independent subset of images from the United States not seen during training and on an out-of-sample (or “out-of-distribution” in the machine learning literature) dataset of ungulate images from Canada. We also tested the ability of our model to distinguish empty images from those with animals on another out-of-sample dataset from Tanzania, containing a faunal community that was novel to the model.
    3. The trained model classified approximately 2,000 images per minute on a laptop computer with 16 gigabytes of RAM and achieved 98% accuracy at identifying species in the United States, the highest accuracy of such a model to date. Out-of-sample validation achieved 82% accuracy on the Canadian dataset, and the model correctly identified 94% of images containing an animal in the Tanzanian dataset. We provide an R package (Machine Learning for Wildlife Image Classification) that allows users to (a) use the trained model presented here and (b) train their own model using classified images of wildlife from their studies.
    4. The use of machine learning to rapidly and accurately classify wildlife in camera trap images can facilitate non-invasive sampling designs in ecological studies by reducing the burden of manually analysing images. Our R package makes these methods accessible to ecologists.
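    As referenced in point 2 above, here is a rough Python sketch of fine-tuning ResNet-18 for species classification. It is not the authors' actual training code (their tooling is the R package described in the abstract); the directory layout, hyperparameters and epoch count are illustrative assumptions.

```python
# Sketch of fine-tuning ResNet-18 for camera trap species classification.
# Expects images sorted as train_images/<species_name>/*.jpg; this layout
# and the hyperparameters are illustrative assumptions, not the authors' setup.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# One subfolder per species; class labels are inferred from folder names.
train_ds = datasets.ImageFolder("train_images", transform=train_tf)
train_dl = DataLoader(train_ds, batch_size=64, shuffle=True, num_workers=4)

# Start from ImageNet weights and swap in a species-classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):  # epoch count is a placeholder
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

    Starting from pretrained weights rather than training from scratch is what makes the reported accuracy reachable with a modest number of labelled images per species.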