
    The Application of Advanced Technologies for Agriculture and Rangeland Management

    This project demonstrates two applications of remote sensing in agricultural and rangeland environments. In the first, an unmanned aerial system (UAS) equipped with a multi-spectral sensor was used to estimate canopy cover across four different cover crop trials at four time periods. In the second, a local database of stationary camera-trap images of wildlife was used to train a convolutional neural network to automatically catalogue images by identifying the animal in them. Both projects aimed to provide an example of how remote sensing platforms and machine learning techniques can facilitate the rapid collection and processing of large-scale field data. In both projects, methods were developed that confirm the utility of advanced remote sensing and computer vision technologies.
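    The abstract does not state which vegetation index was used, but canopy-cover estimation from multi-spectral UAS imagery is commonly done by thresholding NDVI computed from the red and near-infrared bands. A minimal sketch of that approach, with the 0.4 threshold chosen purely for illustration:

    ```python
    import numpy as np

    def canopy_cover_fraction(nir, red, ndvi_threshold=0.4):
        """Estimate canopy cover as the fraction of pixels whose NDVI
        exceeds a threshold. `nir` and `red` are reflectance arrays."""
        ndvi = (nir - red) / (nir + red + 1e-9)  # small epsilon avoids division by zero
        return float(np.mean(ndvi > ndvi_threshold))

    # Toy example: two vegetated pixels (high NIR), two bare-soil pixels.
    nir = np.array([0.6, 0.6, 0.1, 0.1])
    red = np.array([0.1, 0.1, 0.09, 0.09])
    print(canopy_cover_fraction(nir, red))  # 0.5
    ```

    Repeating this per trial plot at each of the four time periods would yield the canopy-cover time series the project describes.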

    Counting using deep learning regression gives value to ecological surveys

    Many ecological studies rely on count data and involve manually counting objects of interest, which is time-consuming and especially disadvantageous when time in the field or lab is limited. However, a growing number of studies use digital imagery, which opens opportunities to automatise counting tasks. In this study, we use machine learning to automate counting objects of interest without the need to label individual objects. By leveraging existing image-level annotations, this approach can also give value to historical data that were collected and annotated over long time series (typical of many ecological studies) without deep learning applications in mind. We demonstrate deep learning regression on two fundamentally different counting tasks: (i) daily growth rings in microscopic images of fish otoliths (i.e., hearing stones) and (ii) hauled-out seals in highly variable aerial imagery. On the otolith images, our deep learning-based regressor yields an RMSE of 3.40 day-rings and an R² of 0.92. Initial performance on the seal images is lower (RMSE of 23.46 seals and R² of 0.72), which can be attributed to a lack of images with high seal counts in the initial training set compared to the test set. We then show how to improve performance substantially (RMSE of 19.03 seals and R² of 0.77) by carefully selecting and relabelling just 100 additional training images chosen by initial model prediction discrepancy. The regression-based approach used here returns accurate counts (R² of 0.92 and 0.77 for the rings and seals, respectively), directly usable in ecological research.
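    The key idea, regressing an image-level count directly without per-object labels, can be illustrated with a deliberately simplified stand-in for the paper's CNN: synthetic images where each object adds a bright blob, and a least-squares fit from total image intensity to the count. The data and the linear head are illustrative assumptions; only the count-level supervision mirrors the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated "images": each object contributes a bright 3x3 blob; the label
    # is the image-level count only -- no per-object annotations are needed.
    def make_image(count, size=32):
        img = rng.normal(0.0, 0.05, (size, size))
        for _ in range(count):
            y, x = rng.integers(2, size - 2, 2)
            img[y - 1:y + 2, x - 1:x + 2] += 1.0
        return img

    counts = rng.integers(0, 20, 200)
    X = np.array([make_image(c).sum() for c in counts]).reshape(-1, 1)
    y = counts.astype(float)

    # Least-squares fit: total intensity -> count (stand-in for the CNN head).
    A = np.hstack([X, np.ones_like(X)])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    rmse = np.sqrt(np.mean((A @ w - y) ** 2))
    print(f"RMSE: {rmse:.2f} objects")
    ```

    A real CNN regressor replaces the hand-picked intensity feature with learned features, but the training signal is the same single number per image.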

    Perspectives in machine learning for wildlife conservation

    Data acquisition in animal ecology is rapidly accelerating due to inexpensive and accessible sensors such as smartphones, drones, satellites, audio recorders and bio-logging devices. These new technologies and the data they generate hold great potential for large-scale environmental monitoring and understanding, but are limited by current data processing approaches, which are inefficient in how they ingest, digest, and distill data into relevant information. We argue that machine learning, and especially deep learning approaches, can meet this analytic challenge to enhance our understanding, monitoring capacity, and conservation of wildlife species. Incorporating machine learning into ecological workflows could improve inputs for population and behavior models and eventually lead to integrated hybrid modeling tools, with ecological models acting as constraints for machine learning models and the latter providing data-supported insights. In essence, by combining new machine learning approaches with ecological domain knowledge, animal ecologists can capitalize on the abundance of data generated by modern sensor technologies to reliably estimate population abundances, study animal behavior and mitigate human-wildlife conflicts. To succeed, this approach will require close collaboration and cross-disciplinary education between the computer science and animal ecology communities, in order to ensure the quality of machine learning approaches and to train a new generation of data scientists in ecology and conservation.

    Testing the ability of Unmanned Aerial Systems and machine learning to map weeds at subfield scales: a test with the weed Alopecurus myosuroides (Huds.)

    BACKGROUND: It is important to map agricultural weed populations in order to improve management and maintain future food security. Advances in data collection and statistical methodology have created new opportunities to aid the mapping of weed populations. We set out to apply these new methodologies (Unmanned Aerial Systems, UAS) and statistical techniques (Convolutional Neural Networks, CNN) to the mapping of black-grass, a highly impactful weed of wheat fields in the UK. We tested this by undertaking extensive UAS and field-based mapping over the course of two years, in total collecting multispectral image data from 102 fields, of which 76 provided informative data. We used these data to construct a Vegetation Index (VI), which we used to train a custom CNN model from scratch. We undertook a suite of data engineering techniques, such as balancing and cleaning, to optimize the performance of our metrics. We also investigated the transferability of the models from one field to another. RESULTS: The results show that our data collection methodology and implementation of CNNs outperform previous approaches in the literature. We show that data engineering to account for "artefacts" in the image data increases our metrics significantly. We were not able to identify any traits shared between fields that result in high scores in our novel leave-one-field-out cross-validation (LOFO-CV) tests. CONCLUSION: We conclude that this evaluation procedure is a better estimate of real-world predictive value than those of past studies. We conclude that by engineering the image data set into discrete classes of data quality we increase the prediction accuracy over the baseline model by 5%, to an AUC of 0.825. We find that the temporal effects studied here have no effect on our ability to model weed densities.
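    The structure of leave-one-field-out cross-validation can be sketched with a toy stand-in for the CNN (a simple vegetation-index threshold classifier on synthetic data): each field is held out in turn, the model is fit on the remaining fields, and the held-out score measures transfer to an unseen field. The data, threshold model, and field count here are illustrative assumptions; only the CV structure mirrors the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic per-pixel samples: vegetation-index value -> weed present (0/1),
    # tagged with the field each sample came from.
    fields = np.repeat(np.arange(5), 40)          # 5 fields, 40 samples each
    vi = rng.normal(0.5, 0.15, fields.size)
    weed = (vi + rng.normal(0.0, 0.1, fields.size) > 0.55).astype(int)

    # Leave-one-field-out CV: fit on 4 fields, test on the 5th, so each score
    # reflects generalisation to a field the model never saw.
    scores = []
    for held_out in np.unique(fields):
        train, test = fields != held_out, fields == held_out
        # "Training": pick the VI threshold that maximises accuracy on train fields.
        candidates = np.linspace(vi[train].min(), vi[train].max(), 50)
        accs = [np.mean((vi[train] > t) == weed[train]) for t in candidates]
        t_best = candidates[int(np.argmax(accs))]
        scores.append(float(np.mean((vi[test] > t_best) == weed[test])))

    print([round(s, 2) for s in scores])
    ```

    Averaging the per-field scores gives the honest estimate of real-world predictive value that the authors argue for, as opposed to random splits that leak within-field structure into the test set.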

    JellyNet: The convolutional neural network jellyfish bloom detector

    Coastal industries face disruption on a global scale due to the threat of large blooms of jellyfish. They can decimate coastal fisheries and clog the water intake systems of desalination and nuclear power plants. This can lead to losses of revenue and power output. This paper presents JellyNet: a convolutional neural network (CNN) jellyfish bloom detection model trained on high resolution remote sensing imagery collected by unmanned aerial vehicles (UAVs). JellyNet provides the detection capability for an early (6–8 h) bloom warning system. 1539 images were collected from flights at 2 locations: Croabh Haven, UK and Pruth Bay, Canada. The training/test dataset was manually labelled and split into two classes: 'Bloom present' and 'No bloom present'. 500 × 500 pixel images were used to increase fine-grained pattern detection of the jellyfish blooms. Model testing was completed using a 75/25% training/test split, with hyperparameters selected prior to model training using a held-out validation dataset. Transfer learning using the VGG-16 architecture and a jellyfish-bloom-specific binary classifier surpassed an accuracy of 90%. Test model performance peaked at 97.5% accuracy. This paper exhibits the first example of a high-resolution, multi-sensor jellyfish bloom detection capability, with integrated robustness from two oceans to tackle real-world detection challenges.
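    The transfer-learning recipe used here, freeze a pretrained backbone and train only a new binary head, can be sketched in miniature. The "backbone" below is a fixed random projection standing in for VGG-16's convolutional layers, and the data are synthetic; both are assumptions for illustration. Only the head's weights are updated, which is the essence of the technique.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Frozen "pretrained" backbone: fixed weights, never updated during training.
    W_frozen = rng.normal(0.0, 1.0, (3072, 64))

    def features(x):
        return np.maximum(x @ W_frozen, 0.0)      # frozen ReLU features

    # Toy data: flattened "images"; bloom images have a shifted mean intensity.
    n = 200
    labels = rng.integers(0, 2, n)
    images = rng.normal(labels[:, None] * 0.5, 1.0, (n, 3072))
    F = features(images)
    F = (F - F.mean(0)) / (F.std(0) + 1e-9)       # standardise features

    # Train only the binary head: logistic regression by gradient descent.
    w, b = np.zeros(64), 0.0
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
        grad = p - labels
        w -= 0.1 * F.T @ grad / n
        b -= 0.1 * grad.mean()

    acc = float(np.mean((p > 0.5) == labels))
    print(f"training accuracy: {acc:.2f}")
    ```

    In the real model the frozen layers carry features learned from millions of natural images, which is why a small jellyfish dataset of 1539 images suffices to train an accurate classifier.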

    A Review on Deep Learning in UAV Remote Sensing

    Deep Neural Networks (DNNs) learn representations from data with impressive capability and have brought important breakthroughs in processing images, time series, natural language, audio, video, and more. In the remote sensing field, surveys and literature reviews specifically covering applications of DNN algorithms have been conducted in an attempt to summarize the volume of information produced in its subfields. Recently, Unmanned Aerial Vehicle (UAV) based applications have dominated aerial sensing research. However, a literature review combining both the "deep learning" and "UAV remote sensing" themes had not yet been conducted. The motivation for our work was to present a comprehensive review of the fundamentals of Deep Learning (DL) applied to UAV-based imagery. We focus mainly on describing classification and regression techniques used in recent applications with UAV-acquired data. A total of 232 papers published in international scientific journal databases were examined. We gathered the published material and evaluated their characteristics regarding application, sensor, and technique used. We discuss how DL presents promising results and its potential for processing tasks associated with UAV-based image data. Lastly, we project future perspectives, commenting on prominent DL paths to be explored in the UAV remote sensing field. Our review is a friendly approach to introducing and summarizing the state of the art in UAV-based image applications with DNN algorithms across diverse subfields of remote sensing, grouped into environmental, urban, and agricultural contexts.

    Integrating Technology Into Wildlife Surveys

    Technology is rapidly improving and being incorporated into field biology, with survey methods such as machine learning and uncrewed aircraft systems (UAS) headlining efforts. UAS paired with machine learning algorithms have been used to detect caribou, nesting waterfowl and seabirds, marine mammals, white-tailed deer, and more in over 19 studies within the last decade alone. Simultaneously, UAS and machine learning have also been implemented for infrastructure monitoring at wind energy facilities as wind energy construction and use have skyrocketed globally. At newly constructed wind energy facilities, impacts to wildlife are assessed, both pre-construction and for regulatory compliance, through ground surveys following the USFWS Land-based Wind Energy Guidelines. To streamline efforts at wind energy facilities and improve efficiency, safety, and accuracy in data collection, UAS platforms may be leveraged not only to monitor infrastructure but also to assess impacts to wildlife in both pre- and post-construction surveys. In this study, we train, validate, and test a machine learning approach, a convolutional neural network (CNN), in the detection and classification of bird and bat carcasses. Further, we compare the trained CNN to the currently accepted and widely used method of human ground surveyors in a simulated post-construction monitoring scenario. Last, we establish a baseline comparison of manual image review of waterfowl pair surveys with currently used ground surveys that could inform both pre-construction efforts at energy facilities and long-standing federal and state breeding waterfowl surveys. For the initial training of the CNN, we collected 1,807 images of bird and bat carcasses that were split into 80.0% training and 20.0% validation image sets. Overall detection was extremely high at 98.7%.
    We further explored the dataset by evaluating the trained CNN's ability to identify species and the variables that impacted identification. Classification of species was successful in 90.5% of images and was associated with sun angle and wind speed. Next, we performed a proof of concept to determine the utility of the trained CNN against ground surveyors across ground covers and with species that were both used in the initial training of the model and novel. Ground surveyors performed similarly to those surveying at wind energy facilities, with 63.2% detection, while the trained CNN fell short at 28.9%. Ground surveyor detection was weakly associated with carcass density within a plot and strongly with carcass size. Similarly, detection by the CNN was associated with carcass size, ground cover type, and visual obstruction by vegetation, and weakly with carcass density within a plot. Finally, we examined differences in breeding waterfowl counts between ground surveyors and UAS image reviewers and found that manual review of UAS imagery yielded similar or slightly higher counts of waterfowl. Significant training, testing, and repeated validation on novel image data sets should be performed prior to implementing survey methods reliant upon machine learning algorithms. Additionally, further research is needed to determine potential biases of counting live waterfowl in aerial imagery, such as bird movement and double counting. While our initial results show that UAS imagery and machine learning can improve upon current techniques, extensive follow-up is strongly recommended in the form of proof-of-concept studies and additional validation to confirm the utility of the application in new environments and with new species, allowing models to be generalized.
    Remotely sensed imagery paired with machine learning algorithms has the potential to expedite and standardize monitoring of wildlife at wind energy facilities and beyond, improving data streams and potentially reducing costs for the benefit of both conservation agencies and the energy industry.
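    The 80/20 training/validation split of the 1,807 carcass images can be sketched as a shuffled partition. The filenames are hypothetical; only the image count and split ratio come from the abstract.

    ```python
    import random

    random.seed(3)

    # 80/20 split of the labelled carcass-image set (filenames are hypothetical;
    # 1,807 matches the image count reported above).
    images = [f"carcass_{i:04d}.jpg" for i in range(1807)]
    random.shuffle(images)
    cut = int(0.8 * len(images))
    train, val = images[:cut], images[cut:]
    print(len(train), len(val))  # 1445 362
    ```

    Shuffling before the cut keeps the validation set representative of the whole collection; any stratification by species or ground cover would be an additional step the abstract does not detail.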

    Dashboard for collecting and depicting the marine megafauna presence

    While more and more technologies and software are being created and applied in the ocean setting, most remain costly and keep data out of reach of the wider public. Understanding marine biodiversity can be achieved in numerous ways; however, there is a lack of consensus and interoperability when depicting marine megafauna populations. Moreover, Deep Learning (DL) techniques are becoming accessible to a wider population, and there is potential in exposing them to marine biologists, involving them in public web-based dashboards that depict those data. This dissertation addresses these issues by providing an interactive dashboard capable of facilitating the classification, prediction and deeper analysis of marine species. Using state-of-the-art (SoA) Machine Learning (ML) techniques for image vision and providing interactive visualizations, this thesis seeks to offer a less cumbersome apparatus for marine biologists, who can participate further in data gathering, labelling, depiction, ecological modelling, and potential calls for action. Further, this dissertation presents the aquatic dashboard's functionality, using Human-Computer Interaction (HCI) techniques and interactive means to ease the upload, classification, and visualization of collected marine taxa, with a case study on marine megafauna imagery (e.g. whales, dolphins, sea birds, seals and turtles). As described hereinafter, marine biologists, as end users, will evaluate the proposed dashboard.