
    An Analysis of Altitude, Citizen Science and a Convolutional Neural Network Feedback Loop on Object Detection in Unmanned Aerial Systems

    Using automated processes to detect wildlife in uncontrolled outdoor imagery in the field of wildlife ecology is a challenging task. In imagery provided by Unmanned Aerial Systems (UAS), this is especially true where individuals are small and visually similar to background substrates. To address these challenges, this work presents an automated feedback loop which can operate on large-scale imagery, such as UAS-generated orthomosaics, to train convolutional neural networks (CNNs) with extremely unbalanced class sizes. This feedback loop was used to help train CNNs using imagery classified by both expert biologists and citizen scientists at the Wildlife@Home project. Utilizing the feedback loop dramatically reduced population count error rates from previously published work: from +150% to -3.93% on citizen scientist training data and from +88% to +5.24% on expert training data. The system developed was then used to investigate the effect of altitude on CNN predictions. The training dataset was split into three subsets depending on the altitude of the imagery (75 m, 100 m, and 120 m). While the lowest altitude was shown to provide the best predictions of the three (+11.46%), the aggregate dataset still provided the best results (-3.93%), indicating that there is greater benefit to be gained from a large dataset at this scale, and a potential benefit to having training data from multiple altitudes. This article is an extended version of “Detecting Wildlife in Unmanned Aerial Systems Imagery using Convolutional Neural Networks Trained with an Automated Feedback Loop”, published in the proceedings of the 18th International Conference on Computational Science (ICCS 2018).
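
    The loop's implementation is not spelled out in this abstract, but its stated purpose (retraining CNNs against an extremely unbalanced background class) matches a hard-negative-mining scheme. The sketch below is a hypothetical illustration of that idea, substituting a linear classifier on synthetic patch features for a CNN on orthomosaic tiles; every name, size, and threshold is an assumption, not a value from the paper.

    ```python
    # Hypothetical feedback loop as hard-negative mining: confident false
    # positives from the background pool are fed back as new negatives.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    wildlife = rng.normal(1.0, 0.5, size=(50, 8))          # rare positive class
    background_pool = rng.normal(0.0, 0.5, size=(5000, 8))  # abundant background

    X = np.vstack([wildlife, background_pool[:50]])         # balanced seed set
    y = np.array([1] * 50 + [0] * 50)
    clf = SGDClassifier(loss="log_loss", random_state=0)

    for round_num in range(3):
        clf.fit(X, y)
        # Scan the background pool; high scores there are false positives.
        scores = clf.decision_function(background_pool)
        hard_negatives = background_pool[np.argsort(scores)[-100:]]
        X = np.vstack([X, hard_negatives])
        y = np.concatenate([y, np.zeros(100, dtype=int)])
        print(f"round {round_num}: training set grew to {len(y)} examples")
    ```

    Growing the negative set with the model's own confident mistakes is one way such a loop can counteract the class imbalance the abstract describes.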

    Using Citizen Scientists To Inform Machine Learning Algorithms To Automate The Detection Of Species In Ecological Imagery

    Modern data collection techniques used by ecologists have created a deluge of data that is becoming increasingly difficult to store, filter, and analyze in an efficient and timely manner. In just two summers, over 65,000 unmanned aerial system (UAS) images were collected, comprising several terabytes (TB) of data that were reviewed by citizen scientists to generate inputs for machine learning algorithms. Uncontrolled conditions and the small size of target species relative to the background further increase the difficulty of manually cataloging the images. To assist with locating and identifying snow geese in the UAS images, a citizen science web portal was created as part of Wildlife@Home. It is demonstrated that aggregate citizen scientist observations are similar in quality to observations made by trained experts and can be used to train convolutional neural networks (CNNs) to automate the detection of species in the imagery. Using a dataset comprising the aggregate observations produces consistently better results than datasets consisting of observations from a single altitude, indicating that more numerous but slightly variable observations are preferable to more consistent but less numerous observations. The framework developed requires system administrators to manually run scripts to populate the database with new images; however, it can be extended to allow researchers to create their own projects, upload new images, and download data for CNN training.
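
    This abstract does not detail how overlapping volunteer observations are reconciled; the sketch below shows one plausible aggregation scheme, clustering nearby point marks and keeping locations that enough volunteers agree on. The matching radius, vote threshold, and data layout are assumptions, not Wildlife@Home's actual parameters.

    ```python
    # Hypothetical consensus aggregation of citizen-scientist point marks.
    import numpy as np
    from scipy.spatial import cKDTree

    def aggregate_observations(marks_per_user, radius=15.0, min_votes=3):
        """Greedily cluster marks and keep clusters with enough votes."""
        points = np.vstack(marks_per_user)
        tree = cKDTree(points)
        consensus, used = [], set()
        for i, neighbors in enumerate(tree.query_ball_point(points, r=radius)):
            if i in used:
                continue
            used.update(neighbors)
            if len(neighbors) >= min_votes:
                consensus.append(points[neighbors].mean(axis=0))
        return np.array(consensus)

    # Three volunteers mark roughly the same goose; a fourth mark is an outlier.
    marks = [np.array([[100.0, 100.0]]), np.array([[103.0, 98.0]]),
             np.array([[99.0, 101.0]]), np.array([[400.0, 50.0]])]
    print(aggregate_observations(marks))  # one consensus point near (100, 100)
    ```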

    Training Convolutional Neural Networks Using An Automated Feedback Loop To Estimate The Population Of Avian Species

    Using automated processes to detect wildlife in uncontrolled outdoor imagery in the field of wildlife ecology is a challenging task. This is especially true in imagery provided by an Unmanned Aerial System (UAS), where the relative size of wildlife is small and visually similar to its background. In the UAS imagery collected by the Wildlife@Home project, the data are also extremely unbalanced, with less than 1% of the area in the imagery containing wildlife. To tackle these challenges, the Wildlife@Home project has employed citizen scientists and trained experts to go through collected UAS imagery and classify it. Classified data are used as inputs to convolutional neural networks (CNNs) which seek to automatically mark which areas of the imagery contain wildlife. The output of the CNN is then passed to a blob counter which returns a population estimate for the image. A feedback loop was developed to help train the CNNs to better differentiate between the wildlife and the visually similar background, and to deal with the disparate amount of wildlife training images versus background training images. When using the feedback loop and citizen scientist provided data, population estimates by the CNN and blob counter are within 3.93% of the manual count by the field biologists. When expert provided data are used, the estimates are within 5.24%. This is improved from 150% and 88% error in previous work, which did not employ a feedback loop, for the citizen science and expert data, respectively. Citizen scientist data worked better than expert data in the current work, potentially because a matching algorithm was used on the citizen scientist data but not on the expert data.
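
    The final stage of the pipeline is described concretely enough to sketch: threshold the CNN's per-pixel output, count connected blobs as individuals, and compare the estimate with the manual count. The threshold, connectivity, and toy data below are assumptions, not the project's settings.

    ```python
    # Blob counting over a CNN probability map, plus signed percent error.
    import numpy as np
    from scipy import ndimage

    def estimate_population(probability_map, threshold=0.5):
        """Threshold the map and count connected components as individuals."""
        mask = probability_map > threshold
        _, num_blobs = ndimage.label(mask)
        return num_blobs

    def percent_error(estimate, manual_count):
        """Signed percent error relative to the biologists' manual count."""
        return 100.0 * (estimate - manual_count) / manual_count

    prob = np.zeros((6, 6))      # toy probability map with two bright blobs
    prob[1:3, 1:3] = 0.9
    prob[4, 4] = 0.8
    est = estimate_population(prob)
    print(est, percent_error(est, manual_count=2))  # -> 2 0.0
    ```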

    Fusion of visible and thermal images improves automated detection and classification of animals for drone surveys

    Visible and thermal images acquired from drones (unoccupied aircraft systems) have substantially improved animal monitoring. Combining complementary information from both image types provides a powerful approach for automating detection and classification of multiple animal species to augment drone surveys. We compared eight image fusion methods using thermal and visible drone images combined with two supervised deep learning models to evaluate the detection and classification of white-tailed deer (Odocoileus virginianus), domestic cow (Bos taurus), and domestic horse (Equus caballus). We classified visible and thermal images separately and compared them with the results of image fusion. Fused images provided minimal improvement for cows and horses compared to visible images alone, likely because the size, shape, and color of these species made them conspicuous against the background. For white-tailed deer, which were typically cryptic against their backgrounds and often in shadows in visible images, the added information from thermal images improved detection and classification in fusion methods from 15 to 85%. Our results suggest that image fusion is ideal for surveying animals inconspicuous against their backgrounds, and our approach requires few image pairs for training compared to typical machine-learning methods. We discuss computational and field considerations to improve drone surveys using our fusion approach.
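
    The eight fusion methods are not enumerated in this abstract; as a concrete stand-in, the sketch below shows the simplest family member, a pixel-level weighted blend of co-registered images. The blend weight and array conventions are illustrative assumptions.

    ```python
    # Hypothetical pixel-level fusion: weighted blend of RGB and thermal.
    import numpy as np

    def fuse_visible_thermal(visible, thermal, alpha=0.7):
        """Blend an (H, W, 3) RGB image in [0, 1] with an (H, W) thermal band."""
        thermal_rgb = np.repeat(thermal[..., None], 3, axis=2)
        return alpha * visible + (1.0 - alpha) * thermal_rgb

    vis = np.random.rand(4, 4, 3)       # stand-in for a registered RGB tile
    therm = np.random.rand(4, 4)        # stand-in for a normalized thermal tile
    print(fuse_visible_thermal(vis, therm).shape)  # (4, 4, 3)
    ```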

    A review of deep learning techniques for detecting animals in aerial and satellite images

    Deep learning is an effective machine learning method that in recent years has been successfully applied to detect and monitor species populations in remotely sensed data. This study aims to provide a systematic literature review of current applications of deep learning methods for animal detection in aerial and satellite images. We categorized the methods in collated publications into image level, point level, bounding-box level, instance segmentation level, and specific information level. The statistical results show that YOLO, Faster R-CNN, U-Net, and ResNet are the most used neural network structures. The main challenges associated with the use of these deep learning methods are imbalanced datasets, small samples, small objects, image annotation methods, image background, animal counting, model accuracy assessment, and uncertainty estimation. We explored possible solutions, including the selection of sample annotation methods, optimizing positive or negative samples, using weakly and self-supervised learning methods, and selecting or developing more suitable network structures. Future research trends we identified are video-based detection, very high-resolution satellite image-based detection, multiple species detection, new annotation methods, and the development of specialized network structures and large foundation models. We discuss existing research attempts as well as personal perspectives on these possible solutions and future trends.
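
    As one concrete illustration of the imbalanced-dataset challenge listed above, a common remedy (not attributed to any particular reviewed paper) is weighting the loss by inverse class frequency. A minimal PyTorch sketch with invented counts:

    ```python
    # Inverse-frequency class weights for an imbalanced two-class problem.
    import torch
    import torch.nn as nn

    class_counts = torch.tensor([9500.0, 500.0])  # background vs. animal (invented)
    weights = class_counts.sum() / (len(class_counts) * class_counts)
    criterion = nn.CrossEntropyLoss(weight=weights)

    logits = torch.randn(8, 2)                    # a toy batch of predictions
    targets = torch.randint(0, 2, (8,))
    print(criterion(logits, targets))             # rare-class errors cost ~19x more
    ```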

    Behavioral Responses Of Breeding Ducks To Unmanned Aerial Vehicle Surveys And Best Practices For Breeding Waterfowl Surveys Using Unmanned Aerial Vehicles

    Unmanned aerial vehicles (UAVs) have become a popular wildlife survey tool. As such, biologists are exploring the use of UAVs for surveying waterfowl. The most cited benefit of using UAVs over traditional methods is the idea of reduced disturbance, but this has had limited formal evaluation across species. We conducted UAV surveys with associated behavioral observations of ducks on wetlands and on nests during the 2019–2020 breeding seasons. We found species-specific behaviors among blue-winged teal (Spatula discors), northern shoveler (Spatula clypeata), and gadwall (Mareca strepera), including ducks noticing the aircraft, but reactions were generally weaker than those to traditional ground approaches, suggesting that, as the technology improves survey efficiency, UAVs may serve as an alternative tool for surveying breeding ducks.

    Innovations in Camera Trapping Technology and Approaches: The Integration of Citizen Science and Artificial Intelligence

    Camera trapping has become an increasingly reliable and mainstream tool for surveying a diversity of wildlife species. Concurrent with this has been an increasing effort to involve the wider public in the research process, in an approach known as ‘citizen science’. To date, millions of people have contributed to research across a wide variety of disciplines as a result. Although their value for public engagement was recognised early on, camera traps were initially ill-suited for citizen science. As camera trap technology has evolved, cameras have become more user-friendly, and the enormous quantities of data they now collect have led researchers to seek assistance in classifying footage. This has made camera trap research a prime candidate for citizen science, as reflected by the large number of camera trap projects now integrating public participation. Researchers are also turning to Artificial Intelligence (AI) to assist with classification of footage. Although this rapidly advancing field is already proving a useful tool, accuracy is variable, and AI does not provide the social and engagement benefits associated with citizen science approaches. As a solution, we propose greater efforts to combine citizen science with AI to improve classification accuracy and efficiency while maintaining public involvement.
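
    Purely as an illustration of the proposed combination, one simple design is confidence-based triage: the model auto-classifies footage it is sure about and routes the rest to volunteers. The threshold and record format below are hypothetical.

    ```python
    # Hypothetical triage: keep confident AI classifications, send the rest
    # to citizen scientists for review.
    def triage(images, model_scores, confidence=0.9):
        """Split images into auto-accepted and human-review queues."""
        auto, review = [], []
        for image, scores in zip(images, model_scores):
            (auto if max(scores.values()) >= confidence else review).append(image)
        return auto, review

    scores = [{"deer": 0.97, "empty": 0.03}, {"deer": 0.55, "boar": 0.45}]
    auto, review = triage(["img1.jpg", "img2.jpg"], scores)
    print(auto, review)  # ['img1.jpg'] ['img2.jpg']
    ```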

    Feature Papers of Drones - Volume II

    The present book is divided into two volumes (Volume I: articles 1–23; Volume II: articles 24–54), which compile the articles and communications submitted to the Topical Collection ”Feature Papers of Drones” during the years 2020 to 2022, describing novel or cutting-edge designs, developments, and/or applications of unmanned vehicles (drones). Articles 24–41 focus on drone applications, with an emphasis on two types. First, those related to agriculture and forestry (articles 24–35), where drone applications outnumber all others; these articles review the latest research and future directions for precision agriculture, vegetation monitoring, change monitoring, forestry management, and forest fires. Second, articles 36–41 address water and marine applications of drones for ecological and conservation-related purposes, with emphasis on the monitoring of water resources and habitat monitoring. Finally, articles 42–54 look at just a few of the huge variety of potential applications of civil drones from different points of view, including the following: the social acceptance of drone operations in urban areas and its influential factors; 3D reconstruction applications; sensor technologies to either improve the performance of existing applications or open up new working areas; and machine and deep learning development.

    Using object-based image analysis to detect laughing gull nests

    Remote sensing has long been used to study wildlife; however, manual methods of detecting wildlife in aerial imagery are often time-consuming and prone to human error, and newer computer vision techniques have not yet been extensively applied to wildlife surveys. We used the object-based image analysis (OBIA) software eCognition to detect laughing gull (Leucophaeus atricilla) nests in Jamaica Bay as part of an ongoing monitoring effort at John F. Kennedy International Airport. Our technique uses a combination of high-resolution 4-band aerial imagery captured via manned aircraft with a multispectral UltraCam Falcon M2 camera, LiDAR point cloud data, and land cover data derived from a bathymetric LiDAR point cloud to classify and extract laughing gull nests. Our ruleset uses the site (topographic position of nest objects), tone (spectral characteristics of nest objects), shape, size, and association (nearby objects commonly found with the objects of interest that help identify them) elements of image interpretation, as well as NDVI and a sublevel object examination, to classify and extract nests. The ruleset achieves a producer’s accuracy of 98%, a user’s accuracy of 65%, and a kappa of 0.696, indicating that it extracts a majority of the nests in the imagery while reducing errors of commission to only 35% of the final results. The remaining errors of commission are difficult for the software to differentiate without also reducing the number of nests successfully extracted, and are best addressed by manual verification of the output as part of a semi-automated workflow in which the OBIA completes the initial search of the imagery and the results are then systematically verified by the user to remove errors. This eliminates the need to manually search entire sets of imagery for nests, resulting in a much more efficient and less error-prone methodology than previous unassisted image interpretation techniques. Because of the extensibility of OBIA software and the increasing availability of imagery due to small unmanned aircraft systems (sUAS), our methodology and its benefits have great potential for adaptation to other species surveyed using aerial imagery to enhance wildlife population monitoring.
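
    For readers unfamiliar with the reported metrics, producer’s accuracy, user’s accuracy, and Cohen’s kappa all follow from a 2x2 confusion matrix. The sketch below uses invented counts to exercise the formulas; it does not reproduce the study’s data.

    ```python
    # Producer's accuracy, user's accuracy, and Cohen's kappa from a 2x2 matrix.
    def detection_metrics(tp, fp, fn, tn):
        n = tp + fp + fn + tn
        producers = tp / (tp + fn)        # fraction of real nests found
        users = tp / (tp + fp)            # fraction of detections that are nests
        po = (tp + tn) / n                # observed agreement
        pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
        kappa = (po - pe) / (1 - pe)      # agreement beyond chance
        return producers, users, kappa

    print(detection_metrics(tp=98, fp=53, fn=2, tn=150))
    ```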

    Partnering People with Deep Learning Systems: Human Cognitive Effects of Explanations

    Advances in “deep learning” algorithms have led to intelligent systems that provide automated classifications of unstructured data. Until recently, these systems could not provide the reasons behind a classification. This lack of “explainability” has led to resistance to applying these systems in some contexts. An intensive research and development effort to make such systems more transparent and interpretable has proposed and developed multiple types of explanation to address this challenge. Relatively little research has been conducted into how humans process these explanations. Theories and measures were selected from areas of research in social cognition: attribution of mental processes from intentional systems theory, measures of working memory demands from cognitive load theory, and self-efficacy from social cognitive theory. The task was crowdsourced damage assessment of aerial images from a natural disaster, guided by a written assessment guideline. The “Wizard of Oz” method was used to generate the damage assessment output of a simulated agent. The output and explanations contained errors consistent with transferring a deep learning system to a new disaster event. A between-subjects experiment was conducted in which three types of natural language explanations were manipulated between conditions. Counterfactual explanations increased intrinsic cognitive load and made participants more aware of the challenges of the task. Explanations that described boundary conditions and failure modes (“hedging explanations”) decreased agreement with erroneous agent ratings without a detectable effect on cognitive load. However, these effects were not large enough to counteract decreases in self-efficacy and increases in erroneous agreement resulting from providing a causal explanation. The extraneous cognitive load generated by explanations had the strongest influence on self-efficacy in the task. Presenting all of the explanation types at the same time maximized cognitive load and agreement with erroneous simulated output. Perceived interdependence with the simulated agent was also associated with increases in self-efficacy; however, trust in the agent was not associated with differences in self-efficacy. These findings identify effects related to research areas that have developed methods to design tasks that may increase the effectiveness of explanations.