
    Model-informed classification of broadband acoustic backscatter from zooplankton in an in situ mesocosm

    Funding: The fieldwork was registered in the Research in Svalbard database (RiS ID 11578). Fieldwork and research were financed by the Arctic Field Grant Project AZKABAN-light (Norwegian Research Council project no. 322332), Deep Impact (Norwegian Research Council project no. 300333), Deeper Impact (Norwegian Research Council project no. 329305), the Marine Alliance for Science and Technology in Scotland (MASTS), the Ocean Frontier Institute (SCORE grant no. HR09011), and Glider Phase II, financed by ConocoPhillips Skandinavia AS. Geir Pedersen’s participation was co-funded by CRIMAC (Norwegian Research Council project no. 309512). Maxime Geoffroy was financially supported by the Ocean Frontier Institute of the Canada First Research Excellence Fund, the Natural Sciences and Engineering Research Council Discovery Grant Programme, the ArcticNet Network of Centres of Excellence Canada, the Research Council of Norway grant Deep Impact, and Fisheries and Oceans Canada through the Atlantic Fisheries Fund.

    Classification of zooplankton to species with broadband echosounder data could increase the taxonomic resolution of acoustic surveys and reduce the dependence on net and trawl samples for ‘ground truthing’. Supervised classification with broadband echosounder data is limited by the acquisition of the validated data required to train machine learning algorithms (‘classifiers’). We tested the hypothesis that acoustic scattering models could be used to train classifiers for remote classification of zooplankton. Three classifiers were trained with data from scattering models of four Arctic zooplankton groups (copepods, euphausiids, chaetognaths, and hydrozoans). We evaluated the classifier predictions against observations of a mixed zooplankton community in a submerged, purpose-built mesocosm (12 m³) insonified with broadband transmissions (185–255 kHz). The mesocosm was deployed from a wharf in Ny-Ålesund, Svalbard, during the Arctic polar night in January 2022. We detected 7722 tracked single targets, which were used to evaluate the classifier predictions for measured zooplankton targets. The classifiers could differentiate copepods from the other groups reasonably well, but they could not reliably differentiate euphausiids, chaetognaths, and hydrozoans because of the similarities in their modelled target spectra. We recommend that model-informed classification of zooplankton from broadband acoustic signals be used with caution until a better understanding of in situ target spectra variability is gained.
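
    The workflow described here trains classifiers on model-generated spectra and applies them to measured single-target spectra. As a hedged illustration of that idea (the abstract does not name the three classifiers, so a random forest stands in, and load_modelled_spectra() below is a hypothetical loader returning synthetic data), a classifier could be fitted to modelled target-strength spectra over the 185–255 kHz band and then used to predict the group of each measured target:

```python
# Hedged sketch: train a classifier on modelled target-strength (TS) spectra
# and apply it to measured single-target spectra. The paper does not name its
# three classifiers; a random forest stands in here, and load_modelled_spectra()
# is a hypothetical placeholder returning synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

GROUPS = ["copepod", "euphausiid", "chaetognath", "hydrozoan"]
FREQS_KHZ = np.linspace(185, 255, 71)          # broadband band used in the study

def load_modelled_spectra():
    """Placeholder: return (n_samples, n_freqs) TS spectra and group labels."""
    rng = np.random.default_rng(0)
    X = rng.normal(-90.0, 5.0, size=(400, FREQS_KHZ.size))
    y = rng.integers(0, len(GROUPS), size=400)
    return X, y

X_model, y_model = load_modelled_spectra()
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_model, y_model)                      # trained purely on model output

# Measured, tracked single targets would then be classified the same way:
X_measured = load_modelled_spectra()[0][:10]   # stand-in for real measurements
print([GROUPS[i] for i in clf.predict(X_measured)])
```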

    Development of benthic monitoring approaches for salmon aquaculture sites using machine learning, hydroacoustic data and bacterial eDNA

    Intensive caged salmon production can lead to localized perturbations of the seafloor environment where organic waste (flocculent matter) accumulates and disrupts ecological processes. As the aquaculture industry expands, the development of tools to rapidly detect changes in seafloor condition is critical. Here, we examine whether applying machine learning to two types of monitoring data could improve environmental assessments at aquaculture sites in Newfoundland. First, we apply machine learning to single beam echosounder data to detect flocculent matter at aquaculture sites over larger areas than is currently achieved using drop camera imaging. Then, we use machine learning to categorize sediments by level of disturbance based on bacterial tetranucleotide frequency distributions generated from environmental DNA. While echosounder data can detect flocculent matter with moderate success in this region, bacterial tetranucleotide frequencies are highly effective classifiers of benthic disturbance; this simplified environmental DNA-based approach could be implemented within novel aquaculture benthic monitoring pipelines.
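
    As a rough sketch of the eDNA side of this approach (not the authors' pipeline; the sequences, the three disturbance categories, and the choice of classifier below are all placeholders), tetranucleotide frequency vectors can be computed from bacterial sequences and fed to an off-the-shelf classifier of disturbance level:

```python
# Hedged sketch, not the authors' pipeline: tetranucleotide (4-mer) frequency
# vectors from bacterial eDNA sequences feed a generic classifier of sediment
# disturbance level. Sequences, labels, and the classifier are placeholders.
from itertools import product
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]   # 256 tetranucleotides
IDX = {k: i for i, k in enumerate(KMERS)}

def tetra_freqs(seq):
    """Return the normalised 4-mer frequency vector of a DNA sequence."""
    counts = np.zeros(len(KMERS))
    for i in range(len(seq) - 3):
        kmer = seq[i:i + 4]
        if kmer in IDX:                        # skip k-mers with ambiguous bases
            counts[IDX[kmer]] += 1
    return counts / max(counts.sum(), 1.0)

rng = np.random.default_rng(1)
seqs = ["".join(rng.choice(list("ACGT"), size=500)) for _ in range(60)]
X = np.array([tetra_freqs(s) for s in seqs])
y = rng.integers(0, 3, size=60)                # 3 assumed disturbance categories

clf = GradientBoostingClassifier().fit(X, y)
print(clf.predict(X[:5]))
```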

    Deep learning with self-supervision and uncertainty regularization to count fish in underwater images

    Effective conservation actions require effective population monitoring. However, accurately counting animals in the wild to inform conservation decision-making is difficult. Monitoring populations through image sampling has made data collection cheaper, wide-reaching and less intrusive, but it has created a need to process and analyse these data efficiently. Counting animals from such data is challenging, particularly when they are densely packed in noisy images. Attempting this manually is slow and expensive, while traditional computer vision methods are limited in their generalisability. Deep learning is the state-of-the-art method for many computer vision tasks, but it has yet to be properly explored for counting animals. To this end, we employ deep learning, with a density-based regression approach, to count fish in low-resolution sonar images. We introduce a large dataset of sonar videos recorded to monitor wild schools of Lebranche mullet (Mugil liza), with a subset of 500 labelled images. We utilise the abundant unlabelled data in a self-supervised task to improve the supervised counting task. For the first time in this context, by introducing uncertainty quantification, we improve model training and provide an accompanying measure of prediction uncertainty for more informed biological decision-making. Finally, we demonstrate the generalisability of our proposed counting framework by testing it on a recent benchmark dataset of high-resolution annotated underwater images from varying habitats (DeepFish). From experiments on both contrasting datasets, we demonstrate that our network outperforms the few other deep learning models implemented for solving this task. By providing an open-source framework along with training data, our study puts forth an efficient deep learning template for crowd counting aquatic animals, thereby contributing effective methods to assess natural populations from the ever-increasing volume of visual data.
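
    The density-based regression idea can be illustrated with a toy example (this is not the authors' network; the annotation coordinates and smoothing bandwidth below are invented): point annotations are smoothed into a density map whose integral equals the fish count, and a network is trained to regress that map from the image.

```python
# Hedged sketch of the density-map formulation (not the authors' network):
# point annotations become a smoothed density map whose integral equals the
# count; a CNN would be trained to regress this map from the sonar image.
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(points, shape, sigma=4.0):
    """Place a unit mass at each annotated fish location, then smooth."""
    dmap = np.zeros(shape, dtype=np.float32)
    for r, c in points:
        dmap[int(r), int(c)] += 1.0
    return gaussian_filter(dmap, sigma=sigma)

annotations = [(10, 12), (30, 40), (50, 22)]   # 3 invented fish positions
target = density_map(annotations, (64, 64))

# The count is recovered by summing the map; at inference time the model's
# predicted density map is summed in exactly the same way.
print(round(float(target.sum()), 2))           # ~3.0
```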

    The assessment and development of methods in (spatial) sound ecology

    As vital ecosystems across the globe come under uncharted pressure from climate change and industrial land use, understanding the processes driving ecosystem viability has never been more critical. Nuanced ecosystem understanding comes from well-collected field data and a wealth of associated interpretations. In recent years, the most popular methods of ecosystem monitoring have shifted from often damaging and labour-intensive manual data collection to automated methods of data collection and analysis. Sound ecology describes the school of research that uses information transmitted through sound to infer properties about an area's species, biodiversity, and health. In this thesis, we explore and develop state-of-the-art automated monitoring with sound, specifically relating to data storage practice and spatial acoustic recording and data analysis. In the first chapter, we explore the necessity and methods of ecosystem monitoring, focusing on acoustic monitoring, later exploring how and why sound is recorded and the current state of the art in acoustic monitoring. Chapter one concludes with us setting out the aims and overall content of the following chapters. We begin the second chapter by exploring methods used to mitigate data storage expense, a widespread issue as automated methods quickly amass vast amounts of data which can be expensive and impractical to manage. Importantly, I explain how these data management practices are often used without known consequences, something I then address. Specifically, I present evidence that the most used data reduction methods (namely compression and temporal subsetting) have a surprisingly small impact on the information content of recorded sound compared to the method of analysis. This work also adds to the increasing evidence that deep learning-based methods of environmental sound quantification are more powerful and robust to experimental variation than more traditional acoustic indices. In the latter chapters, I focus on using multichannel acoustic recording for sound-source localisation. Knowing where a sound originated has a range of ecological uses, including counting individuals, locating threats, and monitoring habitat use. While an exciting application of acoustic technology, spatial acoustics has had minimal uptake owing to the expense, impracticality and inaccessibility of equipment. In my third chapter, I introduce MAARU (Multichannel Acoustic Autonomous Recording Unit), a low-cost, easy-to-use and accessible solution to this problem. I explain the software and hardware necessary for spatial recording and show how MAARU can be used to localise the direction of a sound to within ±10°. In the fourth chapter, I explore how MAARU devices deployed in the field can be used for enhanced ecosystem monitoring by spatially clustering individuals by calling direction, for more accurate abundance approximations and crude species-specific habitat usage monitoring. Most literature on spatial acoustics cites the need for many accurately synced recording devices over an area; this chapter provides the first evidence of advances made with just one recorder. Finally, I conclude this thesis by restating my aims and discussing my success in achieving them. Specifically, in the thesis’ conclusion, I reiterate the contributions made to the field as a direct result of this work and outline some possible development avenues.
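
    One common way to estimate the direction of a sound with a multichannel recorder is to measure the inter-channel time delay and convert it to a bearing. The sketch below uses GCC-PHAT on synthetic two-channel data and is only an assumed stand-in for MAARU's actual processing; the sample rate, microphone spacing, and signals are invented.

```python
# Hedged sketch of one common direction-of-arrival recipe (GCC-PHAT time-delay
# estimation between a microphone pair); MAARU's actual processing may differ,
# and the sample rate, mic spacing, and signals below are invented.
import numpy as np

def gcc_phat_delay(sig, ref, fs):
    """Estimate the delay (s) of `sig` relative to `ref` via GCC-PHAT."""
    n = sig.size + ref.size
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

fs, c, d = 48_000, 343.0, 0.10                 # Hz, speed of sound (m/s), mic spacing (m)
rng = np.random.default_rng(0)
src = rng.normal(size=4800)                    # broadband burst, 0.1 s
ch1, ch2 = src, np.roll(src, 5)                # channel 2 lags by 5 samples

tau = gcc_phat_delay(ch2, ch1, fs)
bearing = np.degrees(np.arcsin(np.clip(c * tau / d, -1.0, 1.0)))
print(f"estimated bearing: {bearing:.1f} degrees off broadside")
```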

    Semi-supervised segmentation for coastal monitoring seagrass using RPA imagery

    Intertidal seagrass plays a vital role in estimating the overall health and dynamics of coastal environments due to its interaction with tidal changes. However, most seagrass habitats around the globe have been in steady decline due to human impacts, disturbing the already delicate balance in the environmental conditions that sustain seagrass. Miniaturization of multi-spectral sensors has facilitated very high resolution mapping of seagrass meadows, which significantly improves the potential for ecologists to monitor changes. In this study, two analytical approaches for classifying intertidal seagrass habitats are compared: Object-based Image Analysis (OBIA) and Fully Convolutional Neural Networks (FCNNs). Both methods produce pixel-wise classifications in order to create segmented maps. FCNNs are an emerging set of algorithms within Deep Learning, whereas OBIA has been a prominent solution within this field, with many studies leveraging in-situ data and multiresolution segmentation to create habitat maps. This work demonstrates the utility of FCNNs in a semi-supervised setting to map seagrass and other coastal features from an optical drone survey conducted at Budle Bay, Northumberland, England. Semi-supervision is also an emerging area within Deep Learning, with the practical benefit of achieving state-of-the-art results using only a subset of labelled data; this is especially beneficial for remote sensing applications, where in-situ data is an expensive commodity. Our results show that FCNNs have performance comparable to the standard OBIA method used by ecologists.
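
    A minimal pseudo-labelling sketch conveys the semi-supervised ingredient; the study's actual FCNN architecture, spectral bands, class set, and training regime are not reproduced here, and all tensors below are random stand-ins for multispectral drone imagery. The pattern is: train on the labelled subset, keep only high-confidence predictions on unlabelled imagery as pseudo-labels, then retrain on both.

```python
# Hedged sketch of pseudo-labelling for semi-supervised segmentation; the
# study's FCNN architecture, bands, classes, and training schedule are not
# reproduced, and all tensors are random stand-ins for multispectral imagery.
import torch
import torch.nn as nn

N_CLASSES, N_BANDS = 4, 5                      # assumed coastal classes and bands
model = nn.Sequential(                         # toy fully convolutional network
    nn.Conv2d(N_BANDS, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, N_CLASSES, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss(ignore_index=-1)      # -1 marks ignored pixels

labelled_x = torch.randn(8, N_BANDS, 64, 64)
labelled_y = torch.randint(0, N_CLASSES, (8, 64, 64))
unlabelled_x = torch.randn(8, N_BANDS, 64, 64)

for _ in range(100):                           # 1) supervised warm-up
    opt.zero_grad()
    ce(model(labelled_x), labelled_y).backward()
    opt.step()

with torch.no_grad():                          # 2) pseudo-label confident pixels
    conf, pseudo_y = model(unlabelled_x).softmax(dim=1).max(dim=1)
    pseudo_y[conf < 0.9] = -1

for _ in range(100):                           # 3) retrain on both sets
    opt.zero_grad()
    loss = ce(model(labelled_x), labelled_y)
    if (pseudo_y != -1).any():                 # guard against an empty pseudo set
        loss = loss + ce(model(unlabelled_x), pseudo_y)
    loss.backward()
    opt.step()
```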

    Machine Learning Methods with Noisy, Incomplete or Small Datasets

    In many machine learning applications, available datasets are sometimes incomplete, noisy or affected by artifacts. In supervised scenarios, label information may be of low quality, which can include unbalanced training sets, noisy labels and other problems. Moreover, in practice, it is very common that the available data samples are not sufficient to derive useful supervised or unsupervised classifiers. All these issues are commonly referred to as the low-quality data problem. This book collects novel contributions on machine learning methods for low-quality datasets, to contribute to the dissemination of new ideas to solve this challenging problem and to provide clear examples of application in real scenarios.

    Machine Learning in Image Analysis and Pattern Recognition

    This book charts the progress in applying machine learning, including deep learning, to a broad range of image analysis and pattern recognition problems and applications. In it, we have assembled original research articles making unique contributions to the theory, methodology and applications of machine learning in image analysis and pattern recognition.

    Seabed biodiversity on the continental shelf of the Great Barrier Reef World Heritage Area

    Final report to the Cooperative Research Centre for the Great Barrier Reef World Heritage Area.

    Autonomous Exploration of Large-Scale Natural Environments

    This thesis addresses issues that arise when using robotic platforms to explore large-scale, natural environments. Two main problems are identified: the volume of data collected by autonomous platforms and the complexity of planning surveys in large environments. Autonomous platforms are able to rapidly accumulate large data sets. The volume of data that must be processed is often too large for human experts to analyse exhaustively in a practical amount of time or in a cost-effective manner. This burden can create a bottleneck in the process of converting observations into scientifically relevant data. Although autonomous platforms can collect precisely navigated, high-resolution data, they are typically limited by finite battery capacities, data storage and computational resources. Deployments are also limited by project budgets and time frames. These constraints make it impractical to sample large environments exhaustively. To use the limited resources effectively, trajectories which maximise the amount of information gathered from the environment must be designed. This thesis addresses these problems. Three primary contributions are presented: a new classifier designed to accept probabilistic training targets rather than discrete training targets; a semi-autonomous pipeline for creating models of the environment; and an offline method for autonomously planning surveys. These contributions allow large data sets to be processed with minimal human intervention and promote efficient allocation of resources. In this thesis, environmental models are established by learning the correlation between data extracted from a digital elevation model (DEM) of the seafloor and habitat categories derived from in-situ images. The DEM of the seafloor is collected using ship-borne multibeam sonar and the in-situ images are collected using an autonomous underwater vehicle (AUV). While the thesis specifically focuses on mapping and exploring marine habitats with an AUV, the research applies equally to other applications such as aerial and terrestrial environmental monitoring and planetary exploration.
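
    The first contribution, a classifier that accepts probabilistic rather than discrete training targets, can be illustrated with a small sketch. This is not the thesis' classifier: the features below stand in for DEM-derived terrain descriptors and the targets for habitat-class probabilities estimated from AUV imagery, and the model is plain softmax regression trained against soft label distributions.

```python
# Hedged sketch of training against probabilistic (soft) targets rather than
# discrete classes: softmax regression minimising soft-label cross-entropy.
# Features stand in for DEM-derived terrain descriptors and targets for
# habitat-class probabilities from AUV imagery; this is not the thesis' model.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 6, 4                            # samples, DEM features, habitat classes
X = rng.normal(size=(n, d))
Y = rng.dirichlet(np.ones(k), size=n)          # probabilistic targets, rows sum to 1

def softmax(logits):
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    return P / P.sum(axis=1, keepdims=True)

W = np.zeros((d, k))
for _ in range(500):                           # plain batch gradient descent
    P = softmax(X @ W)
    W -= 0.5 * (X.T @ (P - Y) / n)             # gradient of the soft cross-entropy

P = softmax(X @ W)
print("mean cross-entropy:", float(-(Y * np.log(P + 1e-12)).sum(axis=1).mean()))
```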

    Investigating summer thermal stratification in Lake Ontario

    Summer thermal stratification in Lake Ontario is simulated using the 3D hydrodynamic model Environmental Fluid Dynamics Code (EFDC). Summer temperature differences establish strong vertical density gradients (the thermocline) between the epilimnion and hypolimnion. Capturing stratification and thermocline formation has been a challenge in modeling the Great Lakes. Deviating from EFDC's original Mellor-Yamada (1982) vertical mixing scheme, we have implemented a one-dimensional vertical model that uses different eddy diffusivity formulations above and below the thermocline (Vincon-Leite, 1991; Vincon-Leite et al., 2014). The model is forced with hourly meteorological data from weather stations around the lake and flow data for the Niagara and St. Lawrence rivers, and lake bathymetry is interpolated on a 2-km grid. The model has 20 vertical layers following sigma vertical coordinates. The sensitivity of the model to the spacing of the vertical layers is thoroughly investigated. The model has been calibrated for appropriate solar radiation coefficients and horizontal mixing coefficients. Overall, the newly implemented diffusivity algorithm shows some success in capturing the thermal stratification, with RMSE values between 2 and 3 °C. Calibration of the vertical mixing coefficients is under investigation to further improve the simulated thermal stratification.
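
    The idea of using different eddy diffusivities above and below the thermocline can be sketched with a one-dimensional vertical diffusion column. This is not the EFDC or Vincon-Leite formulation; the diffusivities, thermocline depth, initial profile, and boundary treatment below are invented for illustration.

```python
# Hedged sketch of a 1-D vertical heat-diffusion column with distinct eddy
# diffusivities above and below the thermocline; this is not the EFDC or
# Vincon-Leite formulation, and every coefficient here is invented.
import numpy as np

dz, dt = 1.0, 60.0                             # grid spacing (m), time step (s)
depth = np.arange(0.0, 100.0 + dz, dz)         # 0-100 m water column
T = 20.0 - 14.0 / (1.0 + np.exp(-(depth - 15.0) / 2.0))   # warm epilimnion over cold hypolimnion

thermocline_depth = 15.0                       # assumed thermocline depth (m)
K_epi, K_hypo = 1e-3, 1e-5                     # eddy diffusivities (m^2/s), assumed
K = np.where(depth < thermocline_depth, K_epi, K_hypo)

for _ in range(int(86_400 / dt)):              # integrate one model day explicitly
    K_face = 0.5 * (K[1:] + K[:-1])            # diffusivity at cell faces
    flux = -K_face * np.diff(T) / dz           # Fickian heat flux between cells
    dTdt = np.zeros_like(T)
    dTdt[1:-1] = -np.diff(flux) / dz           # interior cells only (crude fixed BCs)
    T = T + dt * dTdt

print(f"surface {T[0]:.1f} degC, 20 m {T[20]:.1f} degC, bottom {T[-1]:.1f} degC")
```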