
    The Hierarchic treatment of marine ecological information from spatial networks of benthic platforms

    Measuring biodiversity simultaneously in different locations, at different temporal scales, and over wide spatial scales is of strategic importance for improving our understanding of how marine ecosystems function and for conserving their biodiversity. Monitoring networks of cabled observatories, along with other docked autonomous systems (e.g., Remotely Operated Vehicles [ROVs], Autonomous Underwater Vehicles [AUVs], and crawlers), are being conceived and established at spatial scales capable of tracking energy fluxes across benthic and pelagic compartments, as well as across geographic ecotones. At the same time, optoacoustic imaging is undergoing an unprecedented expansion in marine ecological monitoring, enabling the acquisition of new biological and environmental data at appropriate spatiotemporal scales. At this stage, one of the main obstacles to the effective application of these technologies is the processing, storage, and treatment of the complex ecological information acquired. Here, we provide a conceptual overview of the technological developments in the multiparametric generation, storage, and automated hierarchic treatment of the biological and environmental information required to capture the spatiotemporal complexity of a marine ecosystem. In doing so, we present a pipeline of ecological data acquisition and processing in distinct steps amenable to automation. We also give an example of computing population biomass, community richness, and biodiversity data (as indicators of ecosystem functionality) with an Internet Operated Vehicle (a mobile crawler). Finally, we discuss the software requirements for such automated data processing at the level of cyber-infrastructures, including sensor calibration and control, data banking, and ingestion into large data portals.
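    As a rough illustration of the indicator-computation step at the end of such a pipeline, the sketch below derives species richness and Shannon diversity from a list of classifier-assigned species labels. The function name and example labels are hypothetical, not taken from the paper.

```python
import math
from collections import Counter

def richness_and_shannon(labels):
    """Species richness S and Shannon diversity H' from a list of
    per-detection species labels (e.g., classifier outputs)."""
    counts = Counter(labels)
    total = sum(counts.values())
    richness = len(counts)
    shannon = -sum((n / total) * math.log(n / total) for n in counts.values())
    return richness, shannon

# Hypothetical labels extracted from one crawler transect
labels = ["sablefish", "hagfish", "sablefish", "crab", "sablefish", "crab"]
S, H = richness_and_shannon(labels)
print(f"richness={S}, shannon={H:.3f}")
```

    Biomass estimation would require an additional per-individual size-to-weight conversion, which depends on taxon-specific allometric relationships and is omitted here.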

    A Novel Detection Refinement Technique for Accurate Identification of Nephrops norvegicus Burrows in Underwater Imagery

    With the evolution of convolutional neural networks (CNNs), object detection in the underwater environment has gained a lot of attention. However, due to the complex nature of the underwater environment, generic CNN-based object detectors still face challenges in underwater object detection. These challenges include image blurring, texture distortion, color shift, and scale variation, which result in low precision and recall rates. To tackle these challenges, we propose a detection refinement algorithm based on spatial–temporal analysis that improves the performance of generic detectors by suppressing false positives and recovering missed detections in underwater videos. In the proposed work, we use state-of-the-art deep neural networks such as Inception, ResNet50, and ResNet101 to automatically classify and detect burrows of the Norway lobster Nephrops norvegicus in underwater videos. Nephrops is one of the most important commercial species in Northeast Atlantic waters, and it lives in burrow systems that it builds itself on muddy bottoms. To evaluate the performance of the proposed framework, we collected data from the Gulf of Cadiz. The experimental results demonstrate that the proposed framework effectively suppresses false positives and recovers missed detections produced by generic detectors, with the mean average precision (mAP) gaining a 10% increase from the refinement technique.
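    The paper's exact refinement algorithm is not reproduced here, but the sketch below illustrates the general idea behind temporal-persistence filtering: a detection in one video frame is kept only if overlapping boxes also appear in nearby frames, which suppresses single-frame false positives. All names and thresholds are illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def temporally_consistent(frames, window=2, min_support=2, iou_thr=0.3):
    """Keep a detection only if boxes in neighbouring frames overlap it;
    isolated single-frame detections are treated as false positives."""
    kept = []
    for t, boxes in enumerate(frames):
        neighbours = [b for dt in range(-window, window + 1)
                      if dt != 0 and 0 <= t + dt < len(frames)
                      for b in frames[t + dt]]
        for box in boxes:
            support = sum(iou(box, nb) > iou_thr for nb in neighbours)
            if support >= min_support:
                kept.append((t, box))
    return kept

# Three frames: a persistent burrow detection plus a one-frame glitch
frames = [
    [(10, 10, 50, 50)],
    [(12, 11, 52, 51), (200, 200, 240, 240)],  # second box is a glitch
    [(11, 12, 51, 50)],
]
print(temporally_consistent(frames, window=1, min_support=1))
```

    Recovering missed detections would work in the opposite direction, e.g., interpolating a box at frame t when matched boxes exist at t-1 and t+1; that step is omitted for brevity.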

    Listening forward: approaching marine biodiversity assessments using acoustic methods

    © The Author(s), 2020. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Mooney, T. A., Di Iorio, L., Lammers, M., Lin, T., Nedelec, S. L., Parsons, M., Radford, C., Urban, E., & Stanley, J. Listening forward: approaching marine biodiversity assessments using acoustic methods. Royal Society Open Science, 7(8), (2020): 201287, doi:10.1098/rsos.201287.

    Ecosystems and the communities they support are changing at alarmingly rapid rates. Tracking species diversity is vital to managing these stressed habitats. Yet quantifying and monitoring biodiversity is often challenging, especially in ocean habitats. Given that many animals make sounds, that these cues travel efficiently under water, and that emerging technologies are increasingly cost-effective, passive acoustics (a long-standing ocean observation method) is now a potential means of quantifying and monitoring marine biodiversity. Properly applying acoustics for biodiversity assessments is vital. Our goal here is to provide a timely consideration of emerging methods that use passive acoustics to measure marine biodiversity. We provide a summary of the brief history of using passive acoustics to assess marine biodiversity and community structure, a critical assessment of the challenges faced, and recommended practices and considerations for acoustic biodiversity measurements. We focused on temperate and tropical seas, where much of the acoustic biodiversity work has been conducted. Overall, we suggest a cautious approach to applying current acoustic indices to assess marine biodiversity. Key needs are preliminary data and sufficient sampling to capture the patterns and variability of a habitat. Yet with new analytical tools, including source separation and supervised machine learning, there is substantial promise in marine acoustic diversity assessment methods.

    Funding for the development of this article was provided by the collaboration of the Urban Coast Institute (Monmouth University, NJ, USA), the Program for the Human Environment (The Rockefeller University, New York, USA), and the Scientific Committee on Oceanic Research. Partial support was provided to T.A.M. by National Science Foundation grant OCE-1536782.
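    For readers unfamiliar with the acoustic indices the article cautions about, the sketch below computes one widely used example, the Acoustic Complexity Index (ACI; Pieretti et al. 2011), from a spectrogram. It is a generic illustration of the index family, not a method endorsed by the article; the parameter choices are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

def acoustic_complexity_index(signal, fs, nperseg=512):
    """ACI: for each frequency bin, sum the absolute intensity
    differences between adjacent time steps, normalized by the
    total intensity in that bin, then sum over bins."""
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=nperseg)
    diffs = np.abs(np.diff(sxx, axis=1)).sum(axis=1)   # per-bin variation
    totals = sxx.sum(axis=1) + 1e-12                   # per-bin energy
    return float((diffs / totals).sum())

# Example on synthetic data: one minute of noise at 48 kHz
rng = np.random.default_rng(0)
x = rng.standard_normal(48_000 * 60)
print(acoustic_complexity_index(x, fs=48_000))
```

    The article's caution applies directly here: such an index responds to any temporally variable sound, biological or not, which is why preliminary data and sufficient sampling are needed before interpreting it as biodiversity.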

    Sounding the call for a global library of underwater biological sounds

    © The Author(s), 2022. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Parsons, M., Lin, T.-H., Mooney, T., Erbe, C., Juanes, F., Lammers, M., Li, S., Linke, S., Looby, A., Nedelec, S., Van Opzeeland, I., Radford, C., Rice, A., Sayigh, L., Stanley, J., Urban, E., & Di Iorio, L. Sounding the call for a global library of underwater biological sounds. Frontiers in Ecology and Evolution, 10, (2022): 810156, https://doi.org/10.3389/fevo.2022.810156.

    Aquatic environments encompass the world's most extensive habitats, rich with sounds produced by a diversity of animals. Passive acoustic monitoring (PAM) is an increasingly accessible remote-sensing technology that uses hydrophones to listen to the underwater world and represents an unprecedented, non-invasive method to monitor underwater environments. This information can assist in the delineation of biologically important areas via detection of sound-producing species or characterization of ecosystem type and condition, inferred from the acoustic properties of the local soundscape. At a time when worldwide biodiversity is in significant decline and underwater soundscapes are being altered by anthropogenic impacts, there is a need to document, quantify, and understand biotic sound sources, potentially before they disappear. A significant step toward these goals is the development of a web-based, open-access platform that provides: (1) a reference library of known and unknown biological sound sources (by integrating and expanding existing libraries around the world); (2) a data repository portal for annotated and unannotated audio recordings of single sources and of soundscapes; (3) a training platform for artificial intelligence algorithms for signal detection and classification; and (4) a citizen science-based application for public users. Although these needs are often met individually on regional and taxon-specific scales, many such efforts are not sustained, and, collectively, an enduring global database with an integrated platform has not been realized. We discuss the benefits such a program can provide, previous calls for global data sharing and reference libraries, and the challenges that need to be overcome to bring together bio- and ecoacousticians, bioinformaticians, propagation experts, web engineers, and signal processing specialists (e.g., in artificial intelligence) with the necessary support and funding to build a sustainable and scalable platform that can address the needs of all contributors and stakeholders into the future.

    Support for the initial author group to meet, discuss, and build consensus on the issues within this manuscript was provided by the Scientific Committee on Oceanic Research, the Monmouth University Urban Coast Institute, and the Rockefeller Program for the Human Environment. The U.S. National Science Foundation supported the publication of this article through Grant OCE-1840868 to the Scientific Committee on Oceanic Research.
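    As a purely hypothetical illustration of what a single entry in such a reference library might need to record, the sketch below defines a minimal metadata structure. Every field name is an assumption for illustration, not a published standard or anything proposed by the authors.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SoundLibraryRecord:
    """One annotated recording in a shared reference library.
    All fields are illustrative assumptions."""
    recording_id: str
    source_url: str                 # link to the audio file
    sample_rate_hz: int
    start_utc: str                  # ISO 8601 timestamp
    latitude: float
    longitude: float
    depth_m: Optional[float] = None
    species: Optional[str] = None   # None for unknown sources
    annotator: str = "unverified"   # e.g., expert / citizen / model
    labels: List[str] = field(default_factory=list)

# An "unknown source" entry, the kind a citizen-science app might submit
rec = SoundLibraryRecord(
    recording_id="rec-000123",
    source_url="https://example.org/audio/rec-000123.wav",
    sample_rate_hz=96_000,
    start_utc="2022-03-14T02:30:00Z",
    latitude=41.18,
    longitude=1.75,
    labels=["unknown pulse train"],
)
```

    Keeping unknown sources as first-class records, rather than discarding them, is what lets later detections or expert review upgrade an entry from "unknown pulse train" to a named species.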

    Deep learning based deep-sea automatic image enhancement and animal species classification

    The automatic classification of marine species from images is a challenging task for which multiple solutions have been proposed over the past two decades. Oceans are complex ecosystems that are difficult to access, and the images obtained are often of low quality, which makes animal classification tedious. It is therefore often necessary to apply enhancement or pre-processing techniques to the images before applying classification algorithms. In this work, we propose an image enhancement and classification pipeline that allows automated processing of images from benthic moving platforms. Deep-sea (870 m depth) fauna was targeted in footage taken by the crawler "Wally" (an Internet Operated Vehicle) within the Ocean Networks Canada (ONC) area of Barkley Canyon (Vancouver, BC, Canada). The image enhancement stage consists mainly of a convolutional residual network capable of generating enhanced images from a set of raw images. The images generated by the trained network scored highly on underwater image-quality metrics such as UIQM (~2.585) and UCIQE (2.406), and also achieved the highest SSIM and PSNR values when compared with the original dataset. The entire pipeline showed good classification results on an independent test set, with an accuracy of 66.44% and an Area Under the ROC Curve (AUROC) of 82.91%, which were subsequently improved to 79.44% and 88.64%, respectively. These results with enhanced images are promising and superior to those obtained with the non-enhanced datasets, paving the way for on-board real-time processing of crawler imagery and outperforming results published in previous papers.

    This work was developed at Deusto Seidor S.A. (01015, Vitoria-Gasteiz, Spain) within the framework of Tecnoterra (ICM-CSIC/UPC) and the following project activities: ARIM (Autonomous Robotic sea-floor Infrastructure for benthopelagic Monitoring); MarTERA ERA-Net Cofund; Centro para el Desarrollo Tecnológico Industrial, CDTI; and RESBIO (TEC2017-87861-R; Ministerio de Ciencia, Innovación y Universidades). This work was supported by the Centro para el Desarrollo Tecnológico Industrial (CDTI) (Grant No. EXP 00108707 / SERA-20181020).
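    The abstract does not specify the authors' architecture, so the sketch below shows only the general pattern of a convolutional residual enhancer in PyTorch: the network predicts a correction that is added back onto the raw frame, so it only has to learn the residual between raw and enhanced images. Layer widths and depths are placeholder assumptions.

```python
import torch
import torch.nn as nn

class ResidualEnhancer(nn.Module):
    """Minimal convolutional residual network for image enhancement.
    Widths/depths are illustrative, not the paper's architecture."""
    def __init__(self, channels=3, width=64, depth=4):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual connection: output = input + learned correction
        return torch.clamp(x + self.body(x), 0.0, 1.0)

# One raw RGB frame (batch of 1, 480x640) with values in [0, 1]
frame = torch.rand(1, 3, 480, 640)
enhanced = ResidualEnhancer()(frame)
```

    Training such a network requires paired raw/enhanced targets (or a perceptual loss); metrics like SSIM and PSNR, as used in the paper, then compare the output against a reference image.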

    Tracking fish abundance by underwater image recognition

    Marine cabled video-observatories allow the non-destructive sampling of species at frequencies and durations never attained before. Nevertheless, the lack of appropriate methods to automatically process video imagery limits this technology for ecosystem-monitoring purposes. Automation is a prerequisite for dealing with the huge quantities of footage captured by cameras, which can then transform these devices into true autonomous sensors. In this study, we developed a novel methodology based on genetic programming for content-based image analysis. Our aim was to capture the temporal dynamics of fish abundance. We processed more than 20,000 images acquired in a challenging real-world coastal scenario at the OBSEA EMSO testing site. The images were collected at 30-min intervals, continuously for two years, day and night. The highly variable environmental conditions allowed us to test the effectiveness of our approach under changing light radiation, water turbidity, background confusion, and bio-fouling growth on the camera housing. The automated recognition results were highly correlated with manual counts and were reliable for tracking fish variations at hourly, daily, and monthly time scales. In addition, our methodology could easily be transferred to other cabled video-observatories.
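    One common way to quantify the kind of agreement the study reports between automated recognition and manual counts is a simple correlation check, sketched below with hypothetical count data. This illustrates the validation idea only; it is not the authors' analysis or their data.

```python
import numpy as np
from scipy.stats import pearsonr

def validate_counts(automated, manual):
    """Agreement between automated and manual fish counts on the
    same images: Pearson r, its p-value, and mean absolute error."""
    automated, manual = np.asarray(automated), np.asarray(manual)
    r, p = pearsonr(automated, manual)
    mae = np.abs(automated - manual).mean()
    return r, p, mae

# Hypothetical counts for six half-hourly frames
auto = [3, 0, 5, 2, 7, 1]
hand = [3, 1, 4, 2, 8, 1]
r, p, mae = validate_counts(auto, hand)
print(f"r={r:.2f}, p={p:.3f}, MAE={mae:.2f}")
```

    Aggregating such counts to hourly, daily, or monthly means, as the study does, typically smooths recognition noise and strengthens the correlation with manual reference counts.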