
    Image Segmentation and Classification of Marine Organisms

    Pioneers in machine learning and computer vision have developed many algorithms and pre-processing techniques to automate the arduous task of identifying and classifying images, work that previously relied on domain expertise. The classification process is flexible, admitting many user- and domain-specific alterations, and these techniques are now being used to classify marine organisms in order to study and monitor their populations. Despite advances in programming languages and machine learning, image segmentation and classification for unlabeled data still need improvement. The purpose of this project is to explore the various pre-processing techniques and classification algorithms that help cluster and classify images, and thereby to choose the best parameters for identifying the marine species present in an image.
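
    Since the project clusters unlabeled images, one common recipe is to extract deep features with a pretrained CNN and group them with k-means. The following is a minimal sketch of that recipe, not the project's actual method; the ResNet-18 backbone, the image paths, and the cluster count are illustrative assumptions.

    ```python
    # Minimal sketch: cluster unlabeled marine images by extracting deep
    # features with a pretrained CNN and grouping them with k-means.
    import numpy as np
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image
    from sklearn.cluster import KMeans

    # Pretrained backbone with the classification head removed.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()
    backbone.eval()

    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def extract_features(paths):
        feats = []
        with torch.no_grad():
            for p in paths:
                x = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
                feats.append(backbone(x).squeeze(0).numpy())
        return np.stack(feats)

    # Placeholder inputs: the user's own image files and species count.
    image_paths = ["img_001.jpg", "img_002.jpg"]
    features = extract_features(image_paths)
    labels = KMeans(n_clusters=2, n_init="auto").fit_predict(features)
    print(labels)
    ```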

    Monitoring marine pollution for carbon neutrality through a deep learning method with multi-source data fusion

    Introduction: Marine pollution can have a significant impact on blue carbon, ultimately affecting the ocean's ability to sequester carbon and contribute to achieving carbon neutrality. Marine pollution is a complex problem that requires a great deal of time and effort to measure, and existing machine learning algorithms cannot effectively solve the detection-time problem while providing only limited accuracy. Moreover, marine pollution can come from a variety of sources, yet most existing research has focused on a single ocean indicator. In this study, two indicators, marine organisms and debris, are used to create a more complete picture of the extent and impact of pollution in the ocean.
    Methods: To effectively recognize different marine objects in the complex marine environment, we propose an integrated data fusion approach in which deep convolutional neural networks (CNNs) are combined to conduct underwater object recognition. Through this multi-source data fusion approach, the accuracy of object recognition is significantly improved. After feature extraction with the deep CNNs, four machine learning and deep learning classifiers are trained on the extracted features and their performance is compared.
    Results: VGG-16 achieves better performance than the other feature extractors when detecting marine organisms, while AlexNet outperforms the other deep CNNs when detecting marine debris. The results also show that the LSTM classifier with VGG-16 outperforms the other deep learning models for detecting marine organisms.
    Discussion: For detecting marine debris, the best performance was observed with the AlexNet extractor, which obtained the best classification result with an LSTM. This information can be used to develop policies and practices aimed at reducing pollution and protecting marine environments for future generations.
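
    The extract-then-classify pattern described in the Methods can be sketched as follows, pairing a VGG-16 feature extractor with an LSTM classifier. This is an illustrative reconstruction, not the authors' code; the hidden size, class count, and sequence layout are assumptions.

    ```python
    # Sketch of the extract-then-classify pattern: VGG-16 features from
    # image frames are fed to an LSTM that predicts the object class.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class VGGLSTMClassifier(nn.Module):
        def __init__(self, num_classes, hidden_size=256):
            super().__init__()
            vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
            self.features = vgg.features          # convolutional extractor
            self.pool = nn.AdaptiveAvgPool2d((1, 1))
            self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size,
                                batch_first=True)
            self.head = nn.Linear(hidden_size, num_classes)

        def forward(self, frames):                # frames: (batch, time, 3, H, W)
            b, t = frames.shape[:2]
            x = frames.flatten(0, 1)              # merge batch and time axes
            x = self.pool(self.features(x)).flatten(1)   # (b*t, 512)
            out, _ = self.lstm(x.view(b, t, -1))
            return self.head(out[:, -1])          # classify from last step

    model = VGGLSTMClassifier(num_classes=5)
    logits = model(torch.randn(2, 4, 3, 224, 224))
    print(logits.shape)                           # torch.Size([2, 5])
    ```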

    Using Machine Vision to Estimate Fish Length from Images using Regional Convolutional Neural Networks

    An image can encode date, time, location, and camera information as metadata, and it implicitly encodes species information and data on human activity, for example the size distribution of fish removals. Accurate length estimates can be made from images using a fiducial marker; however, their manual extraction is time-consuming and estimates are inaccurate without control over the imaging system. This article presents a methodology that uses machine vision (MV) to estimate the total length (TL) of a fusiform fish (European sea bass). Three regional convolutional neural networks (R-CNNs) were trained from public images. Images of European sea bass were captured alongside a fiducial marker with three non-specialist cameras. Images were undistorted using the intrinsic lens properties calculated for each camera in OpenCV, and TL was then estimated using MV to detect both marker and subject. MV performance was evaluated for the three R-CNNs under downsampling and rotation of the captured images. Each R-CNN accurately predicted the location of fish in test images (mean intersection over union, 93%), and estimates of TL were accurate, with a percent mean bias error (%MBE [95% CIs]) of 2.2% [2.0, 2.4]. Detections were robust to horizontal flipping and downsampling. TL estimates at absolute image rotations >20° became increasingly inaccurate, but %MBE [95% CIs] was reduced to −0.1% [−0.2, 0.1] by using machine learning to remove outliers and model bias. Machine vision can classify and derive measurements of species from images without specialist equipment. It is anticipated that ecological researchers and managers will make increasing use of MV where image data are collected (e.g. in remote electronic monitoring, virtual observations, wildlife surveys, and morphometrics), and MV will be of particular utility where large volumes of image data are gathered.
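
    The two measurement steps, undistorting with the camera intrinsics and converting a pixel length to real units via the fiducial marker, can be sketched as follows. All numeric values, the file name, and the marker size are illustrative placeholders; the pixel lengths would come from the R-CNN detections.

    ```python
    # Sketch: undistort an image with known camera intrinsics, then scale a
    # detected fish length from pixels to centimetres using a fiducial
    # marker of known physical size. Values here are placeholders.
    import cv2
    import numpy as np

    # Intrinsics as produced by a prior cv2.calibrateCamera() run.
    camera_matrix = np.array([[1000.0, 0.0, 960.0],
                              [0.0, 1000.0, 540.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.array([-0.1, 0.01, 0.0, 0.0, 0.0])

    img = cv2.imread("sea_bass.jpg")
    undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)

    marker_len_px = 250.0   # marker length in pixels (from detection)
    marker_len_cm = 10.0    # known physical marker length
    fish_len_px = 1200.0    # snout-to-tail length in pixels (from detection)

    scale = marker_len_cm / marker_len_px   # cm per pixel in the marker plane
    total_length_cm = fish_len_px * scale
    print(f"Estimated TL: {total_length_cm:.1f} cm")
    ```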

    Vision-based techniques for automatic marine plankton classification

    Plankton are an important component of life on Earth. Since the 19th century, scientists have attempted to quantify species distributions using many techniques, such as direct counting, sizing, and classification with microscopes. Since then, extraordinary work has been performed on the development of plankton imaging systems, producing a massive backlog of images that await classification. Automatic image processing and classification approaches are opening new avenues for avoiding time-consuming manual procedures. While some algorithms have been adapted from many other applications for use with plankton, other exciting techniques have been developed exclusively for this issue. Achieving higher accuracy than that of human taxonomists is not yet possible, but an expeditious analysis is essential for discovering the world of plankton. Recent studies point to the imminent arrival of real-time, in situ plankton image classification systems, which have been slowed down only by the complexity of implementing the algorithms on low-power processing hardware. This article compiles the techniques that have been proposed for classifying marine plankton, focusing on automatic methods that utilize image processing, from the beginnings of this field to the present day.
    Funding for open access charge: Universidad de Málaga / CBUA. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. The authors wish to thank Alonso Hernández-Guerra for his firm support in the development of oceanographic technology. Special thanks to Laia Armengol for her help in the domain of plankton. This study has been funded by Feder of the UE through the RES-COAST Mac-Interreg project (MAC2/3.5b/314). We also acknowledge the European Union projects SUMMER (Grant Agreement 817806) and TRIATLAS (Grant Agreement 817578) from the Horizon 2020 Research and Innovation Programme and the Ministry of Science of the Spanish Government through the Project DESAFÍO (PID2020-118118RB-I00).

    Predicting and identifying antimicrobial resistance in the marine environment using AI and machine learning algorithms.

    Antimicrobial resistance (AMR) is an increasingly critical public health issue, necessitating precise and efficient methodologies that deliver prompt results. Accurate and early detection of AMR is crucial, as its absence can pose life-threatening risks to diverse ecosystems, including the marine environment. The spread of AMR among microorganisms in the marine environment can have significant consequences, potentially impacting human life directly. This study focuses on evaluating the diameters of disc diffusion zones and employs artificial intelligence and machine learning techniques, such as image segmentation, data augmentation, and deep learning methods, to enhance accuracy and predict microbial resistance.
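
    One plausible way to measure disc diffusion (inhibition) zone diameters with classical segmentation is the Hough circle transform, sketched below. This is not necessarily the study's method; the plate image, the detector parameters, and the pixel-to-millimetre calibration are all assumptions that would need tuning for real plates.

    ```python
    # Sketch: estimate inhibition-zone diameters on an agar plate photo
    # using the Hough circle transform. Parameters are illustrative.
    import cv2
    import numpy as np

    img = cv2.imread("agar_plate.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)       # suppress speckle noise

    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=60,
        param1=100, param2=40, minRadius=15, maxRadius=120,
    )

    if circles is not None:
        px_per_mm = 8.0                  # calibration, e.g. from a ruler in frame
        for x, y, r in np.round(circles[0]).astype(int):
            print(f"zone at ({x}, {y}): {2 * r / px_per_mm:.1f} mm")
    ```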

    Video Image Enhancement and Machine Learning Pipeline for Underwater Animal Detection and Classification at Cabled Observatories

    Correction of an affiliation in Sensors 2023, 23, 16. https://doi.org/10.3390/s23010016
    An understanding of marine ecosystems and their biodiversity is relevant to the sustainable use of the goods and services they offer. Since marine areas host complex ecosystems, it is important to develop spatially widespread monitoring networks capable of providing large amounts of multiparametric information, encompassing both biotic and abiotic variables and describing the ecological dynamics of the observed species. In this context, imaging devices are valuable tools that complement other biological and oceanographic monitoring devices. Nevertheless, large amounts of images or movies cannot all be manually processed, and autonomous routines for recognizing the relevant content, classification, and tagging are urgently needed. In this work, we propose a pipeline for the analysis of visual data that integrates video/image annotation tools for defining, training, and validating datasets with video/image enhancement and machine and deep learning approaches. Such a pipeline is required to achieve good performance in the recognition and classification of mobile and sessile megafauna, in order to obtain integrated information on spatial distribution and temporal dynamics. A prototype implementation of the analysis pipeline is provided in the context of deep-sea videos taken by one of the fixed cameras of the LoVe Ocean Observatory network off the Lofoten Islands (Norway), at 260 m depth in the Barents Sea; it has shown good classification results on an independent test dataset, with an accuracy of 76.18% and an area under the curve (AUC) of 87.59%.
    This work was developed within the framework of Tecnoterra (ICM-CSIC/UPC) and the following project activities: ARIM (Autonomous Robotic Sea-Floor Infrastructure for Benthopelagic Monitoring; MarTERA ERA-Net Cofund) and RESBIO (TEC2017-87861-R; Ministerio de Ciencia, Innovación y Universidades).
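
    The enhancement stage of such a pipeline can be sketched with contrast-limited adaptive histogram equalization (CLAHE), a common choice for low-contrast deep-sea footage. The paper's own enhancement method may differ; the video file name and the classifier hookup are placeholders.

    ```python
    # Sketch of an enhance-then-classify stage: CLAHE on the luminance
    # channel of each video frame before a trained model sees it.
    import cv2

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

    def enhance_frame(frame_bgr):
        """Equalize luminance only, preserving colour, via LAB space."""
        lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        lab = cv2.merge((clahe.apply(l), a, b))
        return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    cap = cv2.VideoCapture("observatory_clip.mp4")   # placeholder file
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        enhanced = enhance_frame(frame)
        # ...pass `enhanced` to the trained recognition model here...
    cap.release()
    ```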

    Crown-of-Thorns Starfish Detection by state-of-the-art YOLOv5

    Crown-of-Thorns Starfish (COTS) outbreaks, first reported decades ago, have threatened the overall health of the coral reefs in Australia’s Great Barrier Reef. This has a direct impact on reef-associated marine organisms and severely damages the biological diversity and resilience of the habitat structure. Yet COTS surveillance has long been carried out entirely by human effort, which is inefficient and prone to errors. There is an urgent need to apply recent advanced technology, deploying unmanned underwater vehicles to detect the target object and take suitable action accordingly. Existing challenges include, but are not limited to, the scarcity of qualified underwater images and of detection algorithms able to satisfy major criteria such as light weight, high accuracy, and speedy detection. Few papers address this specific area of research, and they do not fulfil these expectations completely. In this thesis, we propose a deep learning-based model to automatically detect COTS in order to prevent outbreaks and minimize coral mortality in the Reef. We use the CSIRO COTS Dataset of underwater images from the Swain Reefs region to train our model. Our goal is to recognize as many starfish as possible while keeping the accuracy high enough to ensure the reliability of the solution. We provide a comprehensive background of the problem and an intensive literature review of this area of research. In addition, to better align with our task, we use the F2 score as the main evaluation metric in our MS COCO-based evaluation scheme: an average F2 is computed from the results obtained at different IoU thresholds, from 0.3 to 0.8 with a step size of 0.05. In our implementation, we experiment with model architecture selection, online image augmentation, confidence score threshold calibration, and hyperparameter tuning to improve testing performance in the model inference stage. Eventually, we present our novel COTS detector as a promising solution to the stated challenge.
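
    The evaluation metric described above, an F2 score averaged over IoU thresholds from 0.3 to 0.8 in steps of 0.05, can be sketched as follows. The match_detections helper is a stand-in for a full COCO-style matcher, assumed here to return true positive, false positive, and false negative counts at a given threshold.

    ```python
    # Sketch: average F2 over IoU thresholds 0.3..0.8 (step 0.05).
    import numpy as np

    def f_beta(tp, fp, fn, beta=2.0):
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        if precision + recall == 0:
            return 0.0
        b2 = beta ** 2
        return (1 + b2) * precision * recall / (b2 * precision + recall)

    def average_f2(detections, ground_truth, match_detections):
        scores = []
        for iou_thr in np.arange(0.3, 0.8 + 1e-9, 0.05):
            # match_detections is assumed to return (tp, fp, fn) counts
            tp, fp, fn = match_detections(detections, ground_truth, iou_thr)
            scores.append(f_beta(tp, fp, fn))
        return float(np.mean(scores))
    ```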

    Detection of Apple Leaf Diseases using Faster R-CNN

    Image recognition-based automated disease detection systems play an important role in the early detection of plant leaf diseases. In this study, an apple leaf disease detection system is proposed using a Faster Region-Based Convolutional Neural Network (Faster R-CNN) with the Inception v2 architecture. Disease detection was carried out in apple orchards in Yalova, Turkey, with leaf images collected from different orchards over two years. In our observations, it was determined that the apple trees of Yalova had black spot (Venturia inaequalis) disease. The proposed system detects a large number of leaves in an image and then successfully classifies diseased and healthy ones. The trained disease detection system achieved an average accuracy of 84.5%.
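
    Setting up a Faster R-CNN for a diseased/healthy leaf task might look like the sketch below. The study used an Inception v2 backbone; torchvision ships a ResNet-50 FPN variant, used here purely for availability, and the class count is an assumption.

    ```python
    # Sketch: configure torchvision's Faster R-CNN for leaf detection with
    # three classes (background, diseased leaf, healthy leaf).
    import torch
    from torchvision.models.detection import (
        FasterRCNN_ResNet50_FPN_Weights, fasterrcnn_resnet50_fpn)
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    num_classes = 3   # background + diseased + healthy

    model = fasterrcnn_resnet50_fpn(
        weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT)

    # Replace the box classification head with one for our class count.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    model.eval()
    with torch.no_grad():
        preds = model([torch.rand(3, 600, 800)])   # one dummy image
    print(preds[0]["boxes"].shape, preds[0]["labels"].shape)
    ```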

    UnitModule: A Lightweight Joint Image Enhancement Module for Underwater Object Detection

    Underwater object detection faces the problem of underwater image degradation, which affects the performance of the detector. Underwater object detection methods based on noise reduction and image enhancement usually do not provide images preferred by the detector, or they require additional datasets. In this paper, we propose a plug-and-play Underwater joint image enhancement Module (UnitModule) that provides the input image preferred by the detector. We design an unsupervised learning loss for the joint training of UnitModule with the detector, without additional datasets, to improve the interaction between UnitModule and the detector. Furthermore, a color cast predictor with an assisting color cast loss and a data augmentation called Underwater Color Random Transfer (UCRT) are designed to improve the performance of UnitModule on underwater images with different color casts. Extensive experiments are conducted on DUO for different object detection models, where UnitModule achieves the highest performance improvement of 2.6 AP for YOLOv5-S and an improvement of 3.3 AP on the brand-new test set (URPCtest). UnitModule significantly improves the performance of all object detection models we test, especially models with a small number of parameters, and with only 31K parameters it has little effect on the inference speed of the original detection model. Our quantitative and visual analyses also demonstrate the effectiveness of UnitModule in enhancing the input image and improving the detector’s perception of object features.
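
    A toy sketch in the spirit of the UCRT augmentation is shown below: a random per-channel colour cast applied to training images so that the detector sees varied water colours. The exact UCRT formulation is given in the paper; this shift-and-scale version and its ranges are only illustrative.

    ```python
    # Toy colour-cast augmentation in the spirit of UCRT (illustrative only;
    # the paper defines the actual transfer). img: float tensor (3, H, W)
    # with values in [0, 1].
    import torch

    def random_color_cast(img, max_shift=0.15, max_scale=0.2):
        shift = (torch.rand(3, 1, 1) - 0.5) * 2 * max_shift   # channel offset
        scale = 1.0 + (torch.rand(3, 1, 1) - 0.5) * 2 * max_scale
        return (img * scale + shift).clamp(0.0, 1.0)

    augmented = random_color_cast(torch.rand(3, 416, 416))
    print(augmented.min().item(), augmented.max().item())
    ```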