
    Automated classification of three-dimensional reconstructions of coral reefs using convolutional neural networks

    © The Author(s), 2020. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Hopkinson, B. M., King, A. C., Owen, D. P., Johnson-Roberson, M., Long, M. H., & Bhandarkar, S. M. Automated classification of three-dimensional reconstructions of coral reefs using convolutional neural networks. PLoS One, 15(3), (2020): e0230671, doi: 10.1371/journal.pone.0230671.

    Coral reefs are biologically diverse and structurally complex ecosystems that have been severely affected by human actions. Consequently, there is a need for rapid ecological assessment of coral reefs, but current approaches require time-consuming manual analysis, either during a dive survey or on images collected during a survey. Reef structural complexity is essential for ecological function but is challenging to measure and often relegated to simple metrics such as rugosity. Recent advances in computer vision and machine learning offer the potential to alleviate some of these limitations. We developed an approach to automatically classify 3D reconstructions of reef sections and assessed its accuracy. 3D reconstructions of reef sections were generated using commercial Structure-from-Motion software with images extracted from video surveys. To generate a 3D classified map, locations on the 3D reconstruction were mapped back into the original images to extract multiple views of each location. Several approaches were tested to merge information from multiple views of a point into a single classification; all of them used convolutional neural networks to classify or extract features from the images, but differed in the strategy employed for merging information. The merging strategies entailed voting, probability averaging, and a learned neural-network layer. All approaches performed similarly, achieving overall classification accuracies of ~96% and >90% accuracy on most classes. With this high classification accuracy, these approaches are suitable for many ecological applications.

    This study was funded by grants from the Alfred P. Sloan Foundation (BMH, BR2014-049; https://sloan.org) and the National Science Foundation (MHL, OCE-1657727; https://www.nsf.gov). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
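
    The multi-view fusion step lends itself to a short illustration. The sketch below is a minimal, hypothetical version of the probability-averaging strategy only, not the authors' code: per-view class probabilities from a CNN are averaged before taking the argmax, and the voting or learned-layer variants would replace the averaging line.

```python
import numpy as np

def classify_point_multiview(view_probs: np.ndarray) -> int:
    """Fuse per-view CNN outputs for one 3D point by probability averaging.

    view_probs: (n_views, n_classes) softmax probabilities predicted
    independently for each image view of the point. Returns the index
    of the most probable class after averaging over views.
    """
    mean_probs = view_probs.mean(axis=0)   # average the per-view distributions
    return int(np.argmax(mean_probs))      # fused class label

# Toy usage: three views of one point, four benthic classes.
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.4, 0.3, 0.2, 0.1],
                  [0.6, 0.2, 0.1, 0.1]])
print(classify_point_multiview(probs))     # -> 0
```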

    Sparse Coral Classification Using Deep Convolutional Neural Networks

    Autonomous repair of deep-sea coral reefs is a recently proposed idea to support the ocean ecosystem, which is vital for commercial fishing, tourism and other species. The idea is to use many small autonomous underwater vehicles (AUVs) and swarm intelligence techniques to locate and replace chunks of coral that have been broken off, thus enabling re-growth and maintaining the habitat. The aim of this project is to develop machine vision algorithms that enable an underwater robot to locate a coral reef and a chunk of coral on the seabed and prompt the robot to pick it up. Although there is no literature on this particular problem, related work on fish counting may give some insight into it. The technical challenges are principally due to the potential lack of clarity of the water, platform stabilization, and spurious artifacts (rocks, fish, and crabs). We present an efficient sparse classification of coral species using a supervised deep learning method, Convolutional Neural Networks (CNNs). We compute the Weber Local Descriptor (WLD), Phase Congruency (PC), and Zero Component Analysis (ZCA) whitening to extract shape and texture feature descriptors, which are employed as supplementary channels (feature-based maps) alongside the basic spatial color channels (spatial-based maps) of the coral input image. We also experiment with state-of-the-art underwater preprocessing algorithms for image enhancement, color normalization, and color conversion adjustment. The proposed coral classification method is developed on the MATLAB platform and evaluated on two different coral datasets (University of California San Diego's Moorea Labeled Corals and Heriot-Watt University's Atlantic Deep Sea). Comment: Thesis submitted for the degree of MSc Erasmus Mundus in Vision and Robotics (VIBOT 2014).
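
    To illustrate how a derived descriptor can be attached as an extra input channel, the sketch below ZCA-whitens grayscale patches and stacks the result alongside the RGB channels. This is a minimal, assumption-laden example (the `zca_whiten` helper, patch size, and epsilon are illustrative) rather than the thesis implementation; the WLD and PC channels would be stacked in the same way.

```python
import numpy as np

def zca_whiten(patches: np.ndarray, eps: float = 1e-2) -> np.ndarray:
    """ZCA-whiten a batch of flattened grayscale patches.

    patches: (n_patches, n_pixels). Returns whitened patches of the same
    shape: decorrelated, yet still spatially interpretable as images.
    """
    x = patches - patches.mean(axis=0)              # zero-mean each pixel
    cov = np.cov(x, rowvar=False)                   # pixel-pixel covariance
    u, s, _ = np.linalg.svd(cov)                    # eigendecomposition
    w = u @ np.diag(1.0 / np.sqrt(s + eps)) @ u.T   # ZCA transform
    return x @ w

# Stack the whitened map as a fourth channel next to the RGB channels.
rgb = np.random.rand(200, 8, 8, 3)                          # toy RGB patches
gray = rgb.mean(axis=-1)                                    # grayscale view
white = zca_whiten(gray.reshape(200, -1)).reshape(200, 8, 8)
inputs = np.concatenate([rgb, white[..., None]], axis=-1)   # (200, 8, 8, 4)
```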

    Classification of Benthic Habitat Based on Object with Support Vector Machines and Decision Tree Algorithm Using Spot-7 Multispectral Imagery in Harapan and Kelapa Island

    Research on object-based image analysis (OBIA) with machine learning algorithms for high-resolution imagery in Indonesia is still limited, especially for coral reef mapping; further research is therefore needed to compare methods and algorithms as alternative classification approaches. This research aims to map benthic habitat using a multiscale OBIA classification with support vector machine (SVM) and decision tree (DT) algorithms in Harapan Island and Kelapa Island, Kepulauan Seribu. Segmentation was performed using a multiresolution segmentation algorithm with a scale factor of 15. The OBIA method was applied to atmospherically corrected images with a predefined benthic habitat classification scheme. The overall accuracies of the SVM and DT implementations are 76.68% and 60.62%, respectively. The Z statistic obtained from comparing the two algorithms is 2.23, indicating that the SVM classification is significantly different from the DT classification. This research suggests that the OBIA technique could be a promising approach for mapping benthic habitats.
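
    The pairwise Z test mentioned above is commonly computed from the kappa coefficients of the two error matrices and their estimated variances; a minimal sketch of that form is given below, with placeholder values rather than numbers from the paper.

```python
import math

def kappa_z_test(k1: float, var1: float, k2: float, var2: float) -> float:
    """Z statistic for testing whether two kappa coefficients differ.

    k1, k2: kappa values from the two confusion matrices; var1, var2:
    their estimated variances. |Z| > 1.96 indicates a significant
    difference at the 95% confidence level.
    """
    return abs(k1 - k2) / math.sqrt(var1 + var2)

# Placeholder inputs for illustration only (not values from the paper).
z = kappa_z_test(k1=0.70, var1=0.0012, k2=0.52, var2=0.0021)
print(f"Z = {z:.2f}, significant = {z > 1.96}")
```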

    Ensembles of wrappers for automated feature selection in fish age classification

    In feature selection, the most important features must be chosen so as to reduce their number while retaining their discriminatory information. Within this context, a novel feature selection method based on an ensemble of wrappers is proposed and applied to automatically select features for fish age classification. The effectiveness of this procedure has been tested on an Atlantic cod database with several powerful statistical learning classifiers. The subsets based on the few selected features, e.g. otolith weight and fish weight, are particularly notable given current biological findings and practices in fishery research, and the classification results obtained with them outperform those of previous studies in which feature selection was performed manually.
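
    A minimal sketch of the wrapper-ensemble idea, assuming scikit-learn as a stand-in: several wrappers (here recursive feature elimination around different base classifiers) each select a subset, and features retained by a majority of the wrappers form the ensemble selection. The classifiers, subset size, and voting threshold are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the fish database: 10 candidate features.
X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           random_state=0)

# Each wrapper keeps half of the features using its own base classifier.
wrappers = [LogisticRegression(max_iter=1000),
            LinearSVC(max_iter=5000),
            DecisionTreeClassifier(random_state=0)]
votes = np.zeros(X.shape[1], dtype=int)
for clf in wrappers:
    selector = RFE(clf, n_features_to_select=5).fit(X, y)
    votes += selector.support_.astype(int)       # one vote per retained feature

# Ensemble selection: features retained by a majority of the wrappers.
selected = np.where(votes >= 2)[0]
print("selected feature indices:", selected)
```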

    Semi-Automated Object-Based Classification of Coral Reef Habitat Using Discrete Choice Models

    As in terrestrial remote sensing, pixel-based classifiers have traditionally been used to map coral reef habitats. For pixel-based classifiers, habitat assignment is based on the spectral or textural properties of each individual pixel in the scene. More recently, however, object-based classifications, which draw on information from sets of contiguous pixels with similar properties, have found favor with the reef mapping community and are starting to be extensively deployed. Object-based classifiers have an advantage over pixel-based classifiers in that they are less compromised by the inevitable inhomogeneity in per-pixel spectral response caused, primarily, by variations in water depth. One aspect of the object-based classification workflow is the assignment of each image object to a habitat class on the basis of its spectral, textural, or geometric properties. While a skilled image interpreter can achieve this task accurately through manual editing, full or partial automation is desirable for large-scale reef mapping projects of the magnitude useful for marine spatial planning. To this end, this paper trials the use of multinomial logistic discrete choice models to classify coral reef habitats identified through object-based segmentation of satellite imagery. Our results suggest that these models can attain assignment accuracies of about 85%, while also reducing the time needed to produce the map compared to manual methods. Limitations of this approach include misclassification of image objects at the interface between some habitat types, due to the soft natural gradation between habitats, as well as dependence on the robustness of the segmentation algorithm used and the selection of a strong training dataset. Finally, due to the probabilistic nature of multinomial logistic models, the analyst can estimate a map of uncertainty associated with the habitat classifications. Quantifying uncertainty is important to the end-user when developing marine spatial planning scenarios and populating spatial models from reef habitat maps.
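
    As a rough sketch of the assignment step, a multinomial logistic regression (used here as a stand-in for the discrete choice model) can be fit on per-object features and its predicted class probabilities reused as a per-object uncertainty estimate. The feature layout, entropy-based uncertainty, and scikit-learn usage below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy per-object features: rows = image objects, columns = placeholder
# spectral / textural / geometric descriptors; four hypothetical habitat classes.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(300, 6)), rng.integers(0, 4, size=300)
X_objects = rng.normal(size=(50, 6))

# With the default lbfgs solver, LogisticRegression fits a multinomial
# (softmax) model for multiclass targets -- the discrete-choice analogue.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

probs = model.predict_proba(X_objects)                # per-class probabilities
habitat = probs.argmax(axis=1)                        # habitat assignment
# Shannon entropy of the probabilities as a per-object uncertainty measure.
uncertainty = -(probs * np.log(probs + 1e-12)).sum(axis=1)
```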

    Seabed mapping in coastal shallow waters using high resolution multispectral and hyperspectral imagery

    Coastal ecosystems experience multiple anthropogenic and climate change pressures. To monitor the variability of benthic habitats in shallow waters, effective strategies are required to support coastal planning. In this context, high-resolution remote sensing data can be of fundamental importance for generating precise seabed maps in coastal shallow water areas. In this work, satellite and airborne multispectral and hyperspectral imagery were used to map benthic habitats in a complex ecosystem in which submerged green aquatic vegetation meadows have low density and occur at depths of up to 20 m, and where the sea surface is regularly affected by persistent local winds. A robust mapping methodology was identified after a comprehensive analysis of different correction, feature extraction, and classification approaches. In particular, atmospheric, sunglint, and water column corrections were tested. In addition, to increase the mapping accuracy, we assessed the use of derived information from rotation transforms, texture parameters, and abundance maps produced by linear unmixing algorithms. Finally, maximum likelihood (ML), spectral angle mapper (SAM), and support vector machine (SVM) classification algorithms were considered at the pixel and object levels. In summary, a complete processing methodology was implemented; the results demonstrate the better performance of SVM but the higher robustness of ML to the nature of the information and the number of bands considered. Hyperspectral data increased the overall accuracy with respect to the multispectral bands (by 4.7% for ML and 9.5% for SVM), but the inclusion of additional features, in general, did not significantly improve the seabed map quality.
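
    Of the three classifiers compared, the spectral angle mapper is simple enough to sketch directly: each pixel spectrum is assigned to the reference (endmember) spectrum with the smallest spectral angle. The endmember values below are toy numbers, not spectra from the study.

```python
import numpy as np

def spectral_angle_mapper(pixels: np.ndarray, endmembers: np.ndarray) -> np.ndarray:
    """Classify pixel spectra by minimum spectral angle.

    pixels: (n_pixels, n_bands) reflectance spectra.
    endmembers: (n_classes, n_bands) reference spectra, one per seabed class.
    Returns the index of the closest endmember for each pixel.
    """
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    e = endmembers / np.linalg.norm(endmembers, axis=1, keepdims=True)
    cosines = np.clip(p @ e.T, -1.0, 1.0)     # (n_pixels, n_classes)
    angles = np.arccos(cosines)               # spectral angle in radians
    return angles.argmin(axis=1)

# Toy usage: 4 bands, 3 seabed classes (e.g. sand, vegetation, rock).
endmembers = np.array([[0.30, 0.35, 0.40, 0.45],
                       [0.10, 0.25, 0.15, 0.05],
                       [0.20, 0.20, 0.20, 0.20]])
pixels = np.array([[0.29, 0.36, 0.41, 0.44],
                   [0.12, 0.24, 0.14, 0.06]])
print(spectral_angle_mapper(pixels, endmembers))  # -> [0 1]
```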

    Automated interpretation of benthic stereo imagery

    Autonomous benthic imaging reduces human risk and increases the amount of collected data. However, manually interpreting these high volumes of data is onerous, time-consuming and, in many cases, infeasible. The objective of this thesis is to improve the scientific utility of these large image datasets. Fine-scale terrain complexity is typically quantified by rugosity and measured by divers using chains and tape measures. This thesis proposes a new technique for measuring terrain complexity from 3D stereo image reconstructions, which is non-contact and can be calculated at multiple scales over large spatial extents. Using robots, terrain complexity can be measured beyond scuba depths without endangering humans. Results show that this approach is more robust, flexible and easily repeatable than traditional methods. The proposed terrain complexity features are combined with visual colour and texture descriptors and applied to classifying imagery. New multi-dataset feature selection methods are proposed for performing feature selection across multiple datasets and are shown to improve the overall classification performance. The results show that the most informative predictors of benthic habitat types are the new terrain complexity measurements. This thesis also presents a method that aims to reduce human labelling effort while maximising classification performance by combining pre-clustering with active learning. The results support the conclusion that utilising the structure of the unlabelled data in conjunction with uncertainty sampling can significantly reduce the number of labels required for a given level of accuracy. Typically only 0.00001–0.00007% of image data is annotated and processed for science purposes (20–50 points in 1–2% of the images). This thesis therefore proposes a framework that uses existing human-annotated point labels to train a superpixel-based automated classification system, which can extrapolate the classified results to every pixel across all the images of an entire survey.
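
    The final step, extrapolating sparse expert point labels to every pixel via superpixels, can be sketched with scikit-image and scikit-learn; the SLIC parameters, mean-colour features, and random-forest classifier are illustrative assumptions rather than the framework described in the thesis.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def extrapolate_point_labels(image, points, labels, n_segments=400):
    """Spread sparse point labels to every pixel via superpixels.

    image: (H, W, 3) float image in [0, 1]; points: list of (row, col)
    annotated pixel locations; labels: their habitat classes. Each
    superpixel is summarised by its mean colour; superpixels containing
    a labelled point train a classifier that then predicts a class for
    every superpixel, and hence for every pixel.
    """
    segments = slic(image, n_segments=n_segments, start_label=0)
    n_sp = segments.max() + 1
    feats = np.array([image[segments == s].mean(axis=0) for s in range(n_sp)])

    train_idx = [segments[r, c] for r, c in points]   # superpixels hit by points
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(feats[train_idx], labels)

    return clf.predict(feats)[segments]               # per-pixel class map
```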
