
    Semi-Supervised Adversarial Domain Adaptation for Seagrass Detection Using Multispectral Images in Coastal Areas

    Seagrass forms the basis for critically important marine ecosystems. Previously, we implemented a deep convolutional neural network (CNN) model to detect seagrass in multispectral satellite images of three coastal habitats in northern Florida. However, a deep CNN model trained at one location usually does not generalize to other locations due to data distribution shifts. In this paper, we developed a semi-supervised domain adaptation method to generalize a trained deep CNN model to other locations for seagrass detection. First, we utilized a generative adversarial network loss to align the marginal data distributions of the source and target domains using unlabeled data from both domains. Second, we used a few labeled samples from the target domain to align class-specific data distributions between the two domains, based on the contrastive semantic alignment loss. We achieved the best results in 28 out of 36 scenarios as compared to other state-of-the-art domain adaptation methods.
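The contrastive semantic alignment idea in this abstract can be illustrated with a toy numpy function (a minimal sketch, not the paper's implementation; the function name and margin value are hypothetical). Same-class cross-domain feature pairs are pulled together while different-class pairs are pushed apart up to a margin:

```python
import numpy as np

def csa_loss(src_feats, src_labels, tgt_feats, tgt_labels, margin=1.0):
    """Toy contrastive semantic alignment loss over all cross-domain pairs:
    same-class pairs contribute their squared distance (pull together);
    different-class pairs contribute a squared hinge on the margin (push apart)."""
    loss, n_pairs = 0.0, 0
    for xs, ys in zip(src_feats, src_labels):
        for xt, yt in zip(tgt_feats, tgt_labels):
            d = np.linalg.norm(xs - xt)
            if ys == yt:                       # semantic alignment term
                loss += 0.5 * d ** 2
            else:                              # separation term
                loss += 0.5 * max(0.0, margin - d) ** 2
            n_pairs += 1
    return loss / n_pairs
```

The loss is zero when same-class pairs coincide and different-class pairs are already separated by at least the margin; in the papers this term is minimized jointly with the adversarial marginal-alignment loss.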

    Deep Learning Approaches for Seagrass Detection in Multispectral Imagery

    Seagrass forms the basis for critically important marine ecosystems. Seagrass is an important factor in balancing marine ecological systems, and it is of great interest to monitor its distribution in different parts of the world. Remote sensing imagery is considered an effective data modality with which seagrass monitoring and quantification can be performed remotely. Traditionally, researchers utilized multispectral satellite images to map seagrass manually. Automatic machine learning techniques, especially deep learning algorithms, recently achieved state-of-the-art performances in many computer vision applications. This dissertation presents a set of deep learning models for seagrass detection in multispectral satellite images. It also introduces novel domain adaptation approaches to adapt the models for new locations and for temporal image series. In Chapter 3, I compare a deep capsule network (DCN) with a deep convolutional neural network (DCNN) for seagrass detection in high-resolution multispectral satellite images. These methods are tested on three satellite images in Florida coastal areas and achieve comparable performance. In addition, I also propose a few-shot deep learning strategy to transfer knowledge learned by the DCN from one location to others for seagrass detection. In Chapter 4, I develop a semi-supervised domain adaptation method to generalize a trained DCNN model to multiple locations for seagrass detection. First, the model utilizes a generative adversarial network (GAN) to align the marginal distribution of data in the source domain to that in the target domain using unlabeled data from both domains. Second, it uses a few labeled samples from the target domain to align class-specific data distributions between the two. The model achieves the best results in 28 out of 36 scenarios as compared to other state-of-the-art domain adaptation methods.
In Chapter 5, I develop a semantic segmentation method for seagrass detection in multispectral time-series images. First, I train a state-of-the-art image segmentation method using an active learning approach with the DCNN classifier in the loop. Then, I develop an unsupervised domain adaptation (UDA) algorithm to detect seagrass across temporal images. I also extend our unsupervised domain adaptation work for seagrass detection across locations. In Chapter 6, I present an automated bathymetry estimation model based on multispectral satellite images. Bathymetry refers to the depth of the ocean floor and plays a predominant role in identifying marine species in seawater. Accurate bathymetry information for coastal areas will facilitate seagrass detection by reducing false positives, because seagrass usually does not grow beyond a certain depth. However, bathymetry information for most parts of the world is obsolete or missing. Traditional bathymetry measurement systems require extensive manual effort. I utilize an ensemble machine learning-based approach to estimate bathymetry based on a few in-situ sonar measurements and evaluate the proposed model at three coastal locations in Florida.
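The ensemble bathymetry regression described for Chapter 6 can be sketched with a simple bagging scheme (a minimal numpy illustration under assumed details, not the dissertation's model; the band-ratio feature, function names, and ensemble size are hypothetical). Each member is an ordinary least-squares regressor fit on a bootstrap resample of the sonar measurements, and predictions are averaged:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_bagged_linear(X, y, n_models=25):
    """Bag ordinary least-squares regressors over bootstrap resamples
    of (feature, depth) training pairs."""
    n = len(X)
    Xb = np.column_stack([X, np.ones(n)])      # append a bias column
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, n, n)            # bootstrap sample with replacement
        w, *_ = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)
        models.append(w)
    return models

def predict_bagged(models, X):
    """Average the member predictions (the ensemble estimate)."""
    Xb = np.column_stack([X, np.ones(len(X))])
    return np.mean([Xb @ w for w in models], axis=0)
```

In practice the feature `X` might hold per-pixel spectral quantities (e.g. a log band ratio) and `y` the in-situ sonar depths; the bagging structure is what makes this an ensemble.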

    Flood Detection Using Multi-Modal and Multi-Temporal Images: A Comparative Study

    Natural disasters such as flooding can severely affect human life and property. To provide rescue through an emergency response team, we need an accurate flooding assessment of the affected area after the event. Traditionally, obtaining an accurate estimate of a flooded area requires a lot of human resources. In this paper, we compared several traditional machine learning approaches for flood detection, including multi-layer perceptron (MLP), support vector machine (SVM), and deep convolutional neural network (DCNN), with recent domain adaptation-based approaches on a multi-modal and multi-temporal image dataset. Specifically, we used SPOT-5 and RADAR images from the flood event that occurred in November 2000 in Gloucester, UK. Experimental results show that the domain adaptation-based approach, semi-supervised domain adaptation (SSDA) with 20 labeled data samples, achieved a slightly better area under the precision-recall (PR) curve (AUC) of 0.9173 and F1 score of 0.8846 than the traditional machine learning approaches. However, SSDA required much less labor for ground-truth labeling and should be recommended in practice.
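The F1 score reported above combines precision and recall as their harmonic mean; a short helper makes the relationship concrete (the function name is illustrative, not from the paper):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts:
    F1 is the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

For example, 8 true positives with 2 false positives and 2 false negatives yields precision, recall, and F1 all equal to 0.8; the PR-curve AUC quoted in the abstract summarizes precision/recall trade-offs across all decision thresholds rather than at one operating point.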

    Semi-supervised segmentation for coastal monitoring seagrass using RPA imagery

    Intertidal seagrass plays a vital role in estimating the overall health and dynamics of coastal environments due to its interaction with tidal changes. However, most seagrass habitats around the globe have been in steady decline due to human impacts, disturbing the already delicate balance in the environmental conditions that sustain seagrass. Miniaturization of multi-spectral sensors has facilitated very high resolution mapping of seagrass meadows, which significantly improves the potential for ecologists to monitor changes. In this study, two analytical approaches used for classifying intertidal seagrass habitats are compared: Object-based Image Analysis (OBIA) and Fully Convolutional Neural Networks (FCNNs). Both methods produce pixel-wise classifications in order to create segmented maps. FCNNs are an emerging set of algorithms within Deep Learning. Conversely, OBIA has been a prominent solution within this field, with many studies leveraging in-situ data and multiresolution segmentation to create habitat maps. This work demonstrates the utility of FCNNs in a semi-supervised setting to map seagrass and other coastal features from an optical drone survey conducted at Budle Bay, Northumberland, England. Semi-supervised learning is also an emerging area within Deep Learning, with the practical benefit of achieving state-of-the-art results using only subsets of labelled data. This is especially beneficial for remote sensing applications, where in-situ data is an expensive commodity. Our results show that FCNNs have performance comparable with the standard OBIA method used by ecologists.
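One common semi-supervised mechanism of the kind this abstract alludes to is pseudo-labeling: a model trained on the labelled subset assigns labels to unlabeled pixels it is confident about, enlarging the training pool. Below is a minimal numpy sketch using a nearest-centroid stand-in for the FCNN (all names and the margin-based confidence rule are illustrative assumptions, not the study's method):

```python
import numpy as np

def pseudo_label(X_lab, y_lab, X_unlab, margin=0.5):
    """One round of pseudo-labeling with a nearest-centroid classifier.
    An unlabeled sample is kept only if the gap between its best and
    second-best centroid distance exceeds `margin` (a confidence proxy)."""
    classes = np.unique(y_lab)
    centroids = np.stack([X_lab[y_lab == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X_unlab[:, None, :] - centroids[None, :, :], axis=2)
    pred = classes[d.argmin(axis=1)]
    sorted_d = np.sort(d, axis=1)
    confident = (sorted_d[:, 1] - sorted_d[:, 0]) > margin
    X_new = np.concatenate([X_lab, X_unlab[confident]])
    y_new = np.concatenate([y_lab, pred[confident]])
    return X_new, y_new
```

Ambiguous samples (nearly equidistant from two class centroids) are left unlabeled rather than risk polluting the training set, which is the key safeguard in pseudo-labeling schemes.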

    Improving accuracy and efficiency in seagrass detection using state-of-the-art AI techniques

    Seagrasses provide a wide range of ecosystem services in coastal marine environments. Despite their ecological and economic importance, these species are declining because of human impact. This decline has driven the need for monitoring and mapping to estimate the overall health and dynamics of seagrasses in coastal environments, often based on underwater images. However, seagrass detection from underwater digital images is not a trivial task; it requires taxonomic expertise and is time-consuming and expensive. Recently, automatic approaches based on deep learning have revolutionised object detection performance in many computer vision applications, and there has been interest in applying this to automated seagrass detection from imagery. Deep learning-based techniques reduce the need for the handcrafted feature extraction by domain experts that machine learning-based techniques require. This study presents a YOLOv5-based one-stage detector and an EfficientDet-D7-based two-stage detector for detecting seagrass, in this case Halophila ovalis, one of the most widely distributed seagrass species. The EfficientDet-D7-based seagrass detector achieves the highest mAP of 0.484 on the ECUHO-2 dataset and mAP of 0.354 on the ECUHO-1 dataset, which are about 7% and 5% better than the state-of-the-art Halophila ovalis detection performance on those datasets, respectively. The proposed YOLOv5-based detector achieves average inference times of 0.077 s and 0.043 s on those datasets, respectively, which are much lower than those of the state-of-the-art approach.
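The mAP figures quoted here are built on intersection-over-union (IoU) between predicted and ground-truth boxes, the standard matching criterion in object detection. A minimal helper shows the computation (illustrative only; not code from the study):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).
    A prediction typically counts as a true positive when IoU with a
    ground-truth box exceeds a threshold such as 0.5."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

Average precision is then the area under the precision-recall curve traced as the detector's confidence threshold varies, and mAP averages that over classes (here a single class, Halophila ovalis).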

    Impact of Atmospheric Correction on Classification and Quantification of Seagrass Density from WorldView-2 Imagery

    Mapping the seagrass distribution and density in the underwater landscape can improve global Blue Carbon estimates. However, atmospheric absorption and scattering introduce errors in space-based sensors’ retrieval of sea surface reflectance, affecting seagrass presence, density, and above-ground carbon (AGCseagrass) estimates. This study assessed atmospheric correction’s impact on mapping seagrass using WorldView-2 satellite imagery from Saint Joseph Bay, Saint George Sound, and Keaton Beach in Florida, USA. Coincident in situ measurements of water-leaving radiance (Lw), optical properties, and seagrass leaf area index (LAI) were collected. Seagrass classification and the retrieval of LAI were compared after empirical line height (ELH) and dark-object subtraction (DOS) methods were used for atmospheric correction. DOS left residual brightness in the blue and green bands but had minimal impact on the seagrass classification accuracy. However, the brighter reflectance values reduced LAI retrievals by up to 50% compared to ELH-corrected images and ground-based observations. This study offers a potential correction for LAI underestimation due to incomplete atmospheric correction, enhancing the retrieval of seagrass density and above-ground Blue Carbon from WorldView-2 imagery without in situ observations for accurate atmospheric interference correction.
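Dark-object subtraction, the simpler of the two corrections compared above, assumes the darkest pixels in a scene should have near-zero reflectance, so any signal they carry is attributed to atmospheric path radiance and subtracted per band. A toy numpy version (a sketch under that assumption; the percentile-based dark-object estimate and function name are illustrative, not the study's processing chain):

```python
import numpy as np

def dark_object_subtraction(bands, percentile=0.1):
    """Per-band DOS: estimate the path-radiance offset as a low
    percentile of each band (a proxy for the scene's darkest object),
    subtract it, and clip negative values to zero."""
    corrected = np.empty_like(bands, dtype=float)
    for i, band in enumerate(bands):
        dark = np.percentile(band, percentile)
        corrected[i] = np.clip(band - dark, 0, None)
    return corrected
```

The residual blue/green brightness the study reports is exactly the failure mode of this assumption: if the chosen dark object is not truly black in those bands, part of the haze signal survives the subtraction and biases downstream LAI retrievals.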

    Providing a Framework for Seagrass Mapping in United States Coastal Ecosystems Using High Spatial Resolution Satellite Imagery

    Seagrasses have been widely recognized for their ecosystem services, but traditional seagrass monitoring approaches emphasizing ground and aerial observations are costly, time-consuming, and lack standardization across datasets. This study leveraged satellite imagery from Maxar's WorldView-2 and WorldView-3 high spatial resolution, commercial satellite platforms to provide a consistent classification approach for monitoring seagrass at eleven study areas across the continental United States, representing geographically, ecologically, and climatically diverse regions. A single satellite image was selected at each of the eleven study areas to correspond temporally to reference data representing seagrass coverage and was classified into four general classes: land, seagrass, no seagrass, and no data. Satellite-derived seagrass coverage was then compared to reference data using either balanced agreement, the Mann-Whitney U test, or the Kruskal-Wallis test, depending on the format of the reference data used for comparison. Balanced agreement ranged from 58% to 86%, with better agreement between reference- and satellite-indicated seagrass absence (specificity ranged from 88% to 100%) than between reference- and satellite-indicated seagrass presence (sensitivity ranged from 17% to 73%). Results of the Mann-Whitney U and Kruskal-Wallis tests demonstrated that satellite-indicated seagrass percentage cover had moderate to large correlations with reference-indicated seagrass percentage cover, indicative of moderate to strong agreement between datasets. Satellite classification performed best in areas of dense, continuous seagrass compared to areas of sparse, discontinuous seagrass and provided a suitable spatial representation of seagrass distribution within each study area. 
This study demonstrates that the same methods can be applied across scenes spanning varying seagrass bioregions, atmospheric conditions, and optical water types, which is a significant step toward developing a consistent, operational approach for mapping seagrass coverage at national and global scales. Accompanying this manuscript are instructional videos describing the processing workflow, including data acquisition, data processing, and satellite image classification. These instructional videos may serve as a management tool to complement field- and aerial-based mapping efforts for monitoring seagrass ecosystems.

    A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery

    Semantic segmentation (classification) of Earth Observation imagery is a crucial task in remote sensing. This paper presents a comprehensive review of technical factors to consider when designing neural networks for this purpose. The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and transformer models, discussing prominent design patterns for these ANN families and their implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered. These include methods for image normalization and chipping, as well as strategies for addressing data imbalance in training samples, and techniques for overcoming limited data, including augmentation techniques, transfer learning, and domain adaptation. By encompassing both the technical aspects of neural network design and the data-related considerations, this review provides researchers and practitioners with a comprehensive and up-to-date understanding of the factors involved in designing effective neural networks for semantic segmentation of Earth Observation imagery.
    Comment: 145 pages with 32 figures
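Two of the pre-processing steps the review covers, chipping and normalization, are mechanical enough to sketch directly (a generic numpy illustration, not code from the review; chip size, stride, and function names are arbitrary choices here):

```python
import numpy as np

def chip_image(img, chip=4, stride=4):
    """Cut an (H, W, C) scene into fixed-size chips, the unit most
    segmentation networks train on; a stride smaller than `chip`
    would produce overlapping chips."""
    h, w = img.shape[:2]
    chips = []
    for r in range(0, h - chip + 1, stride):
        for c in range(0, w - chip + 1, stride):
            chips.append(img[r:r + chip, c:c + chip])
    return np.stack(chips)

def normalize(chips):
    """Per-band standardization: zero mean and unit variance computed
    across all chips, applied identically at train and inference time."""
    mean = chips.mean(axis=(0, 1, 2), keepdims=True)
    std = chips.std(axis=(0, 1, 2), keepdims=True)
    return (chips - mean) / (std + 1e-8)
```

Computing the normalization statistics once over the training chips, and reusing them at inference, is what keeps the input distribution consistent between the two phases.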

    Vertical Artifacts in High-Resolution WorldView-2 and Worldview-3 Satellite Imagery of Aquatic Systems

    Satellite image artefacts are features that appear in an image but not in the original imaged object and can negatively impact the interpretation of satellite data. Vertical artefacts are linear features oriented in the along-track direction of an imaging system and can present as either banding or striping; banding refers to features with a consistent width, and striping to features with inconsistent widths. This study used high-resolution data from DigitalGlobe's (now Maxar) WorldView-3 satellite collected at Lake Okeechobee, Florida (FL), on 30 August 2017. This study investigated the impact of vertical artefacts on both at-sensor radiance and a spectral index for an aquatic target, as WorldView-3 was primarily designed as a land sensor. At-sensor radiance measured by six of WorldView-3's eight spectral bands exhibited banding, more specifically referred to as non-uniformity, at a width corresponding to the multispectral detector sub-arrays that comprise the WorldView-3 focal plane. At-sensor radiance measured by the remaining two spectral bands, red and near-infrared (NIR) #1, exhibited striping. Striping in these spectral bands can be attributed to their time delay integration (TDI) settings at the time of image acquisition, which were optimized for land. The impact of vertical striping on a spectral index leveraging the red, red edge, and NIR spectral bands, referred to here as the NIR maximum chlorophyll index (MCINIR), was investigated. Temporally similar imagery from the European Space Agency's Sentinel-3 and Sentinel-2 satellites was used as a baseline reference of expected chlorophyll values across Lake Okeechobee, as neither Sentinel-3 nor Sentinel-2 imagery showed striping. Striping was highly prominent in the MCINIR product generated using WorldView-3 imagery, as noise in the at-sensor radiance exceeded any signal of chlorophyll in the image. 
Adjusting the image acquisition parameters for future tasking of WorldView-3 or the functionally similar WorldView-2 satellite may alleviate these artefacts. To test this, an additional WorldView-3 image was acquired at Lake Okeechobee, FL, on 26 May 2021 in which the TDI settings and scan line rate were adjusted to improve the signal-to-noise ratio. While some evidence of non-uniformity remained, striping was no longer noticeable in the MCINIR product. Future image tasking over aquatic targets should employ these updated image acquisition parameters. Since the red and NIR #1 spectral bands are critical for inland and coastal water applications, archived images not collected using these updated settings may be limited in their potential for analysis of aquatic variables derived from these two spectral bands.
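Maximum chlorophyll indices of the kind studied here are typically computed as a baseline subtraction: the red-edge radiance minus the value of a straight line interpolated between the red and NIR bands at the red-edge wavelength. The sketch below assumes that general form with approximate WorldView band-center wavelengths; the exact formulation and constants used in the paper may differ:

```python
def mci(l_red, l_red_edge, l_nir,
        wl_red=660.0, wl_red_edge=724.0, wl_nir=832.0):
    """Baseline-subtraction chlorophyll index: height of the red-edge
    radiance above a linear baseline drawn between the red and NIR
    radiances, evaluated at the red-edge wavelength. Wavelengths are
    approximate band centers, supplied here as assumed defaults."""
    frac = (wl_red_edge - wl_red) / (wl_nir - wl_red)
    baseline = l_red + frac * (l_nir - l_red)
    return l_red_edge - baseline
```

Because the index differences three bands, uncorrelated striping noise in any one of them (here the red and NIR #1 bands) propagates directly into the product, which is why the 2017 MCINIR image was dominated by stripes while the better-exposed 2021 acquisition was not.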