4 research outputs found

    Image Semantic Segmentation Based on High-Resolution Networks for Monitoring Agricultural Vegetation

    This article considers recognition of the state of agricultural vegetation from aerial photographs at various spatial resolutions. The proposed approach is based on semantic segmentation using convolutional neural networks. Two variants of the High-Resolution Network (HRNet) architecture are described and used. These neural networks were trained and applied to aerial images of agricultural fields. In our experiments, the recognition accuracy for four land classes (soil, healthy vegetation, diseased vegetation, and other objects) was about 93-94%.
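    The reported figure is a per-pixel classification accuracy over the four land classes. A minimal sketch of that metric, with hypothetical class indices and toy masks (none of these values come from the paper):

    ```python
    import numpy as np

    # Hypothetical index assignment for the four land classes in the abstract.
    CLASSES = {0: "soil", 1: "healthy vegetation", 2: "diseased vegetation", 3: "other"}

    def pixel_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
        """Fraction of pixels whose predicted class matches the reference mask."""
        return float((pred == truth).mean())

    # Toy 4x4 segmentation masks: 15 of 16 pixels agree.
    truth = np.array([[0, 0, 1, 1],
                      [0, 1, 1, 2],
                      [2, 2, 3, 3],
                      [0, 1, 2, 3]])
    pred = truth.copy()
    pred[0, 0] = 3  # one misclassified pixel
    acc = pixel_accuracy(pred, truth)  # 15/16 = 0.9375
    ```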

    Unsupervised domain adaptation semantic segmentation of high-resolution remote sensing imagery with invariant domain-level prototype memory

    Semantic segmentation is a key technique in the automatic interpretation of high-resolution remote sensing (HRS) imagery and has drawn much attention in the remote sensing community. Deep convolutional neural networks (DCNNs) have been successfully applied to the HRS imagery semantic segmentation task due to their hierarchical representation ability. However, the heavy dependency on large amounts of densely annotated training data and the sensitivity to variation in data distribution severely restrict the potential application of DCNNs to the semantic segmentation of HRS imagery. This study proposes a novel unsupervised domain adaptation semantic segmentation network (MemoryAdaptNet) for the semantic segmentation of HRS imagery. MemoryAdaptNet constructs an output-space adversarial learning scheme to bridge the distribution discrepancy between the source and target domains and to narrow the influence of domain shift. Specifically, we embed an invariant feature memory module to store invariant domain-level context information, because the features obtained from adversarial learning tend to represent only the variant features of the current limited inputs. A category attention-driven invariant domain-level context aggregation module then fuses the stored features with the current pseudo-invariant features to further augment the pixel representations. An entropy-based pseudo-label filtering strategy is used to update the memory module with high-confidence pseudo-invariant features of the current target images. Extensive experiments on three cross-domain tasks indicate that our proposed MemoryAdaptNet is remarkably superior to the state-of-the-art methods. (17 pages, 12 figures, and 8 tables.)
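    The entropy-based filtering step can be illustrated in isolation: keep only pixels whose prediction entropy is low, and use their argmax labels as pseudo-labels. This is a minimal sketch of the general idea, not the paper's implementation; the threshold and toy probabilities are assumptions:

    ```python
    import numpy as np

    def entropy_filter(probs: np.ndarray, thresh: float) -> np.ndarray:
        """probs: (C, H, W) softmax output. Returns a boolean mask of
        high-confidence pixels whose normalized entropy is below `thresh`."""
        eps = 1e-12
        ent = -(probs * np.log(probs + eps)).sum(axis=0)
        ent /= np.log(probs.shape[0])  # normalize entropy to [0, 1]
        return ent < thresh

    # Toy 3-class map over a 1x2 image: one confident pixel, one uncertain pixel.
    probs = np.array([[[0.98, 0.34]],
                      [[0.01, 0.33]],
                      [[0.01, 0.33]]])
    mask = entropy_filter(probs, thresh=0.5)
    # Pseudo-labels would update the memory module only where mask is True.
    pseudo_labels = probs.argmax(axis=0)
    ```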

    Modelling tree biomass using direct and additive methods with point cloud deep learning in a temperate mixed forest

    Airborne laser scanning (ALS) data have been widely used for modelling total aboveground tree biomass (AGB); however, less research has focused on estimating specific tree biomass components (wood, branches, bark, and foliage). Knowledge about these biomass components is essential for carbon accounting, understanding forest nutrient cycling, and other applications. In this study, we compare additive AGB estimation (the sum of estimated components) with direct AGB estimation using deep neural network (DNN) and random forest (RF) models. We utilise two point cloud DNNs: the point-based Dynamic Graph Convolutional Neural Network (DGCNN) and the Octree-based Convolutional Neural Network (OCNN). DNN and RF models were trained on a dataset comprising 2336 sample plots from a mixed temperate forest in New Brunswick, Canada. Results indicate that additive AGB models perform similarly to direct models in terms of coefficient of determination (R2) and root-mean-square error (RMSE), and reduce the mean absolute percentage error (MAPE) by 22% on average. Compared to RF, the DNNs provided a small improvement in performance, with OCNN explaining 5% more variation in the data (R2 = 0.76) and reducing MAPE by 20% on average. Overall, this study showcases the effectiveness of additive tree AGB models and highlights the potential of DNNs for enhanced AGB estimation. To further improve DNN performance, we recommend using larger training datasets, implementing hyperparameter optimization, and incorporating additional data such as multispectral imagery.
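    The additive scheme and the MAPE comparison metric are simple to state concretely. The sketch below assumes hypothetical per-plot component predictions (the values are invented for illustration, not taken from the study):

    ```python
    import numpy as np

    # Hypothetical component predictions (e.g. Mg/ha) for two sample plots,
    # one array per component model.
    components = {
        "wood":     np.array([120.0, 95.0]),
        "bark":     np.array([ 12.0,  9.5]),
        "branches": np.array([ 18.0, 14.0]),
        "foliage":  np.array([  6.0,  4.5]),
    }

    # Additive AGB estimate: sum of the separately estimated components.
    agb_additive = sum(components.values())

    def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        """Mean absolute percentage error, one of the metrics used to
        compare additive and direct models."""
        return float(np.abs((y_true - y_pred) / y_true).mean() * 100)

    y_true = np.array([150.0, 125.0])
    err = mape(y_true, agb_additive)  # toy value, about 2.8%
    ```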

    Multi-Scale Context Aggregation for Semantic Segmentation of Remote Sensing Images

    The semantic segmentation of remote sensing images (RSIs) is important in a variety of applications. Conventional encoder-decoder convolutional neural networks (CNNs) use cascaded pooling operations to aggregate semantic information, which results in a loss of localization accuracy and spatial detail. To overcome these limitations, we introduce the high-resolution network (HRNet) to produce high-resolution features without a decoding stage. Moreover, we enhance the low-to-high features extracted from the different branches separately to strengthen the embedding of scale-related contextual information. The low-resolution features contain more semantic information and have a small spatial size; thus, they are utilized to model long-range spatial correlations. The high-resolution branches are enhanced by an adaptive spatial pooling (ASP) module that aggregates more local context. By combining these context aggregation designs across levels, the resulting architecture can exploit spatial context at both global and local scales. Experimental results on two RSI datasets show that our approach significantly improves accuracy relative to commonly used CNNs and achieves state-of-the-art performance.
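    The core idea of aggregating context at several spatial scales can be sketched with plain average pooling over a single feature map. This is a generic illustration of multi-scale pooling, not the paper's ASP module; the window sizes are assumptions:

    ```python
    import numpy as np

    def avg_pool(x: np.ndarray, k: int) -> np.ndarray:
        """Non-overlapping k x k average pooling of an (H, W) feature map
        (H and W assumed divisible by k)."""
        h, w = x.shape
        return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

    def multi_scale_context(feat: np.ndarray, scales=(1, 2, 4)) -> list:
        """Pool the same map at several window sizes. In a full model the
        pooled maps would be upsampled and fused with the high-resolution
        branch to mix local and global context."""
        return [avg_pool(feat, k) for k in scales]

    feat = np.arange(16, dtype=float).reshape(4, 4)
    ctx = multi_scale_context(feat)
    # ctx[2] is the global context: a single 1x1 average of the whole map.
    ```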