
    MACU-Net for Semantic Segmentation of Fine-Resolution Remotely Sensed Images

    Semantic segmentation of remotely sensed images plays an important role in land resource management, yield estimation, and economic assessment. U-Net, a deep encoder-decoder architecture, has been used frequently for image segmentation with high accuracy. In this Letter, we incorporate the multi-scale features generated by different layers of U-Net and design a multi-scale skip-connected and asymmetric-convolution-based U-Net (MACU-Net) for the segmentation of fine-resolution remotely sensed images. Our design has the following advantages: (1) the multi-scale skip connections combine and realign the semantic features contained in both low-level and high-level feature maps; (2) the asymmetric convolution block strengthens the feature representation and feature extraction capability of a standard convolution layer. Experiments conducted on two remotely sensed datasets captured by different satellite sensors demonstrate that the proposed MACU-Net outperforms U-Net, U-NetPPL, and U-Net 3+, among other benchmark approaches.
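    The asymmetric convolution block mentioned above can be pictured as a square 3x3 convolution reinforced by parallel 1x3 and 3x1 branches whose outputs are summed. Below is a minimal PyTorch-style sketch of that idea; the module name, channel handling, and the BatchNorm/ReLU choices are illustrative assumptions, not the authors' published code.

```python
import torch.nn as nn

class AsymmetricConvBlock(nn.Module):
    """Illustrative asymmetric convolution block (assumed layout): a square
    3x3 kernel is strengthened by parallel 1x3 and 3x1 branches, and the three
    responses are summed before normalisation and activation."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.square = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.hor = nn.Conv2d(in_ch, out_ch, kernel_size=(1, 3), padding=(0, 1))
        self.ver = nn.Conv2d(in_ch, out_ch, kernel_size=(3, 1), padding=(1, 0))
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Sum the square and asymmetric responses to enrich the 3x3 kernel.
        return self.act(self.bn(self.square(x) + self.hor(x) + self.ver(x)))
```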

    Tea Garden Detection from High-Resolution Imagery Using a Scene-Based Framework

    Tea cultivation has a long history in China, and it is one of the pillar industries of the Chinese agricultural economy. It is therefore necessary to map tea gardens for their ongoing management. However, previous studies have relied on fieldwork to achieve this task, which is time-consuming. In this paper, we propose a framework for mapping tea gardens from high-resolution remotely sensed imagery, including three scene-based methods: the bag-of-visual-words (BOVW) model, supervised latent Dirichlet allocation (sLDA), and an unsupervised convolutional neural network (UCNN). These methods can develop direct and holistic semantic representations for tea garden scenes composed of multiple sub-objects, and are thus more suitable than traditional pixel-based or object-based methods, which focus on the local characteristics of pixels or objects. In the experiments undertaken in this study, the three methods were tested on four datasets from Longyan (Oolong tea), Hangzhou (Longjing tea), and Puer (Puer tea). All three methods achieved good performance, both quantitatively and visually, and the UCNN outperformed the other methods. Moreover, it was found that the addition of textural features improved the accuracy of the BOVW and sLDA models, but had no effect on the UCNN.
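    As a concrete picture of one of the three scene-based methods, the sketch below builds a bag-of-visual-words representation: local descriptors from all scenes are clustered into a visual codebook with k-means, and each scene is then described by its normalised histogram of visual words. This is a generic BOVW sketch (scikit-learn); the vocabulary size, descriptor choice, and function names are assumptions rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

def bovw_histograms(descriptor_sets, n_words=200, random_state=0):
    """Bag-of-visual-words scene representation.
    `descriptor_sets` is a list of (n_i, d) arrays of local descriptors,
    one array per image scene (e.g. SIFT or raw patch features)."""
    codebook = KMeans(n_clusters=n_words, random_state=random_state)
    codebook.fit(np.vstack(descriptor_sets))           # learn the visual vocabulary
    histograms = []
    for desc in descriptor_sets:
        words = codebook.predict(desc)                 # assign descriptors to words
        hist, _ = np.histogram(words, bins=np.arange(n_words + 1))
        histograms.append(hist / max(hist.sum(), 1))   # normalised word frequencies
    return np.array(histograms)                        # (n_scenes, n_words)
```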

    Generalized differential morphological profiles for remote sensing image classification

    Differential morphological profiles (DMPs) are widely used for the spatial/structural feature extraction and classification of remote sensing images. They can be regarded as a shape spectrum, depicting the response of the image structures to structuring elements (SEs) of different scales and sizes. DMPs are defined as the differences of the morphological profiles (MPs) between consecutive scales. However, traditional DMPs can ignore discriminative information for features that span multiple scales in the profiles. To solve this problem, we propose scale-span differential profiles, i.e., generalized DMPs (GDMPs), to obtain the entire differential profile. GDMPs can describe the complete shape spectrum and measure the difference between arbitrary scales, which is more appropriate for representing the multiscale characteristics and complex landscapes of remote sensing image scenes. Subsequently, the random forest (RF) classifier is applied to interpret GDMPs, considering its robustness to high-dimensional data and its ability to evaluate the importance of variables. Meanwhile, the RF "out-of-bag" error can be used to quantify the importance of each channel of the GDMPs and select the most discriminative information in the entire profile. Experiments conducted on three well-known hyperspectral datasets as well as an additional WorldView-2 dataset are used to validate the effectiveness of GDMPs compared to traditional DMPs. The results are promising, as GDMPs significantly outperform traditional DMPs by more adequately exploiting the multiscale morphological information.
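    To make the contrast between DMPs and GDMPs concrete: the classical DMP differences only consecutive scales of the morphological profile, whereas the generalized version measures differences between arbitrary scale pairs. The numpy sketch below illustrates that distinction on a per-pixel profile array; it assumes the morphological profile has already been computed and is a simplified reading of the scale-span idea, not the paper's exact formulation.

```python
import numpy as np
from itertools import combinations

def dmp(profiles):
    """Classical DMPs: differences between consecutive scales.
    `profiles` holds the morphological profile, shape (n_pixels, n_scales)."""
    return np.diff(profiles, axis=1)                   # (n_pixels, n_scales - 1)

def gdmp(profiles):
    """Generalized DMPs (sketch): differences between every pair of scales,
    so structures responding across non-adjacent scales are not missed."""
    pairs = combinations(range(profiles.shape[1]), 2)
    return np.stack([profiles[:, j] - profiles[:, i] for i, j in pairs], axis=1)
```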

    Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images

    In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., image classification accuracy. From a feature representation point of view, a natural approach to this situation is to concatenate the spectral and spatial features into a single high-dimensional vector and then apply a dimension reduction technique directly on that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, so such concatenation does not efficiently exploit the complementary properties among different features, which should help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional representation of the original multiple features is still a challenging task. To address these issues, we propose a novel feature learning framework, i.e., a simultaneous spectral-spatial feature selection and extraction algorithm, for spectral-spatial feature representation and classification of hyperspectral images. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.
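    For contrast, the naive strategy the abstract argues against can be sketched in a few lines: stack the spectral and spatial features into one long vector, reduce it with a single projection, and classify. The Python sketch below uses PCA and an SVM purely as placeholders (the abstract names neither), to show how a single transform over the concatenation treats features with different physical meanings identically, which is the limitation the proposed joint selection/extraction method targets.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def concat_baseline(spectral, spatial, labels, n_components=30):
    """Naive concatenation baseline (illustrative, not the proposed method):
    spectral (n, d1) and spatial (n, d2) features are stacked, reduced with
    one PCA projection, and fed to a generic classifier."""
    X = np.hstack([spectral, spatial])                 # (n, d1 + d2)
    clf = make_pipeline(PCA(n_components=n_components), SVC(kernel="rbf"))
    return clf.fit(X, labels)
```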

    Spectral-spatial classification of hyperspectral data using spectral-domain local binary patterns

    Spectral-spatial feature classification is of great interest for hyperspectral images (HSI) with high spatial resolution. This paper presents a novel spectral-spatial classification method for improving hyperspectral image classification accuracy. Specifically, a new texture feature extraction algorithm that exploits spatial texture features from the spectral domain is proposed. It employs local binary patterns (LBPs) to extract image texture features and uses spectral information divergence (SID) to measure the differences in spectral information. The classifier adopted in this work is the support vector machine (SVM) because of its outstanding classification performance. Two real hyperspectral image datasets are used to test the performance of the proposed method. Our experimental results on real hyperspectral images indicate that the proposed framework can enhance the classification accuracy compared to traditional alternatives.
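    A rough pipeline in that spirit can be sketched as follows: compute an LBP texture map per band, pool the codes across bands, and feed the stacked spectral and texture features to an SVM. This is only a generic approximation (scikit-image/scikit-learn); the paper's SID-based measure of spectral differences is not reproduced here, and the pooling, parameters, and function names are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_spectral_svm(cube, train_mask, train_labels, radius=1, n_points=8):
    """Illustrative spectral-spatial classifier: per-band LBP codes are averaged
    into one texture channel, stacked with the spectral bands, and classified
    with an SVM. `cube` is (h, w, bands); `train_mask` is a boolean (h, w) map."""
    h, w, b = cube.shape
    texture = np.stack(
        [local_binary_pattern(cube[:, :, i], n_points, radius, method="uniform")
         for i in range(b)], axis=-1).mean(axis=-1, keepdims=True)
    X = np.concatenate([cube, texture], axis=-1).reshape(h * w, b + 1)
    return SVC(kernel="rbf").fit(X[train_mask.ravel()], train_labels)
```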

    Automatic Object-Oriented, Spectral-Spatial Feature Extraction Driven by Tobler’s First Law of Geography for Very High Resolution Aerial Imagery Classification

    Aerial image classification has become popular and has attracted extensive research efforts in recent decades. The main challenge lies in its very high spatial resolution but relatively insufficient spectral information. To this end, spatial-spectral feature extraction is a popular strategy for classification. However, parameter determination for such feature extraction is usually time-consuming and depends excessively on experience. In this paper, an automatic spatial feature extraction approach based on the cross-analysis of image raster and segmented vector data is proposed for the classification of very high spatial resolution (VHSR) aerial imagery. First, multi-resolution segmentation is used to generate strongly homogeneous image objects and extract the corresponding vectors. Then, to automatically explore the region of a ground target, two rules, derived from Tobler's First Law of Geography (TFL) and a topological relationship of the vector data, are integrated to constrain the extension of a region around a central object. Third, the shape and size of the extended region are described. A final classification map is obtained by a supervised classifier using shape, size, and spectral features. Experiments on three real VHSR aerial images (0.1 to 0.32 m) are conducted to evaluate the effectiveness and robustness of the proposed approach. Comparisons with state-of-the-art methods demonstrate the superiority of the proposed method in VHSR image classification.
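    The TFL-guided region extension in the second step can be pictured as constrained region growing over the segment adjacency graph: topologically adjacent segments are absorbed while they remain spectrally similar to the central object, so nearer and more alike objects join the extended region first. The sketch below is a hypothetical simplification; the similarity measure, threshold, and data structures are assumptions, not the paper's rules.

```python
import numpy as np

def grow_region(center_id, adjacency, mean_spectra, sim_thresh=0.1):
    """Extend a region around a central segment (illustrative only).
    `adjacency` maps a segment id to the set of ids of its topological
    neighbours; `mean_spectra` maps a segment id to its mean spectrum."""
    region, frontier = {center_id}, [center_id]
    centre = mean_spectra[center_id]
    while frontier:
        seg = frontier.pop()
        for nb in adjacency[seg] - region:
            # Tobler-style rule: adjacent and spectrally similar segments join.
            if np.linalg.norm(mean_spectra[nb] - centre) < sim_thresh:
                region.add(nb)
                frontier.append(nb)
    return region
```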