
    PSSA: PCA-domain superpixelwise singular spectral analysis for unsupervised hyperspectral image classification.

    Although supervised classification of hyperspectral images (HSI) has achieved success in remote sensing, its application in real scenarios is often constrained, mainly by insufficient or unavailable labelled data. As a result, unsupervised HSI classification based on data clustering is highly desirable, yet it generally suffers from high computational cost and low classification accuracy, especially on large datasets. To tackle these challenges, a novel unsupervised spatial-spectral HSI classification method is proposed. By combining entropy rate superpixel segmentation (ERS), superpixel-based principal component analysis (PCA), and PCA-domain 2D singular spectral analysis (SSA), both the efficacy and the efficiency of feature extraction are improved; anchor-based graph clustering (AGC) is then applied for effective classification. Experiments on three publicly available and five self-collected aerial HSI datasets fully demonstrate the efficacy of the proposed PCA-domain superpixelwise SSA (PSSA) method, with a gain of 15–20% in overall accuracy compared with several state-of-the-art methods. In addition, the HSI dataset we acquired is provided freely online.
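    The pipeline described in the abstract (superpixel segmentation, PCA, spatial filtering in the PCA domain, then clustering) can be sketched in a few lines. The following is a minimal illustrative approximation only, not the authors' implementation: SLIC stands in for ERS segmentation, a Gaussian filter for the PCA-domain 2D-SSA step, and k-means for anchor-based graph clustering; the function name and parameter values are hypothetical.

```python
# Rough sketch of a superpixelwise PCA + clustering pipeline for unsupervised
# HSI classification. Substitutions vs. the paper: SLIC (not ERS), Gaussian
# smoothing (not 2D-SSA), k-means (not anchor-based graph clustering).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from skimage.segmentation import slic
from scipy.ndimage import gaussian_filter

def pssa_like_pipeline(hsi, n_superpixels=200, n_components=10, n_classes=9):
    """hsi: (H, W, B) hyperspectral cube; returns an (H, W) cluster-label map."""
    H, W, B = hsi.shape

    # 1) Superpixel segmentation on a normalised 3-band false-colour composite
    #    (stand-in for entropy rate superpixel segmentation, ERS).
    rgb_like = hsi[..., np.linspace(0, B - 1, 3, dtype=int)].astype(float)
    rgb_like = (rgb_like - rgb_like.min()) / (rgb_like.ptp() + 1e-12)
    segments = slic(rgb_like, n_segments=n_superpixels, compactness=10.0)

    # 2) Spectral dimensionality reduction with PCA.
    pca = PCA(n_components=n_components)
    feats = pca.fit_transform(hsi.reshape(-1, B)).reshape(H, W, n_components)

    # 3) Spatial smoothing in the PCA domain (crude stand-in for 2D-SSA).
    for c in range(n_components):
        feats[..., c] = gaussian_filter(feats[..., c], sigma=1.0)

    # 4) Superpixelwise pooling: replace each pixel feature with its superpixel mean.
    for sp in np.unique(segments):
        mask = segments == sp
        feats[mask] = feats[mask].mean(axis=0)

    # 5) Clustering of the pooled features (k-means stands in for AGC).
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(
        feats.reshape(-1, n_components))
    return labels.reshape(H, W)
```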

    Ranked List Loss for Deep Metric Learning

    The objective of deep metric learning (DML) is to learn embeddings that capture semantic similarity and dissimilarity among data points. Existing pairwise or tripletwise loss functions used in DML are known to suffer from slow convergence because of the large proportion of trivial pairs or triplets as the model improves. To address this, ranking-motivated structured losses have recently been proposed to incorporate multiple examples and exploit the structured information among them; they converge faster and achieve state-of-the-art performance. In this work, we unveil two limitations of existing ranking-motivated structured losses and propose a novel ranked list loss to solve both of them. First, given a query, only a fraction of data points is incorporated to build the similarity structure; consequently, some useful examples are ignored and the structure is less informative. To address this, we propose to build a set-based similarity structure by exploiting all instances in the gallery. The learning setting can be interpreted as few-shot retrieval: given a mini-batch, every example is iteratively used as a query, and the rest compose the gallery to search, i.e., the support set in the few-shot setting. The remaining examples are split into a positive set and a negative set. For every mini-batch, the learning objective of the ranked list loss is to make the query closer to the positive set than to the negative set by a margin. Second, previous methods aim to pull positive pairs as close as possible in the embedding space; as a result, the intraclass data distribution tends to be extremely compressed. In contrast, we propose to learn a hypersphere for each class in order to preserve useful similarity structure inside it, which functions as regularisation. Extensive experiments demonstrate the superiority of our proposal in comparison with state-of-the-art methods. Comment: Accepted to T-PAMI; please refer to IEEE Xplore for the official version. Fine-grained image retrieval task. Our source code is available online: https://github.com/XinshaoAmosWang/Ranked-List-Loss-for-DM
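    The per-query objective described above (pull positives inside a per-class hypersphere, push negatives beyond a margin) can be sketched as follows. This is an illustrative PyTorch reading with assumed parameter names (`alpha` as the negative-pair boundary, `margin` separating it from the positive boundary) and a simplified violation-based weighting of negatives; it is not the authors' released code, which is linked in the repository above.

```python
# Minimal sketch of a ranked-list-style loss: for each query in the mini-batch,
# positives are pulled inside a hypersphere of radius (alpha - margin) and
# negatives are pushed beyond alpha, with harder negatives weighted more.
import torch
import torch.nn.functional as F

def ranked_list_loss(embeddings, labels, alpha=1.2, margin=0.4):
    """embeddings: (N, D) mini-batch embeddings; labels: (N,) integer class labels."""
    emb = F.normalize(embeddings, dim=1)               # L2-normalised embeddings
    dist = torch.cdist(emb, emb)                       # (N, N) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # (N, N) same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=emb.device)

    total = emb.new_zeros(())
    for i in range(len(labels)):                       # every example acts as a query
        pos = dist[i][same[i] & ~eye[i]]               # gallery positives
        neg = dist[i][~same[i]]                        # gallery negatives

        # Pull non-trivial positives inside the hypersphere of radius alpha - margin.
        pos_loss = F.relu(pos - (alpha - margin)).sum()

        # Push non-trivial negatives beyond alpha; weight larger violations more.
        neg_viol = F.relu(alpha - neg)
        weights = torch.exp(neg_viol).detach()
        neg_loss = (weights * neg_viol).sum() / weights.sum().clamp(min=1e-12)

        total = total + pos_loss + neg_loss
    return total / len(labels)
```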