66 research outputs found

    SULoRA: Subspace Unmixing with Low-Rank Attribute Embedding for Hyperspectral Data Analysis

    Get PDF
    To support high-level analysis of spaceborne imaging spectroscopy (hyperspectral) imagery, spectral unmixing has been gaining significance in recent years. However, the inevitable spectral variability, caused by changes in illumination and topography, atmospheric effects, and so on, makes it difficult to accurately estimate abundance maps in spectral unmixing. Classical unmixing methods, e.g., the linear mixing model (LMM) and the extended linear mixing model (ELMM), fail to handle this issue robustly, particularly in the face of complex spectral variability. To this end, we propose a subspace-based unmixing model with a low-rank learning strategy, called subspace unmixing with low-rank attribute embedding (SULoRA), which is robust against spectral variability in the inverse problem of hyperspectral unmixing. Unlike previous approaches that unmix the spectral signatures directly in the original space, SULoRA is a general subspace unmixing framework that jointly estimates subspace projections and abundance maps in order to find a ‘raw’ subspace that is more suitable for carrying out the unmixing procedure. More importantly, we model this ‘raw’ subspace with a low-rank attribute embedding. By projecting the original data into a low-rank subspace, SULoRA can effectively address various spectral variabilities in spectral unmixing. Furthermore, we adopt an alternating direction method of multipliers (ADMM)-based algorithm to solve the resulting optimization problem. Extensive experiments on synthetic and real datasets demonstrate the superiority and effectiveness of the proposed method in comparison with previous state-of-the-art methods.
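
    As a rough illustration of the kind of pipeline described above (a minimal sketch, not the authors' exact SULoRA objective or solver), the snippet below projects mixed pixels onto a low-rank subspace and then estimates nonnegative abundances per pixel with an ADMM loop; the endmember library, subspace dimension, and hyperparameters are all illustrative assumptions.

```python
# Minimal sketch (not the exact SULoRA model): unmix each pixel in a low-rank
# subspace using ADMM-based nonnegative least squares. All data and settings
# below are illustrative assumptions.
import numpy as np

def lowrank_projection(X, k):
    """Project spectra (bands x pixels) onto the top-k left singular subspace."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    P = U[:, :k]                       # bands x k orthonormal basis
    return P.T @ X, P                  # projected data (k x pixels) and projector

def admm_nnls(E, y, rho=1.0, n_iter=100):
    """ADMM for min_a ||E a - y||^2  s.t.  a >= 0 (per-pixel abundance estimation)."""
    m = E.shape[1]
    z = np.zeros(m)
    u = np.zeros(m)
    G = np.linalg.inv(E.T @ E + rho * np.eye(m))
    for _ in range(n_iter):
        a = G @ (E.T @ y + rho * (z - u))    # least-squares step
        z = np.maximum(a + u, 0.0)           # projection onto the nonnegative orthant
        u = u + a - z                        # dual update
    return z

# Toy example: 50 bands, 3 endmembers, 200 pixels with random abundances.
rng = np.random.default_rng(0)
E = rng.random((50, 3))                                  # assumed endmember library
A_true = rng.dirichlet(np.ones(3), size=200).T           # 3 x 200 abundances
X = E @ A_true + 0.01 * rng.standard_normal((50, 200))   # noisy mixed pixels
X_sub, P = lowrank_projection(X, k=10)                   # unmix in a low-rank subspace
E_sub = P.T @ E
A_est = np.stack([admm_nnls(E_sub, y) for y in X_sub.T], axis=1)
print("mean abs abundance error:", np.abs(A_est - A_true).mean())
```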

    Spatial-Spectral Manifold Embedding of Hyperspectral Data

    Get PDF
    In recent years, hyperspectral imaging, also known as imaging spectroscopy, has attracted increasing interest in the geoscience and remote sensing community. Hyperspectral imagery is characterized by very rich spectral information, which enables us to recognize the materials of interest lying on the surface of the Earth more easily. However, the high spectral dimensionality inevitably brings drawbacks, such as expensive data storage and transmission and information redundancy. Therefore, to reduce the spectral dimensionality effectively and learn a more discriminative low-dimensional embedding, in this paper we propose a novel hyperspectral embedding approach that simultaneously considers spatial and spectral information, called spatial-spectral manifold embedding (SSME). Going beyond pixel-wise spectral embedding approaches, SSME models the spatial and spectral information jointly in a patch-based fashion. SSME not only learns the spectral embedding by using an adjacency matrix obtained from similarity measurements between spectral signatures, but also models the spatial neighbours of a target pixel in the hyperspectral scene by sharing the same weights (or edges) in the embedding learning process. Classification is explored as a potential application for quantitatively evaluating the performance of the learned embedding representations. Extensive experiments conducted on widely used hyperspectral datasets demonstrate the superiority and effectiveness of the proposed SSME as compared to several state-of-the-art embedding methods.
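
    The sketch below is a minimal, hedged illustration of the spatial-spectral graph idea (not the authors' exact SSME formulation): each pixel is connected to its spatial neighbours, edges are weighted by spectral similarity, and a low-dimensional embedding is obtained with Laplacian eigenmaps; the window size, kernel width, and embedding dimension are assumptions.

```python
# Minimal sketch of a spatial-spectral graph embedding (illustrative only).
import numpy as np
from scipy.linalg import eigh

def spatial_spectral_embedding(cube, window=1, sigma=0.5, dim=2):
    """cube: (rows, cols, bands) hyperspectral image; returns (rows*cols, dim)."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b)
    n = r * c
    W = np.zeros((n, n))
    for i in range(r):
        for j in range(c):
            p = i * c + j
            for di in range(-window, window + 1):
                for dj in range(-window, window + 1):
                    ii, jj = i + di, j + dj
                    if (di, dj) != (0, 0) and 0 <= ii < r and 0 <= jj < c:
                        q = ii * c + jj
                        d2 = np.sum((X[p] - X[q]) ** 2)
                        # spectral similarity, but only on spatial neighbours
                        W[p, q] = np.exp(-d2 / (2 * sigma ** 2))
    W = 0.5 * (W + W.T)                  # symmetrize
    D = np.diag(W.sum(axis=1))
    L = D - W                            # graph Laplacian
    # Smallest non-trivial generalized eigenvectors of L v = lambda D v.
    vals, vecs = eigh(L, D)
    return vecs[:, 1:dim + 1]

# Toy usage on a random 10 x 10 x 20 cube.
emb = spatial_spectral_embedding(np.random.rand(10, 10, 20))
print(emb.shape)   # (100, 2)
```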

    X-ModalNet: A Semi-Supervised Deep Cross-Modal Network for Classification of Remote Sensing Data

    Get PDF
    This paper addresses the problem of semi-supervised transfer learning with limited cross-modality data in remote sensing. Large amounts of multi-modal earth observation images, such as multispectral imagery (MSI) or synthetic aperture radar (SAR) data, are openly available on a global scale, enabling the parsing of global urban scenes through remote sensing imagery. However, their ability to identify materials (pixel-wise classification) remains limited, due to the noisy collection environment, poor discriminative information, and the limited number of well-annotated training images. To this end, we propose a novel cross-modal deep-learning framework, called X-ModalNet, with three well-designed modules (a self-adversarial module, an interactive learning module, and a label propagation module), which learns to transfer more discriminative information from a small-scale hyperspectral image (HSI) into the classification task on large-scale MSI or SAR data. Significantly, X-ModalNet generalizes well, owing to the propagation of labels on an updatable graph constructed from high-level features at the top of the network, yielding semi-supervised cross-modality learning. We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement in comparison with several state-of-the-art methods.
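
    As a minimal sketch of the graph-based label propagation ingredient mentioned above (the network itself, its self-adversarial and interactive modules, and the feature extractor are omitted), the following uses the classic closed-form propagation F = (I - alpha*S)^{-1} Y on an affinity graph built from features; the feature dimensions and alpha are assumptions, not values from the paper.

```python
# Minimal sketch of graph-based label propagation (illustrative only).
import numpy as np

def propagate_labels(feats, labels, alpha=0.9, sigma=1.0):
    """feats: (n, d) high-level features; labels: (n,) ints, -1 for unlabeled."""
    n = feats.shape[0]
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    Dm12 = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
    S = Dm12 @ W @ Dm12                          # normalized affinity
    classes = np.unique(labels[labels >= 0])
    Y = np.zeros((n, classes.size))
    for k, c in enumerate(classes):
        Y[labels == c, k] = 1.0                  # one-hot seeds for labeled nodes
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)   # closed-form propagation
    return classes[F.argmax(axis=1)]

# Toy usage: two clusters, one labeled sample each.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
labels = -np.ones(40, dtype=int)
labels[0], labels[20] = 0, 1
print(propagate_labels(feats, labels))
```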

    Interpretable Hyperspectral AI: When Non-Convex Modeling meets Hyperspectral Remote Sensing

    Full text link
    Hyperspectral imaging, also known as image spectrometry, is a landmark technique in geoscience and remote sensing (RS). In the past decade, enormous efforts have been made to process and analyze these hyperspectral (HS) products, mainly by seasoned experts. However, with the ever-growing volume of data, the heavy costs in manpower and material resources pose new challenges for reducing the burden of manual labor and improving efficiency. It is therefore urgent to develop more intelligent and automatic approaches for various HS RS applications. Machine learning (ML) tools with convex optimization have successfully undertaken the tasks of numerous artificial intelligence (AI)-related applications. However, their ability to handle complex practical problems remains limited, particularly for HS data, due to the effects of various spectral variabilities in the process of HS imaging and the complexity and redundancy of high-dimensional HS signals. Compared to convex models, non-convex modeling, which is capable of characterizing more complex real scenes and of providing model interpretability both technically and theoretically, has proven to be a feasible solution for reducing the gap between challenging HS vision tasks and currently advanced intelligent data processing models.

    Learning to Propagate Labels on Graphs: An Iterative Multitask Regression Framework for Semi-supervised Hyperspectral Dimensionality Reduction

    Get PDF
    Hyperspectral dimensionality reduction (HDR), an important preprocessing step prior to high-level data analysis, has been garnering growing attention in the remote sensing community. Although a variety of methods, both unsupervised and supervised, have been proposed for this task, the discriminative ability of the resulting feature representations remains limited due to the lack of a powerful tool that effectively exploits both labeled and unlabeled data in the HDR process. A semi-supervised HDR approach, called iterative multitask regression (IMR), is proposed in this paper to address this need. IMR aims to learn a low-dimensional subspace by jointly considering labeled and unlabeled data, and bridges the learned subspace with two regression tasks: labels and pseudo-labels initialized by a given classifier. More significantly, IMR dynamically propagates the labels on a learnable graph and progressively refines the pseudo-labels, yielding a well-conditioned feedback system. Experiments conducted on three widely used hyperspectral image datasets demonstrate that, in terms of classification or recognition accuracy, the dimension-reduced features learned by the proposed IMR framework are superior to those of related state-of-the-art HDR approaches.
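
    The following is a minimal, hedged sketch of the iterate-and-refine idea described above (not the authors' exact IMR objective, graph construction, or propagation step): a linear projection is fit by ridge regression against one-hot labels plus current pseudo-labels, and the pseudo-labels are then refreshed by a simple classifier in the projected space; all hyperparameters and the toy data are illustrative.

```python
# Minimal iterate-and-refine sketch (illustrative only, not the IMR algorithm).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsClassifier

def imr_like_reduction(X, y, n_iter=5, alpha=1.0):
    """X: (n, d) spectra; y: (n,) labels with -1 for unlabeled samples."""
    labeled = y >= 0
    classes = np.unique(y[labeled])
    # Initialize pseudo-labels from a classifier trained on the labeled set only.
    clf = KNeighborsClassifier(n_neighbors=3).fit(X[labeled], y[labeled])
    pseudo = y.copy()
    pseudo[~labeled] = clf.predict(X[~labeled])
    reg = None
    for _ in range(n_iter):
        T = (pseudo[:, None] == classes[None, :]).astype(float)   # one-hot targets
        reg = Ridge(alpha=alpha).fit(X, T)      # projection = regression coefficients
        Z = X @ reg.coef_.T                     # low-dimensional embedding
        clf = KNeighborsClassifier(n_neighbors=3).fit(Z[labeled], y[labeled])
        pseudo[~labeled] = clf.predict(Z[~labeled])   # refine pseudo-labels
    return X @ reg.coef_.T

# Toy usage: 3 classes in 30-D with a handful of labeled samples.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(m, 1.0, (50, 30)) for m in (0, 3, 6)])
y_true = np.repeat([0, 1, 2], 50)
y = -np.ones(150, dtype=int)
y[::25] = y_true[::25]                          # a few labeled seeds per class
print(imr_like_reduction(X, y).shape)           # (150, 3)
```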

    Joint and Progressive Subspace Analysis (JPSA) with Spatial-Spectral Manifold Alignment for Semi-Supervised Hyperspectral Dimensionality Reduction

    Get PDF
    Conventional nonlinear subspace learning techniques (e.g., manifold learning) usually suffer drawbacks in explainability (explicit mapping), cost-effectiveness (linearization), generalization capability (out-of-sample extension), and representability (spatial-spectral discrimination). To overcome these shortcomings, a novel linearized subspace analysis technique with spatial-spectral manifold alignment, called joint and progressive subspace analysis (JPSA), is developed for semi-supervised hyperspectral dimensionality reduction (HDR). JPSA learns a high-level, semantically meaningful, joint spatial-spectral feature representation from hyperspectral data by 1) jointly learning latent subspaces and a linear classifier to find an effective projection direction favorable for classification; 2) progressively searching several intermediate states of subspaces to approach an optimal mapping from the original space to a potentially more discriminative subspace; and 3) spatially and spectrally aligning the manifold structure in each learned latent subspace in order to preserve the same or similar topological properties between the compressed data and the original data. A simple but effective classifier, i.e., the nearest neighbor (NN), is used to validate the performance of the different HDR approaches. Extensive experiments on two widely used hyperspectral datasets, Indian Pines (92.98%) and the University of Houston (86.09%), demonstrate the superiority and effectiveness of the proposed JPSA in comparison with previous state-of-the-art HDR methods. The demo of the baseline work (i.e., ECCV 2018) is openly available at https://github.com/danfenghong/ECCV2018_J-Play.
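
    The abstract uses nearest-neighbour classification on the reduced features as its evaluation protocol. A minimal sketch of that protocol is given below; note that the reduction step here is plain PCA as a stand-in rather than JPSA itself, and the data, split, and dimensionality are illustrative assumptions.

```python
# Minimal sketch of the NN-on-reduced-features evaluation protocol (illustrative only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def nn_accuracy_after_reduction(X, y, n_components=10, seed=0):
    """Reduce X (here with PCA as a placeholder), then report 1-NN accuracy."""
    Z = PCA(n_components=n_components, random_state=seed).fit_transform(X)
    Ztr, Zte, ytr, yte = train_test_split(
        Z, y, test_size=0.3, random_state=seed, stratify=y)
    clf = KNeighborsClassifier(n_neighbors=1).fit(Ztr, ytr)
    return clf.score(Zte, yte)

# Toy usage: 3 spectral classes in 100 bands.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(m, 1.0, (60, 100)) for m in (0, 2, 4)])
y = np.repeat([0, 1, 2], 60)
print("1-NN accuracy:", nn_accuracy_after_reduction(X, y))
```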

    Circumstellar Material Ejected Violently by A Massive Star Immediately before its Death

    Full text link
    Type II supernovae represent the most common stellar explosions in the Universe, yet the final-stage evolution of their hydrogen-rich massive progenitors towards core-collapse explosion remains elusive. The recent explosion of SN 2023ixf in the very nearby galaxy Messier 101 provides a rare opportunity to explore this longstanding issue. With timely high-cadence flash spectra taken within 1-5 days after the explosion, we can put stringent constraints on the properties of the circumstellar material (CSM) surrounding this supernova. Based on the rapid fading of the narrow emission lines and the luminosity/profile of the $\mathrm{H\alpha}$ emission at very early times, we estimate that the progenitor of SN 2023ixf lost material at a mass-loss rate $\dot{M} \approx 6 \times 10^{-4}\,M_{\odot}\,\mathrm{yr}^{-1}$ over the last 2-3 years before the explosion. This close-by material, moving at a velocity $v_{\rm w} \approx 55\,\mathrm{km\,s^{-1}}$, accumulated into a compact CSM shell at a radius smaller than $7 \times 10^{14}$ cm from the progenitor. Given the high mass-loss rate and relatively large wind velocity presented here, together with pre-explosion observations made about two decades ago, the progenitor of SN 2023ixf could be a short-lived yellow hypergiant that evolved from a red supergiant shortly before the explosion. Comment: 10 pages, 6 figures in main body; accepted for publication in Science Bulletin.
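
    As a rough consistency check of the numbers quoted above (assuming the wind expands freely over the quoted 2-3 years), the expected extent of the wind-blown shell is $R_{\rm CSM} \approx v_{\rm w}\,\Delta t$; with $v_{\rm w} \approx 55\,\mathrm{km\,s^{-1}}$ and $\Delta t$ of 2-3 yr this gives $R_{\rm CSM} \approx (3.5$-$5.2)\times 10^{14}$ cm, consistent with the quoted upper limit of $7\times10^{14}$ cm, and the corresponding mass lost is of order $\dot{M}\,\Delta t \approx (1.2$-$1.8)\times10^{-3}\,M_{\odot}$.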

    Observations of SN 2017ein Reveal Shock Breakout Emission and a Massive Progenitor Star for a Type Ic Supernova

    Get PDF
    We present optical and ultraviolet observations of the nearby Type Ic supernova (SN Ic) SN 2017ein, as well as a detailed analysis of its progenitor properties from both the early-time observations and pre-discovery Hubble Space Telescope (HST) images. The optical light curves cover from within 1 day to ~275 days after the explosion, and the optical spectra range from ~2 days to ~90 days after the explosion. Compared to other normal SNe Ic such as SN 2007gr and SN 2013ge, SN 2017ein seems to have more prominent C II absorption and higher expansion velocities in early phases, suggestive of a relatively lower ejecta mass. The earliest photometry obtained for SN 2017ein shows indications of shock cooling. The best fit obtained by including a shock-cooling component gives an estimated envelope mass of $\sim 0.02\,M_{\odot}$ and a stellar radius of $8 \pm 4\,R_{\odot}$. Examining the pre-explosion images taken with the HST WFPC2, we find that the SN position coincides with a luminous, blue, point-like source, with extinction-corrected absolute magnitudes of $M_V \sim -8.2$ mag and $M_I \sim -7.7$ mag. Comparisons of the observations with theoretical models indicate that the counterpart source was either a single Wolf-Rayet (W-R) star, a binary whose members had high initial masses, or a young compact star cluster. Further distinguishing between these scenarios requires revisiting the site of the progenitor with HST after the SN fades away.
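
    For context on how such figures are obtained, the snippet below applies the standard extinction-corrected distance-modulus relation M = m - 5 log10(d / 10 pc) - A; the apparent magnitude, distance, and extinction used in the example are placeholders, not the values adopted for SN 2017ein.

```python
# Standard extinction-corrected absolute magnitude (illustrative values only).
import math

def absolute_magnitude(m_apparent, distance_pc, extinction_mag=0.0):
    """Convert an apparent magnitude to an extinction-corrected absolute magnitude."""
    return m_apparent - 5.0 * math.log10(distance_pc / 10.0) - extinction_mag

# Example: a source of apparent magnitude 24.0 at 10 Mpc with 0.4 mag of extinction.
print(absolute_magnitude(24.0, 1.0e7, 0.4))   # about -6.4
```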