74 research outputs found

    SULoRA: Subspace Unmixing with Low-Rank Attribute Embedding for Hyperspectral Data Analysis

    Get PDF
    To support high-level analysis of spaceborne imaging spectroscopy (hyperspectral) imagery, spectral unmixing has been gaining significance in recent years. However, the inevitable spectral variability, caused by illumination and topography changes, atmospheric effects, and so on, makes it difficult to accurately estimate abundance maps in spectral unmixing. Classical unmixing methods, e.g., the linear mixing model (LMM) and the extended linear mixing model (ELMM), fail to handle this issue robustly, particularly in the face of complex spectral variability. To this end, we propose a subspace-based unmixing model with a low-rank learning strategy, called subspace unmixing with low-rank attribute embedding (SULoRA), which is robust against spectral variability in the inverse problem of hyperspectral unmixing. Unlike previous approaches that unmix the spectral signatures directly in the original space, SULoRA is a general subspace unmixing framework that jointly estimates subspace projections and abundance maps in order to find a ‘raw’ subspace which is more suitable for carrying out the unmixing procedure. More importantly, we model such a ‘raw’ subspace with low-rank attribute embedding. By projecting the original data into a low-rank subspace, SULoRA can effectively address various spectral variabilities in spectral unmixing. Furthermore, we adopt an alternating direction method of multipliers (ADMM) based solver for the resulting optimization problem. Extensive experiments on synthetic and real datasets demonstrate the superiority and effectiveness of the proposed method in comparison with previous state-of-the-art methods.
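The classical LMM baseline that the abstract contrasts against is easy to sketch: each pixel is modeled as a nonnegative mixture of endmember spectra. The following is a minimal illustration with synthetic endmembers and a synthetic pixel, not the SULoRA method itself:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
bands, n_endmembers = 50, 3
E = rng.random((bands, n_endmembers))                # hypothetical endmember spectra
a_true = np.array([0.6, 0.3, 0.1])                   # ground-truth abundances
y = E @ a_true + 0.001 * rng.standard_normal(bands)  # noisy mixed pixel

# Nonnegativity-constrained least squares recovers the abundances;
# the sum-to-one constraint is enforced post hoc by normalization.
a_est, _ = nnls(E, y)
a_est /= a_est.sum()
print(np.round(a_est, 2))
```

Spectral variability would perturb the columns of E per pixel, which is exactly what breaks this baseline and motivates unmixing in a learned subspace instead.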

    Spatial-Spectral Manifold Embedding of Hyperspectral Data

    Get PDF
    In recent years, hyperspectral imaging, also known as imaging spectroscopy, has attracted increasing interest in the geoscience and remote sensing community. Hyperspectral imagery is characterized by very rich spectral information, which enables us to recognize materials of interest lying on the surface of the Earth more easily. High spectral dimension, however, inevitably brings some drawbacks, such as expensive data storage and transmission, information redundancy, etc. Therefore, to reduce the spectral dimensionality effectively and learn a more discriminative low-dimensional spectral embedding, in this paper we propose a novel hyperspectral embedding approach that simultaneously considers spatial and spectral information, called spatial-spectral manifold embedding (SSME). Going beyond pixel-wise spectral embedding approaches, SSME models the spatial and spectral information jointly in a patch-based fashion. SSME not only learns the spectral embedding by using an adjacency matrix obtained from similarity measurements between spectral signatures, but also models the spatial neighbours of a target pixel in the hyperspectral scene by sharing the same weights (or edges) in the embedding learning process. Classification is explored as a potential application for quantitatively evaluating the performance of the learned embedding representations. Extensive experiments conducted on widely used hyperspectral datasets demonstrate the superiority and effectiveness of the proposed SSME as compared to several state-of-the-art embedding methods.
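The pixel-wise spectral-embedding step that SSME builds on can be sketched in a few lines: an adjacency matrix from spectral similarity, then a Laplacian-eigenmap style low-dimensional embedding. The data below are synthetic, and the spatial weight sharing that distinguishes SSME is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((30, 20))          # 30 pixels x 20 spectral bands (toy data)

# Gaussian-kernel adjacency matrix from pairwise spectral distances
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian; its smallest nontrivial eigenvectors
# give the low-dimensional embedding coordinates.
L = np.diag(W.sum(1)) - W
eigvals, eigvecs = np.linalg.eigh(L)
Y = eigvecs[:, 1:3]               # 2-D embedding (skip the constant eigenvector)
print(Y.shape)
```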

    X-ModalNet: A Semi-Supervised Deep Cross-Modal Network for Classification of Remote Sensing Data

    Get PDF
    This paper addresses the problem of semi-supervised transfer learning with limited cross-modality data in remote sensing. A large number of multi-modal Earth observation images, such as multispectral imagery (MSI) or synthetic aperture radar (SAR) data, are openly available on a global scale, enabling parsing of global urban scenes through remote sensing imagery. However, their ability to identify materials (pixel-wise classification) remains limited, due to noisy collection environments and poor discriminative information as well as the limited number of well-annotated training images. To this end, we propose a novel cross-modal deep-learning framework, called X-ModalNet, with three well-designed modules: a self-adversarial module, an interactive learning module, and a label propagation module, which learns to transfer more discriminative information from a small-scale hyperspectral image (HSI) into the classification task on large-scale MSI or SAR data. Significantly, X-ModalNet generalizes well, owing to propagating labels on an updatable graph constructed from high-level features at the top of the network, yielding semi-supervised cross-modality learning. We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement in comparison with several state-of-the-art methods.

    An Overview of Multimodal Remote Sensing Data Fusion: From Image to Feature, from Shallow to Deep

    Get PDF
    With the ever-growing availability of different remote sensing (RS) products from both satellite and airborne platforms, simultaneous processing and interpretation of multimodal RS data have shown increasing significance in the RS field. Different resolutions, contexts, and sensors of multimodal RS data enable the identification and recognition of the materials lying on the Earth's surface at a more accurate level by describing the same object from different points of view. As a result, the topic of multimodal RS data fusion has gradually emerged as a hotspot research direction in recent years. This paper aims at presenting an overview of multimodal RS data fusion in several mainstream applications, which can be roughly categorized as 1) image pansharpening, 2) hyperspectral and multispectral image fusion, 3) multimodal feature learning, and 4) crossmodal feature learning. For each topic, we briefly describe the research problem to be addressed in multimodal RS data fusion and give representative and state-of-the-art models from shallow to deep perspectives.
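Of the four fusion topics listed, pansharpening is the simplest to sketch. Below is the classical Brovey transform, a textbook baseline rather than any model from the survey: multispectral bands are rescaled by the ratio of a high-resolution panchromatic band to the bands' mean intensity. The arrays are synthetic and assumed co-registered on the panchromatic grid:

```python
import numpy as np

rng = np.random.default_rng(2)
ms = rng.random((3, 64, 64)) + 0.1   # low-res MS bands, upsampled to the pan grid
pan = rng.random((64, 64)) + 0.1     # high-res panchromatic band

# Brovey transform: inject the pan band's spatial detail by intensity ratio
intensity = ms.mean(axis=0)
sharpened = ms * (pan / intensity)
print(sharpened.shape)
```

By construction the band-mean of the sharpened result equals the panchromatic image, which is the sense in which the spatial detail is transferred.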

    The Dusty and Extremely Red Progenitor of the Type II Supernova 2023ixf in Messier 101

    Full text link
    Stars with initial masses in the range of 8-25 solar masses are thought to end their lives as hydrogen-rich supernovae (SNe II). Based on pre-explosion images from the Hubble Space Telescope (HST) and the Spitzer Space Telescope, we place tight constraints on the progenitor candidate of the type IIP SN 2023ixf in Messier 101. Fitting the spectral energy distribution (SED) of its progenitor with dusty stellar spectral models yields an effective temperature of 3090 K, making it the coolest SN progenitor ever discovered. The luminosity is estimated as log(L/L⊙) ∼ 4.8, consistent with a red supergiant (RSG) star with an initial mass of 12 (+2/−1) M⊙. The derived mass-loss rate (6-9 × 10^-6 M⊙ yr^-1) is much lower than that inferred from the flash spectroscopy of the SN, suggesting that the progenitor experienced a sudden increase in mass loss when approaching the final explosion. In the mid-infrared color diagram, the progenitor star shows a significant deviation from the range of regular RSGs, but is close to some extreme RSGs and super asymptotic giant branch (sAGB) stars. Thus, SN 2023ixf may belong to a rare subclass of electron-capture supernovae with an sAGB progenitor. Comment: 6 figures; under review by Science Bulletin
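The temperature estimate above comes from fitting the progenitor's SED with dusty stellar models. As an illustration only, here is a much cruder version of the same idea: a grid search for the blackbody temperature that best matches a set of broadband fluxes. The band centres and fluxes below are made up, not the HST/Spitzer photometry:

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23          # SI constants

def planck(wl_m, T):
    """Blackbody spectral radiance B_lambda(T) at wavelength wl_m (metres)."""
    return (2 * h * c**2 / wl_m**5) / np.expm1(h * c / (wl_m * k * T))

wl = np.array([0.8, 1.6, 3.6, 4.5]) * 1e-6       # hypothetical band centres (m)
T_true = 3100.0
flux_obs = planck(wl, T_true)
flux_obs /= flux_obs.max()                       # normalized, noise-free toy fluxes

# Chi-square grid search over temperature
grid = np.arange(2000.0, 5000.0, 10.0)
chi2 = [((planck(wl, T) / planck(wl, T).max() - flux_obs) ** 2).sum() for T in grid]
T_best = grid[int(np.argmin(chi2))]
print(T_best)
```

A real fit would add dust extinction and emission components, which is what pushes such a progenitor into the "dusty and extremely red" regime.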

    Interpretable Hyperspectral AI: When Non-Convex Modeling meets Hyperspectral Remote Sensing

    Full text link
    Hyperspectral imaging, also known as imaging spectrometry, is a landmark technique in geoscience and remote sensing (RS). In the past decade, enormous efforts have been made to process and analyze these hyperspectral (HS) products, mainly by seasoned experts. However, with the ever-growing volume of data, the bulk of costs in manpower and material resources poses new challenges for reducing the burden of manual labor and improving efficiency. For this reason, it is urgent to develop more intelligent and automatic approaches for various HS RS applications. Machine learning (ML) tools with convex optimization have successfully undertaken the tasks of numerous artificial intelligence (AI)-related applications. However, their ability to handle complex practical problems remains limited, particularly for HS data, due to the effects of various spectral variabilities in the process of HS imaging and the complexity and redundancy of high-dimensional HS signals. Compared to convex models, non-convex modeling, which is capable of characterizing more complex real scenes and providing model interpretability technically and theoretically, has been proven to be a feasible solution to reduce the gap between challenging HS vision tasks and currently advanced intelligent data processing models.

    Learning to Propagate Labels on Graphs: An Iterative Multitask Regression Framework for Semi-supervised Hyperspectral Dimensionality Reduction

    Get PDF
    Hyperspectral dimensionality reduction (HDR), an important preprocessing step prior to high-level data analysis, has been garnering growing attention in the remote sensing community. Although a variety of methods, both unsupervised and supervised, have been proposed for this task, the discriminative ability of the resulting feature representations remains limited due to the lack of a powerful tool that effectively exploits both labeled and unlabeled data in the HDR process. A semi-supervised HDR approach, called iterative multitask regression (IMR), is proposed in this paper to address this need. IMR aims at learning a low-dimensional subspace by jointly considering the labeled and unlabeled data, and by bridging the learned subspace with two regression tasks: labels and pseudo-labels initialized by a given classifier. More significantly, IMR dynamically propagates the labels on a learnable graph and progressively refines the pseudo-labels, yielding a well-conditioned feedback system. Experiments conducted on three widely used hyperspectral image datasets demonstrate that the dimension-reduced features learned by the proposed IMR framework are superior, with respect to classification or recognition accuracy, to those of related state-of-the-art HDR approaches.
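The label-propagation step that IMR iterates can be sketched in its basic form: soft labels diffuse over a row-normalized graph, with the known labels clamped after each round. The graph here is fixed and synthetic, whereas IMR learns it jointly with the subspace:

```python
import numpy as np

# 6-node graph: two 3-node cliques joined by one weak edge
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1

P = W / W.sum(1, keepdims=True)   # row-stochastic transition matrix
F = np.zeros((6, 2))              # soft labels over 2 classes
F[0] = [1, 0]                     # node 0 labeled class 0
F[5] = [0, 1]                     # node 5 labeled class 1
labeled = [0, 5]
clamp = F[labeled].copy()

for _ in range(50):
    F = P @ F                     # diffuse labels along graph edges
    F[labeled] = clamp            # re-clamp the known labels

pred = F.argmax(1)
print(pred)
```

Each clique follows its clamped seed, with the weak bridge edge carrying only a small amount of cross-class mass; IMR's refinement of pseudo-labels plays the role of updating the clamped rows over iterations.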

    Joint and Progressive Subspace Analysis (JPSA) with Spatial-Spectral Manifold Alignment for Semi-Supervised Hyperspectral Dimensionality Reduction

    Get PDF
    Conventional nonlinear subspace learning techniques (e.g., manifold learning) usually suffer from drawbacks in explainability (no explicit mapping), cost-effectiveness (no linearization), generalization capability (no out-of-sample extension), and representability (limited spatial-spectral discrimination). To overcome these shortcomings, a novel linearized subspace analysis technique with spatial-spectral manifold alignment is developed for semi-supervised hyperspectral dimensionality reduction (HDR), called joint and progressive subspace analysis (JPSA). JPSA learns a high-level, semantically meaningful, joint spatial-spectral feature representation from hyperspectral data by 1) jointly learning latent subspaces and a linear classifier to find an effective projection direction favorable for classification; 2) progressively searching several intermediate states of subspaces to approach an optimal mapping from the original space to a potentially more discriminative subspace; and 3) spatially and spectrally aligning the manifold structure in each learned latent subspace in order to preserve the same or similar topological properties between the compressed data and the original data. A simple but effective classifier, i.e., nearest neighbor (NN), is explored as a potential application for validating the performance of different HDR approaches. Extensive experiments are conducted to demonstrate the superiority and effectiveness of the proposed JPSA on two widely used hyperspectral datasets: Indian Pines (92.98%) and the University of Houston (86.09%), in comparison with previous state-of-the-art HDR methods. The demo of this basic work (i.e., ECCV2018) is openly available at https://github.com/danfenghong/ECCV2018_J-Play
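The evaluation protocol described above, scoring reduced features with a nearest-neighbour classifier, is easy to sketch. Plain PCA stands in here for JPSA, and the two-class Gaussian data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (50, 30)), rng.normal(2, 1, (50, 30))])
y = np.array([0] * 50 + [1] * 50)

# Crude PCA to 5 dimensions as the stand-in dimensionality reduction
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

train = np.r_[0:40, 50:90]        # 40 samples per class for training
test = np.r_[40:50, 90:100]       # 10 samples per class held out

# 1-NN: each test sample takes the label of its closest training sample
d = ((Z[test][:, None] - Z[train][None]) ** 2).sum(-1)
pred = y[train][d.argmin(1)]
acc = (pred == y[test]).mean()
print(acc)
```

The appeal of 1-NN for this purpose is that it has no hyperparameters of its own, so accuracy differences reflect the quality of the learned embedding rather than classifier tuning.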

    Circumstellar Material Ejected Violently by A Massive Star Immediately before its Death

    Full text link
    Type II supernovae represent the most common stellar explosions in the Universe, yet the final-stage evolution of their hydrogen-rich massive progenitors towards core-collapse explosion remains elusive. The recent explosion of SN 2023ixf in a very nearby galaxy, Messier 101, provides a rare opportunity to explore this longstanding issue. With timely high-cadence flash spectra taken within 1-5 days after the explosion, we can put stringent constraints on the properties of the circumstellar material (CSM) surrounding this supernova. Based on the rapid fading of the narrow emission lines and the luminosity/profile of the Hα emission at very early times, we estimate that the progenitor of SN 2023ixf lost material at a mass-loss rate Ṁ ≈ 6 × 10^-4 M⊙ yr^-1 over the last 2-3 years before the explosion. This close-by material, moving at a velocity v_w ≈ 55 km s^-1, accumulated into a compact CSM shell at a radius smaller than 7 × 10^14 cm from the progenitor. Given the high mass-loss rate and relatively large wind velocity presented here, together with pre-explosion observations made about two decades ago, the progenitor of SN 2023ixf could be a short-lived yellow hypergiant that evolved from a red supergiant shortly before the explosion. Comment: 10 pages, 6 figures in main body, accepted for publication in Science Bulletin
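The quoted CSM radius follows from simple kinematics: material leaving the star at the wind speed for the final 2-3 years of enhanced mass loss travels r = v_w · t. Plugging in the abstract's numbers:

```python
# Distance covered by wind material ejected 2-3 years before explosion
v_w = 55e5        # wind velocity in cm/s (55 km/s)
year = 3.156e7    # seconds per year
for t_yr in (2, 3):
    r = v_w * t_yr * year
    print(f"{t_yr} yr -> r = {r:.1e} cm")
```

Both values come out near 3.5-5.2 × 10^14 cm, consistent with the stated shell radius of less than 7 × 10^14 cm.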