151 research outputs found

    A convex formulation for hyperspectral image superresolution via subspace-based regularization

    Hyperspectral remote sensing images (HSIs) usually have high spectral resolution and low spatial resolution. Conversely, multispectral images (MSIs) usually have low spectral and high spatial resolutions. The problem of inferring images which combine the high spectral and high spatial resolutions of HSIs and MSIs, respectively, is a data fusion problem that has been the focus of recent active research due to the increasing availability of HSIs and MSIs retrieved from the same geographical area. We formulate this problem as the minimization of a convex objective function containing two quadratic data-fitting terms and an edge-preserving regularizer. The data-fitting terms account for blur, different resolutions, and additive noise. The regularizer, a form of vector Total Variation, promotes piecewise-smooth solutions with discontinuities aligned across the hyperspectral bands. The downsampling operator accounting for the different spatial resolutions, the non-quadratic and non-smooth nature of the regularizer, and the very large size of the HSI to be estimated lead to a hard optimization problem. We deal with these difficulties by exploiting the fact that HSIs generally "live" in a low-dimensional subspace and by tailoring the Split Augmented Lagrangian Shrinkage Algorithm (SALSA), which is an instance of the Alternating Direction Method of Multipliers (ADMM), to this optimization problem, by means of a convenient variable splitting. The spatial blur and the spectral linear operators linked, respectively, with the HSI and MSI acquisition processes are also estimated, and we obtain an effective algorithm that outperforms the state-of-the-art, as illustrated in a series of experiments with simulated and real-life data.
    Comment: IEEE Trans. Geosci. Remote Sens., to be published.
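
    As a concrete illustration of the formulation described above, the following minimal NumPy sketch builds the spectral subspace by SVD and evaluates the convex objective (two quadratic data-fit terms plus a vector total-variation term on the subspace coefficients). It is not the authors' SALSA/ADMM solver; the averaging blur-plus-downsampling operator, the variable names, and the placement of the regularizer are illustrative assumptions.

        import numpy as np

        def spectral_subspace(Y_h, p):
            """SVD-based estimate of the p-dimensional subspace the HSI bands live in."""
            U, _, _ = np.linalg.svd(Y_h @ Y_h.T)        # Y_h: L_h x N (bands x pixels)
            return U[:, :p]                             # L_h x p spectral basis E

        def vector_tv(cube):
            """Isotropic vector TV: one shared gradient magnitude couples all bands."""
            dh = np.diff(cube, axis=2, append=cube[:, :, -1:])
            dv = np.diff(cube, axis=1, append=cube[:, -1:, :])
            return np.sum(np.sqrt((dh ** 2 + dv ** 2).sum(axis=0)))

        def blur_and_downsample(cube, d):
            """Toy stand-in for the blur + downsampling operator (d x d averaging)."""
            L, r, c = cube.shape
            return cube.reshape(L, r // d, d, c // d, d).mean(axis=(2, 4))

        def objective(Z, E, Y_h, Y_m, R, d, lam):
            """0.5||Y_h - BS(EZ)||^2 + 0.5||Y_m - R(EZ)||^2 + lam * VTV(Z)."""
            X = np.einsum('lp,prc->lrc', E, Z)          # back to the full spectral space
            fit_h = 0.5 * np.linalg.norm(Y_h - blur_and_downsample(X, d)) ** 2
            fit_m = 0.5 * np.linalg.norm(Y_m - np.einsum('ml,lrc->mrc', R, X)) ** 2
            return fit_h + fit_m + lam * vector_tv(Z)

    In the full method, this objective would be minimized with SALSA/ADMM after a variable splitting that decouples the blur, the downsampling, and the non-smooth TV term, rather than evaluated directly as above.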

    Fusing Multiple Multiband Images

    We consider the problem of fusing an arbitrary number of multiband, i.e., panchromatic, multispectral, or hyperspectral, images belonging to the same scene. We use the well-known forward observation and linear mixture models with Gaussian perturbations to formulate the maximum-likelihood estimator of the endmember abundance matrix of the fused image. We calculate the Fisher information matrix for this estimator and examine the conditions for the uniqueness of the estimator. We use a vector total-variation penalty term together with nonnegativity and sum-to-one constraints on the endmember abundances to regularize the derived maximum-likelihood estimation problem. The regularization facilitates exploiting the prior knowledge that natural images are mostly composed of piecewise smooth regions with limited abrupt changes, i.e., edges, as well as coping with potential ill-posedness of the fusion problem. We solve the resultant convex optimization problem using the alternating direction method of multipliers. We utilize the circular convolution theorem in conjunction with the fast Fourier transform to alleviate the computational complexity of the proposed algorithm. Experiments with multiband images constructed from real hyperspectral datasets reveal the superior performance of the proposed algorithm in comparison with the state-of-the-art algorithms, which need to be used in tandem to fuse more than two multiband images.
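
    The computational device highlighted at the end of the abstract, the circular convolution theorem combined with the FFT, can be sketched in isolation: the quadratic subproblem involving the spatial blur diagonalizes in the Fourier domain and is solved pointwise. The kernel, variable names, and regularization weight below are illustrative assumptions, not taken from the paper.

        import numpy as np

        def solve_deconv_subproblem(y, h, v, mu):
            """Closed-form Fourier-domain solution of
            argmin_x 0.5*||y - h*x||^2 + 0.5*mu*||x - v||^2  (circular convolution *)."""
            H = np.fft.fft2(h, s=y.shape)            # transfer function of the blur
            Y = np.fft.fft2(y)
            V = np.fft.fft2(v)
            X = (np.conj(H) * Y + mu * V) / (np.abs(H) ** 2 + mu)
            return np.real(np.fft.ifft2(X))

        # Toy usage: blur an image circularly, then recover it with a weak quadratic prior.
        rng = np.random.default_rng(0)
        x_true = rng.random((64, 64))
        h = np.ones((5, 5)) / 25.0                   # 5x5 moving-average blur (illustrative)
        y = np.real(np.fft.ifft2(np.fft.fft2(h, s=x_true.shape) * np.fft.fft2(x_true)))
        x_hat = solve_deconv_subproblem(y, h, v=np.zeros_like(y), mu=1e-3)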

    Guided Nonlocal Patch Regularization and Efficient Filtering-Based Inversion for Multiband Fusion

    In multiband fusion, an image with a high spatial and low spectral resolution is combined with an image with a low spatial but high spectral resolution to produce a single multiband image having high spatial and spectral resolutions. This comes up in remote sensing applications such as pansharpening (MS+PAN), hyperspectral sharpening (HS+PAN), and HS-MS fusion (HS+MS). Remote sensing images are textured and have repetitive structures. Motivated by nonlocal patch-based methods for image restoration, we propose a convex regularizer that (i) takes into account long-distance correlations, (ii) penalizes patch variation, which is more effective than pixel variation for capturing texture information, and (iii) uses the higher spatial resolution image as a guide image for weight computation. We come up with an efficient ADMM algorithm for optimizing the regularizer along with a standard least-squares loss function derived from the imaging model. The novelty of our algorithm is that by expressing patch variation as filtering operations and by judiciously splitting the original variables and introducing latent variables, we are able to solve the ADMM subproblems efficiently using FFT-based convolution and soft-thresholding. As far as the reconstruction quality is concerned, our method is shown to outperform state-of-the-art variational and deep learning techniques.
    Comment: Accepted in IEEE Transactions on Computational Imaging.
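
    Two of the ingredients named above can be sketched in a few lines: nonlocal weights computed from patches of the guide (higher spatial resolution) image, and the soft-thresholding step used for the l1-type ADMM subproblem. The patch size, search window, and bandwidth are illustrative choices, and the sketch does not reproduce the paper's filtering-based reformulation.

        import numpy as np

        def guide_weights(guide, i, j, patch=3, window=7, sigma=0.1):
            """Similarity weights between pixel (i, j) and its nonlocal neighbours in a
            search window, computed from patches of the guide image.
            Assumes (i, j) and its window lie inside the image."""
            r = patch // 2
            ref = guide[i - r:i + r + 1, j - r:j + r + 1]
            w = {}
            for di in range(-(window // 2), window // 2 + 1):
                for dj in range(-(window // 2), window // 2 + 1):
                    p, q = i + di, j + dj
                    cand = guide[p - r:p + r + 1, q - r:q + r + 1]
                    w[(p, q)] = np.exp(-np.sum((ref - cand) ** 2) / (2 * sigma ** 2))
            return w

        def soft_threshold(z, tau):
            """Proximal operator of tau*||.||_1, used in the ADMM splitting step."""
            return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)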

    A Developed Algorithm for Automating the Multiple Bands Multiple Endmember Selection of Hyperion data Applied on Central of Cairo, Egypt

    This study attempts to provide an answer regarding the utility of Hyperion imagery in mapping urban settings in developing countries. The authors present a novel method for extracting quantitative land cover information at the sub-pixel level from hyperspectral (Hyperion) imagery. The proposed method is based on the multiple endmember spectral mixture analysis (MESMA) proposed by Roberts et al. (1998b), but extends it to handle the high-dimensional pixels characterizing hyperspectral images. The proposed method utilizes a multiband multiple endmember spectral mixture analysis (Multiband MESMA) model that allows both spectral bands and endmembers to vary on a per-pixel basis across a hyperspectral image. The goal is to select an optimal subset of spectral bands that maximizes spectral separability among a candidate set of endmembers for a given pixel, thereby minimizing spectral confusion among modeled endmembers and increasing the accuracy and physical representativeness of the derived fractions for that pixel. The authors develop a tool to automate this method and test its utility in a case study using a Hyperion image of Central Cairo, Egypt. The EO-1 Hyperion hyperspectral sensor is the only source of hyperspectral data currently available for Cairo, unlike cities in Europe and North America, where multiple sources of such data generally exist. The study scene represents a very heterogeneous landscape, and its ecological footprint reflects a complex range of interrelated socioeconomic, environmental, and urban dynamics. The results of this study show that Hyperion data, with its rich spectral information, can help address some of the limitations in automated mapping reported by previous studies. To this end, appropriate bands and endmembers are selected and used within a multiple-endmember, multiple-band SMA process to determine the best Root Mean Square Error (RMSE) and abundance percentages. This results in a better mapping of land cover extracted from hyperspectral (Hyperion) imagery.
    Keywords: Spectral Mixture Analysis, Hyperspectral Data, Hyperion Data, Cairo, Egypt
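
    A highly simplified sketch of the per-pixel idea, under the assumption of two-endmember models and an illustrative band-selection score (per-band spread across the candidate endmembers): select bands, unmix the pixel with each candidate endmember pair on those bands, and keep the model with the lowest RMSE. The actual Multiband MESMA selection criteria are more elaborate than this stand-in.

        import numpy as np
        from itertools import combinations

        def select_bands(endmembers, k):
            """Greedy choice of k bands with the largest spread across the endmembers."""
            spread = endmembers.max(axis=0) - endmembers.min(axis=0)   # per-band range
            return np.argsort(spread)[-k:]

        def unmix_two_endmembers(pixel, e1, e2):
            """Fractions for a 2-endmember model with a built-in sum-to-one constraint."""
            d = e1 - e2
            f = np.dot(pixel - e2, d) / np.dot(d, d)    # least squares along the mixing line
            f = np.clip(f, 0.0, 1.0)                    # non-negativity
            model = f * e1 + (1 - f) * e2
            rmse = np.sqrt(np.mean((pixel - model) ** 2))
            return (f, 1 - f), rmse

        def best_model(pixel, endmembers, k):
            """Try every endmember pair on the selected bands; keep the lowest RMSE."""
            bands = select_bands(endmembers, k)
            best = None
            for i, j in combinations(range(len(endmembers)), 2):
                fr, rmse = unmix_two_endmembers(pixel[bands],
                                                endmembers[i][bands],
                                                endmembers[j][bands])
                if best is None or rmse < best[2]:
                    best = ((i, j), fr, rmse)
            return best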

    Hyperspectral Super-Resolution with Coupled Tucker Approximation: Recoverability and SVD-based algorithms

    We propose a novel approach for hyperspectral super-resolution that is based on low-rank tensor approximation for a coupled low-rank multilinear (Tucker) model. We show that correct recovery holds for a wide range of multilinear ranks. For coupled tensor approximation, we propose two SVD-based algorithms that are simple and fast, yet with performance comparable to state-of-the-art methods. The approach is applicable to the case of unknown spatial degradation and to the pansharpening problem.
    Comment: IEEE Transactions on Signal Processing, Institute of Electrical and Electronics Engineers, in press.
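
    The multilinear (Tucker) machinery behind the SVD-based algorithms can be illustrated with a truncated higher-order SVD of a single image cube, which yields spatial and spectral factor matrices plus a small core. The coupling between the hyperspectral and multispectral cubes and the recoverability analysis are omitted, and the ranks are illustrative.

        import numpy as np

        def unfold(T, mode):
            """Mode-n unfolding of a 3-way tensor."""
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def truncated_hosvd(T, ranks):
            """Factor matrices from truncated SVDs of the unfoldings, then the core."""
            factors = []
            for mode, r in enumerate(ranks):
                U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
                factors.append(U[:, :r])
            core = T
            for mode, U in enumerate(factors):
                core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
            return core, factors

        def reconstruct(core, factors):
            """Multiply the core back by the factor matrices along each mode."""
            T = core
            for mode, U in enumerate(factors):
                T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
            return T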

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The recent sharp increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets in a joint manner to further improve the performance of the processing approaches with respect to the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the integration of the temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data is possible and helps to move from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as challenges for the information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the methods for the fusion of different modalities have evolved along different paths within each research community. This paper brings together the advances in multisource and multitemporal data fusion approaches across different research communities and provides a thorough and discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) who wish to conduct novel investigations on this challenging topic, by supplying sufficient detail and references.

    Online Graph-Based Change Point Detection in Multiband Image Sequences

    The automatic detection of changes or anomalies between multispectral and hyperspectral images collected at different time instants is an active and challenging research topic. To effectively perform change-point detection in multitemporal images, it is important to devise techniques that are computationally efficient for processing large datasets and that do not require knowledge about the nature of the changes. In this paper, we introduce a novel online framework for detecting changes in multitemporal remote sensing images. Treating neighboring spectra as adjacent vertices in a graph, the algorithm focuses on anomalies that concurrently activate groups of vertices corresponding to compact, well-connected, and spectrally homogeneous image regions. It fully benefits from recent advances in graph signal processing to exploit the characteristics of data that lie on irregular supports. Moreover, the graph is estimated directly from the images using superpixel decomposition algorithms. The learning algorithm is scalable in the sense that it is efficient and spatially distributed. Experiments illustrate the detection and localization performance of the method.
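
    A minimal sketch of the graph construction and scoring, assuming a precomputed integer superpixel label map: vertices are superpixels, edges connect spatially adjacent superpixels, and a per-vertex spectral change score is diffused once over the graph so that compact, well-connected groups of changed vertices stand out. This is an offline toy, not the authors' online detector; the smoothing weight is illustrative.

        import numpy as np

        def superpixel_means(image, labels):
            """Mean spectrum of each superpixel; image is (rows, cols, bands),
            labels is an integer label map of shape (rows, cols)."""
            n = labels.max() + 1
            return np.array([image[labels == k].mean(axis=0) for k in range(n)])

        def adjacency(labels):
            """Binary adjacency between superpixels that share a pixel border."""
            n = labels.max() + 1
            A = np.zeros((n, n))
            horiz = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
            vert = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
            for a, b in np.vstack([horiz, vert]):
                if a != b:
                    A[a, b] = A[b, a] = 1.0
            return A

        def change_scores(img_t0, img_t1, labels, alpha=0.5):
            """Spectral change per vertex, diffused one step over the superpixel graph."""
            s = np.linalg.norm(superpixel_means(img_t1, labels)
                               - superpixel_means(img_t0, labels), axis=1)
            A = adjacency(labels)
            deg = np.maximum(A.sum(axis=1), 1.0)
            return (1 - alpha) * s + alpha * (A @ s) / deg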

    Robust fusion of multi-band images with different spatial and spectral resolutions for change detection

    Archetypal scenarios for change detection generally consider two images acquired through sensors of the same modality. However, in some specific cases such as emergency situations, the only images available may be those acquired through different kinds of sensors. More precisely, this paper addresses the problem of detecting changes between two multiband optical images characterized by different spatial and spectral resolutions. This sensor dissimilarity introduces additional issues in the context of operational change detection. To alleviate these issues, classical change detection methods are applied after independent preprocessing steps (e.g., resampling) that bring the pair of observed images to the same spatial and spectral resolutions. Nevertheless, these preprocessing steps tend to throw away relevant information. Conversely, in this paper, we propose a method that more effectively uses the available information by modeling the two observed images as spatially and spectrally degraded versions of two (unobserved) latent images characterized by the same high spatial and high spectral resolutions. As they cover the same scene, these latent images are expected to be globally similar except for possible changes in sparse spatial locations. Thus, the change detection task is envisioned through a robust multiband image fusion method, which enforces the differences between the estimated latent images to be spatially sparse. This robust fusion problem is formulated as an inverse problem, which is iteratively solved using an efficient block-coordinate descent algorithm. The proposed method is applied to real panchromatic, multispectral, and hyperspectral images with simulated realistic and real changes. A comparison with state-of-the-art change detection methods evidences the accuracy of the proposed strategy.
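
    The block-coordinate-descent idea can be sketched on a stripped-down model in which the two observations share a latent image up to a spatially sparse change term; the latent-image update is a closed-form least-squares step and the change update is a soft-thresholding step. The spatial and spectral degradation operators of the actual fusion model are omitted, and the penalty weight and iteration count are illustrative.

        import numpy as np

        def soft(z, tau):
            """Soft-thresholding, the proximal operator of tau*||.||_1."""
            return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

        def robust_fusion_cd(y1, y2, lam=0.1, n_iter=50):
            """Toy block coordinate descent for
            min_{x, d} 0.5*||y1 - x||^2 + 0.5*||y2 - x - d||^2 + lam*||d||_1,
            where x is the shared latent image and d the sparse change."""
            x = 0.5 * (y1 + y2)
            d = np.zeros_like(y1)
            for _ in range(n_iter):
                x = 0.5 * (y1 + y2 - d)          # exact minimizer in x
                d = soft(y2 - x, lam)            # exact minimizer in d
            return x, d                          # nonzero entries of d mark changed pixels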