Radiometrically-Accurate Hyperspectral Data Sharpening
Improving the spatial resolution of hyperspectral images (HSI) has long been an important topic in remote sensing. Many approaches have been proposed based on various theories, including component substitution, multiresolution analysis, spectral unmixing, Bayesian probability, and tensor representation. However, these methods share common disadvantages: they are not robust to different up-scale ratios, and they pay little attention to the per-pixel radiometric accuracy of the sharpened image. Moreover, although many learning-based methods have been proposed through decades of innovation, most require a large set of training pairs, which is impractical for many real problems. To address these problems, we first propose an unsupervised Laplacian Pyramid Fusion Network (LPFNet) to generate a radiometrically-accurate high-resolution HSI. First, given the low-resolution hyperspectral image (LR-HSI) and the high-resolution multispectral image (HR-MSI), a preliminary high-resolution hyperspectral image (HR-HSI) is computed via linear regression. Next, the high-frequency details of the preliminary HR-HSI are estimated by subtracting its CNN-generated blurry version from it. The final HR-HSI is obtained by injecting these details into the output of a generative CNN that takes the LR-HSI as input. LPFNet is designed to fuse an LR-HSI and an HR-MSI covering the same visible-near-infrared (VNIR) bands, while the short-wave infrared (SWIR) bands of the HSI are ignored. SWIR bands are as important as VNIR bands, but their spatial details are more challenging to enhance because the HR-MSI, which provides the spatial details in the fusion process, usually has no SWIR coverage or only lower-spatial-resolution SWIR bands. To this end, we design an unsupervised cascade fusion network (UCFNet) to sharpen the Vis-NIR-SWIR LR-HSI.
First, a preliminary high-resolution VNIR hyperspectral image (HR-VNIR-HSI) is obtained with a conventional hyperspectral sharpening algorithm. Then, the HR-MSI, the preliminary HR-VNIR-HSI, and the LR-SWIR-HSI are passed to a generative convolutional neural network to produce the HR-HSI. In the training process, a cascade sharpening method is employed to improve stability. Furthermore, a self-supervising loss based on the cascade strategy is introduced to further improve spectral accuracy. Experiments are conducted on both LPFNet and UCFNet with different datasets and up-scale ratios, and state-of-the-art baseline methods are implemented and compared with the proposed methods using different quantitative metrics. Results demonstrate that the proposed methods outperform the competitors in all cases in terms of spectral and spatial accuracy.
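The detail-injection idea at the heart of such sharpening schemes can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' LPFNet: a box filter stands in for the learned blur, and nearest-neighbour upsampling stands in for the generative CNN.

```python
import numpy as np

def inject_details(lr_hsi, hr_msi, ratio, R):
    """Toy high-pass detail injection, loosely in the spirit of
    pyramid-based sharpening (not the authors' LPFNet).

    lr_hsi : (h, w, B)  low-resolution hyperspectral cube
    hr_msi : (H, W, b)  high-resolution multispectral image
    ratio  : integer up-scale factor (H = h * ratio)
    R      : (b, B)     regression matrix mapping MSI bands to HSI bands
    """
    # 1. naive upsampling of the LR-HSI (nearest neighbour for brevity)
    up = np.repeat(np.repeat(lr_hsi, ratio, axis=0), ratio, axis=1)
    # 2. preliminary HR-HSI via per-pixel linear regression from the MSI
    prelim = hr_msi @ R                       # (H, W, B)
    # 3. high-frequency details = preliminary minus its blurred version
    details = prelim - box_blur(prelim, ratio)
    # 4. inject the details into the upsampled LR-HSI
    return up + details

def box_blur(img, k):
    """Simple k x k box filter per band (stand-in for a learned blur)."""
    H, W, _ = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - k // 2), min(H, i + k // 2 + 1)
            j0, j1 = max(0, j - k // 2), min(W, j + k // 2 + 1)
            out[i, j] = img[i0:i1, j0:j1].mean(axis=(0, 1))
    return out
```

On a spatially constant scene the extracted details vanish and the upsampled LR-HSI is returned unchanged, which is the expected behaviour of a high-pass injection scheme.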
Bayesian fusion of multispectral and hyperspectral images using a block coordinate descent method
This paper studies a new Bayesian optimization algorithm for fusing hyperspectral and multispectral images. The hyperspectral image is assumed to be obtained by blurring and subsampling a target image of high spatial and high spectral resolution. The multispectral image is modeled as a spectrally mixed version of the target image. By introducing appropriate priors for the parameters and hyperparameters, the fusion problem is formulated within a Bayesian estimation framework, which is very convenient for modeling the noise and the target image. The high spatial resolution hyperspectral image is then inferred from its posterior distribution. To compute the Bayesian maximum a posteriori estimator associated with this posterior, an alternating direction method of multipliers within a block coordinate descent algorithm is proposed. Simulation results demonstrate the efficiency of the proposed fusion method when compared with several state-of-the-art fusion techniques.
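The outer optimization scheme, block coordinate descent, can be illustrated on a toy quadratic objective. The objective below is ours for illustration, not the paper's fusion criterion: each block update minimizes the objective exactly in one variable while the other is held fixed.

```python
def bcd(n_iter=50):
    """Block coordinate descent on
    f(x1, x2) = x1^2 + x2^2 + x1*x2 - 4*x1 - 5*x2,
    a toy stand-in for the paper's fusion criterion."""
    x1, x2 = 0.0, 0.0
    for _ in range(n_iter):
        # exact minimization over x1 with x2 fixed: df/dx1 = 0
        x1 = (4.0 - x2) / 2.0
        # exact minimization over x2 with x1 fixed: df/dx2 = 0
        x2 = (5.0 - x1) / 2.0
    return x1, x2
```

The iterates converge to the joint minimizer (1, 2), which satisfies both stationarity conditions simultaneously; in the paper each "block" is itself solved with ADMM rather than in closed form.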
Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big datasets has highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real-world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization.
Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
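As a minimal illustration of the Tucker model mentioned above, a truncated higher-order SVD (HOSVD) can be written directly with NumPy. This is a didactic sketch for 3-way tensors, not a production implementation; for a tensor of exact multilinear rank equal to the chosen ranks, it recovers the tensor exactly.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: T ~ core x1 U0 x2 U1 x3 U2 (Tucker form)."""
    factors = []
    for mode, r in enumerate(ranks):
        # leading left singular vectors of each unfolding
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    # core = T contracted with the transposed factors on every mode
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def tucker_to_tensor(core, factors):
    """Reassemble the full tensor from its Tucker representation."""
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(
            np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T
```

Because each factor has orthonormal columns, the contraction with `U.T` projects every mode onto its dominant subspace, which is exactly the multilinear generalization of matrix PCA described in the abstract.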
Fusion of multispectral and hyperspectral images based on sparse representation
This paper presents an algorithm based on sparse representation for fusing hyperspectral and multispectral images. The observed images are assumed to be obtained by spectral or spatial degradations of the high-resolution hyperspectral image to be recovered. Based on this forward model, the fusion process is formulated as an inverse problem whose solution is determined by optimizing an appropriate criterion. To incorporate additional spatial information within the objective criterion, a regularization term is carefully designed, relying on a sparse decomposition of the scene on a set of dictionaries. The dictionaries and the corresponding supports of active coding coefficients are learned from the observed images. Then, conditionally on these dictionaries and supports, the fusion problem is solved by iteratively optimizing with respect to the target image (using the alternating direction method of multipliers) and the coding coefficients. Simulation results demonstrate the efficiency of the proposed fusion method when compared with the state-of-the-art.
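The sparse coding step at the core of such methods, estimating coefficients on a fixed dictionary, can be sketched with ISTA (a simpler stand-in for the ADMM-based updates used in the paper; dictionary and penalty below are illustrative).

```python
import numpy as np

def ista(D, y, lam, n_iter=200):
    """Iterative shrinkage-thresholding for the sparse coding problem
    min_a 0.5*||y - D a||_2^2 + lam*||a||_1  (illustration only)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - y)              # gradient of the quadratic data term
        z = a - g / L                      # gradient step
        # soft-thresholding = proximal operator of the l1 penalty
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a
```

With an orthonormal dictionary the solution reduces to soft-thresholding the analysis coefficients, which is a convenient sanity check for the solver.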
Bayesian Fusion of Multi-Band Images
This paper presents a Bayesian fusion technique for remotely sensed multi-band images. The observed images are related to the high spectral and high spatial resolution image to be recovered through physical degradations, e.g., spatial and spectral blurring and/or subsampling defined by the sensor characteristics. The fusion problem is formulated within a Bayesian estimation framework. An appropriate prior distribution exploiting geometrical considerations is introduced. To compute the Bayesian estimator of the scene of interest from its posterior distribution, a Markov chain Monte Carlo algorithm is designed to generate samples asymptotically distributed according to the target distribution. To sample efficiently from this high-dimensional distribution, a Hamiltonian Monte Carlo step is introduced within a Gibbs sampling strategy. The efficiency of the proposed fusion method is evaluated with respect to several state-of-the-art fusion techniques.
Robust fusion of multi-band images with different spatial and spectral resolutions for change detection
Archetypal scenarios for change detection generally consider two images acquired by sensors of the same modality. However, in some specific cases such as emergency situations, the only images available may be those acquired by different kinds of sensors. More precisely, this paper addresses the problem of detecting changes between two multi-band optical images characterized by different spatial and spectral resolutions. This sensor dissimilarity introduces additional issues in the context of operational change detection. To alleviate these issues, classical change detection methods are applied after independent preprocessing steps (e.g., resampling) used to bring the pair of observed images to the same spatial and spectral resolutions. Nevertheless, these preprocessing steps tend to throw away relevant information. Conversely, in this paper, we propose a method that more effectively uses the available information by modeling the two observed images as spatially and spectrally degraded versions of two (unobserved) latent images characterized by the same high spatial and high spectral resolutions. As they cover the same scene, these latent images are expected to be globally similar except for possible changes in sparse spatial locations. Thus, the change detection task is envisioned through a robust multi-band image fusion method, which enforces the differences between the estimated latent images to be spatially sparse. This robust fusion problem is formulated as an inverse problem, which is iteratively solved using an efficient block-coordinate descent algorithm. The proposed method is applied to real panchromatic, multispectral, and hyperspectral images with simulated realistic and real changes. A comparison with state-of-the-art change detection methods evidences the accuracy of the proposed strategy.
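The spatial-sparsity constraint on the difference between the latent images can be mimicked with a group soft-threshold on per-pixel spectral differences. This toy function is ours, not the paper's block-coordinate solver: it shrinks whole spectral difference vectors so that only pixels with large change energy survive.

```python
import numpy as np

def sparse_change_map(x1, x2, tau):
    """Group soft-thresholding of per-pixel spectral differences.

    x1, x2 : (pixels, bands) co-registered images
    tau    : threshold controlling how sparse the change map is
    Returns the shrunk difference image and a boolean change mask.
    """
    d = x2 - x1                                   # per-pixel spectral difference
    norms = np.linalg.norm(d, axis=1)             # change energy per pixel
    # multiplicative shrinkage: zero out pixels whose energy is below tau
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return d * scale[:, None], norms > tau
```

Shrinking the full spectral vector per pixel (rather than each band independently) is what makes the sparsity spatial, matching the modeling choice described in the abstract.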
Hyperspectral Imaging for Landmine Detection
This PhD thesis aims to investigate the possibility of detecting landmines using hyperspectral imaging. With this technology, we acquire spectral data in hundreds of wavelengths at each pixel of the image. Thus, at each pixel we obtain a reflectance spectrum that is used as a fingerprint to identify the materials present in the pixel and, in our project, to help detect the presence of landmines.
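One common way to compare such a reflectance fingerprint against a reference signature is the spectral angle score; the thesis evaluates several detectors, and this sketch shows only this one standard choice.

```python
import numpy as np

def spectral_angle(s, r):
    """Spectral angle (radians) between a pixel spectrum s and a
    reference signature r; small angles indicate a material match.
    Illustration only, not the thesis' full detection pipeline."""
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    # clip guards against rounding slightly outside [-1, 1]
    return np.arccos(np.clip(cos, -1.0, 1.0))
```

Because the angle depends only on spectral shape, not magnitude, it is insensitive to illumination scaling, which matters when the same mine is imaged under lab light, grass cover, or soil burial.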
The proposed process works as follows: a preconfigured drone (hexarotor or octorotor) carries the hyperspectral camera. This programmed drone is responsible for flying over the contaminated area in order to take images from a safe distance. Various image processing techniques are used to treat the image in order to isolate the landmine from its surroundings. Once the presence of a mine or explosives is suspected, an alarm signal is sent to the base station giving information about the type of mine, its location, and a clear path that could be taken by the mine removal team in order to disarm it.
This technology has several advantages over currently used techniques:
• It is safer: it limits the need for humans in the search process and allows the demining team to detect mines while remaining in a safe region.
• It is faster: a larger area can be cleared in a single day compared with conventional demining techniques.
• It can also be used to simultaneously detect objects other than mines, such as oil or minerals.
First, we present the worldwide and growing problem of landmines, referring to statistics from UN organizations, together with a brief presentation of the different types of landmines. Unfortunately, modern landmines are well camouflaged and mainly made of plastic, which makes their detection with metal detectors harder. A summary of all landmine detection techniques is given, outlining the advantages and disadvantages of each.
In this work, we give an overview of different projects that have addressed landmine detection using hyperspectral imaging, present the main results achieved in this field, and outline the future work needed to make this technology effective.
Moreover, we worked on different target detection algorithms in order to achieve a high probability of detection with a low false alarm rate. We tested different statistical and linear-unmixing-based methods. In addition, we introduced the use of radial basis function neural networks to detect landmines at the subpixel level. A comparative study of the different detection methods is presented in the thesis.
A study of the effect of dimensionality reduction using principal component analysis (PCA) prior to classification is also provided. The study shows the dependency between the two steps (feature extraction and target detection): the choice of target detection algorithm determines whether feature extraction in the previous phase is necessary.
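The PCA step studied here can be sketched directly with an SVD of the centred spectra; the rank `k` and data layout are illustrative, not the thesis' exact configuration.

```python
import numpy as np

def pca_reduce(X, k):
    """Project spectra onto the top-k principal components.

    X : (pixels, bands) matrix of reflectance spectra
    k : number of components to keep
    """
    Xc = X - X.mean(axis=0)                       # centre each band
    # right singular vectors of the centred data = principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # (pixels, k) scores
```

When the spectra truly lie near a k-dimensional subspace, the projection preserves almost all the variance, which is why some detectors lose nothing from the reduction while others, sensitive to the discarded directions, do.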
A field experiment was conducted to study how the spectral signature of a landmine changes depending on the environment in which it is planted. For this, we acquired the spectral signatures of six types of landmines under different conditions: in the lab, where a specific light source is used; in the field, where mines are covered by grass; and when mines are buried in soil. The signatures of two types of landmines are used in the simulations; they form a database necessary for supervised detection of landmines. We also extracted some spectral characteristics of landmines that help distinguish mines from the background.
A Bayesian fusion model for space-time reconstruction of finely resolved velocities in turbulent flows from low resolution measurements
The study of turbulent flows calls for measurements with high resolution both in space and in time. We propose a new approach to reconstruct high-temporal-high-spatial-resolution velocity fields by combining two sources of information that are well resolved either in space or in time: the Low-Temporal-High-Spatial (LTHS) and the High-Temporal-Low-Spatial (HTLS) resolution measurements. In the framework of co-conception between sensing and data post-processing, this work extensively investigates a Bayesian reconstruction approach using a simulated database. A Bayesian fusion model is developed to solve the inverse problem of data reconstruction. The model uses a maximum a posteriori estimate, which yields the most probable field given the measurements. The DNS of a wall-bounded turbulent flow at moderate Reynolds number is used to validate and assess the performance of the present approach. Low-resolution measurements are subsampled in time and space from the fully resolved data. Reconstructed velocities are compared to the reference DNS to estimate the reconstruction errors. The model is compared to other conventional methods such as linear stochastic estimation and cubic spline interpolation. Results show the superior accuracy of the proposed method in all configurations. Further investigation of model performance over various ranges of scales demonstrates its robustness. Numerical experiments also allow estimation of the expected maximum information level corresponding to the limitations of experimental instruments.