Compressive Sensing for PAN-Sharpening
Based on the compressive sensing framework and sparse reconstruction techniques, a new pan-sharpening method, named Sparse Fusion of Images (SparseFI, pronounced as "sparsify"), is proposed in [1]. In this paper, the proposed SparseFI algorithm is validated using UltraCam and WorldView-2 data. Visual and statistical analyses show the superior performance of SparseFI compared with existing conventional pan-sharpening methods, i.e., richer spatial information and less spectral distortion. Moreover, popular quality assessment metrics are employed to explore the dependence on regularization parameters and to evaluate the efficiency of various sparse reconstruction toolboxes.
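The sparse-reconstruction step behind methods of this kind can be sketched as follows: a low-resolution patch is coded sparsely in a low-resolution dictionary, and the coefficients are reused with a coupled high-resolution dictionary to synthesize the high-resolution patch. The minimal numpy sketch below uses random dictionaries and ISTA for the sparse coding; it is not the SparseFI implementation, and all sizes and parameters are illustrative.

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """Sparse coding: min_a 0.5||y - D a||^2 + lam ||a||_1, solved by ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - y)              # gradient of the data-fidelity term
        z = a - g / L                      # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D_lr = rng.standard_normal((16, 64))       # low-resolution patch dictionary
D_hr = rng.standard_normal((64, 64))       # coupled high-resolution dictionary
a_true = np.zeros(64)
a_true[[3, 17, 42]] = [1.0, -0.5, 0.8]     # a sparse ground-truth code
y = D_lr @ a_true                          # observed low-resolution patch
a = ista(D_lr, y, lam=0.01)                # code the LR patch sparsely ...
x_hr = D_hr @ a                            # ... and reuse the code at high resolution
```

In the actual method the two dictionaries are built from corresponding panchromatic and multispectral patches rather than random matrices, which is what couples the two resolutions.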
Scene-adapted plug-and-play algorithm with convergence guarantees
Recent frameworks, such as the so-called plug-and-play, allow us to leverage
the developments in image denoising to tackle other, and more involved,
problems in image processing. As the name suggests, state-of-the-art denoisers
are plugged into an iterative algorithm that alternates between a denoising
step and the inversion of the observation operator. While these tools offer
flexibility, the convergence of the resulting algorithm may be difficult to
analyse. In this paper, we plug a state-of-the-art denoiser, based on a
Gaussian mixture model, into the iterations of the alternating direction
method of multipliers (ADMM) and prove that the resulting algorithm is
guaranteed to converge. Moreover, we build on the concept of scene-adapted
priors, learning a model targeted to the specific scene being imaged, and
apply the proposed method to the hyperspectral sharpening problem.
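The alternation between a denoising step and the inversion of the observation operator described above can be sketched as a generic plug-and-play ADMM loop. This is a toy numpy illustration, not the paper's GMM-based method: the "denoiser" here is a simple linear shrinkage stand-in, and all problem sizes are hypothetical.

```python
import numpy as np

def pnp_admm(y, A, denoise, rho=1.0, n_iter=50):
    """Plug-and-play ADMM for min_x 0.5||y - A x||^2 + phi(x),
    where the prior phi is implicit in the plugged-in denoiser."""
    n = A.shape[1]
    x = np.zeros(n); v = np.zeros(n); u = np.zeros(n)
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))  # inversion of the observation operator
    Aty = A.T @ y
    for _ in range(n_iter):
        x = M @ (Aty + rho * (v - u))      # data-fidelity / inversion step
        v = denoise(x + u)                 # denoising step (the "plug")
        u = u + x - v                      # dual (scaled Lagrange multiplier) update
    return v

def toy_denoiser(z, w=0.3):
    """Stand-in for a learned denoiser: mild shrinkage toward the global mean."""
    return (1 - w) * z + w * z.mean()

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 20))          # hypothetical observation operator
x_true = np.ones(20)
y = A @ x_true + 0.05 * rng.standard_normal(30)
x_hat = pnp_admm(y, A, toy_denoiser)
```

The convergence guarantee in the paper hinges on the specific (GMM-based) denoiser being a proximal operator; an arbitrary plugged-in denoiser does not automatically inherit that guarantee.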
Target-adaptive CNN-based pansharpening
We recently proposed a convolutional neural network (CNN) for remote sensing
image pansharpening, obtaining a significant performance gain over the state of
the art. In this paper, we explore a number of architectural and training
variations to this baseline, achieving further performance gains with a
lightweight network that trains very fast. Leveraging this latter property,
we propose a target-adaptive usage modality that ensures very good
performance even in the presence of a mismatch with respect to the training
set, and even across different sensors. The proposed method, published online
as an off-the-shelf software tool, allows users to perform fast, high-quality
CNN-based pansharpening of their own target images on general-purpose
hardware.
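In the abstract, the target-adaptive usage modality amounts to taking a pretrained model and running a few fast training steps on data derived from the target image itself before inference. The numpy sketch below illustrates that idea on a toy linear model rather than the paper's CNN; the "sensors", data, and learning-rate settings are all hypothetical.

```python
import numpy as np

def fine_tune(W, X_tgt, Y_tgt, lr=0.1, n_steps=500):
    """A few gradient steps on target-derived pairs (MSE loss, linear model),
    starting from pretrained weights W."""
    W = W.copy()
    for _ in range(n_steps):
        grad = X_tgt.T @ (X_tgt @ W - Y_tgt) / len(X_tgt)  # MSE gradient
        W -= lr * grad
    return W

rng = np.random.default_rng(2)
W_true_src = rng.standard_normal((5, 3))   # "source sensor" mapping
W_true_tgt = W_true_src + 0.3              # target sensor differs slightly
W_pre = W_true_src.copy()                  # pretrained weights (source-optimal)
X_t = rng.standard_normal((50, 5))         # small target-derived training set
Y_t = X_t @ W_true_tgt
W_ad = fine_tune(W_pre, X_t, Y_t)          # adapt to the target before inference
```

The point of the sketch is only the workflow: because adaptation is cheap, a mismatched pretrained model can be pulled toward the target sensor with a short burst of training.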
Multispectral and Hyperspectral Image Fusion by MS/HS Fusion Net
Hyperspectral imaging can help better characterize different materials than
traditional imaging systems. In practice, however, generally only
high-resolution multispectral (HrMS) and low-resolution hyperspectral (LrHS)
images can be captured at video rate. In this paper, we propose a model-based
deep learning approach for merging an HrMS image and an LrHS image to
generate a high-resolution hyperspectral (HrHS) image. Specifically, we
construct a novel MS/HS fusion model that takes into consideration the
observation models of the low-resolution images and the low-rankness of the
HrHS image along its spectral mode. We then design an iterative algorithm to
solve the model using the proximal gradient method. By unfolding the designed
algorithm, we construct a deep network, called MS/HS Fusion Net, in which the
proximal operators and model parameters are learned by convolutional neural
networks. Experimental results on simulated and real data substantiate the
superiority of our method, both visually and quantitatively, compared with
state-of-the-art methods along this line of research.
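The observation models and spectral low-rankness the abstract refers to can be made concrete with a small numpy sketch. This is not the MS/HS Fusion Net itself, only the noise-free linear model it is built on: the HrHS image is spectrally low-rank, the LrHS image is a spatially degraded copy, and the HrMS image is a spectrally degraded copy; all sizes, operators, and the rank are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic HrHS image: 31 bands x 64 pixels, spectrally low-rank (rank 4)
E = rng.standard_normal((31, 4))           # spectral basis
A = rng.standard_normal((4, 64))           # per-pixel coefficients
X_hr = E @ A                               # high-res hyperspectral (bands x pixels)

R = rng.standard_normal((8, 31))           # spectral response: HS bands -> 8 MS bands
S = np.kron(np.eye(16), np.ones((1, 4)) / 4)  # spatial degradation: average 4-pixel groups
Y_ms = R @ X_hr                            # observed HrMS image (spectrally degraded)
Y_hs = X_hr @ S.T                          # observed LrHS image (spatially degraded)

# Recover the spectral subspace from the LrHS image ...
U, _, _ = np.linalg.svd(Y_hs, full_matrices=False)
E_hat = U[:, :4]
# ... and the per-pixel coefficients from the HrMS image by least squares
A_hat = np.linalg.lstsq(R @ E_hat, Y_ms, rcond=None)[0]
X_hat = E_hat @ A_hat                      # fused HrHS estimate
```

In this idealized noise-free setting the fusion is exact; the paper's contribution is to unfold a proximal-gradient solver for the regularized version of this model into a network whose proximal operators are learned CNNs.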
Remote sensing image fusion via compressive sensing
In this paper, we propose a compressive sensing-based method to pan-sharpen low-resolution multispectral (LRM) data with the help of high-resolution panchromatic (HRP) data. To successfully apply compressive sensing theory to pan-sharpening, two requirements should be satisfied: (i) a comprehensive dictionary must be formed in which the estimated coefficient vectors are sparse; and (ii) the constructed dictionary must be uncorrelated with the measurement matrix. To fulfill these requirements, we propose two novel strategies. The first is to construct a dictionary trained with patches across different image scales. Patches at different scales, or equivalently multiscale patches, provide texture atoms without requiring any external database or prior atoms. The redundancy of the dictionary is removed through K-singular value decomposition (K-SVD). The second is to design an iterative l1-l2 minimization algorithm based on the alternating direction method of multipliers (ADMM) to seek the sparse coefficient vectors. The proposed algorithm stacks the missing high-resolution multispectral (HRM) data with the captured LRM data, so that the latter serves as a constraint on the estimation of the former while the representation coefficients are sought. Three datasets are used to test the performance of the proposed method. A comparative study between the proposed method and several state-of-the-art ones shows its effectiveness in dealing with the complex structures of remote sensing imagery.