Comparing distances for quality assessment of fused images
This communication deals with the fusion of panchromatic (PAN) images of high spatial resolution and multispectral (MS) images of lower resolution in order to synthesize MS images at high resolution. These fused images should be as identical as possible to the images that would have been acquired by the corresponding spaceborne sensor if it were fitted with this high resolution. A protocol for the assessment of the quality of fused images was discussed by the EARSeL Special Interest Group "Data Fusion" in 2004. It evaluates how well fused images comply with two properties, from multispectral and monospectral viewpoints. Compliance is measured through a set of distances between the set of fused images and the multispectral reference images. This communication analyses the distances found in the literature. First, it proposes a classification of these distances into seven categories. Then it exhibits relations between several distances through an empirical study. Finally, a typical choice of distances is proposed in order to assess most aspects of fused images.
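Two of the most common distances in this literature, per-band RMSE and the correlation coefficient between fused and reference bands, can be sketched as follows. This is an illustrative sample of the distance categories surveyed, not the paper's full protocol; array shapes are assumed.

```python
import numpy as np

def band_distances(fused, reference):
    """Per-band RMSE and correlation between fused and reference MS images.

    fused, reference: arrays of shape (bands, H, W). These two distances
    are illustrative members of the categories surveyed; the assessment
    protocol discussed in the paper uses a larger set.
    """
    rmse, corr = [], []
    for f, r in zip(fused, reference):
        rmse.append(float(np.sqrt(np.mean((f - r) ** 2))))
        corr.append(float(np.corrcoef(f.ravel(), r.ravel())[0, 1]))
    return rmse, corr
```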
Edge Preservation in Ikonos Multispectral and Panchromatic Imagery Pan-sharpening
In Ikonos imagery, both multispectral (MS) and panchromatic (PAN) images are provided, with different spatial and spectral resolutions. Multispectral classification detects object classes only according to the spectral property of each pixel. Panchromatic image segmentation enables the extraction of detailed objects, like road networks, that are useful for map updating in Geographical Information Systems (GIS), environmental inspection, transportation and urban planning, etc. Therefore, the fusion of a PAN image with MS images is a key issue in applications that require both high spatial and high spectral resolutions. The fused image provides higher classification accuracy. To extract, for example, urban road networks in pan-sharpened images, edge information from the PAN image is used to eliminate misclassified objects. If the PAN image is not available, an edge map is extracted from the pan-sharpened images instead, so the quality of this map depends on the fusion process of the PAN and MS images. In a pan-sharpening process, before fusing, the MS images are resampled to the same pixel size as the PAN image, and this upsampling impacts subsequent processing. In this work, we demonstrate that the interpolation method used to resample the MS images is very important in preserving the edges in the pan-sharpened images.
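The resampling step the abstract highlights can be sketched as below, assuming a NumPy band and SciPy's spline-based `zoom` (the scale factor of 4 matches Ikonos's 1 m PAN vs 4 m MS resolutions; the choice of interpolation order is exactly what the paper studies).

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_ms(band, scale=4, method="bicubic"):
    """Resample one MS band to the PAN grid before fusion.

    `scale` is the PAN/MS resolution ratio (4 for Ikonos).  The
    interpolation order chosen here directly affects how well edges
    survive into the pan-sharpened product.
    """
    order = {"nearest": 0, "bilinear": 1, "bicubic": 3}[method]
    return zoom(band, scale, order=order)
```

Nearest-neighbour (`order=0`) keeps original radiometry but produces blocky edges; bicubic (`order=3`) yields smoother edges at the cost of slightly altered pixel values.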
A Novel Metric Approach Evaluation For The Spatial Enhancement Of Pan-Sharpened Images
Various methods can be used to produce high-resolution multispectral images from a high-resolution panchromatic (PAN) image and low-resolution multispectral (MS) images, mostly at the pixel level. The quality of image fusion is an essential determinant of the value of fused images for many applications. Spatial and spectral quality are the two important indexes used to evaluate the quality of any fused image. However, the jury is still out on the benefits of a fused image compared with its original images. In addition, there is a lack of measures for objectively assessing the spatial resolution of fusion methods, so an objective spatial-resolution quality assessment for fused images is required. Therefore, this paper proposes a new approach to estimate the spatial resolution improvement by a High Pass Division Index (HPDI), based on calculating the spatial frequency of the edge regions of the image. It also compares various analytical techniques for evaluating spatial quality and for estimating the colour distortion added by image fusion, including MG, SG, FCC, SD, En, SNR, CC and NRMSE. In addition, this paper concentrates on the comparison of various image fusion techniques based on pixel- and feature-level fusion. Comment: arXiv admin note: substantial text overlap with arXiv:1110.497
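Two of the indexes listed in the abstract, CC and NRMSE, can be sketched as follows. Normalisation conventions for NRMSE vary across the literature; normalisation by the reference's dynamic range is assumed here for illustration.

```python
import numpy as np

def cc(a, b):
    """Correlation coefficient between two images (spectral similarity);
    1.0 indicates a perfect linear match."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def nrmse(fused, reference):
    """Root-mean-square error normalised by the reference's dynamic
    range; 0 means a perfect match."""
    rng = reference.max() - reference.min()
    return float(np.sqrt(np.mean((fused - reference) ** 2)) / rng)
```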
Multispectral and Hyperspectral Image Fusion by MS/HS Fusion Net
Hyperspectral imaging can help better understand the characteristics of different materials, compared with traditional imaging systems. However, only high-resolution multispectral (HrMS) and low-resolution hyperspectral (LrHS) images can generally be captured at video rate in practice. In this paper, we propose a model-based deep learning approach for merging an HrMS image and an LrHS image to generate a high-resolution hyperspectral (HrHS) image. Specifically, we construct a novel MS/HS fusion model which takes into consideration the observation models of the low-resolution images and the low-rankness knowledge along the spectral mode of the HrHS image. We then design an iterative algorithm to solve the model by exploiting the proximal gradient method. By unfolding the designed algorithm, we construct a deep network, called MS/HS Fusion Net, learning the proximal operators and model parameters via convolutional neural networks. Experimental results on simulated and real data substantiate the superiority of our method both visually and quantitatively as compared with state-of-the-art methods along this line of research. Comment: 10 pages, 7 figures
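The iterate-then-unfold construction the abstract describes starts from a classical proximal gradient loop. A minimal sketch of such a loop for a generic sparse inverse problem (ISTA with a soft-threshold prox; not the paper's fusion model) is below; in the unfolded network, the prox and step size would be replaced by learned CNN modules and parameters.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1; in an unfolded network this
    hand-crafted map is replaced by a learned CNN."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient(A, y, tau=0.1, step=None, iters=100):
    """Generic ISTA: x <- prox(x - step * A^T (A x - y)).

    Unfolding a fixed number of such steps, with learned proximal
    operators and step sizes, is the construction the abstract
    describes (applied there to the MS/HS observation models).
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, step * tau)
    return x
```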
A Pansharpening Based on the Non-Subsampled Contourlet Transform and Convolutional Autoencoder: Application to QuickBird Imagery
This paper presents a pansharpening technique based on the non-subsampled contourlet transform (NSCT) and convolutional autoencoder (CAE). NSCT is exceptionally proficient at representing orientation information and capturing the internal geometry of objects. First, it is used to decompose the multispectral (MS) and panchromatic (PAN) images into high-frequency and low-frequency components using the same number of decomposition levels. Second, a CAE network is trained to generate original low-frequency PAN images from their spatially degraded versions. Low-resolution multispectral images are then fed into the trained convolutional autoencoder network to generate estimated high-resolution multispectral images. Third, another CAE network is trained to generate original high-frequency PAN images from their spatially degraded versions. The result of the low-pass CAE is fed to the trained high-pass CAE to generate estimated high-resolution multispectral images. The final pan-sharpened image is accomplished by injecting the detail map of the spectral bands into the corresponding estimated high-resolution multispectral bands. The proposed method is tested on QuickBird datasets and compared with some existing pan-sharpening techniques. Objective and subjective results demonstrate the efficiency of the proposed method.
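The final detail-injection step mentioned in the abstract can be sketched generically as below. This is the standard additive injection scheme, with a per-band gain of 1.0 assumed for illustration; the paper's actual detail map comes from its NSCT/CAE pipeline.

```python
import numpy as np

def inject_details(ms_up, detail, gain=1.0):
    """Inject a spatial-detail map into each (upsampled) MS band.

    ms_up:  (bands, H, W) MS image already on the PAN grid.
    detail: (H, W) high-frequency map, e.g. as estimated by a
            high-pass network.  The uniform `gain` is an assumption;
            injection schemes often use per-band gains.
    """
    return ms_up + gain * detail[None, :, :]
```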
A Multiple-Expert Binarization Framework for Multispectral Images
In this work, a multiple-expert binarization framework for multispectral images is proposed. The framework is based on a constrained subspace selection, limited to the spectral bands, combined with state-of-the-art gray-level binarization methods. The framework uses a binarization wrapper to enhance the performance of the gray-level binarization. Nonlinear preprocessing of the individual spectral bands is used to enhance the textual information. An evolutionary optimizer is used to obtain the optimal and some suboptimal 3-band subspaces, from which an ensemble of experts is then formed. The framework is applied to a ground-truth multispectral dataset with promising results. In addition, a generalization of the cross-validation approach is developed that not only evaluates the generalizability of the framework but also provides a practical instance of the selected experts that can then be applied to unseen inputs despite the small size of the given ground-truth dataset. Comment: 12 pages, 8 figures, 6 tables. Presented at ICDAR'1
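The expert-ensemble idea can be sketched as a majority vote over binarizations of selected 3-band subspaces. The global mean threshold below is a stand-in for the state-of-the-art gray-level binarizers the framework actually wraps, and the band triples are given by hand here rather than by the evolutionary optimizer.

```python
import numpy as np

def binarize_band(band):
    """Stand-in gray-level binarizer (global mean threshold); the
    framework wraps state-of-the-art binarizers instead."""
    return band > band.mean()

def ensemble_binarize(cube, subspaces):
    """Majority vote over expert binarizations.

    cube:      (bands, H, W) multispectral image.
    subspaces: list of 3-band index triples; each expert binarizes the
               mean of its subspace (in the paper, the triples are
               found by an evolutionary optimizer).
    """
    votes = [binarize_band(cube[list(s)].mean(axis=0)) for s in subspaces]
    return np.mean(votes, axis=0) >= 0.5
```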