Registration and Fusion of Multi-Spectral Images Using a Novel Edge Descriptor
In this paper we introduce a fully end-to-end approach for multi-spectral
image registration and fusion. Our method for fusion combines images from
different spectral channels into a single fused image by different approaches
for low and high frequency signals. A prerequisite of fusion is a stage of
geometric alignment between the spectral bands, commonly referred to as
registration. Unfortunately, common methods for image registration of a single
spectral channel do not yield reasonable results on images from different
modalities. To that end, we introduce a new algorithm for multi-spectral image
registration, based on a novel edge descriptor of feature points. Our method
achieves an alignment accurate enough to allow the images to be fused. As our
experiments show, it produces high-quality multi-spectral image registration
and fusion under many challenging scenarios.
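The abstract does not specify the edge descriptor itself; as a minimal illustrative sketch of the general idea (aligning modalities via their edge maps, since edges persist across spectral bands far better than raw intensities do), the following hypothetical numpy routine estimates a global translation by cross-correlating gradient-magnitude maps:

```python
import numpy as np

def edge_magnitude(img):
    """Gradient-magnitude edge map (simple finite differences)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def register_translation(ref, mov):
    """Estimate the integer shift (dy, dx) aligning `mov` to `ref` by
    cross-correlating their edge maps in the Fourier domain, i.e. so that
    np.roll(mov, (dy, dx), axis=(0, 1)) best matches `ref`."""
    e_ref, e_mov = edge_magnitude(ref), edge_magnitude(mov)
    corr = np.fft.ifft2(np.fft.fft2(e_ref) * np.conj(np.fft.fft2(e_mov))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:  # wrap large circular shifts to negative offsets
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

A real multi-spectral registration pipeline would match descriptor feature points and estimate a full geometric transform; this sketch only recovers a rigid translation, but it shows why edge maps make cross-modal correlation feasible.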
A Novel Metric Approach Evaluation For The Spatial Enhancement Of Pan-Sharpened Images
Various methods can be used to produce high-resolution multispectral images
from a high-resolution panchromatic image (PAN) and low-resolution
multispectral images (MS), mostly at the pixel level. The quality of image
fusion is an essential determinant of its value for many applications. Spatial
and spectral quality are the two important indexes used to evaluate the
quality of any fused image. However, the jury is still out on the benefits of
a fused image compared with its original images. In addition, there is a lack
of measures for objectively assessing the spatial-resolution quality of fusion
methods, so an objective assessment of the spatial resolution of fused images
is required. Therefore, this paper proposes a new approach to estimate the
spatial-resolution improvement, the High Pass Division Index (HPDI), based on
calculating the spatial frequency of the edge regions of the image, and
compares various analytical techniques for evaluating spatial quality and
estimating the colour distortion added by image fusion, including MG, SG, FCC,
SD, En, SNR, CC and NRMSE. In addition, the paper compares various image
fusion techniques based on pixel- and feature-level fusion.
Comment: arXiv admin note: substantial text overlap with arXiv:1110.497
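The abstract does not define HPDI precisely, but the spatial frequency it computes over edge regions is a standard sharpness measure (RMS energy of row and column first differences); a minimal numpy sketch:

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency of an image: RMS energy of horizontal and
    vertical first differences, a common sharpness measure in fusion
    quality assessment (higher = more spatial detail)."""
    img = np.asarray(img, dtype=float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.hypot(rf, cf)
```

On such a measure, a fused image with well-injected PAN detail should score noticeably higher than the upsampled MS input, which is what makes it usable as a spatial-quality index.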
New applications of Spectral Edge image fusion
In this paper, we present new applications of the Spectral Edge image fusion method. The Spectral Edge algorithm creates a result which combines details from any number of multispectral input images with natural color information from a visible-spectrum image. It is a derivative-based technique: the output fused image has gradients which are an ideal combination of those of the multispectral input images and the input visible color image, producing both maximum detail and natural colors. We present two new applications of Spectral Edge image fusion. Firstly, we fuse RGB-NIR information from a sensor with a modified Bayer pattern, which captures visible and near-infrared image information on a single CCD. We also present an example of RGB-thermal image fusion, using a thermal camera attached to a smartphone, which captures both visible and low-resolution thermal images. These new results may be useful for computational photography and surveillance applications. © (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
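The actual Spectral Edge method solves a gradient-domain reintegration problem; as a much-simplified, hypothetical stand-in for the same goal (detail from extra bands, color from the visible image), the sketch below injects NIR high-pass detail into the visible channels. All function names are illustrative, not from the paper:

```python
import numpy as np

def box_blur(img, k=5):
    """Separable k-by-k box low-pass filter with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, out)

def fuse_rgb_nir(rgb, nir, k=5):
    """Add NIR high-pass detail to every RGB channel. Color comes
    entirely from `rgb`; only fine structure is taken from `nir`."""
    detail = nir - box_blur(nir, k)          # high-frequency NIR content
    return np.clip(rgb + detail[..., None], 0.0, 1.0)
```

Where the NIR image is smooth, the visible image passes through unchanged; gradient-domain fusion improves on this by choosing, per pixel, a contrast consistent with all input channels rather than simply adding detail.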
Quality assessment by region in spot images fused by means dual-tree complex wavelet transform
This work is motivated by providing and evaluating a fusion algorithm for remotely sensed images, i.e. the fusion of a high-spatial-resolution panchromatic image with a multi-spectral image (also known as pansharpening), using the dual-tree complex wavelet transform (DT-CWT), an effective approach for conducting an analytic and oversampled wavelet transform that reduces aliasing and, in turn, the shift dependence of the wavelet transform. The proposed scheme includes the definition of a model that establishes how information is extracted from the PAN band and how that information is injected into the low-spatial-resolution MS bands. The approach was applied to SPOT 5 images, where there are bands falling outside the PAN band's spectrum. We propose an optional step in the quality evaluation protocol: studying the quality of the fusion by region, where each region represents a specific feature of the image. The results show that the DT-CWT-based approach offers good spatial quality while retaining the spectral information of the original images, as in the case of SPOT 5. The additional step facilitates the identification of the regions most affected by the fusion process.
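The injection model described here follows the generic pansharpening pattern: extract high-frequency detail from the PAN band and add it to the upsampled MS bands. A single-level numpy stand-in is sketched below; the real scheme operates per DT-CWT subband, and the `gain` parameter and the crude block-mean low-pass are assumptions for illustration:

```python
import numpy as np

def upsample2(band):
    """Nearest-neighbour 2x upsampling of a low-resolution MS band."""
    return band.repeat(2, axis=0).repeat(2, axis=1)

def lowpass(img):
    """Crude low-pass: mean over non-overlapping 2x2 blocks, upsampled back.
    (Assumes even image dimensions.)"""
    h, w = img.shape
    small = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return upsample2(small)

def inject_detail(ms_lowres, pan, gain=1.0):
    """Additive injection model: fused = upsampled MS + gain * (PAN high-pass)."""
    return upsample2(ms_lowres) + gain * (pan - lowpass(pan))
```

Working per wavelet subband instead of on a single high-pass residual is what lets DT-CWT schemes tune the injection to each scale and orientation while keeping the MS spectral content intact.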
Toward reduction of artifacts in fused images
Most pixel-level satellite image fusion methodologies introduce false spatial details, i.e. artifacts, in the resulting fused images. In many cases, these artifacts appear because image fusion methods do not consider the differences in roughness or textural characteristics between different land covers; they only consider the digital values associated with single pixels. This effect increases as the spatial resolution of the image increases. To minimize this problem, we propose a new paradigm based on local measurements of the fractal dimension (FD). Fractal dimension maps (FDMs) are generated for each of the source images (panchromatic and each band of the multi-spectral images) with the box-counting algorithm and a windowing process. The average of the source-image FDMs, previously indexed between 0 and 1, is used to discriminate the different land covers present in satellite images. This paradigm has been applied through the fusion methodology based on the discrete wavelet transform (DWT), using the à trous algorithm (WAT). Two different scenes registered by optical sensors on board the FORMOSAT-2 and IKONOS satellites were used to study the behaviour of the proposed methodology. The implementation of this approach with the WAT method allows adapting the fusion process to the roughness and shape of the regions present in the image to be fused. This improves the quality of the fused images and their classification results when compared with the original WAT method.
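Box counting, as used here to build the FD maps, estimates a fractal dimension as the slope of log N(s) against log(1/s), where N(s) is the number of boxes of side s containing foreground; a global (non-windowed) sketch:

```python
import numpy as np

def box_counting_dimension(mask):
    """Fractal dimension of a square, non-empty binary mask (side a power
    of two), estimated as the slope of log N(s) vs log(1/s) over dyadic
    box sizes s."""
    mask = np.asarray(mask, dtype=bool)
    n = mask.shape[0]
    sizes, counts = [], []
    s = n // 2
    while s >= 1:
        # number of s-by-s boxes containing at least one foreground pixel
        boxes = mask.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(boxes.sum())
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A filled region yields dimension 2 and a one-pixel-wide line yields 1; the paper applies the same count in a sliding window over each source band to obtain per-pixel FD maps for land-cover discrimination.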
