305 research outputs found
Implementation of Max Principle with PCA in Image Fusion for Surveillance and Navigation Application
Image fusion combines two or more images into a single composite image using suitable algorithms, providing a useful tool for integrating information from multiple sources. In this paper, we present an approach that uses principal component analysis (PCA) together with maximum-pixel-intensity selection to perform fusion. Entropy, mutual information, and a universal-index-based measure are used to evaluate the performance of the fusion algorithm.
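The abstract does not spell out how the maximum rule and PCA are combined, so the sketch below shows each ingredient separately, together with one of the quoted evaluation measures (entropy). The function names and the 2x2-covariance formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pca_weights(a, b):
    # Treat the two co-registered images as two variables and take the
    # dominant eigenvector of their 2x2 covariance matrix as fusion weights.
    cov = np.cov(np.stack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()                      # normalised to sum to 1

def fuse_pca(a, b):
    # PCA rule: weighted average with eigenvector-derived weights.
    w1, w2 = pca_weights(a, b)
    return w1 * a + w2 * b

def fuse_max(a, b):
    # Max rule: per-pixel maximum intensity selection.
    return np.maximum(a, b)

def entropy(img, bins=256):
    # Shannon entropy of the grey-level histogram (images scaled to [0, 1]).
    hist, _ = np.histogram(img, bins=bins, range=(0, 1))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```

A higher entropy in the fused image than in either input is commonly read as increased information content, which is how such measures are used for evaluation.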
Multi-Sensor Image Registration, Fusion and Dimension Reduction
The development of future spacecraft formations brings a number of complex challenges, such as maintaining precise relative positions and specified attitudes while the spacecraft communicate with each other. More generally, with the advent of spacecraft formations, on-board automatic data computation and analysis, together with decision planning and scheduling, will figure among the most important requirements. Among these, automatic image registration, image fusion and dimension reduction represent intelligent technologies that would reduce mission costs, enable autonomous decisions to be taken on-board, and make formation flying adaptive, self-reliant, and cooperative. For both on-board and on-the-ground applications, the need for dimension reduction is two-fold: first, to reduce the communication bandwidth; second, as a pre-processing step that makes computations feasible, simpler and faster.
Development and implementation of image fusion algorithms based on wavelets
Image fusion is the process of blending both the complementary and the common features of a set of images to generate a resultant image with superior information content from both subjective and objective points of view. The objective of this research work is to develop novel image fusion algorithms and their applications in various fields such as crack detection, multispectral sensor image fusion, medical image fusion and edge detection of multi-focus images. The first part of this work presents a novel crack detection technique based on Non-Destructive Testing (NDT) that detects cracks in walls while suppressing the diversity and complexity of wall images. It employs edge tracking algorithms such as Hyperbolic Tangent (HBT) filtering and the Canny edge detection algorithm. The second part presents a novel edge detection approach for multi-focus images by means of complex-wavelet-based image fusion. An illumination-invariant hyperbolic tangent (HBT) filter is applied, followed by adaptive thresholding to extract the true edges. The shift invariance and directionally selective diagonal filtering of the Dual-Tree Complex Wavelet Transform (DT-CWT), together with its ease of implementation, ensure robust sub-band fusion and help avoid the ringing artefacts that are more pronounced in the Discrete Wavelet Transform (DWT). Fusion using the DT-CWT also mitigates low contrast and blocking effects. In the third part, an improved DT-CWT-based image fusion technique is developed to compose a resultant image with better perceptual as well as quantitative image quality indices. A bilateral-sharpness-based weighting scheme is applied to the high-frequency coefficients, taking both the gradient and its phase coherence into account.
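The common core of such wavelet fusion schemes can be sketched without the dual-tree machinery: decompose both inputs, average the approximation band, keep the detail coefficient with the larger magnitude, and invert. The sketch below uses a hand-rolled one-level 2-D Haar transform so it is self-contained; a DT-CWT implementation (as in the thesis) would replace the transform pair, not the fusion rule.

```python
import numpy as np

def haar2d(x):
    # One-level 2-D Haar decomposition (assumes even image dimensions).
    a = (x[0::2] + x[1::2]) / 2          # row averages
    d = (x[0::2] - x[1::2]) / 2          # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2   # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2   # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2   # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2   # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # Exact inverse of haar2d.
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse_dwt(x, y):
    # Average the approximation bands; pick the larger-magnitude detail
    # coefficient in each detail band (a standard wavelet fusion rule).
    cx, cy = haar2d(x), haar2d(y)
    ll = (cx[0] + cy[0]) / 2
    details = [np.where(np.abs(u) >= np.abs(v), u, v)
               for u, v in zip(cx[1:], cy[1:])]
    return ihaar2d(ll, *details)
```

The max-magnitude rule is what transfers the sharpest in-focus detail from each multi-focus input into the composite.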
A directed search algorithm for setting the spectral-spatial quality trade-off of fused images by the wavelet à trous method
This paper proposes a method to determine, in an objective and accurate way, the weighting factor (α) applied to the detailed panchromatic image information that is integrated with the background multispectral image information, so as to obtain the "best" fused image, with equal spatial and spectral quality. The fusion method is a weighted variant of the fusion algorithm based on the wavelet transform, computed using the à trous (WAT) algorithm. The α factor is determined, for each band of the multispectral source images, using the simulated annealing (SA) search algorithm, which optimizes an objective function (OF) combining spatial and spectral quality measures of the fused images. The results demonstrate that for each spectral band there is an α value that yields fused images with the optimal trade-off between the two qualities for any decomposition level (n) of the wavelet transform.
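The search component is a standard scalar simulated annealing loop; a minimal sketch follows, assuming only that the paper's objective function can be wrapped as a callable returning a cost to minimise. The proposal step size, cooling schedule and iteration count here are illustrative defaults, not the paper's settings.

```python
import numpy as np

def anneal_alpha(objective, alpha0=0.5, t0=1.0, cooling=0.95, steps=200, seed=0):
    # Simulated-annealing search for a scalar weighting factor in [0, 1].
    # `objective(alpha)` is assumed to combine spatial and spectral quality
    # measures into a single cost (lower is better).
    rng = np.random.default_rng(seed)
    alpha, cost, t = alpha0, objective(alpha0), t0
    best_a, best_c = alpha, cost
    for _ in range(steps):
        cand = np.clip(alpha + rng.normal(scale=0.1), 0.0, 1.0)
        c = objective(cand)
        # Accept improvements always; accept worse moves with
        # probability exp(-(c - cost) / t), shrinking as t cools.
        if c < cost or rng.random() < np.exp((cost - c) / t):
            alpha, cost = cand, c
            if cost < best_c:
                best_a, best_c = alpha, cost
        t *= cooling
    return best_a
```

Running one such search per multispectral band yields the per-band α values the paper reports.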
Signal processing algorithms for enhanced image fusion performance and assessment
The dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method that performs well for image sets heavily corrupted by noise. Unlike current image fusion schemes, the method requires no a priori knowledge of the noise component. The image is decomposed with Chebyshev polynomials (CP) used as basis functions to perform fusion at feature level. The properties of CP, namely fast convergence and smooth approximation, render it ideal for heuristic and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared to previous efforts in image fusion, notably for heavily corrupted images.
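The denoising behaviour of a truncated Chebyshev decomposition can be illustrated briefly: fitting each image row with a low-degree Chebyshev series discards the high-order terms where noise dominates, which is the smooth-approximation property the abstract refers to. This sketch uses NumPy's Chebyshev routines; the row-wise formulation and degree are illustrative assumptions, not the dissertation's exact decomposition.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_smooth(img, deg=8):
    # Approximate each image row by a truncated Chebyshev series.
    # Truncation to degree `deg` acts as a smoother: noise energy in
    # higher-order terms is simply never represented.
    n = img.shape[1]
    x = np.linspace(-1, 1, n)           # Chebyshev natural domain [-1, 1]
    coeffs = C.chebfit(x, img.T, deg)   # least-squares fit, all rows at once
    return C.chebval(x, coeffs)         # shape (rows, cols)
```

Fusion at feature level would then operate on the fitted coefficients rather than on raw pixels, so no explicit noise model is needed.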
The approach is further improved by combining the advantages of CP with a state-of-the-art fusion technique, independent component analysis (ICA), for joint fusion processing based on region saliency. While CP fusion is robust under severe noise conditions, it is prone to eliminating high-frequency information from the images involved, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution. Again, according to the quantitative metrics, the results of the proposed approach are very encouraging as far as joint fusion and denoising are concerned.
Another focus of this dissertation is a novel texture-based metric for image fusion evaluation. The conservation of background textural details is important in many fusion applications because such details help define image depth and structure, which may prove crucial in many surveillance and remote sensing applications. This work evaluates the performance of image fusion algorithms by their ability to retain textural details through the fusion process. This is done by using the grey-level co-occurrence matrix (GLCM) model to extract second-order statistical features, from which an image textural measure is derived and used to replace the edge-based calculations in an objective fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially for multimodal scenarios.
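The GLCM building block behind such a metric is compact enough to sketch: count co-occurrences of quantised grey levels at a fixed pixel offset, normalise to a joint probability, and derive second-order statistics from it. The offset, level count and chosen statistics below are common defaults, not necessarily those used in the dissertation's metric.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    # Grey-level co-occurrence matrix for one offset (dx, dy), computed on
    # an image with values in [0, 1] quantised to `levels` grey levels.
    q = np.minimum((img * levels).astype(int), levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    src = q[:h - dy, :w - dx]           # reference pixels
    dst = q[dy:, dx:]                   # neighbours at the offset
    np.add.at(m, (src.ravel(), dst.ravel()), 1)
    return m / m.sum()                  # normalise to a probability matrix

def texture_features(p):
    # Classic second-order statistics derived from a GLCM `p`.
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1 + np.abs(i - j)))
    return contrast, energy, homogeneity
```

Comparing such features between each source image and the fused output gives a texture-retention score, which is the quantity the proposed metric substitutes for edge-based calculations.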
- …