    Image fusion techniques for remote sensing applications

    Image fusion refers to the acquisition, processing and synergistic combination of information provided by various sensors, or by the same sensor in many measuring contexts. The aim of this survey paper is to describe three typical applications of data fusion in remote sensing. The first case study considers the problem of Synthetic Aperture Radar (SAR) interferometry, where a pair of antennas is used to obtain an elevation map of the observed scene; the second refers to the fusion of multisensor and multitemporal (Landsat Thematic Mapper and SAR) images of the same site acquired at different times, using neural networks; the third presents a processor to fuse multifrequency, multipolarization and multiresolution SAR images, based on the wavelet transform and a multiscale Kalman filter. Each case study also presents results achieved by applying the proposed techniques to real data.
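    As a minimal illustration of the SAR interferometry case study above, the interferometric phase of two co-registered single-look complex (SLC) images is the phase of their complex cross-product. This is a sketch only (the function name is illustrative); phase unwrapping and the conversion to height via the baseline geometry, which the elevation-mapping step requires, are omitted:

```python
import numpy as np

def interferogram(slc1, slc2):
    """Interferometric phase of two co-registered complex SAR images:
    the phase of the complex cross-product. In a full InSAR chain this
    wrapped phase would be unwrapped and converted to terrain height
    using the antenna baseline geometry."""
    return np.angle(slc1 * np.conj(slc2))
```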

    Multiresolution based, multisensor, multispectral image fusion

    Spaceborne sensors, which collect imagery of the Earth in various spectral bands, are limited by data transmission rates. As a result, the multispectral bands are transmitted at a lower resolution and only the panchromatic band is transmitted at its full resolution. The information contained in the multispectral bands is an invaluable tool for land use mapping, urban feature extraction, etc. However, the limited spatial resolution reduces the appeal and value of this information. Pan sharpening techniques enhance the spatial resolution of the multispectral imagery by extracting the high spatial resolution of the panchromatic band and adding it to the multispectral images. Many different pan sharpening methods are available, such as those based on the Intensity-Hue-Saturation and Principal Components Analysis transformations, but these cause heavy spectral distortion of the multispectral images. This is a drawback if the pan sharpened images are to be used for classification based applications. In recent years, multiresolution based techniques have received a lot of attention since they preserve the spectral fidelity of the pan sharpened images. Many variations of the multiresolution based techniques exist; they differ in the transform used to extract the high spatial resolution information from the images and in the rules used to synthesize the pan sharpened image. The superiority of many of these techniques has been demonstrated only by comparing them with fairly simple techniques such as Intensity-Hue-Saturation or Principal Components Analysis, so there is much uncertainty in the pan sharpening community as to which technique best preserves spectral fidelity. This research investigates these variations in order to answer this question. An important parameter of the multiresolution based methods is the number of decomposition levels to be applied. It is found that the number of decomposition levels affects both the spatial and spectral quality of the pan sharpened images. The minimum number of decomposition levels required to fuse the multispectral and panchromatic images was determined in this study for image pairs with different resolution ratios, and recommendations are made accordingly.
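    The substitution idea behind multiresolution pan sharpening can be sketched with a single-level 2-D Haar decomposition: the detail bands of the panchromatic image replace those of the upsampled multispectral band. This is a minimal sketch, not the thesis's method; real implementations use more decomposition levels (the parameter studied above) and better upsampling, and the function names and the 2x resolution ratio are illustrative assumptions:

```python
import numpy as np

def haar_decompose(img):
    """One-level 2-D Haar decomposition into an approximation band and
    three detail bands (horizontal, vertical, diagonal)."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, (h, v, d)

def haar_reconstruct(a, details):
    """Inverse of haar_decompose (perfect reconstruction)."""
    h, v, d = details
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 0::2] = a - h + v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def pan_sharpen(ms_band, pan):
    """Substitute the pan detail bands into an upsampled multispectral band
    (assumes pan has twice the resolution of ms_band)."""
    ms_up = np.kron(ms_band, np.ones((2, 2)))  # naive 2x pixel replication
    a_ms, _ = haar_decompose(ms_up)
    _, det_pan = haar_decompose(pan)
    return haar_reconstruct(a_ms, det_pan)
```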

    Data fusion for NDE signal characterization

    The primary objective of multi-sensor data fusion, which offers both quantitative and qualitative benefits, is to draw inferences that may not be feasible with data from a single sensor alone. In this study, data from two sets of sensors are fused to estimate the defect profile from magnetic flux leakage (MFL) inspection data. The two sensors measure the axial and circumferential components of the MFL field. Data are fused at the signal level: the two signals are combined as the real and imaginary components of a complex valued signal. Signals from an array of sensors are arranged in contiguous rows to obtain a complex valued image. Signals from the defect regions are then processed to minimize noise and the effects of lift-off. A boundary extraction algorithm is used not only to estimate the defect size more accurately, but also to segment the defect area. A wavelet basis function neural network (WBFNN) is then employed to map the complex valued image appropriately to obtain the geometric profile of the defect. The feasibility of the approach was evaluated using data obtained from the MFL inspection of natural gas transmission pipelines. The results obtained by fusing the axial and circumferential components appear to be better than those obtained using the axial component alone. Finally, a WBFNN based boundary extraction scheme is employed for the proposed fusion approach. The boundary based adaptive weighted average (BBAWA) offers superior performance compared to three alternative fusion methods employing weighted average (WA), principal component analysis (PCA), and adaptive weighted average (AWA).
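    The signal-level fusion step described above (axial component as the real part, circumferential component as the imaginary part, with sensor rows stacked into a complex valued image) can be sketched as follows; the function name is illustrative, not from the paper:

```python
import numpy as np

def fuse_mfl_rows(axial_rows, circ_rows):
    """Stack per-sensor scan rows into a complex-valued image:
    real part = axial MFL component, imaginary part = circumferential.
    The magnitude image is one convenient view for later defect
    segmentation and boundary extraction."""
    ax = np.vstack(axial_rows)
    ci = np.vstack(circ_rows)
    return ax + 1j * ci
```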

    Probabilistic modeling and statistical inference for random fields and space-time processes

    Author from publisher's list. Cover title. Final report for ONR Grant N00014-91-J-100.

    Multi-Modal Enhancement Techniques for Visibility Improvement of Digital Images

    Image enhancement techniques for visibility improvement of 8-bit color digital images based on spatial domain, wavelet transform domain, and multiple image fusion approaches are investigated in this dissertation research. In the category of spatial domain approaches, two enhancement algorithms are developed to deal with problems associated with images captured from scenes with high dynamic ranges. The first technique is based on an illuminance-reflectance (I-R) model of the scene irradiance. The dynamic range compression of the input image is achieved by a nonlinear transformation of the estimated illuminance based on a windowed inverse sigmoid transfer function. A single-scale neighborhood dependent contrast enhancement process is proposed to enhance the high frequency components of the illuminance, which compensates for the contrast degradation of the mid-tone frequency components caused by dynamic range compression. The intensity image obtained by integrating the enhanced illuminance and the extracted reflectance is then converted to an RGB color image through linear color restoration utilizing the color components of the original image. The second technique, named AINDANE, is a two-step approach comprising adaptive luminance enhancement and adaptive contrast enhancement. An image dependent nonlinear transfer function is designed for dynamic range compression, and a multiscale image dependent neighborhood approach is developed for contrast enhancement. Real-time processing of video streams is realized with the I-R model based technique due to its high speed processing capability, while AINDANE produces higher quality enhanced images due to its multi-scale contrast enhancement property. Both algorithms exhibit balanced luminance, contrast enhancement, higher robustness, and better color consistency when compared with conventional techniques.
    In the transform domain approach, wavelet transform based image denoising and contrast enhancement algorithms are developed. The denoising is treated as a maximum a posteriori (MAP) estimation problem; a bivariate probability density function model is introduced to exploit the interlevel dependency among the wavelet coefficients. In addition, an approximate solution to the MAP estimation problem is proposed to avoid the use of complex iterative computations to find a numerical solution. This relatively low complexity image denoising algorithm, implemented with the dual-tree complex wavelet transform (DT-CWT), produces high quality denoised images.
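    The illuminance-reflectance decomposition with a windowed inverse sigmoid can be sketched as below. This is a hedged approximation of the dissertation's first technique, not the actual algorithm: the illuminance estimate here is a crude box filter rather than the dissertation's estimator, the window limits `lo` and `hi` are illustrative parameters chosen to avoid the logit's infinities at 0 and 1, and the neighborhood dependent contrast enhancement and linear color restoration steps are omitted:

```python
import numpy as np

def box_blur(img, k=15):
    """Crude illuminance estimate via a separable box filter; a stand-in
    for the smoother low-pass estimate a real implementation would use."""
    kernel = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, tmp)

def compress_illuminance(lum, lo=0.1, hi=0.9):
    """Inverse-sigmoid-style tone curve on normalized illuminance,
    windowed into [lo, hi] so the logit stays finite (lo/hi are
    illustrative, not from the dissertation)."""
    x = lo + (hi - lo) * np.clip(lum, 0.0, 1.0)  # window into (0, 1)
    y = np.log(x / (1.0 - x))                    # inverse sigmoid (logit)
    return (y - y.min()) / (y.max() - y.min())   # rescale to [0, 1]

def enhance(img):
    """I-R model sketch: V = L * R, enhance L, keep R unchanged."""
    lum = box_blur(img)
    refl = img / np.maximum(lum, 1e-6)
    return compress_illuminance(lum) * refl
```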

    Multiscale Medical Image Fusion in Wavelet Domain

    Wavelet transforms have emerged as a powerful tool in image fusion. However, the study and analysis of medical image fusion is still a challenging area of research. Therefore, in this paper, we propose a multiscale fusion of multimodal medical images in the wavelet domain. Fusion of medical images has been performed at multiple scales varying from the minimum to the maximum level using the maximum selection rule, which provides more flexibility and choice in selecting the relevant fused images. The experimental analysis of the proposed method has been performed with several sets of medical images. Fusion results have been evaluated subjectively and objectively against existing state-of-the-art fusion methods, including several pyramid- and wavelet-transform-based fusion methods and the principal component analysis (PCA) fusion method. The comparative analysis of the fusion results has been performed with edge strength (Q), mutual information (MI), entropy (E), standard deviation (SD), blind structural similarity index metric (BSSIM), spatial frequency (SF), and average gradient (AG) metrics. The combined subjective and objective evaluations of the proposed fusion method at multiple scales demonstrate the effectiveness of the proposed approach.
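    The maximum selection rule used above is simple to state: at each coefficient position, keep the coefficient with the larger magnitude. The sketch below shows only this per-band rule; the wavelet decomposition and reconstruction themselves (e.g., with a library such as PyWavelets) are omitted, and the function name is illustrative:

```python
import numpy as np

def max_select(coeffs_a, coeffs_b):
    """Maximum selection fusion rule for a pair of same-shape wavelet
    coefficient bands: at each position, keep the coefficient whose
    absolute value is larger (ties go to the first input)."""
    return np.where(np.abs(coeffs_a) >= np.abs(coeffs_b), coeffs_a, coeffs_b)
```

Applied band by band to the detail coefficients of two decomposed source images, this rule favors whichever image carries the stronger local feature at each scale.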

    Development and implementation of image fusion algorithms based on wavelets

    Image fusion is a process of blending the complementary as well as the common features of a set of images to generate a resultant image with superior information content from both subjective and objective analysis points of view. The objective of this research work is to develop some novel image fusion algorithms and their applications in various fields such as crack detection, multispectral sensor image fusion, medical image fusion and edge detection of multi-focus images. The first part of this research work deals with a novel crack detection technique based on Non-Destructive Testing (NDT) for cracks in walls, suppressing the diversity and complexity of wall images. It follows different edge tracking algorithms such as Hyperbolic Tangent (HBT) filtering and the Canny edge detection algorithm. The second part of this research work deals with a novel edge detection approach for multi-focused images by means of complex-wavelet-based image fusion. An illumination invariant HBT filter is applied, followed by adaptive thresholding to get the real edges. The shift invariance and directionally selective diagonal filtering, as well as the ease of implementation, of the Dual-Tree Complex Wavelet Transform (DT-CWT) ensure robust sub-band fusion. It helps in avoiding the ringing artefacts that are more pronounced in the Discrete Wavelet Transform (DWT). Fusion using DT-CWT also solves the problem of low contrast and blocking effects. In the third part, an improved DT-CWT based image fusion technique has been developed to compose a resultant image with better perceptual as well as quantitative image quality indices. A bilateral sharpness based weighting scheme has been implemented for the high frequency coefficients, taking both the gradient and its phase coherence into account.
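    A sharpness-based weighting of high-frequency coefficients can be sketched as below. This is a simplified stand-in for the bilateral scheme described above: it weights by local coefficient energy only, whereas the actual method also uses the gradient's phase coherence, and the function name and `eps` parameter are illustrative assumptions:

```python
import numpy as np

def sharpness_weight(hf_a, hf_b, eps=1e-8):
    """Fuse two high-frequency coefficient bands by energy-based weights:
    the band with locally stronger (squared) coefficients dominates.
    eps guards against division by zero in flat regions."""
    ea, eb = hf_a ** 2, hf_b ** 2
    wa = ea / (ea + eb + eps)
    return wa * hf_a + (1 - wa) * hf_b
```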