47 research outputs found

    Pansharpening of High and Medium Resolution Satellite Images Using Bilateral Filtering

    We present and evaluate an algorithm for fusing remotely sensed images, i.e., the fusion of a panchromatic (PAN) image with a multispectral (MS) image using bilateral filtering, applied to images from three different sensors: SPOT 5, Landsat ETM+ and QuickBird. To assess the fusion process, we use six quality indices, which, along with visual analysis, confirm good overall results for all three sensors.
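    As a rough illustration of the detail-injection idea behind bilateral-filter pansharpening, the sketch below (a minimal Python/OpenCV toy, not the authors' implementation) extracts high-frequency detail from the PAN band with a bilateral filter and adds it to the upsampled MS bands; the filter parameters and the unit additive injection gain are assumptions.

```python
# Minimal bilateral-filter pansharpening sketch (illustrative assumptions only).
import cv2
import numpy as np

def bilateral_pansharpen(pan, ms, d=9, sigma_color=25.0, sigma_space=9.0):
    """pan: (H, W) float32 panchromatic band; ms: (h, w, B) float32 MS cube, h < H."""
    H, W = pan.shape
    # Upsample each multispectral band to the panchromatic grid.
    ms_up = np.stack(
        [cv2.resize(ms[..., b], (W, H), interpolation=cv2.INTER_CUBIC)
         for b in range(ms.shape[-1])], axis=-1)
    # Low-pass version of PAN; the residual carries the high-frequency detail.
    pan_low = cv2.bilateralFilter(pan, d, sigma_color, sigma_space)
    detail = pan - pan_low
    # Additive detail injection into every band (gain fixed to 1 here).
    return np.clip(ms_up + detail[..., None], 0.0, None)

if __name__ == "__main__":
    pan = np.random.rand(256, 256).astype(np.float32)
    ms = np.random.rand(64, 64, 4).astype(np.float32)
    print(bilateral_pansharpen(pan, ms).shape)  # (256, 256, 4)
```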

    Development and implementation of image fusion algorithms based on wavelets

    Image fusion is the process of blending the complementary as well as the common features of a set of images to generate a resultant image with superior information content from both subjective and objective points of view. The objective of this research work is to develop novel image fusion algorithms and their applications in various fields, such as crack detection, multi-spectral sensor image fusion, medical image fusion and edge detection of multi-focus images. The first part of this work deals with a novel crack detection technique based on Non-Destructive Testing (NDT) for cracks in walls, suppressing the diversity and complexity of wall images. It employs edge-tracking algorithms such as Hyperbolic Tangent (HBT) filtering and the Canny edge detection algorithm. The second part deals with a novel edge detection approach for multi-focus images by means of complex-wavelet-based image fusion. An illumination-invariant hyperbolic tangent (HBT) filter is applied, followed by adaptive thresholding to obtain the true edges. The shift invariance and directionally selective diagonal filtering of the Dual-Tree Complex Wavelet Transform (DT-CWT), as well as its ease of implementation, ensure robust sub-band fusion. This helps avoid the ringing artefacts that are more pronounced in the Discrete Wavelet Transform (DWT); fusion using the DT-CWT also alleviates low contrast and blocking effects. In the third part, an improved DT-CWT-based image fusion technique has been developed to compose a resultant image with better perceptual as well as quantitative image quality indices. A bilateral-sharpness-based weighting scheme has been implemented for the high-frequency coefficients, taking both the gradient and its phase coherence into account.
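    For readers unfamiliar with wavelet-domain fusion rules, the following minimal sketch uses a plain DWT (PyWavelets) as a stand-in for the DT-CWT described above: approximation coefficients are averaged and detail coefficients are selected by maximum absolute value. The wavelet and decomposition level are illustrative choices, not the dissertation's.

```python
# Toy DWT-domain fusion of two co-registered images (stand-in for DT-CWT fusion).
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2", level=3):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                       # average the approximations
    for da, db in zip(ca[1:], cb[1:]):                    # per-level (H, V, D) detail tuples
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))      # max-absolute selection rule
    return pywt.waverec2(fused, wavelet)

if __name__ == "__main__":
    a = np.random.rand(128, 128)
    b = np.random.rand(128, 128)
    print(dwt_fuse(a, b).shape)  # (128, 128)
```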

    An Efficient Algorithm for Multimodal Medical Image Fusion based on Feature Selection and PCA Using DTCWT (FSPCA-DTCWT)

    Background: During the past two decades, medical image fusion has become an essential part of modern medicine due to the availability of numerous imaging modalities (e.g., MRI, CT, SPECT, etc.). This paper presents a new medical image fusion algorithm based on PCA and DTCWT, which uses different fusion rules to obtain a new image containing more information than any of the input images.

    Methods: The new image fusion algorithm improves the visual quality of the fused image, based on feature selection and Principal Component Analysis (PCA) in the Dual-Tree Complex Wavelet Transform (DTCWT) domain. It is called Feature Selection with Principal Component Analysis and Dual-Tree Complex Wavelet Transform (FSPCA-DTCWT). Using different fusion rules in a single algorithm results in a correctly reconstructed (fused) image; this combination produces a new technique that employs the advantages of each method separately. The DTCWT offers good directionality, since it considers edge information in six directions, and provides approximate shift invariance. The main goal of the PCA step is to extract the most significant characteristics (represented by the wavelet coefficients) in order to improve the spatial resolution. The proposed algorithm fuses the detail wavelet coefficients of the input images using a feature selection rule.

    Results: Several experiments have been conducted over different sets of multimodal medical images, such as CT/MRI and MRA/T1-MRI; due to the page limit, only the results for three sets are presented. The FSPCA-DTCWT algorithm is compared to eight recent fusion methods from the literature, in terms of visual quality and quantitatively using five well-known fusion performance metrics. The results show that the proposed algorithm outperforms the existing ones in both visual and quantitative evaluations.

    Conclusion: This paper focuses on the fusion of medical images of different modalities. A novel image fusion algorithm based on the DTCWT for merging multimodal medical images has been proposed. Experiments have been performed using two different sets of multimodal medical images, and the results show that the proposed fusion method significantly outperforms recent fusion techniques reported in the literature.
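    The PCA ingredient of such schemes can be illustrated in a few lines. The sketch below is a hedged toy, not the FSPCA-DTCWT pipeline itself: it derives fusion weights from the leading eigenvector of the covariance of two coefficient sets and blends them accordingly, the rule commonly applied to low-frequency (approximation) coefficients.

```python
# Toy PCA fusion rule for two sets of (e.g. approximation) coefficients.
import numpy as np

def pca_fusion_weights(c1, c2):
    data = np.stack([c1.ravel(), c2.ravel()])     # 2 x N matrix of coefficient samples
    cov = np.cov(data)                            # 2 x 2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])    # leading eigenvector
    return v / v.sum()                            # normalize weights to sum to 1

def pca_fuse(c1, c2):
    w = pca_fusion_weights(c1, c2)
    return w[0] * c1 + w[1] * c2                  # weighted blend of the two inputs

if __name__ == "__main__":
    a = np.random.rand(64, 64)
    b = np.random.rand(64, 64)
    print(pca_fuse(a, b).shape, pca_fusion_weights(a, b))
```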

    Joint demosaicing and fusion of multiresolution coded acquisitions: A unified image formation and reconstruction method

    Novel optical imaging devices allow for hybrid acquisition modalities such as compressed acquisitions with locally different spatial and spectral resolutions captured by a single focal plane array. In this work, we propose to model the capturing system of a multiresolution coded acquisition (MRCA) in a unified framework, which natively includes conventional systems such as those based on spectral/color filter arrays, compressed coded apertures, and multiresolution sensing. We also propose a model-based image reconstruction algorithm performing a joint demosaicing and fusion (JoDeFu) of any acquisition modeled in the MRCA framework. The JoDeFu reconstruction algorithm solves an inverse problem with a proximal splitting technique and is able to reconstruct an uncompressed image datacube at the highest available spatial and spectral resolution. An implementation of the code is available at https://github.com/danaroth83/jodefu
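    To give a feel for the proximal-splitting machinery that JoDeFu builds on, the toy sketch below runs a proximal-gradient (ISTA) loop on a generic sparse reconstruction problem. The Gaussian measurement matrix, regularization weight and iteration count are illustrative assumptions and do not correspond to the authors' MRCA forward model.

```python
# Toy proximal-gradient (ISTA) loop: min_x 0.5*||A x - y||^2 + lam*||x||_1
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.05, iters=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2                # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                          # gradient of the data-fit term
        x = soft_threshold(x - step * grad, step * lam)   # proximal step for the l1 term
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200)) / np.sqrt(80)      # compressed linear acquisition
    truth = np.zeros(200); truth[[5, 50, 150]] = [1.0, -2.0, 0.5]
    x_hat = ista(A, A @ truth)
    print(np.round(x_hat[[5, 50, 150]], 2))               # close to the true sparse values
```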

    Super Resolution of Wavelet-Encoded Images and Videos

    In this dissertation, we address the multiframe super resolution reconstruction problem for wavelet-encoded images and videos. The goal of multiframe super resolution is to obtain one or more high resolution images by fusing a sequence of degraded or aliased low resolution images of the same scene. Since the low resolution images may be unaligned, a registration step is required before super resolution reconstruction. Therefore, we first explore in-band (i.e., in the wavelet domain) image registration and then investigate super resolution. Our motivation for analyzing the image registration and super resolution problems in the wavelet domain is the growing trend of wavelet-encoded imaging and wavelet encoding for image/video compression. Due to drawbacks of the widely used discrete cosine transform in image and video compression, a considerable amount of literature is devoted to wavelet-based methods. However, since the wavelet transform is shift-variant, existing methods cannot utilize wavelet subbands efficiently. In order to overcome this drawback, we establish and explore the direct relationship between the subbands under a translational shift, for both image registration and super resolution. We then employ our devised in-band methodology in a motion compensated video compression framework to demonstrate the effective usage of wavelet subbands. Super resolution can also be used as a post-processing step in video compression in order to decrease the size of the video files to be compressed, with downsampling added as a pre-processing step. Therefore, we present a video compression scheme that utilizes super resolution to reconstruct the high frequency information lost during downsampling. In addition, super resolution is a crucial post-processing step for satellite imagery, since it is hard to update imaging devices after a satellite is launched. Thus, we also demonstrate the usage of our devised methods in enhancing the resolution of pansharpened multispectral images.
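    The basic multiframe idea, independent of the dissertation's in-band (wavelet-domain) formulation, can be seen in a pixel-domain shift-and-add sketch: low-resolution frames with known sub-pixel shifts are interleaved onto a high-resolution grid. The integer shifts and the 2x factor below are illustrative assumptions.

```python
# Toy pixel-domain shift-and-add multiframe super resolution.
import numpy as np

def shift_and_add(lr_frames, shifts, factor=2):
    """lr_frames: list of (h, w) arrays; shifts: (dy, dx) integer sub-pixel
    offsets of each frame on the high-resolution grid."""
    h, w = lr_frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        acc[dy::factor, dx::factor] += frame      # place samples on the HR grid
        cnt[dy::factor, dx::factor] += 1.0
    cnt[cnt == 0] = 1.0                           # leave unobserved HR pixels at 0
    return acc / cnt

if __name__ == "__main__":
    hr = np.random.rand(128, 128)
    shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
    lr = [hr[dy::2, dx::2] for dy, dx in shifts]        # 4 aliased low-res frames
    print(np.allclose(shift_and_add(lr, shifts), hr))   # True: every HR pixel was observed
```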

    Advances in Image Processing, Analysis and Recognition Technology

    For many decades, researchers have been trying to make computer analysis of images as effective as human vision. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life; they improve particular activities and provide handy tools that are sometimes merely for entertainment but quite often significantly increase our safety. Indeed, the range of practical applications of image processing algorithms is particularly wide. Moreover, the rapid growth of computing power has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, resulting in the need for novel approaches.

    A New Variational Approach Based on Proximal Deep Injection and Gradient Intensity Similarity for Spatio-Spectral Image Fusion

    Pansharpening is a widely debated spatio-spectral fusion problem. It refers to the fusion of a high spatial resolution panchromatic image with a lower spatial but higher spectral resolution multispectral image, in order to obtain an image with high resolution in both domains. In this article, we propose a novel variational optimization-based (VO) approach that incorporates the outcome of a deep convolutional neural network (DCNN), taking advantage of both paradigms. On the one hand, higher performance can be expected by introducing machine learning (ML) methods, based on the training-by-examples philosophy, into VO approaches; on the other hand, combining VO techniques with DCNNs can aid the generalization ability of the latter. In particular, we formulate an $\ell_2$-based proximal deep injection term that measures the distance between the DCNN outcome and the desired high spatial resolution multispectral image; this represents the regularization term of our VO model. Furthermore, a new data-fitting term measuring spatial fidelity is proposed. Finally, the proposed convex VO problem is efficiently solved by exploiting the framework of the alternating direction method of multipliers (ADMM), thus guaranteeing the convergence of the algorithm. Extensive experiments on both simulated and real datasets demonstrate that the proposed approach can outperform state-of-the-art spatio-spectral fusion methods, while showing significant generalization ability. Please find the project page at https://liangjiandeng.github.io/Projects_Res/DMPIF_2020jstars.html
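    The general shape of such an optimization can be illustrated with a toy ADMM loop: a quadratic data-fit term plus an $\ell_2$ deep-injection term pulling the solution toward a pre-computed "DCNN output". Everything in the sketch below (the operator A, the surrogate DCNN output, mu, rho) is an assumption for illustration, not the paper's actual model or data.

```python
# Toy ADMM for: min_x 0.5*||A x - b||^2 + 0.5*mu*||z - x_dcnn||^2  s.t.  x = z
import numpy as np

def admm_deep_injection(A, b, x_dcnn, mu=1.0, rho=1.0, iters=200):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)     # u: scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    x_solve = np.linalg.inv(AtA + rho * np.eye(n))        # factor once, reuse every iteration
    for _ in range(iters):
        x = x_solve @ (Atb + rho * (z - u))               # x-update: ridge-type solve
        z = (mu * x_dcnn + rho * (x + u)) / (mu + rho)    # z-update: closed-form prox of the l2 term
        u = u + x - z                                     # dual ascent on the consensus constraint
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((60, 20)); x_true = rng.standard_normal(20)
    x_hat = admm_deep_injection(A, A @ x_true, x_dcnn=x_true + 0.1)
    print(np.round(np.abs(x_hat - x_true).max(), 3))      # residual stays small
```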