3 research outputs found

    HoEnTOA: Holoentropy and Taylor Assisted Optimization based Novel Image Quality Enhancement Algorithm for Multi-Focus Image Fusion 

    Multi-focus image fusion plays a prominent role in machine vision and image processing applications. Image fusion merges the information extracted from two or more source images into a single image that is more informative and better suited to computer processing and visual perception. In this paper, the authors devise a novel image quality enhancement algorithm for fusing multi-focus images, termed HoEnTOA. First, the contourlet transform is applied to both input images to generate four sub-bands for each image. Holoentropy, together with the proposed HoEnTOA, is then used to fuse the multi-focus sub-bands; the developed HoEnTOA integrates the Taylor series with ASSCA. After fusion, the inverse contourlet transform is applied to obtain the final fused image. The proposed HoEnTOA performs image fusion effectively and demonstrates better performance on five metrics: a minimum Root Mean Square Error of 3.687, a maximum universal quality index of 0.984, a maximum Peak Signal to Noise Ratio of 42.08 dB, a maximum structural similarity index measure of 0.943, and a maximum mutual information of 1.651.
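
    The abstract describes a decompose / fuse / inverse-transform pipeline. The Python sketch below illustrates only that overall structure and is not the authors' method: PyWavelets' dwt2 is used as a stand-in for the contourlet transform (no contourlet package is assumed), and a plain entropy-based weight replaces the holoentropy-guided, Taylor/ASSCA-optimised fusion rule, which the abstract does not specify in reproducible detail. All function names are hypothetical.

```python
# Sketch of a transform-domain multi-focus fusion pipeline (illustrative only).
import numpy as np
import pywt


def band_entropy(band, bins=64):
    """Shannon entropy of a sub-band's coefficient histogram."""
    hist, _ = np.histogram(band, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -np.sum(p * np.log2(p))


def fuse_subbands(b1, b2):
    """Weight the two candidate sub-bands by their entropy (stand-in rule)."""
    e1, e2 = band_entropy(b1), band_entropy(b2)
    w1 = e1 / (e1 + e2 + 1e-12)
    return w1 * b1 + (1.0 - w1) * b2


def fuse_images(img1, img2, wavelet="db2"):
    """Decompose both inputs, fuse the four sub-bands, invert the transform."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img1, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img2, wavelet)
    fused = (
        fuse_subbands(cA1, cA2),
        (fuse_subbands(cH1, cH2),
         fuse_subbands(cV1, cV2),
         fuse_subbands(cD1, cD2)),
    )
    return pywt.idwt2(fused, wavelet)
```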

    Multi-focus image fusion based on non-negative sparse representation and patch-level consistency rectification

    Most existing sparse representation (SR) based fusion methods consider the local information of each image patch independently during fusion, so spatial artifacts are easily introduced into the fused image. A sliding-window technique is often employed to overcome this issue, but it comes at the cost of high computational complexity. Alternatively, we propose a novel multi-focus image fusion method that takes full account of the strong correlations among spatially adjacent image patches without requiring a sliding window. To this end, a non-negative SR model with a local consistency constraint (CNNSR) on the representation coefficients is first constructed to encode each image patch. A patch-level consistency rectification strategy is then presented to merge the input image patches, which greatly reduces spatial artifacts in the fused images. In addition, a compact non-negative dictionary is constructed for the CNNSR model. Experimental results demonstrate that the proposed fusion method outperforms several state-of-the-art methods. Moreover, the proposed method is computationally efficient, thereby facilitating real-world applications.
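
    A minimal sketch of patch-wise fusion with non-negative coding and a simple patch-level consistency step, loosely following the abstract. Assumptions: the compact non-negative dictionary is modelled by a random non-negative matrix, SciPy's NNLS solver replaces the paper's CNNSR model (the local-consistency constraint is not encoded here), and a median filter over the patch decision map stands in for the consistency rectification strategy.

```python
# Illustrative non-overlapping-patch fusion with non-negative coding.
import numpy as np
from scipy.optimize import nnls
from scipy.ndimage import median_filter


def fuse_patchwise(img1, img2, patch=8, n_atoms=64, seed=0):
    h, w = img1.shape
    rng = np.random.default_rng(seed)
    D = rng.random((patch * patch, n_atoms))       # stand-in dictionary

    rows, cols = h // patch, w // patch
    decision = np.zeros((rows, cols))              # 1 -> take patch from img1
    for i in range(rows):
        for j in range(cols):
            sl = (slice(i * patch, (i + 1) * patch),
                  slice(j * patch, (j + 1) * patch))
            c1, _ = nnls(D, img1[sl].ravel().astype(float))
            c2, _ = nnls(D, img2[sl].ravel().astype(float))
            decision[i, j] = float(c1.sum() >= c2.sum())  # activity = l1 norm

    # Patch-level consistency: a patch that disagrees with most of its
    # neighbours is flipped to agree with the neighbourhood.
    decision = median_filter(decision, size=3)

    fused = img1.astype(float).copy()
    for i in range(rows):
        for j in range(cols):
            sl = (slice(i * patch, (i + 1) * patch),
                  slice(j * patch, (j + 1) * patch))
            fused[sl] = img1[sl] if decision[i, j] > 0.5 else img2[sl]
    return fused
```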

    Multi-Focus Image Fusion Using the Stationary Wavelet Transform and Fuzzy Sets

    The key issue in multi-focus image fusion is how to accurately extract features from the source images and fuse the resulting coefficients to create a high-quality image. However, "high quality" is an imprecise notion, which makes fuzzy theory well suited to addressing the problem. This work proposes a multi-focus image fusion scheme that merges the high-quality coefficients of two different source images into a single fused image by integrating the Stationary Wavelet Transform (SWT) and fuzzy sets. First, the source images are decomposed by the SWT to obtain a set of sub-images with different detail features. Second, a Gaussian Membership Function (GMF) is used to obtain the fuzzy sets of the sub-image data. Third, the Local Spatial Frequency (LSF) is employed to extract the local features of the fuzzy sets. Finally, a fusion rule based on consistency verification combines the sub-images, and the Inverse Stationary Wavelet Transform (ISWT) reconstructs the fused image. Experiments were conducted on 20 pairs of RGB images and 10 pairs of grayscale images. The method produces accurate fused images, with average Root Mean Square Error (RMSE) and Mutual Information (MI) of 0.1091 and 9.2625 for RGB images and 0.0996 and 8.4949 for grayscale images.
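
    A minimal sketch of the SWT + fuzzy-set flow described above, using PyWavelets. Assumptions: a single-level swt2/iswt2 decomposition (input sides must be divisible by 2), a Gaussian membership function centred on each sub-band's mean, local spatial frequency computed over a 3x3 window, and a median-filter consistency check; the wavelet, window size, and sigma choice are illustrative, not the paper's.

```python
# Illustrative SWT + fuzzy-membership fusion (single level).
import numpy as np
import pywt
from scipy.ndimage import uniform_filter, median_filter


def gmf(x):
    """Gaussian membership of each coefficient around the sub-band mean."""
    sigma = x.std() + 1e-12
    return np.exp(-((x - x.mean()) ** 2) / (2.0 * sigma ** 2))


def local_spatial_frequency(x, size=3):
    """Local spatial frequency: RMS of row/column differences in a window."""
    rf = np.zeros_like(x)
    rf[:, 1:] = np.diff(x, axis=1) ** 2
    cf = np.zeros_like(x)
    cf[1:, :] = np.diff(x, axis=0) ** 2
    return np.sqrt(uniform_filter(rf, size) + uniform_filter(cf, size))


def fuse_band(b1, b2):
    """Pick per-pixel coefficients by the LSF of the fuzzy memberships."""
    act1 = local_spatial_frequency(gmf(b1))
    act2 = local_spatial_frequency(gmf(b2))
    choose1 = median_filter((act1 >= act2).astype(float), size=3)  # consistency
    return np.where(choose1 > 0.5, b1, b2)


def fuse_swt(img1, img2, wavelet="haar"):
    """Decompose with SWT, fuse each sub-band, reconstruct with ISWT."""
    (cA1, (cH1, cV1, cD1)), = pywt.swt2(img1.astype(float), wavelet, level=1)
    (cA2, (cH2, cV2, cD2)), = pywt.swt2(img2.astype(float), wavelet, level=1)
    fused = [(fuse_band(cA1, cA2),
              (fuse_band(cH1, cH2), fuse_band(cV1, cV2), fuse_band(cD1, cD2)))]
    return pywt.iswt2(fused, wavelet)
```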