4 research outputs found

    Image Fusion With Cosparse Analysis Operator

    The paper addresses the image fusion problem, where multiple images captured with different focus distances are to be combined into a higher-quality all-in-focus image. Most current approaches to image fusion rely strongly on the unrealistic assumption of noise-free image acquisition and therefore offer limited robustness in the fusion process. In our approach, we formulate the multi-focus image fusion problem in terms of an analysis sparse model and perform the restoration and fusion of the multi-focus images simultaneously. Based on this model, we propose an analysis operator learning method and define a novel fusion function to generate an all-in-focus image. Experimental evaluations confirm the effectiveness of the proposed fusion approach both visually and quantitatively, and show that it outperforms state-of-the-art fusion methods. Comment: 12 pages, 4 figures, 1 table, submitted to IEEE Signal Processing Letters in December 201
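    The operator learning procedure and the fusion function are specific to the paper; the minimal numpy sketch below only illustrates the cosparse analysis idea it builds on, using a hypothetical stand-in operator `omega` with zero-mean rows: a flat (defocused) patch maps to (near-)zero analysis coefficients, while a textured (in-focus) patch does not, so the l1-norm of the analysis coefficients can act as a crude per-patch activity measure that a fusion rule could compare across source images.

```python
import numpy as np

def analysis_activity(patch, omega):
    """l1-norm of the analysis coefficients omega @ p.
    In the cosparse model, smooth (defocused) patches produce few/small
    coefficients, while textured (in-focus) patches produce larger ones."""
    return np.abs(omega @ patch.ravel()).sum()

# Hypothetical stand-in for a learned analysis operator: random rows made
# zero-mean (so constant patches map exactly to zero) and unit-normalized.
rng = np.random.default_rng(0)
omega = rng.standard_normal((128, 64))           # 128 analysis rows, 8x8 patches
omega -= omega.mean(axis=1, keepdims=True)
omega /= np.linalg.norm(omega, axis=1, keepdims=True)

sharp = rng.random((8, 8))                       # textured, "in focus" patch
flat = np.full((8, 8), sharp.mean())             # constant, "out of focus" patch
print(analysis_activity(sharp, omega) > analysis_activity(flat, omega))  # True
```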

    A Fast Dictionary Learning Method for Coupled Feature Space Learning

    In this letter, we propose a novel, computationally efficient coupled dictionary learning method that enforces pairwise correlation between the atoms of dictionaries learned to represent the underlying feature spaces of two different representations of the same signals, e.g., representations in different modalities or representations of the same signals measured with different qualities. The jointly learned, correlated feature spaces represented by the coupled dictionaries are used in sparse-representation-based classification, recognition, and reconstruction tasks. The presented experimental results show that the proposed coupled dictionary learning method has a significantly lower computational cost. Moreover, the visual presentation of the jointly learned dictionaries shows that the pairwise correlations between the corresponding atoms are indeed enforced. Comment: 12 pages, 3 figures, 1 algorithm
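    As a rough illustration of the coupling idea only (not the letter's actual algorithm), the sketch below learns two dictionaries jointly by sparse-coding the vertically stacked feature spaces with a single stacked dictionary, so that atom i of D1 and atom i of D2 are always updated from the same sparse codes; the correlation-based coder and the MOD-style dictionary update are simplified stand-ins.

```python
import numpy as np

def sparse_code(X, D, k):
    """Tiny illustrative coder: pick, for each column of X, the k atoms most
    correlated with it and least-squares fit their coefficients."""
    A = np.zeros((D.shape[1], X.shape[1]))
    for n in range(X.shape[1]):
        idx = np.argsort(-np.abs(D.T @ X[:, n]))[:k]
        A[idx, n] = np.linalg.lstsq(D[:, idx], X[:, n], rcond=None)[0]
    return A

def coupled_dictionary_learning(X1, X2, n_atoms=32, k=3, iters=10, seed=0):
    """Learn paired dictionaries D1, D2 for two feature spaces by coding the
    stacked signals with one stacked dictionary, so corresponding atoms always
    describe the same training signals (one simple way to obtain
    pairwise-correlated atoms)."""
    rng = np.random.default_rng(seed)
    X = np.vstack([X1, X2])
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        A = sparse_code(X, D, k)                 # sparse coding step
        D = X @ np.linalg.pinv(A)                # MOD-style dictionary update
        D /= np.linalg.norm(D, axis=0) + 1e-12   # renormalize atoms
    return D[:X1.shape[0]], D[X1.shape[0]:]      # split back into D1, D2

# Toy usage on random training features from two representations.
rng = np.random.default_rng(1)
X1, X2 = rng.standard_normal((64, 200)), rng.standard_normal((64, 200))
D1, D2 = coupled_dictionary_learning(X1, X2)
```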

    Multi-Focus Image Fusion Using Sparse Representation and Coupled Dictionary Learning

    We address the multi-focus image fusion problem, where multiple images captured with different focal settings are to be fused into an all-in-focus image of higher quality. Algorithms for this problem must account for the characteristics of the source images, which contain both focused and blurred features. However, most sparsity-based approaches use a single dictionary in the focused feature space to describe multi-focus images and ignore the representations in the blurred feature space. We propose a multi-focus image fusion approach based on sparse representation over a coupled dictionary. It exploits two observations: patches from a given training set can be sparsely represented by a pair of overcomplete dictionaries associated with the focused and blurred categories of images, and a sparse approximation over such a coupled dictionary leads to a more flexible, and therefore better, fusion strategy than one based on simply selecting the sparsest representation in the original image estimate. In addition, to improve fusion performance, we employ a coupled dictionary learning approach that enforces pairwise correlation between the atoms of the dictionaries learned to represent the focused and blurred feature spaces. We also discuss the advantages of the fusion approach based on coupled dictionary learning and present efficient algorithms for it. Extensive experimental comparisons with state-of-the-art multi-focus image fusion algorithms validate the effectiveness of the proposed approach. Comment: 25 pages, 15 figures, 2 tables
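    The sketch below is only a hedged illustration of the kind of decision rule a focused/blurred coupled dictionary enables, not the paper's own fusion or learning algorithm: each patch is coded over the concatenation of a focused dictionary D_f and a blurred dictionary D_b (both assumed to have been learned beforehand, e.g. with a coupled dictionary learning method such as the one sketched above), and the split of representation energy between the two halves serves as a focus measure for a patch-wise decision map.

```python
import numpy as np

def focus_score(patch, D_f, D_b, k=4):
    """Code a mean-removed patch over [D_f | D_b] with a small correlation-based
    coder, then compare how much of the representation falls on focused versus
    blurred atoms (illustrative focus measure, not the paper's exact rule)."""
    D = np.hstack([D_f, D_b])
    x = patch.ravel() - patch.mean()
    idx = np.argsort(-np.abs(D.T @ x))[:k]
    coef = np.zeros(D.shape[1])
    coef[idx] = np.linalg.lstsq(D[:, idx], x, rcond=None)[0]
    nf = D_f.shape[1]
    return np.abs(coef[:nf]).sum() - np.abs(coef[nf:]).sum()

def fuse(img_a, img_b, D_f, D_b, patch=8):
    """Patch-wise decision map: keep from each source the patch judged more
    'focused' by the coupled-dictionary score."""
    out = img_a.copy()
    for i in range(0, img_a.shape[0] - patch + 1, patch):
        for j in range(0, img_a.shape[1] - patch + 1, patch):
            pa = img_a[i:i+patch, j:j+patch]
            pb = img_b[i:i+patch, j:j+patch]
            if focus_score(pb, D_f, D_b) > focus_score(pa, D_f, D_b):
                out[i:i+patch, j:j+patch] = pb
    return out

# Toy usage with random (untrained) unit-norm dictionaries.
rng = np.random.default_rng(0)
D_f = rng.standard_normal((64, 32)); D_f /= np.linalg.norm(D_f, axis=0)
D_b = rng.standard_normal((64, 32)); D_b /= np.linalg.norm(D_b, axis=0)
fused = fuse(rng.random((64, 64)), rng.random((64, 64)), D_f, D_b)
```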

    The bilateral solver for quality estimation based multi-focus image fusion

    In this work, a fast Bilateral Solver for Quality Estimation Based multi-focus Image Fusion method (BS-QEBIF) is proposed. The all-in-focus image is generated as a pixel-wise weighted sum of the multi-focus source images, with their focus-level maps as weights. Since the visual quality of an image patch is highly correlated with its focus level, the focus-level maps are first obtained from visual quality scores as pre-estimations. These pre-estimations are not ideal, so the fast bilateral solver is adopted to smooth them while preserving the edges of the multi-focus source images; the edge-preserving smoothed results are used as the final focus-level maps. Moreover, this work provides a confidence-map solution for the unstable fusion in boundary regions where the focus level changes. Experiments were conducted on 25 pairs of source images. The proposed BS-QEBIF outperforms the other 13 fusion methods both objectively and subjectively. The all-in-focus image produced by the proposed method preserves the details of the multi-focus source images well and does not suffer from residual errors. Experimental results show that BS-QEBIF can handle the focus-level-changed boundary regions without blocking, ringing, or blurring artifacts.
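    A minimal numpy/scipy sketch of the overall pipeline shape, under loose assumptions: locally averaged Laplacian energy stands in for the visual-quality scores, and a Gaussian filter stands in for the fast bilateral solver (which, unlike this stand-in, smooths the maps while preserving source-image edges); the confidence-map handling of boundary regions is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def focus_level(img, sigma=2.0):
    """Pre-estimated focus-level map: locally averaged magnitude of the
    Laplacian, a simple stand-in for the visual-quality scores."""
    return gaussian_filter(np.abs(laplace(img.astype(float))), sigma)

def fuse(sources, smooth_sigma=4.0):
    """Pixel-wise weighted sum of the sources using smoothed, normalized
    focus-level maps as weights (Gaussian smoothing replaces the fast
    bilateral solver in this sketch)."""
    stack = np.stack([s.astype(float) for s in sources])
    weights = np.stack([gaussian_filter(focus_level(s), smooth_sigma)
                        for s in sources])
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    return (weights * stack).sum(axis=0)

# Toy usage on two random "source images".
rng = np.random.default_rng(0)
all_in_focus = fuse([rng.random((64, 64)), rng.random((64, 64))])
```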