
    A Novel Multi-Focus Image Fusion Method Based on Stochastic Coordinate Coding and Local Density Peaks Clustering

    The multi-focus image fusion method is used in image processing to generate all-focus images with a large depth of field (DOF) from the original multi-focus images. Different approaches have been used in the spatial and transform domains to fuse multi-focus images. As one of the most popular image processing methods, dictionary-learning-based sparse representation achieves great performance in multi-focus image fusion. Most existing dictionary-learning-based multi-focus image fusion methods directly use the whole source images for dictionary learning. However, using the whole source images incurs a high error rate and a high computation cost in the dictionary learning process. This paper proposes a novel stochastic coordinate coding-based image fusion framework integrated with local density peaks clustering. The proposed multi-focus image fusion method consists of three steps. First, the source images are split into small image patches, and the patches are classified into a few groups by local density peaks clustering. Second, the grouped image patches are used for sub-dictionary learning by stochastic coordinate coding, and the trained sub-dictionaries are combined into one dictionary for sparse representation. Third, the simultaneous orthogonal matching pursuit (SOMP) algorithm is used to carry out sparse representation. After the three steps, the obtained sparse coefficients are fused following the max L1-norm rule, and the fused coefficients are inversely transformed into an image using the learned dictionary. The results and analyses of comparison experiments demonstrate that the fused images of the proposed method have higher quality than those of existing state-of-the-art methods.
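The max L1-norm fusion rule of the final step can be sketched as follows. This is a minimal illustration of the coefficient-fusion step only; the patch splitting, local density peaks clustering, sub-dictionary learning, and SOMP coding are not reproduced, and the array shapes are assumptions.

```python
import numpy as np

def fuse_max_l1(coeffs_a, coeffs_b):
    """Max L1-norm fusion rule: for each patch, keep the sparse coefficient
    vector with the larger L1 norm, on the premise that focused patches
    yield higher-activity (larger-norm) sparse codes.
    coeffs_a, coeffs_b: (n_patches, n_atoms) coefficient arrays."""
    pick_a = np.abs(coeffs_a).sum(axis=1) >= np.abs(coeffs_b).sum(axis=1)
    return np.where(pick_a[:, None], coeffs_a, coeffs_b)

# Toy example: 3 patches coded over a 4-atom dictionary
A = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.2, 0.0, 0.0],
              [0.0, 0.0, 3.0, 0.0]])
B = np.array([[0.5, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.5, 0.0],
              [0.0, 0.0, 0.0, 0.1]])
fused = fuse_max_l1(A, B)  # rows taken from A, B, A respectively
```

The fused coefficients would then be multiplied by the learned dictionary to reconstruct the fused patches.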

    Supervised Coupled Dictionary Learning for Multi-Focus Image Fusion

    Among the methods that have tackled the multi-focus image fusion problem, in which a set of multi-focus input images is fused into a single all-in-focus image, sparse representation based fusion methods have proved to be the most effective. The majority of these methods approximate the input images over a single dictionary representing only the focused feature space. However, ignoring the blurred features limits the sparsity of the obtained representations and decreases the precision of the fusion. This work proposes a novel sparsity based fusion method that uses a joint pair of dictionaries, representing the focused and the blurred features, for the sparse approximation of the source images. The method exploits both the more compact sparse representations obtained by using the two feature spaces in the approximation and the classification tools provided by the two known subspaces (focused and blurred) to improve on existing state-of-the-art fusion methods. To realize the benefits of the joint pair of dictionaries, a coupled dictionary learning algorithm is developed: it enforces a common sparse representation during the simultaneous learning of the two dictionaries, captures the correlation between them, and improves the fusion performance. A detailed comparison with state-of-the-art fusion methods shows the higher efficiency and effectiveness of the proposed method.
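The classification idea enabled by the two subspaces can be illustrated with a toy sketch: label a patch as focused or blurred according to which dictionary reconstructs it better. This uses plain least squares over two hand-made dictionaries for clarity; the paper's method instead performs sparse coding over coupled dictionaries learned with a common representation.

```python
import numpy as np

def classify_patch(patch, D_focused, D_blurred):
    """Label a patch by comparing its least-squares reconstruction error
    over a focused-feature dictionary and a blurred-feature dictionary.
    (Illustration only: not the paper's coupled sparse-coding scheme.)"""
    def recon_error(D):
        coef, *_ = np.linalg.lstsq(D, patch, rcond=None)
        return np.linalg.norm(patch - D @ coef)
    return "focused" if recon_error(D_focused) <= recon_error(D_blurred) else "blurred"

# Toy 4-D "patches": D_focused spans the test patch exactly, D_blurred does not
D_focused = np.array([[1., 0.], [0., 1.], [1., 0.], [0., 1.]])
D_blurred = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
patch = np.array([1., 0., 1., 0.])
label = classify_patch(patch, D_focused, D_blurred)
```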

    Robust sparse representation based multi-focus image fusion with dictionary construction and local spatial consistency

    Recently, sparse representation (SR) based methods have been presented for the fusion of multi-focus images. However, most of them consider only the local information from each image patch independently during sparse coding and fusion, giving rise to spatial artifacts in the fused image. To overcome this issue, we present a novel multi-focus image fusion method that jointly considers the information from each local image patch and its spatial context during both sparse coding and fusion. Specifically, in the sparse coding phase we employ a robust sparse representation model (LR_RSR, for short) with a Laplacian regularization term on the sparse error matrix, ensuring local consistency among spatially adjacent image patches. In the subsequent fusion process, we define a focus measure that determines the focused and defocused regions of the multi-focus images by collaboratively using the information from each local image patch and from its 8-connected spatial neighbors. As a result, the proposed method introduces fewer spatial artifacts into the fused image. Moreover, an over-complete dictionary with a small number of atoms, which nevertheless maintains good representation capability, is constructed for the LR_RSR model during sparse coding rather than using the input data themselves. This greatly reduces the computational complexity of the proposed fusion method, while the fusion performance is not degraded and can even be slightly improved. Experimental results demonstrate the validity of the proposed method and, more importantly, show that the LR_RSR algorithm is more computationally efficient than most traditional SR-based fusion methods.
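The 8-connected spatial-consistency idea in the fusion step can be sketched as a majority vote over each patch decision and its eight neighbors. The focus measure itself is not reproduced here; `score_a` and `score_b` below are hypothetical per-patch focus scores for the two source images.

```python
import numpy as np

def spatially_consistent_focus_map(score_a, score_b):
    """Sketch of the local-consistency idea: start from a per-patch focus
    decision (1 if image A wins, 0 if image B wins), then revise each
    decision by majority vote over the patch and its 8-connected neighbors.
    score_a, score_b: 2-D grids of per-patch focus measures."""
    decision = (score_a >= score_b).astype(int)
    h, w = decision.shape
    padded = np.pad(decision, 1, mode="edge")  # replicate borders
    out = np.empty_like(decision)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]  # patch plus 8 neighbors
            out[i, j] = 1 if window.sum() >= 5 else 0  # majority of 9
    return out

# An isolated contradictory decision is flipped by its neighborhood
score_a = np.ones((3, 3)); score_a[1, 1] = 0.0
score_b = np.zeros((3, 3)); score_b[1, 1] = 1.0
fm = spatially_consistent_focus_map(score_a, score_b)
```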

    Image Fusion via Sparse Regularization with Non-Convex Penalties

    The L1 norm regularized least squares method is often used to find sparse approximate solutions and is widely applied in 1-D signal restoration; basis pursuit denoising (BPD) performs noise reduction in this way. However, the shortcoming of L1 norm regularization is that it underestimates the true solution. Recently, a class of non-convex penalties has been proposed to improve this situation: each penalty function is non-convex itself but preserves the convexity of the whole cost function. This approach has been confirmed to offer good performance in 1-D signal denoising. This paper extends the aforementioned method to 2-D signals (images) and applies it to multisensor image fusion. The problem is posed as an inverse problem, and a corresponding cost function is judiciously designed to include two data attachment terms. The whole cost function is proved to be convex when the non-convex penalty is suitably chosen, so that its minimization can be tackled by convex optimization approaches involving only simple computations. The performance of the proposed method is benchmarked against a number of state-of-the-art image fusion techniques, and superior performance is demonstrated both visually and in terms of various assessment measures.
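The contrast between the convex L1 penalty and a non-convex alternative can be illustrated in 1-D with their proximal operators: soft thresholding (L1) shrinks every coefficient and thus underestimates large values, while firm thresholding (associated with an MC-type non-convex penalty; the second threshold `mu` here is a hypothetical parameter choice) passes large values through unshrunk.

```python
import numpy as np

def soft_threshold(y, lam):
    """Proximal operator of the convex L1 penalty: shrinks every value
    toward zero by lam, which biases (underestimates) large coefficients."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def firm_threshold(y, lam, mu):
    """Firm thresholding (mu > lam): zero below lam, a linear ramp on
    (lam, mu], and the identity above mu, so large values are preserved."""
    return np.where(np.abs(y) <= lam, 0.0,
           np.where(np.abs(y) <= mu,
                    np.sign(y) * mu * (np.abs(y) - lam) / (mu - lam),
                    y))

y = np.array([0.3, 1.0, 5.0])
s = soft_threshold(y, 0.5)       # the large value 5.0 is shrunk to 4.5
f = firm_threshold(y, 0.5, 2.0)  # the large value 5.0 is kept at 5.0
```

The same underestimation-versus-preservation trade-off is what the non-convex penalty exploits in the 2-D fusion cost function.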