
    A Fusion Framework for Camouflaged Moving Foreground Detection in the Wavelet Domain

    Detecting camouflaged moving foreground objects has long been known to be difficult due to the similarity between the foreground objects and the background. Conventional methods cannot distinguish the foreground from the background because of the small differences between them and thus suffer from under-detection of camouflaged foreground objects. In this paper, we present a fusion framework to address this problem in the wavelet domain. We first show that small differences in the image domain can be highlighted in certain wavelet bands. The likelihood of each wavelet coefficient being foreground is then estimated by formulating foreground and background models for each wavelet band. The proposed framework effectively aggregates the likelihoods from different wavelet bands based on the characteristics of the wavelet transform. Experimental results demonstrate that the proposed method significantly outperforms existing methods in detecting camouflaged foreground objects. Specifically, the average F-measure for the proposed algorithm was 0.87, compared to 0.71 to 0.80 for the other state-of-the-art methods. Comment: 13 pages, accepted by IEEE TI
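    The per-band likelihood-and-fusion idea can be sketched in a few lines, assuming a single-level Haar transform, a zero-mean Gaussian background model per coefficient, and a plain mean as the aggregation rule (all simplifications for illustration; the paper's actual models and aggregation rule are more elaborate):

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar transform: returns (LL, LH, HL, HH) bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a + b - c - d) / 4.0
    hl = (a - b + c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def band_likelihood(band, bg_band, sigma=0.05):
    """Per-coefficient foreground likelihood under a Gaussian background model."""
    diff = band - bg_band
    return 1.0 - np.exp(-(diff ** 2) / (2.0 * sigma ** 2))

def fuse_likelihoods(frame, background):
    """Aggregate per-band likelihoods (here: a simple mean across bands)."""
    bands_f = haar2d(frame)
    bands_b = haar2d(background)
    liks = [band_likelihood(f, b) for f, b in zip(bands_f, bands_b)]
    return np.mean(liks, axis=0)

# Toy example: a camouflaged object differs from the background by a tiny offset.
bg = np.full((8, 8), 0.5)
frame = bg.copy()
frame[2:6, 2:6] += 0.1           # small intensity difference, hard to see directly
mask = fuse_likelihoods(frame, bg)   # elevated where the object sits
```

Even though the raw intensity difference is only 0.1, the amplified likelihood in the low-pass band makes the object region stand out in the aggregated map.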

    An Improved Approach for Contrast Enhancement of Spinal Cord Images based on Multiscale Retinex Algorithm

    This paper presents a new approach for contrast enhancement of spinal cord medical images based on a multirate scheme incorporated into the multiscale retinex algorithm. The proposed work uses the HSV color space, since HSV separates color details from intensity. The enhancement of the medical image is achieved by down-sampling the original image into five versions, namely tiny, small, medium, fine, and normal scale. This is because each version of the image, when independently enhanced and reconstructed, yields a substantial improvement in visual quality. Further, contrast stretching and MultiScale Retinex (MSR) techniques are exploited to enhance each scaled version of the image. Finally, the enhanced image is obtained by combining these scales in an efficient way to produce the composite enhanced image. The efficiency of the proposed algorithm is validated using a wavelet energy metric in the wavelet domain. The image reconstructed using the proposed method highlights details (edges and tissues), reduces image noise (Gaussian and speckle) and improves the overall contrast. The proposed algorithm also enhances sharp edges of the tissue surrounding the spinal cord regions, which is useful for the diagnosis of spinal cord lesions. Extensive experiments are conducted on several medical images, and the results show that the enhanced medical images are of good quality and compare favourably with methods reported by other researchers. Comment: 13 pages, 6 figures, International Journal of Imaging and Robotics. arXiv admin note: text overlap with arXiv:1406.571
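    A minimal sketch of the MSR step on the intensity (V) channel: each scale subtracts the log of a blurred "surround" from the log of the image, and the scales are averaged. A crude box-filter surround stands in for the usual Gaussian surround, and the radii are hypothetical; the paper's five-scale multirate pipeline and contrast stretching are omitted:

```python
import numpy as np

def box_blur(img, radius):
    """Crude surround estimate: mean filter via edge padding and a sliding sum."""
    padded = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def multiscale_retinex(v, radii=(1, 2, 4)):
    """MSR on the V channel: average of log(image) - log(surround) over scales."""
    v = v.astype(float) + 1.0        # offset to avoid log(0)
    msr = np.zeros_like(v)
    for r in radii:
        msr += np.log(v) - np.log(box_blur(v, r))
    return msr / len(radii)
```

Flat regions come out near zero while local bright structure (relative to its surround) is boosted, which is the contrast-enhancement effect MSR relies on.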

    Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode-mixing and mode-misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same-indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including principal component analysis (PCA), the discrete wavelet transform (DWT) and the non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis-testing approach on our large image dataset to identify statistically significant performance differences.
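    Given IMFs already aligned across the two inputs (e.g., produced by a MEMD implementation, which is not reproduced here), the pixel-level fusion step might look like the following sketch; a simple larger-magnitude rule stands in for the paper's per-scale comparison, so both the helper name and the selection rule are illustrative assumptions:

```python
import numpy as np

def fuse_imfs(imfs_a, imfs_b):
    """Fuse two images from their aligned IMFs: at each common scale, keep the
    coefficient with the larger magnitude, then sum the fused scales."""
    fused_scales = []
    for ia, ib in zip(imfs_a, imfs_b):
        pick_a = np.abs(ia) >= np.abs(ib)       # per-pixel decision at this scale
        fused_scales.append(np.where(pick_a, ia, ib))
    return np.sum(fused_scales, axis=0)         # reconstruct by summing scales
```

Because MEMD aligns common frequency scales across channels, the `zip` over same-indexed IMFs is meaningful here, which is exactly what univariate EMD fusion cannot guarantee.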

    Image Fusion via Sparse Regularization with Non-Convex Penalties

    The L1-norm regularized least squares method is often used for finding sparse approximate solutions and is widely used in 1-D signal restoration. Basis pursuit denoising (BPD) performs noise reduction in this way. However, the shortcoming of L1-norm regularization is the underestimation of the true solution. Recently, a class of non-convex penalties has been proposed to improve this situation. This kind of penalty function is non-convex itself, but preserves the convexity of the whole cost function. The approach has been confirmed to offer good performance in 1-D signal denoising. This paper extends the aforementioned method to 2-D signals (images) and applies it to multisensor image fusion. The problem is posed as an inverse one, and a corresponding cost function is judiciously designed to include two data attachment terms. The whole cost function is proved to be convex upon suitably choosing the non-convex penalty, so that its minimization can be tackled by convex optimization approaches involving simple computations. The performance of the proposed method is benchmarked against a number of state-of-the-art image fusion techniques, and superior performance is demonstrated both visually and in terms of various assessment measures.
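    The bias reduction over L1 shrinkage can be illustrated with the firm-thresholding function associated with a minimax-concave style penalty. This is one common parameterization, not necessarily the exact penalty or proximal step used in the paper:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 norm (the shrinkage step used in BPD).
    Note every surviving coefficient is shrunk by lam: the bias the paper targets."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def firm_threshold(x, lam, a):
    """Firm thresholding (requires a > lam): zero below lam, linear ramp between
    lam and a, identity above a -- so large coefficients pass through unbiased."""
    y = np.abs(x)
    out = np.where(y <= lam, 0.0,
          np.where(y >= a, y, a * (y - lam) / (a - lam)))
    return np.sign(x) * out
```

For a coefficient of magnitude 5 with lam = 1, soft thresholding returns 4 (underestimated), while firm thresholding returns 5 exactly; small coefficients are still set to zero by both rules.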

    Multi-Sensor Image Fusion Based on Moment Calculation

    An image fusion method based on salient features is proposed in this paper. In this work, we concentrate on salient features of the image for fusion in order to preserve all relevant information contained in the input images, enhance the contrast of the fused image, and suppress noise to the maximum extent. In our system, we first apply a mask to the two input images in order to conserve the high-frequency information, along with some low-frequency information, while suppressing noise. Thereafter, to identify salient features in the source images, a local moment is computed in the neighborhood of each coefficient. Finally, a decision map is generated based on the local moment in order to obtain the fused image. To verify our proposed algorithm, we tested it on 120 sensor image pairs collected from the Manchester University UK database. The experimental results show that the proposed method provides a superior fused image in terms of several quantitative fusion evaluation indices. Comment: 5 pages, International Conferenc
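    A sketch of the moment-based decision rule, taking the local variance as the moment and omitting the paper's masking step (both assumptions made for brevity):

```python
import numpy as np

def local_moment(img, radius=1):
    """Second central moment (variance) over a sliding (2r+1)x(2r+1) window."""
    padded = np.pad(img.astype(float), radius, mode='edge')
    k = 2 * radius + 1
    win = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return win.var(axis=(-1, -2))

def fuse_by_moment(img_a, img_b, radius=1):
    """Decision map: each output pixel comes from the source whose
    neighborhood has the larger local moment (i.e., more salient detail)."""
    decision = local_moment(img_a, radius) >= local_moment(img_b, radius)
    return np.where(decision, img_a, img_b)
```

Variance is the simplest choice of moment that favors textured, detail-rich neighborhoods over flat ones, which is the saliency behavior the decision map needs.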