
    Multi-Sensor Image Fusion Based on Moment Calculation

    An image fusion method based on salient features is proposed in this paper. We concentrate on the salient features of the input images so as to preserve all relevant information they contain, while enhancing the contrast of the fused image and suppressing noise as far as possible. First, a mask is applied to the two input images to conserve the high-frequency information, along with some low-frequency information, and to suppress noise. Thereafter, to identify salient features in the source images, a local moment is computed in the neighborhood of each coefficient. Finally, a decision map is generated from the local moments to produce the fused image. To verify the proposed algorithm, we tested it on 120 sensor image pairs collected from the Manchester University (UK) database. The experimental results show that the proposed method provides superior fused images in terms of several quantitative fusion evaluation indices.
    Comment: 5 pages, International Conferenc
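    The abstract does not specify which moment is computed or how the decision map is formed, so the sketch below is only a rough illustration of the general idea: fuse two registered grayscale images by comparing a second-order local moment (the local variance) in a sliding window. The function names, the window size, and the choice of moment are assumptions, not the paper's method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_moment(img, window=5):
    # Second-order central moment (local variance) in a sliding window:
    # Var = E[x^2] - E[x]^2, estimated with box filters.
    img = img.astype(np.float64)
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img ** 2, size=window)
    return np.maximum(mean_sq - mean ** 2, 0.0)

def fuse_by_moment(img_a, img_b, window=5):
    # Per pixel, keep the source whose neighborhood has the larger moment;
    # the comparison itself acts as a binary decision map.
    decision = local_moment(img_a, window) >= local_moment(img_b, window)
    return np.where(decision, img_a, img_b)
```

    A per-pixel binary decision map like this is the simplest variant; the paper's pre-filtering mask and noise-suppression steps are omitted here.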

    Improving fusion of surveillance images in sensor networks using independent component analysis


    Region-based multimodal image fusion using ICA bases


    BigFUSE: Global Context-Aware Image Fusion in Dual-View Light-Sheet Fluorescence Microscopy with Image Formation Prior

    Light-sheet fluorescence microscopy (LSFM), a planar illumination technique that enables high-resolution imaging of samples, suffers from defocused image quality caused by light scattering when photons propagate through thick tissues. Dual-view imaging helps to circumvent this issue: it allows different sections of the specimen to be scanned ideally by viewing the sample from opposing orientations. Existing image fusion approaches determine in-focus pixels by comparing the image quality of the two views locally, and thus yield spatially inconsistent focus measures due to their limited field of view. Here we propose BigFUSE, a global context-aware image fuser that stabilizes image fusion in LSFM by considering the global impact of photon propagation in the specimen while determining focus and defocus from local image quality. Inspired by the image formation prior in dual-view LSFM, image fusion is cast as estimating a focus-defocus boundary using Bayes' theorem, where (i) the effect of light scattering on focus measures is encoded in the likelihood, and (ii) the spatial consistency of focus and defocus is imposed in the prior. The expectation-maximization algorithm is then adopted to estimate the focus-defocus boundary. Competitive experimental results show that BigFUSE is the first dual-view LSFM fuser able to exclude structured artifacts when fusing information, highlighting its capability for automatic image fusion.
    Comment: paper in MICCAI 202
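    BigFUSE's Bayesian model and EM estimation are not reproduced here. Purely to illustrate the notion of a focus-defocus boundary, the toy sketch below scores focus with local Laplacian energy (an assumed measure) and, for each image column, places a boundary where the cumulative focus advantage of one view over the other peaks. Every name and rule in this sketch is a simplification, not the published algorithm.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_measure(img, window=9):
    # Local Laplacian energy as a stand-in focus measure (an assumption;
    # BigFUSE's measure and its Bayesian treatment are more involved).
    return uniform_filter(laplace(img.astype(np.float64)) ** 2, size=window)

def fuse_with_boundary(view_a, view_b, window=9):
    # Toy boundary rule: per column, rows above the boundary come from
    # view A and rows below from view B, with the boundary placed where
    # A's cumulative focus advantage over B is largest.
    gain = focus_measure(view_a, window) - focus_measure(view_b, window)
    score = np.cumsum(gain, axis=0)        # advantage of A for rows [0..k]
    boundary = np.argmax(score, axis=0)    # one boundary row per column
    rows = np.arange(view_a.shape[0])[:, None]
    return np.where(rows <= boundary, view_a, view_b)
```

    Enforcing one boundary per column is the crudest possible spatial-consistency constraint; the paper's prior does this probabilistically across the whole image.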

    Real-MFF: A Large Realistic Multi-focus Image Dataset with Ground Truth

    Multi-focus image fusion, a technique that generates an all-in-focus image from two or more partially focused source images, can benefit many computer vision tasks. However, there is currently no large, realistic dataset for convincing evaluation and comparison of multi-focus image fusion algorithms. Moreover, it is difficult to train a deep neural network for multi-focus image fusion without a suitable dataset. In this letter, we introduce a large, realistic multi-focus dataset called Real-MFF, which contains 710 pairs of source images with corresponding ground-truth images. The dataset is generated from light field images, and both the source images and the ground-truth images are realistic. To serve both as a well-established benchmark for existing multi-focus image fusion algorithms and as an appropriate training set for the future development of deep-learning-based methods, the dataset contains a variety of scenes, including buildings, plants, humans, shopping malls, and squares. We also evaluate 10 typical multi-focus algorithms on this dataset for illustration.
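    As an aside on how such a benchmark is typically consumed, the sketch below iterates over (source, source, ground-truth) triples. The directory layout and file names are hypothetical, chosen only for illustration; the actual Real-MFF release should be consulted for its real structure.

```python
from pathlib import Path
import cv2  # OpenCV, used here only for image I/O

def iter_real_mff(root):
    # Yield (source_1, source_2, ground_truth) triples. The per-pair
    # directory layout and file names below are assumptions, not the
    # dataset's documented structure.
    for pair_dir in sorted(Path(root).iterdir()):
        if not pair_dir.is_dir():
            continue
        src1 = cv2.imread(str(pair_dir / "source_1.png"))
        src2 = cv2.imread(str(pair_dir / "source_2.png"))
        gt = cv2.imread(str(pair_dir / "ground_truth.png"))
        yield src1, src2, gt
```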

    Image Fusion with Contrast Improving and Feature Preserving

    The goal of image fusion is to obtain a fused image that contains the most significant information from all input images, captured by different sensors of the same scene. In particular, the fusion process should improve the contrast and keep the integrity of significant features from the input images. In this paper, we propose a region-based image fusion method that fuses spatially registered visible and infrared images while improving contrast and preserving the significant features of the input images. First, the proposed method decomposes the input images into base layers and detail layers using a bilateral filter. Second, the base layers of the input images are segmented into regions. Third, a region-based decision map is constructed to represent the importance of every region; the map is obtained by calculating the weight of each region according to the gray-level difference between that region and its neighboring regions in the base layers. Finally, the detail layers and the base layers are fused separately by different fusion rules based on the same decision map to generate the final fused image. Experimental results demonstrate, qualitatively and quantitatively, that the proposed method improves the contrast of fused images and preserves more features of the input images than several previous image fusion methods.
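    A minimal sketch of the two-layer idea, assuming OpenCV is available: a bilateral filter splits each input into a base layer and a residual detail layer, and a decision map selects between the inputs. The paper's region segmentation and region-wise weighting are simplified here to a pixel-wise contrast proxy, so this shows only the shape of the pipeline, not the authors' method.

```python
import cv2
import numpy as np

def fuse_base_detail(visible, infrared, d=9, sigma_color=75, sigma_space=75):
    # Split each single-channel input into base (bilateral-filtered) and
    # detail (residual) layers, then fuse both layers with one decision map.
    vis = visible.astype(np.float32)
    ir = infrared.astype(np.float32)

    base_v = cv2.bilateralFilter(vis, d, sigma_color, sigma_space)
    base_i = cv2.bilateralFilter(ir, d, sigma_color, sigma_space)
    detail_v, detail_i = vis - base_v, ir - base_i

    # Crude contrast proxy (stands in for the paper's region-wise,
    # gray-level-difference weights): deviation of the base layer from
    # its own local average.
    contrast_v = np.abs(base_v - cv2.blur(base_v, (31, 31)))
    contrast_i = np.abs(base_i - cv2.blur(base_i, (31, 31)))
    decision = contrast_v >= contrast_i

    base = np.where(decision, base_v, base_i)
    detail = np.where(decision, detail_v, detail_i)
    return np.clip(base + detail, 0, 255).astype(np.uint8)
```

    Fusing the base and detail layers with the same map, as here, is the simplest consistent rule; the paper applies different rules per layer while sharing one region-based map.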