A Deep Decomposition Network for Image Processing: A Case Study for Visible and Infrared Image Fusion
Image decomposition is a crucial subject in the field of image processing: it extracts salient features from the source image. We propose a new image decomposition method based on a convolutional neural network, which can be applied to many image processing tasks. In this paper, we apply the decomposition network to the image fusion task. We input an infrared image and a visible-light image and decompose each into three high-frequency feature images and one low-frequency feature image. The two sets of feature images are fused with a specific fusion strategy to obtain fused feature images. Finally, the feature images are reconstructed to obtain the fused image. Compared with state-of-the-art fusion methods, this method achieves better performance in both subjective and objective evaluation.
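A minimal numpy sketch of the decompose-fuse-reconstruct pipeline the abstract describes. A box blur stands in for the learned CNN decomposition, and the average/max-abs fusion rules are common illustrative choices, not the authors' strategy; all function names and parameters here are assumptions:

```python
import numpy as np

def box_blur(img, k=5):
    # Separable box filter used as a stand-in low-pass decomposition
    # (the paper learns this decomposition with a CNN).
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="valid"), 0, out)

def decompose(img):
    # Split into a low-frequency base layer and a high-frequency detail layer.
    low = box_blur(img)
    return low, img - low

def fuse(ir, vis):
    low_ir, high_ir = decompose(ir)
    low_vis, high_vis = decompose(vis)
    low_f = 0.5 * (low_ir + low_vis)   # average the low-frequency (base) layers
    # Max-absolute rule: keep the stronger detail response per pixel.
    high_f = np.where(np.abs(high_ir) >= np.abs(high_vis), high_ir, high_vis)
    return low_f + high_f              # reconstruct the fused image
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check that the decomposition is lossless.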
Fast filtering image fusion
© 2017 SPIE and IS&T. Image fusion aims at exploiting complementary information in multimodal images to create a single composite image with extended information content. An image fusion framework is proposed for different types of multimodal images with fast filtering in the spatial domain. First, the image gradient magnitude is used to detect contrast and image sharpness. Second, a fast morphological closing operation is performed on the gradient magnitude to bridge gaps and fill holes. Third, a weight map is obtained from the multimodal image gradient magnitudes and filtered by a fast structure-preserving filter. Finally, the fused image is composed using a weighted-sum rule. Experimental results on several groups of images show that the proposed fast fusion method performs better than the state-of-the-art methods, running up to four times faster than the fastest baseline algorithm.
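The four steps above can be sketched in numpy. A box blur stands in for the structure-preserving filter, and the morphological closing is implemented with plain max/min windows; every name and kernel size here is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def grad_mag(img):
    # Step 1: gradient magnitude as a contrast/sharpness measure.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def local_extreme(img, k, fn):
    # Apply fn (max or min) over a k x k neighbourhood via shifted views.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    win = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                    for i in range(k) for j in range(k)])
    return fn(win, axis=0)

def closing(img, k=3):
    # Step 2: greyscale closing = dilation followed by erosion.
    return local_extreme(local_extreme(img, k, np.max), k, np.min)

def box_blur(img, k=5):
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="valid"), 0, out)

def fuse(imgs):
    # Step 3: weight maps from smoothed saliency (box blur stands in
    # for the fast structure-preserving filter).
    sal = [box_blur(closing(grad_mag(im))) for im in imgs]
    w = np.stack(sal) + 1e-12
    w /= w.sum(axis=0)                 # normalise weights per pixel
    # Step 4: weighted-sum composition.
    return sum(wi * im for wi, im in zip(w, imgs))
```

Because the per-pixel weights sum to one, fusing identical inputs reproduces the input.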
Bridging the Gap between Multi-focus and Multi-modal: A Focused Integration Framework for Multi-modal Image Fusion
Multi-modal image fusion (MMIF) integrates valuable information from different modality images into a single fused image. However, fusing multiple visible images with different focal regions together with infrared images is an unprecedented challenge in real MMIF applications, because the limited depth of field of visible optical lenses impedes the simultaneous capture of all focal information within the same scene. To address this issue, we propose an MMIF framework for joint focused integration and modality information extraction. Specifically, a semi-sparsity-based smoothing filter is introduced to decompose the images into structure and texture components. Subsequently, a novel multi-scale operator is proposed to fuse the texture components, capable of detecting significant information by considering the pixel focus attributes and relevant data from the various modal images. Additionally, to capture scene luminance effectively and maintain reasonable contrast, we consider the distribution of energy information in the structural components in terms of multi-directional frequency variance and information entropy. Extensive experiments on existing MMIF datasets, as well as on object detection and depth estimation tasks, consistently demonstrate that the proposed algorithm surpasses state-of-the-art methods in visual perception and quantitative evaluation. The code is available at https://github.com/ixilai/MFIF-MMIF.
Comment: Accepted to the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 202
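The structure/texture split and the focus-aware texture rule can be sketched as follows. A box smoothing filter stands in for the semi-sparsity-based filter, and local energy stands in for the multi-scale focus operator; these substitutions, and all names below, are assumptions made for illustration:

```python
import numpy as np

def smooth(img, k=7):
    # Box smoothing as a stand-in for the semi-sparsity-based filter.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="valid"), 0, out)

def fuse(imgs):
    structs = [smooth(im) for im in imgs]             # structure (base) layers
    texts = [im - s for im, s in zip(imgs, structs)]  # texture (detail) layers
    # Texture rule: per pixel, keep the layer with the largest local energy,
    # a crude proxy for the paper's focus-aware multi-scale operator.
    energy = np.stack([smooth(t * t) for t in texts])
    idx = energy.argmax(axis=0)
    text_f = np.choose(idx, np.stack(texts))
    struct_f = np.mean(structs, axis=0)               # structure rule: simple average
    return struct_f + text_f
```

The real method replaces both the average structure rule and the energy-based texture rule with the variance/entropy and multi-scale operators described above.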