Hardware-Efficient Guided Image Filtering For Multi-Label Problem
The Guided Filter (GF) is well known for its linear complexity. However, when
filtering an image with an n-channel guidance, GF must invert an n x n matrix
for each pixel. To the best of our knowledge, existing matrix inversion
algorithms are inefficient on current hardware. This shortcoming limits the
application of multichannel guidance in computation-intensive systems such as
multi-label systems. We therefore need a new GF-like filter that can perform
fast multichannel guided image filtering. Since the linear complexity of GF is
already optimal and cannot be reduced further, the only remaining option is to
exploit the full potential of current parallel computing hardware. In this
paper we propose a hardware-efficient Guided Filter (HGF), which solves the
efficiency problem of multichannel guided image filtering and yields
competitive results when applied to multi-label problems with a synthesized
polynomial multichannel guidance. Specifically, to boost filtering
performance, HGF adopts a new matrix inversion algorithm that involves only
two hardware-efficient operations: element-wise arithmetic and box filtering.
To break the restriction of the linear model, HGF synthesizes a polynomial
multichannel guidance that introduces nonlinearity. Benefiting from our
polynomial guidance and hardware-efficient matrix inversion algorithm, HGF is
not only more sensitive to the underlying structure of the guidance but also
achieves the fastest computing speed. Owing to these merits, HGF obtains
state-of-the-art results in terms of accuracy and efficiency on
computation-intensive multi-label systems.
Fast and Efficient Zero-Learning Image Fusion
We propose a real-time image fusion method using pre-trained neural networks.
Our method generates a single image containing features from multiple sources.
We first decompose images into a base layer representing large scale intensity
variations, and a detail layer containing small scale changes. We use visual
saliency to fuse the base layers, and deep feature maps extracted from a
pre-trained neural network to fuse the detail layers. We conduct ablation
studies to analyze our method's parameters such as decomposition filters,
weight construction methods, and network depth and architecture. Then, we
validate its effectiveness and speed on thermal, medical, and multi-focus
fusion. We also apply it to multiple image inputs such as multi-exposure
sequences. The experimental results demonstrate that our technique achieves
state-of-the-art performance in visual quality, objective assessment, and
runtime efficiency.

Comment: 13 pages, 10 figures
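As a rough sketch of the two-scale pipeline described above, the following NumPy code decomposes each input into base and detail layers with a mean filter and fuses them. The saliency measure and the max-magnitude detail rule are simplified stand-ins for the paper's visual-saliency weights and deep feature maps:

```python
import numpy as np

def mean3x3(img):
    """3x3 mean filter with edge padding (stand-in for the decomposition filter)."""
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def fuse(imgs):
    """Two-scale fusion sketch: split each input into a base layer (low-pass)
    and a detail layer (residual); fuse bases with saliency-derived weights and
    details by per-pixel maximum magnitude."""
    bases = [mean3x3(im) for im in imgs]
    details = [im - b for im, b in zip(imgs, bases)]
    # crude saliency: local contrast magnitude, with epsilon to avoid division by zero
    sal = [np.abs(d) + 1e-12 for d in details]
    wsum = sum(sal)
    fused_base = sum((s / wsum) * b for s, b in zip(sal, bases))
    # detail fusion: keep the detail coefficient with the largest magnitude per pixel
    stack = np.stack(details)
    idx = np.argmax(np.abs(stack), axis=0)
    fused_detail = np.take_along_axis(stack, idx[None], axis=0)[0]
    return fused_base + fused_detail
```

Fusing two identical inputs returns the input unchanged, a useful sanity check for any fusion rule.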