Visible and NIR Image Fusion Algorithm Based on Information Complementarity
Visible and near-infrared (NIR) sensors capture complementary spectral radiation from a scene, and visible-NIR image fusion aims to exploit their spectral properties to enhance image quality. However, current visible-NIR fusion algorithms neither take full advantage of these spectral properties nor ensure information complementarity, which results in color distortion and artifacts. This paper therefore designs a complementary fusion model at the level of physical signals. First, to distinguish noise from useful information, we apply two filtering layers, a weighted guided filter and a guided filter, to obtain the texture and edge layers, respectively. Second, to generate the initial visible-NIR complementarity weight map, the difference maps between the visible and NIR images are filtered with an extended difference-of-Gaussians (DoG) filter. The salient region of the NIR night-time compensation then guides the initial complementarity weight map through an arctan function. Finally, the fused images are generated from the complementarity weight maps of the visible and NIR images, respectively. Experimental results demonstrate that the proposed algorithm not only takes good advantage of the spectral properties and information complementarity, but also avoids color distortion while maintaining naturalness, outperforming state-of-the-art methods.
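To make the pipeline concrete, here is a minimal Python sketch of the complementarity-weighted fusion idea, assuming single-channel float images in [0, 1]. The DoG parameters, the arctan-based weighting, the file names, and the omission of the guided-filter layer decomposition and night-time compensation are all simplifying assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def dog(img, sigma_small=1.0, sigma_large=4.0):
    """Difference-of-Gaussians band-pass filter (stand-in for the
    paper's extended-DoG filtering of the VIS/NIR difference maps)."""
    return cv2.GaussianBlur(img, (0, 0), sigma_small) - \
           cv2.GaussianBlur(img, (0, 0), sigma_large)

def fuse_vis_nir(vis, nir):
    # Complementarity weight from the band-pass filtered difference map;
    # arctan squashes the response smoothly into (0, 1).
    w = 0.5 + np.arctan(dog(vis - nir)) / np.pi
    # Where the weight favors the visible signal, keep it; elsewhere the
    # NIR signal compensates (e.g., dark night-time regions).
    return np.clip(w * vis + (1.0 - w) * nir, 0.0, 1.0)

# Hypothetical file names for illustration.
vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
nir = cv2.imread("nir.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
cv2.imwrite("fused.png", (fuse_vis_nir(vis, nir) * 255).astype(np.uint8))
```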
LadleNet: Translating Thermal Infrared Images to Visible Light Images Using A Scalable Two-stage U-Net
The translation of thermal infrared (TIR) images to visible light (VI) images
presents a challenging task with potential applications spanning various
domains such as TIR-VI image registration and fusion. Leveraging supplementary
information derived from TIR image conversions can significantly enhance model
performance and generalization across these applications. However, prevailing
issues within this field include suboptimal image fidelity and limited model
scalability. In this paper, we introduce an algorithm, LadleNet, based on the
U-Net architecture. LadleNet employs a two-stage U-Net concatenation structure,
augmented with skip connections and refined feature aggregation techniques,
resulting in a substantial enhancement in model performance. LadleNet comprises
a 'Handle' and a 'Bowl' module: the Handle module constructs an abstract
semantic space, while the Bowl module decodes this semantic space to yield the
mapped VI image. The Handle module is extensible, as its network architecture
can be substituted with a semantic segmentation network to establish a more
abstract semantic space and further bolster model performance. Consequently,
we propose LadleNet+, which
replaces LadleNet's Handle module with the pre-trained DeepLabv3+ network,
thereby endowing the model with enhanced semantic space construction
capabilities. The proposed method is evaluated on the KAIST dataset with
quantitative and qualitative analyses. Compared to existing
methodologies, our approach achieves state-of-the-art performance in terms of
image clarity and perceptual quality. The source code will be made available at
https://github.com/Ach-1914/LadleNet/tree/main/
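As a rough structural illustration, the following PyTorch sketch wires two small encoder-decoders in sequence, mirroring the described Handle-to-Bowl concatenation: the first maps the TIR input to an abstract feature space, the second decodes that space to a 3-channel VI image. Channel widths, depths, and the 16-channel semantic space are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    """A toy single-skip U-Net standing in for each stage."""
    def __init__(self, cin, cout):
        super().__init__()
        self.enc1, self.enc2 = block(cin, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.dec = block(64 + 32, 32)   # skip connection: concat enc1
        self.out = nn.Conv2d(32, cout, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d)

class LadleNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.handle = MiniUNet(1, 16)   # TIR -> abstract semantic space
        self.bowl = MiniUNet(16, 3)     # semantic space -> RGB (VI) image

    def forward(self, tir):
        return self.bowl(self.handle(tir))

y = LadleNetSketch()(torch.randn(1, 1, 64, 64))   # -> (1, 3, 64, 64)
```

In this reading, swapping the Handle stage for a pre-trained segmentation backbone (as LadleNet+ does with DeepLabv3+) only requires that the replacement emit a feature map the Bowl stage can decode.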
ShadingNet: Image Intrinsics by Fine-Grained Shading Decomposition
In general, intrinsic image decomposition algorithms interpret shading as one
unified component including all photometric effects. As shading transitions are
generally smoother than reflectance (albedo) changes, these methods may fail
to distinguish strong photometric effects from reflectance variations.
Therefore, in this paper, we propose to decompose the shading component into
direct shading (illumination) and indirect shading (ambient light and shadows)
subcomponents. The aim is to distinguish strong photometric effects from
reflectance variations. An end-to-end deep convolutional neural network
(ShadingNet) is proposed that operates in a fine-to-coarse manner with a
specialized fusion and refinement unit exploiting the fine-grained shading
model. It is designed to learn reflectance cues separated from specific
photometric effects, enabling an analysis of its disentanglement capability. A
large-scale dataset of scene-level synthetic images of outdoor natural
environments is provided with fine-grained intrinsic image ground truths.
Large-scale experiments show that our approach using fine-grained shading
decompositions outperforms state-of-the-art algorithms that use unified
shading on the NED, MPI Sintel, GTA V, IIW, MIT Intrinsic Images, 3DRMS, and
SRD datasets.
Comment: Submitted to the International Journal of Computer Vision (IJCV).
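The underlying image-formation model can be made concrete with a toy NumPy example. The standard intrinsic model factors an image into reflectance times shading; here, following the abstract, shading is split into direct and indirect subcomponents, and the additive composition of the two is an assumption of this sketch, not a formula stated in the abstract.

```python
import numpy as np

h, w = 4, 4
albedo = np.full((h, w), 0.6)        # reflectance (albedo) A
s_direct = np.full((h, w), 0.8)      # direct shading: illumination
s_indirect = np.full((h, w), 0.2)    # indirect shading: ambient light
s_indirect[:, :2] = 0.05             # a cast shadow on the left half

# Assumed composition: I = A * (S_direct + S_indirect).
image = albedo * (s_direct + s_indirect)

# With the full shading known, reflectance follows by division; a network
# such as ShadingNet instead predicts all three components from I alone.
recovered = image / (s_direct + s_indirect)
assert np.allclose(recovered, albedo)
```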