An Improved Infrared/Visible Fusion for Astronomical Images
An undecimated dual-tree complex wavelet transform (UDTCWT) based fusion scheme for astronomical visible/IR images is developed. The UDTCWT reduces noise effects and improves object classification owing to its inherent shift-invariance property. Local standard deviation and distance transforms are used to extract useful information (especially small objects). Simulation results, compared against state-of-the-art fusion techniques, illustrate the superiority of the proposed scheme in terms of accuracy in most cases.
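The UDTCWT itself is not available in common Python libraries, and the paper's exact rule is not given here; as an illustrative stand-in, a "pick the coefficient with the larger local standard deviation" fusion rule can be sketched with the undecimated (stationary) wavelet transform from PyWavelets. All function names below are our own, not the authors' implementation.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_std(x, size=5):
    # Local standard deviation as an activity measure (as in the abstract).
    m = uniform_filter(x, size)
    m2 = uniform_filter(x * x, size)
    return np.sqrt(np.maximum(m2 - m * m, 0.0))

def fuse_undecimated(ir, vis, wavelet="db2", level=2):
    """Fuse two registered, same-size images: per pixel and per subband,
    keep the coefficient with the larger local standard deviation.
    Uses pywt's stationary (undecimated) transform as a UDTCWT stand-in;
    image sides must be divisible by 2**level."""
    ca = pywt.swt2(ir, wavelet, level=level)
    cb = pywt.swt2(vis, wavelet, level=level)
    fused = []
    for (aA, aD), (bA, bD) in zip(ca, cb):
        fA = np.where(local_std(aA) >= local_std(bA), aA, bA)
        fD = tuple(np.where(local_std(ad) >= local_std(bd), ad, bd)
                   for ad, bd in zip(aD, bD))
        fused.append((fA, fD))
    return pywt.iswt2(fused, wavelet)
```

Because the transform is undecimated, every subband stays at full image resolution, so the per-pixel selection mask needs no upsampling.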
End-to-End Learning for Simultaneously Generating Decision Map and Multi-Focus Image Fusion Result
The general aim of multi-focus image fusion is to gather the focused
regions of different images to generate a single all-in-focus fused image.
Deep-learning-based methods have become the mainstream of image fusion by
virtue of their powerful feature representation ability. However, most
existing deep learning structures fail to balance fusion quality with the
convenience of end-to-end implementation. End-to-end decoder design often
leads to unrealistic results because of its non-linear mapping mechanism.
On the other hand, generating an intermediate decision map achieves better
quality for the fused image, but relies on rectification with empirically
chosen post-processing parameters.
In this work, to meet the requirements of both output image quality and
simplicity of structure implementation, we propose a cascade network that
simultaneously generates the decision map and the fused result with an
end-to-end training procedure, avoiding any dependence on empirical
post-processing methods at the inference stage. To improve fusion quality,
we introduce a gradient-aware loss function that preserves gradient
information in the output fused image. In addition, we design a decision
calibration strategy that reduces the time consumed when fusing multiple
images.
Extensive experiments compare our method with 19 state-of-the-art
multi-focus image fusion structures under 6 assessment metrics. The
results show that the proposed structure generally improves the quality of
the output fused image, while implementation efficiency increases by over
30% for multi-image fusion.
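The abstract's exact loss formulation is not reproduced here; one plausible, hedged reading of a "gradient-aware" term is to penalize the gap between the fused image's gradient magnitude and the per-pixel maximum gradient magnitude over the sources, sketched below in PyTorch. The function name and the choice of Sobel filters are our own assumptions.

```python
import torch
import torch.nn.functional as F

def gradient_aware_loss(fused, sources):
    """Illustrative gradient-aware loss (not the paper's exact form):
    L1 distance between the fused image's Sobel gradient magnitude and
    the per-pixel maximum gradient magnitude over the source images.
    Tensors have shape (N, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)

    def grad_mag(img):
        gx = F.conv2d(img, kx, padding=1)
        gy = F.conv2d(img, ky, padding=1)
        # small epsilon keeps the sqrt differentiable at zero
        return torch.sqrt(gx * gx + gy * gy + 1e-12)

    target = torch.stack([grad_mag(s) for s in sources]).max(dim=0).values
    return F.l1_loss(grad_mag(fused), target)
```

Since in-focus regions carry the strongest local gradients, driving the fused gradient toward the source-wise maximum encourages the network to keep each region from its sharpest source.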
Bounded PCA based Multi Sensor Image Fusion Employing Curvelet Transform Coefficients
The fusion of thermal and visible images is an important tool for target detection. Wavelet-based image fusion improves the quality of the spectral content of the fused image. However, compared to PCA-based fusion, most wavelet-based methods yield results with lower spatial resolution. Combining the two approaches improves the outcome, but it can still be refined. Compared to wavelets, the curvelet transform depicts the edges in an image more accurately. Enhancing the edges is an effective way to improve spatial resolution, and edges are crucial for interpreting the images. A curvelet-based fusion technique therefore provides additional information in the spectral and spatial domains simultaneously. In this paper, we fuse thermal and visible images with a combination of the Curvelet Transform and a Bounded PCA (CTBPCA) method. To demonstrate the improved efficiency of the proposed technique, we use multiple evaluation metrics and comparisons with existing image fusion methods. Our approach outperforms the others in both qualitative and quantitative analysis, except for runtime performance. Future work will use the fused image for target recognition and will focus on further improving and optimizing this method for real-time video processing.
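The curvelet stage and the paper's "bounded" variant are not reproduced here, but the classic PCA weighting that such schemes build on can be sketched: the fusion weights are the components of the principal eigenvector of the covariance between the two images. Function names are illustrative only.

```python
import numpy as np

def pca_fusion_weights(a, b):
    """Classic PCA fusion weighting (one ingredient of PCA-based schemes;
    the CTBPCA curvelet stage and bounding step are not shown). Returns
    two non-negative weights that sum to one."""
    x = np.stack([a.ravel(), b.ravel()])   # 2 x N observation matrix
    cov = np.cov(x)                        # 2 x 2 covariance
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])   # principal eigenvector
    w = v / v.sum()
    return w[0], w[1]

def pca_fuse(a, b):
    # Weighted average of the sources using the PCA-derived weights.
    wa, wb = pca_fusion_weights(a, b)
    return wa * a + wb * b
```

The source with the larger variance (more information, in the PCA sense) receives the larger weight, which is why PCA fusion tends to preserve spatial detail well.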
Construction of all-in-focus images assisted by depth sensing
Multi-focus image fusion is a technique for obtaining an all-in-focus image
in which all objects are in focus to extend the limited depth of field (DoF) of
an imaging system. Different from traditional RGB-based methods, this paper
presents a new multi-focus image fusion method assisted by depth sensing. In
this work, a depth sensor is used together with a color camera to capture
images of a scene. A graph-based segmentation algorithm is used to segment the
depth map from the depth sensor, and the segmented regions are used to guide a
focus algorithm to locate in-focus image blocks from among multi-focus source
images to construct the reference all-in-focus image. Five test scenes and six
evaluation metrics were used to compare the proposed method and representative
state-of-the-art algorithms. Experimental results quantitatively demonstrate
that this method outperforms existing methods in both speed and quality (in
terms of comprehensive fusion metrics). The generated images can potentially be
used as reference all-in-focus images. Comment: 18 pages. This paper has been
submitted to Computer Vision and Image Understanding.
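The segmentation-guided selection step described above can be sketched in a few lines: for each region of the (depth-derived) label map, copy pixels from the source whose focus measure is largest inside that region. The Laplacian-energy focus measure and all names here are our own simplifications, not the paper's algorithm.

```python
import numpy as np
from scipy.ndimage import laplace

def fuse_by_regions(sources, labels):
    """For each labeled region (e.g. a segment of the depth map), copy
    pixels from the source image with the largest Laplacian energy in
    that region -- a simple stand-in for depth-guided focus selection.
    `sources` is a list of same-size 2-D arrays; `labels` is an integer
    label map of the same shape."""
    fused = np.zeros_like(sources[0])
    focus = [laplace(s) ** 2 for s in sources]  # per-source focus map
    for lab in np.unique(labels):
        m = labels == lab
        best = int(np.argmax([f[m].sum() for f in focus]))
        fused[m] = sources[best][m]
    return fused
```

Selecting per region rather than per pixel is what the depth segmentation buys: objects at one depth are in focus in the same source, so region-level decisions avoid the blocky artifacts of independent per-pixel choices.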