Improving the Efficiency of Optoelectronic Systems by Means of Image Fusion
Topic relevance. Nowadays, the detection and recognition of targets is an essential task. Image fusion has shown significant qualitative and quantitative benefits in solving the problems of detection, differentiation, recognition, tracking and targeting. Fusion makes it possible to obtain a more informative resultant image than the images acquired separately, each through its own channel, which significantly improves the quality of the operator's work with the system.
That is why it is important to increase the efficiency of existing methods for fusing images obtained from different channels.
Research goal: To improve the consumer quality of optoelectronic surveillance systems.
Research objectives:
1. Review the most popular image fusion solutions;
2. Choose one of the methods and suggest improvements;
3. Study the results of the proposed improvements.
Object of research: Optoelectronic system with two spectral channels.
Subject of research: Increasing the probability of object detection in dual-channel systems.
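The abstract does not state how detections from the two spectral channels are combined. A common baseline, shown here purely for illustration, assumes the channels detect independently, in which case the system-level detection probability follows the standard union rule:

```python
def combined_detection_probability(p_ch1: float, p_ch2: float) -> float:
    """Probability that at least one of two independent spectral
    channels (e.g. visible and thermal) detects the object."""
    return 1.0 - (1.0 - p_ch1) * (1.0 - p_ch2)

# With hypothetical per-channel probabilities of 0.7 and 0.6,
# the dual-channel system detects with probability 0.88.
print(combined_detection_probability(0.7, 0.6))
```

The per-channel probabilities 0.7 and 0.6 are invented for the example; real channels are rarely fully independent, so this figure is an upper bound in practice.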
A Novel Multimodal Image Fusion Method Using Hybrid Wavelet-based Contourlet Transform
Various image fusion techniques have been studied to meet the requirements of different applications such as concealed weapon detection, remote sensing, urban mapping, surveillance and medical imaging. Combining two or more images of the same scene or object produces an image that is more useful for the application at hand. The conventional wavelet transform (WT) has been widely used in the field of image fusion due to its advantages, including its multi-scale framework and its capability of isolating discontinuities at object edges. However, the contourlet transform (CT) has recently been adopted and applied to the image fusion process to overcome the drawbacks of WT with its own advantages. Based on the experimental studies in this dissertation, the contourlet transform proves more suitable than the conventional wavelet transform for performing image fusion. However, the contourlet transform also has major drawbacks. First, the contourlet transform framework does not provide the shift-invariance and structural information of the source images that are necessary to enhance fusion performance. Second, unwanted artifacts are produced during image decomposition via the contourlet transform framework, caused by setting some transform coefficients to zero for nonlinear approximation.
In this dissertation, a novel fusion method using a hybrid wavelet-based contourlet transform (HWCT) is proposed to overcome the drawbacks of both the conventional wavelet and contourlet transforms and to enhance fusion performance. In the proposed method, the Daubechies Complex Wavelet Transform (DCxWT) is employed to provide both shift-invariance and structural information, and a Hybrid Directional Filter Bank (HDFB) is used to achieve fewer artifacts and more directional information. DCxWT provides the shift-invariance that is desired during the fusion process to avoid mis-registration problems.
Without shift-invariance, source images are mis-registered and non-aligned with each other; therefore, the fusion results are significantly degraded. DCxWT also provides structural information through the imaginary part of its wavelet coefficients; hence, more relevant information can be preserved during the fusion process, giving a better representation of the fused image. Moreover, HDFB is applied to the fusion framework where the source images are decomposed, providing abundant directional information, lower complexity, and reduced artifacts.
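The HWCT method itself is not reproduced here, but the generic multiresolution fusion scheme that both the wavelet and contourlet variants build on can be sketched with a single-level Haar transform: average the approximation subbands of the two sources and keep the larger-magnitude detail coefficients. All function names below are this sketch's own, not the dissertation's.

```python
import numpy as np

def haar2d(img):
    # single-level 2D Haar decomposition (image sides must be even):
    # approximation LL plus detail subbands LH, HL, HH
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # exact inverse of haar2d
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2, :] = a + d
    out[1::2, :] = a - d
    return out

def fuse_haar(img_a, img_b):
    # fusion rule: mean of approximations, max-absolute-value of details
    ca, cb = haar2d(img_a), haar2d(img_b)
    ll = (ca[0] + cb[0]) / 2.0
    details = [np.where(np.abs(da) >= np.abs(db), da, db)
               for da, db in zip(ca[1:], cb[1:])]
    return ihaar2d(ll, *details)
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check on the transform pair; the real methods discussed in the abstract replace the Haar step with complex wavelet and directional filter-bank decompositions.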
The proposed method is applied to five different categories of multimodal image fusion, and an experimental study is conducted to evaluate its performance in each category using suitable quality metrics. Various datasets, fusion algorithms, pre-processing techniques and quality metrics are used for each fusion category. In every experimental study and analysis, the proposed method produced better fusion results than the conventional wavelet and contourlet transforms; therefore, its usefulness as a fusion method has been validated and its high performance has been verified.
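The abstract names quality metrics without specifying them. One widely used no-reference metric for fused images is the Shannon entropy of the grey-level histogram; the sketch below is a generic illustration, not necessarily one of the metrics used in the dissertation.

```python
import numpy as np

def image_entropy(img, levels: int = 256) -> float:
    """Shannon entropy (bits) of the grey-level histogram of an image
    with values in [0, levels); higher entropy suggests a more
    informative (e.g. better-fused) image."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist[hist > 0] / hist.sum()          # probabilities of occupied bins
    return float(-np.sum(p * np.log2(p)))

# A constant image carries no information (entropy 0 bits); a ramp
# covering all 256 grey levels equally reaches the maximum, 8 bits.
```

In fusion benchmarks this is typically reported alongside reference-based metrics such as mutual information between each source and the fused result.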