3 research outputs found

    ACCURATE REGISTRATION OF THE CHANG'E-1 IIM DATA BASED ON LRO LROC-WAC MOSAIC DATA


    Improving the Efficiency of Optoelectronic Systems by Means of Image Fusion

    ΠΠΊΡ‚ΡƒΠ°Π»ΡŒΠ½Ρ–ΡΡ‚ΡŒ. Π’ Π΄Π°Π½ΠΈΠΉ час Π²Π°ΠΆΠ»ΠΈΠ²ΠΈΠΌ завданням Ρ” виявлСння Ρ‚Π° розпізнавання Ρ†Ρ–Π»Π΅ΠΉ. ΠšΠΎΠΌΠΏΠ»Π΅ΠΊΡΡƒΠ²Π°Π½Π½Ρ Π·ΠΎΠ±Ρ€Π°ΠΆΠ΅Π½Π½ΡŒ ΠΏΠΎΠΊΠ°Π·Π°Π»ΠΈ істотну якісну Ρ– ΠΊΡ–Π»ΡŒΠΊΡ–ΡΠ½Ρƒ Π²ΠΈΠ³ΠΎΠ΄Ρƒ Ρƒ Π²ΠΈΡ€Ρ–ΡˆΠ΅Π½Π½Ρ– Π·Π°Π΄Π°Ρ‡ виявлСння, розрізнСння, розпізнавання, стСТСння Ρ‚Π° цілСвказання. ΠšΠΎΠΌΠΏΠ»Π΅ΠΊΡΡƒΠ²Π°Π½Π½Ρ Π΄Π°Ρ” Π·ΠΌΠΎΠ³Ρƒ ΠΎΡ‚Ρ€ΠΈΠΌΠ°Ρ‚ΠΈ Π±Ρ–Π»ΡŒΡˆ Ρ–Π½Ρ„ΠΎΡ€ΠΌΠ°Ρ‚ΠΈΠ²Π½Π΅ Ρ€Π΅Π·ΡƒΠ»ΡŒΡ‚ΡƒΡŽΡ‡Π΅ зобраТСння, Π½Ρ–ΠΆ Π²Ρ–Π΄ Π·ΠΎΠ±Ρ€Π°ΠΆΠ΅Π½ΡŒ, Ρ‰ΠΎ ΠΎΡ‚Ρ€ΠΈΠΌΠ°Π½Ρ– ΠΎΠΊΡ€Π΅ΠΌΠΎ, ΠΊΠΎΠΆΠ½Π΅ своїм ΠΊΠ°Π½Π°Π»ΠΎΠΌ. Π¦Π΅ Π·Π½Π°Ρ‡Π½ΠΎ ΠΏΠΎΠΊΡ€Π°Ρ‰ΡƒΡ” ΡΠΊΡ–ΡΡŒ Ρ€ΠΎΠ±ΠΎΡ‚ΠΈ ΠΎΠΏΠ΅Ρ€Π°Ρ‚ΠΎΡ€Π°, Ρ‰ΠΎ ΠΏΡ€Π°Ρ†ΡŽΡ” Π· Π½ΠΈΠΌ. Π’ΠΎΠΌΡƒ Ρ” Π°ΠΊΡ‚ΡƒΠ°Π»ΡŒΠ½ΠΈΠΌ ΠΏΡ–Π΄Π²ΠΈΡ‰ΡƒΠ²Π°Ρ‚ΠΈ Π΅Ρ„Π΅ΠΊΡ‚ΠΈΠ²Π½Ρ–ΡΡ‚ΡŒ Ρ–ΡΠ½ΡƒΡŽΡ‡ΠΈΡ… ΠΌΠ΅Ρ‚ΠΎΠ΄Ρ–Π² комплСксування Π·ΠΎΠ±Ρ€Π°ΠΆΠ΅Π½ΡŒ, Ρ‰ΠΎ Π±ΡƒΠ»ΠΈ ΠΎΡ‚Ρ€ΠΈΠΌΠ°Π½Ρ– Π· Ρ€Ρ–Π·Π½ΠΈΡ… ΠΊΠ°Π½Π°Π»Ρ–Π². ΠœΠ΅Ρ‚Π° дослідТСння: ΠŸΠΎΠΊΡ€Π°Ρ‰ΠΈΡ‚ΠΈ споТивчі якості ΠΎΠΏΡ‚ΠΈΠΊΠΎ-Π΅Π»Π΅ΠΊΡ‚Ρ€ΠΎΠ½Π½ΠΈΡ… систСм спостСрСТСння. Завдання дослідТСння: 1. ΠžΠ³Π»ΡΠ½ΡƒΡ‚ΠΈ Π½Π°ΠΉΠΏΠΎΠΏΡƒΠ»ΡΡ€Π½Ρ–ΡˆΡ– Ρ€Ρ–ΡˆΠ΅Π½Π½Ρ злиття Π·ΠΎΠ±Ρ€Π°ΠΆΠ΅Π½ΡŒ; 2. ΠžΠ±Ρ€Π°Ρ‚ΠΈ ΠΎΠ΄ΠΈΠ½ Π· ΠΌΠ΅Ρ‚ΠΎΠ΄Ρ–Π² Ρ‚Π° Π·Π°ΠΏΡ€ΠΎΠΏΠΎΠ½ΡƒΠ²Π°Ρ‚ΠΈ ΠΉΠΎΠ³ΠΎ покращСння; 3. Дослідити Ρ€Π΅Π·ΡƒΠ»ΡŒΡ‚Π°Ρ‚ΠΈ покращСння Π·Π°ΠΏΡ€ΠΎΠΏΠΎΠ½ΠΎΠ²Π°Π½ΠΈΡ… ΠΌΠ΅Ρ‚ΠΎΠ΄Ρ–Π². Об'Ρ”ΠΊΡ‚ дослідТСння: ΠžΠΏΡ‚ΠΈΠΊΠΎ Π΅Π»Π΅ΠΊΡ‚Ρ€ΠΎΠ½Π½Π° систСма Π· Π΄Π²ΠΎΠΌΠ° ΡΠΏΠ΅ΠΊΡ‚Ρ€Π°Π»ΡŒΠ½ΠΈΠΌΠΈ ΠΊΠ°Π½Π°Π»Π°ΠΌΠΈ. ΠŸΡ€Π΅Π΄ΠΌΠ΅Ρ‚ дослідТСння: ΠŸΡ–Π΄Π²ΠΈΡ‰Π΅Π½Π½Ρ ймовірності виявлСння ΠΎΠ±'Ρ”ΠΊΡ‚Ρ–Π² Π² Π΄Π²ΠΎΡ…ΠΊΠ°Π½Π°Π»ΡŒΠ½ΠΈΡ… систСмах.Topic relevance. Nowadays, the essential task is the identification and recognition purposes. Composition of the images showed significant qualitative and quantitative benefits in solving the problems of detection, differentiation, recognition, tracking and targeting. Compilation allows you to get a more informative resultant image than from the images taken separately, each with its own channel. That why, it is important to increase the efficiency of existing methods of image fusion, which were obtained from various channels. Research goal: Improve the quality of consumer optoelectronic surveillance systems. Research objectives: 1. Explore the most popular images fusion solution; 2. Choose one of the methods and suggest improvements; 3. Explore the proposed methods to improve results. Object of research: Optoelectronic system with two spectral channels. Subject of research: Increased probability of detection of a dual system

    A Novel Multimodal Image Fusion Method Using Hybrid Wavelet-based Contourlet Transform

    Various image fusion techniques have been studied to meet the requirements of different applications such as concealed weapon detection, remote sensing, urban mapping, surveillance and medical imaging. Combining two or more images of the same scene or object produces an image that is more useful for the target application. The conventional wavelet transform (WT) has been widely used in the field of image fusion due to its advantages, including its multi-scale framework and its capability of isolating discontinuities at object edges. More recently, the contourlet transform (CT) has been adopted for the image fusion process to overcome the drawbacks of WT with its own advantages. Based on the experimental studies in this dissertation, the contourlet transform is shown to be more suitable than the conventional wavelet transform for performing image fusion. However, the contourlet transform also has major drawbacks. First, the contourlet transform framework does not provide the shift-invariance and structural information of the source images that are necessary to enhance fusion performance. Second, unwanted artifacts are produced during image decomposition via the contourlet transform framework, caused by setting some transform coefficients to zero for nonlinear approximation. In this dissertation, a novel fusion method using a hybrid wavelet-based contourlet transform (HWCT) is proposed to overcome the drawbacks of both the conventional wavelet and contourlet transforms and to enhance fusion performance. In the proposed method, the Daubechies Complex Wavelet Transform (DCxWT) is employed to provide both shift-invariance and structural information, and a Hybrid Directional Filter Bank (HDFB) is used to achieve fewer artifacts and more directional information. DCxWT provides the shift-invariance needed during the fusion process to avoid misregistration problems: without it, source images become misaligned with each other and the fusion results are significantly degraded. DCxWT also provides structural information through the imaginary part of its wavelet coefficients, so more relevant information can be preserved during the fusion process, giving a better representation of the fused image. Moreover, HDFB is applied to the fusion framework in which the source images are decomposed, providing abundant directional information, lower complexity, and reduced artifacts. The proposed method is applied to five categories of multimodal image fusion, and an experimental study is conducted to evaluate its performance in each category using suitable quality metrics. Various datasets, fusion algorithms, pre-processing techniques and quality metrics are used for each fusion category. In every experimental study and analysis, the proposed method produced better fusion results than the conventional wavelet and contourlet transforms; its usefulness as a fusion method has therefore been validated and its high performance verified.
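    The HWCT described above combines DCxWT and HDFB, neither of which ships with standard Python libraries. As a rough illustration of the coefficient-level fusion scheme it generalises, the sketch below uses a conventional real-valued discrete wavelet transform (PyWavelets), averaging the approximation band and selecting detail coefficients by maximum absolute value. The wavelet choice, decomposition level, and fusion rules are assumptions for illustration, not the dissertation's actual algorithm.

    ```python
    import numpy as np
    import pywt

    def wavelet_fuse(img_a: np.ndarray, img_b: np.ndarray,
                     wavelet: str = "db4", level: int = 3) -> np.ndarray:
        """Fuse two co-registered grayscale images in the wavelet domain.

        Approximation coefficients are averaged; detail coefficients are chosen
        by maximum absolute value, which tends to keep the sharper edges from
        either source image. This is a generic stand-in for the HWCT rules.
        """
        coeffs_a = pywt.wavedec2(img_a, wavelet, level=level)
        coeffs_b = pywt.wavedec2(img_b, wavelet, level=level)

        # Low-pass (approximation) band: simple average of the two sources.
        fused = [(coeffs_a[0] + coeffs_b[0]) / 2.0]

        # Detail bands (horizontal, vertical, diagonal) at each level:
        # pick the coefficient with the larger magnitude.
        for (ha, va, da), (hb, vb, db) in zip(coeffs_a[1:], coeffs_b[1:]):
            fused.append(tuple(np.where(np.abs(ca) >= np.abs(cb), ca, cb)
                               for ca, cb in ((ha, hb), (va, vb), (da, db))))

        return pywt.waverec2(fused, wavelet)

    if __name__ == "__main__":
        # Hypothetical inputs standing in for two registered source modalities.
        rng = np.random.default_rng(1)
        a = rng.random((128, 128))
        b = rng.random((128, 128))
        print(wavelet_fuse(a, b).shape)
    ```

    In the dissertation's framework, the real-valued DWT step would be replaced by DCxWT (for shift-invariance and the structural cue in the imaginary coefficients) and the directional analysis by HDFB; the sketch only shows where those components plug into a decompose-fuse-reconstruct pipeline.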