6 research outputs found

    Π€ΠΎΡ€ΠΌΠΈΡ€ΠΎΠ²Π°Π½ΠΈΠ΅ комплСксного изобраТСния Π·Π΅ΠΌΠ½ΠΎΠΉ повСрхности Π½Π° основС кластСризации пиксСлСй Π»ΠΎΠΊΠ°Ρ†ΠΈΠΎΠ½Π½Ρ‹Ρ… снимков Π² ΠΌΠ½ΠΎΠ³ΠΎΠΏΠΎΠ·ΠΈΡ†ΠΈΠΎΠ½Π½ΠΎΠΉ Π±ΠΎΡ€Ρ‚ΠΎΠ²ΠΎΠΉ систСмС

    Get PDF
    ΠŸΡ€Π΅Π΄Π»Π°Π³Π°Π΅Ρ‚ΡΡ способ комплСксирования разноракурсных ΠΈΠ·ΠΎΠ±Ρ€Π°ΠΆΠ΅Π½ΠΈΠΉ с ΠΏΡ€ΠΈΠΌΠ΅Π½Π΅Π½ΠΈΠ΅ΠΌ Π°Π»Π³ΠΎΡ€ΠΈΡ‚ΠΌΠ° ΠΊΠ²Π°Π·ΠΈΠΎΠΏΡ‚ΠΈΠΌΠ°Π»ΡŒΠ½ΠΎΠΉ кластСризации пиксСлСй ΠΊ исходным снимкам Π·Π΅ΠΌΠ½ΠΎΠΉ повСрхности. Π˜ΡΡ…ΠΎΠ΄Π½Ρ‹Π΅ разноракурсныС изобраТСния, сформированныС Π±ΠΎΡ€Ρ‚ΠΎΠ²ΠΎΠΉ Π°ΠΏΠΏΠ°Ρ€Π°Ρ‚ΡƒΡ€ΠΎΠΉ ΠΌΠ½ΠΎΠ³ΠΎΠΏΠΎΠ·ΠΈΡ†ΠΈΠΎΠ½Π½Ρ‹Ρ… Π»ΠΎΠΊΠ°Ρ†ΠΈΠΎΠ½Π½Ρ‹Ρ… систСм, ΡΠΎΡΡ‚Ρ‹ΠΊΠΎΠ²Ρ‹Π²Π°ΡŽΡ‚ΡΡ Π² Π΅Π΄ΠΈΠ½Ρ‹ΠΉ составной снимок ΠΈ ΠΏΡ€ΠΈ ΠΏΠΎΠΌΠΎΡ‰ΠΈ высокоскоростного Π°Π»Π³ΠΎΡ€ΠΈΡ‚ΠΌΠ° ΠΊΠ²Π°Π·ΠΈΠΎΠΏΡ‚ΠΈΠΌΠ°Π»ΡŒΠ½ΠΎΠΉ кластСризации пиксСлСй Ρ€Π΅Π΄ΡƒΡ†ΠΈΡ€ΡƒΡŽΡ‚ΡΡ Π΄ΠΎ Π½Π΅ΡΠΊΠΎΠ»ΡŒΠΊΠΈΡ… Ρ†Π²Π΅Ρ‚ΠΎΠ² с сохранСниСм Ρ…Π°Ρ€Π°ΠΊΡ‚Π΅Ρ€Π½Ρ‹Ρ… Π³Ρ€Π°Π½ΠΈΡ†. ΠžΡΠΎΠ±Π΅Π½Π½ΠΎΡΡ‚ΡŒ Π°Π»Π³ΠΎΡ€ΠΈΡ‚ΠΌΠ° ΠΊΠ²Π°Π·ΠΈΠΎΠΏΡ‚ΠΈΠΌΠ°Π»ΡŒΠ½ΠΎΠΉ кластСризации Π·Π°ΠΊΠ»ΡŽΡ‡Π°Π΅Ρ‚ΡΡ Π² Π³Π΅Π½Π΅Ρ€Π°Ρ†ΠΈΠΈ сСрии Ρ€Π°Π·Π±ΠΈΠ΅Π½ΠΈΠΉ с постСпСнно ΡƒΠ²Π΅Π»ΠΈΡ‡ΠΈΠ²Π°ΡŽΡ‰Π΅ΠΉΡΡ Π΄Π΅Ρ‚Π°Π»ΠΈΠ·Π°Ρ†ΠΈΠ΅ΠΉ Π·Π° счСт ΠΏΠ΅Ρ€Π΅ΠΌΠ΅Π½Π½ΠΎΠ³ΠΎ числа кластСров. Π­Ρ‚Π° ΠΎΡΠΎΠ±Π΅Π½Π½ΠΎΡΡ‚ΡŒ позволяСт Π²Ρ‹Π±Ρ€Π°Ρ‚ΡŒ подходящиС разбиСния ΠΏΠ°Ρ€ состыкованных ΠΈΠ·ΠΎΠ±Ρ€Π°ΠΆΠ΅Π½ΠΈΠΉ ΠΈΠ· сСрии сгСнСрированных. На ΠΏΠ°Ρ€Π΅ ΠΈΠ·ΠΎΠ±Ρ€Π°ΠΆΠ΅Π½ΠΈΠΉ ΠΈΠ· Π²Ρ‹Π±Ρ€Π°Π½Π½ΠΎΠ³ΠΎ разбиСния состыкованного снимка осущСствляСтся поиск ΠΎΠΏΠΎΡ€Π½Ρ‹Ρ… Ρ‚ΠΎΡ‡Π΅ΠΊ Π²Ρ‹Π΄Π΅Π»Π΅Π½Π½Ρ‹Ρ… ΠΊΠΎΠ½Ρ‚ΡƒΡ€ΠΎΠ². Для этих Ρ‚ΠΎΡ‡Π΅ΠΊ опрСдСляСтся Ρ„ΡƒΠ½ΠΊΡ†ΠΈΠΎΠ½Π°Π»ΡŒΠ½ΠΎΠ΅ ΠΏΡ€Π΅ΠΎΠ±Ρ€Π°Π·ΠΎΠ²Π°Π½ΠΈΠ΅ ΠΈ послС Π΅Π³ΠΎ примСнСния ΠΊ исходным снимкам осущСствляСтся ΠΎΡ†Π΅Π½ΠΊΠ° стСпСни коррСляции комплСксированного изобраТСния. Как ΠΏΠΎΠ»ΠΎΠΆΠ΅Π½ΠΈΠ΅ ΠΎΠΏΠΎΡ€Π½Ρ‹Ρ… Ρ‚ΠΎΡ‡Π΅ΠΊ ΠΊΠΎΠ½Ρ‚ΡƒΡ€Π°, Ρ‚Π°ΠΊ ΠΈ само искомоС Ρ„ΡƒΠ½ΠΊΡ†ΠΈΠΎΠ½Π°Π»ΡŒΠ½ΠΎΠ΅ ΠΏΡ€Π΅ΠΎΠ±Ρ€Π°Π·ΠΎΠ²Π°Π½ΠΈΠ΅ уточняСтся Π΄ΠΎ Ρ‚Π΅Ρ… ΠΏΠΎΡ€, ΠΏΠΎΠΊΠ° ΠΎΡ†Π΅Π½ΠΊΠ° качСства комплСксирования Π½Π΅ Π±ΡƒΠ΄Π΅Ρ‚ ΠΏΡ€ΠΈΠ΅ΠΌΠ»Π΅ΠΌΠΎΠΉ. Π’ΠΈΠ΄ Ρ„ΡƒΠ½ΠΊΡ†ΠΈΠΎΠ½Π°Π»ΡŒΠ½ΠΎΠ³ΠΎ прСобразования подбираСтся ΠΏΠΎ Ρ€Π΅Π΄ΡƒΡ†ΠΈΡ€ΠΎΠ²Π°Π½Π½Ρ‹ΠΌ ΠΏΠΎ Ρ†Π²Π΅Ρ‚Ρƒ изобраТСниям, Π° Π·Π°Ρ‚Π΅ΠΌ примСняСтся ΠΊ исходным снимкам. Π­Ρ‚ΠΎΡ‚ процСсс повторяСтся для кластСризованных ΠΈΠ·ΠΎΠ±Ρ€Π°ΠΆΠ΅Π½ΠΈΠΉ с большСй Π΄Π΅Ρ‚Π°Π»ΠΈΠ·Π°Ρ†ΠΈΠ΅ΠΉ Π² Ρ‚ΠΎΠΌ случаС, Ссли ΠΎΡ†Π΅Π½ΠΊΠ° качСства комплСксирования Π½Π΅ являСтся ΠΏΡ€ΠΈΠ΅ΠΌΠ»Π΅ΠΌΠΎΠΉ. ЦСлью настоящСго исслСдования являСтся Ρ€Π°Π·Ρ€Π°Π±ΠΎΡ‚ΠΊΠ° способа, ΠΏΠΎΠ·Π²ΠΎΠ»ΡΡŽΡ‰Π΅Π³ΠΎ ΡΡ„ΠΎΡ€ΠΌΠΈΡ€ΠΎΠ²Π°Ρ‚ΡŒ комплСксноС ΠΈΠ·ΠΎΠ±Ρ€Π°ΠΆΠ΅Π½ΠΈΠ΅ Π·Π΅ΠΌΠ½ΠΎΠΉ повСрхности ΠΈΠ· Ρ€Π°Π·Π½ΠΎΡ„ΠΎΡ€ΠΌΠ°Ρ‚Π½Ρ‹Ρ… ΠΈ Ρ€Π°Π·Π½ΠΎΡ€ΠΎΠ΄Π½Ρ‹Ρ… снимков. Π’ Ρ€Π°Π±ΠΎΡ‚Π΅ прСдставлСны ΡΠ»Π΅Π΄ΡƒΡŽΡ‰ΠΈΠ΅ особСнности способа комплСксирования. ΠŸΠ΅Ρ€Π²Π°Ρ ΠΎΡΠΎΠ±Π΅Π½Π½ΠΎΡΡ‚ΡŒ Π·Π°ΠΊΠ»ΡŽΡ‡Π°Π΅Ρ‚ΡΡ Π² ΠΎΠ±Ρ€Π°Π±ΠΎΡ‚ΠΊΠ΅ Π΅Π΄ΠΈΠ½ΠΎΠ³ΠΎ составного изобраТСния ΠΈΠ· ΠΏΠ°Ρ€Ρ‹ состыкованных исходных снимков Π°Π»Π³ΠΎΡ€ΠΈΡ‚ΠΌΠΎΠΌ кластСризации пиксСлСй, Ρ‡Ρ‚ΠΎ позволяСт ΠΏΠΎΠ΄ΠΎΠ±Π½Ρ‹ΠΌ ΠΎΠ±Ρ€Π°Π·ΠΎΠΌ Π²Ρ‹Π΄Π΅Π»ΠΈΡ‚ΡŒ ΠΎΠ΄ΠΈΠ½Π°ΠΊΠΎΠ²Ρ‹Π΅ области Π½Π° Π΅Π³ΠΎ Ρ€Π°Π·Π»ΠΈΡ‡Π½Ρ‹Ρ… частях. Вторая ΠΎΡΠΎΠ±Π΅Π½Π½ΠΎΡΡ‚ΡŒ Π·Π°ΠΊΠ»ΡŽΡ‡Π°Π΅Ρ‚ΡΡ Π² ΠΎΠΏΡ€Π΅Π΄Π΅Π»Π΅Π½ΠΈΠΈ Ρ„ΡƒΠ½ΠΊΡ†ΠΈΠΎΠ½Π°Π»ΡŒΠ½ΠΎΠ³ΠΎ прСобразования ΠΏΠΎ Π²Ρ‹Π΄Π΅Π»Π΅Π½Π½Ρ‹ΠΌ Ρ‚ΠΎΡ‡ΠΊΠ°ΠΌ ΠΊΠΎΠ½Ρ‚ΡƒΡ€Π° Π½Π° ΠΎΠ±Ρ€Π°Π±ΠΎΡ‚Π°Π½Π½ΠΎΠΉ ΠΏΠ°Ρ€Π΅ кластСризованных снимков, ΠΊΠΎΡ‚ΠΎΡ€ΠΎΠ΅ ΠΈ примСняСтся ΠΊ исходным изобраТСниям для ΠΈΡ… комплСксирования. Π’ Ρ€Π°Π±ΠΎΡ‚Π΅ прСдставлСны Ρ€Π΅Π·ΡƒΠ»ΡŒΡ‚Π°Ρ‚Ρ‹ формирования комплСксного изобраТСния ΠΊΠ°ΠΊ ΠΏΠΎ ΠΎΠ΄Π½ΠΎΡ€ΠΎΠ΄Π½Ρ‹ΠΌ (оптичСским) снимкам, Ρ‚Π°ΠΊ ΠΈ ΠΏΠΎ Ρ€Π°Π·Π½ΠΎΡ€ΠΎΠ΄Π½Ρ‹ΠΌ (Ρ€Π°Π΄ΠΈΠΎΠ»ΠΎΠΊΠ°Ρ†ΠΈΠΎΠ½Π½Ρ‹ΠΌ ΠΈ оптичСским) снимкам. 
ΠžΡ‚Π»ΠΈΡ‡ΠΈΡ‚Π΅Π»ΡŒΠ½ΠΎΠΉ Ρ‡Π΅Ρ€Ρ‚ΠΎΠΉ ΠΏΡ€Π΅Π΄Π»Π°Π³Π°Π΅ΠΌΠΎΠ³ΠΎ способа являСтся ΡƒΠ»ΡƒΡ‡ΡˆΠ΅Π½ΠΈΠ΅ качСства формирования, ΠΏΠΎΠ²Ρ‹ΡˆΠ΅Π½ΠΈΠ΅ точности ΠΈ информативности ΠΈΡ‚ΠΎΠ³ΠΎΠ²ΠΎΠ³ΠΎ комплСксного изобраТСния Π·Π΅ΠΌΠ½ΠΎΠΉ повСрхности

    Π€ΠΎΡ€ΠΌΠΈΡ€ΠΎΠ²Π°Π½ΠΈΠ΅ комплСксного изобраТСния Π·Π΅ΠΌΠ½ΠΎΠΉ повСрхности Π½Π° основС кластСризации пиксСлСй Π»ΠΎΠΊΠ°Ρ†ΠΈΠΎΠ½Π½Ρ‹Ρ… снимков Π² ΠΌΠ½ΠΎΠ³ΠΎΠΏΠΎΠ·ΠΈΡ†ΠΈΠΎΠ½Π½ΠΎΠΉ Π±ΠΎΡ€Ρ‚ΠΎΠ²ΠΎΠΉ систСмС

    Get PDF
    The paper proposes a method for fusing multi-angle images that applies a quasi-optimal pixel-clustering algorithm to the original images of the land surface. The original multi-angle images, formed by the onboard equipment of multi-position location systems, are stitched into a single composite image and, using a high-speed quasi-optimal pixel-clustering algorithm, are reduced to a few colors while preserving characteristic boundaries. A distinctive feature of the quasi-optimal pixel-clustering algorithm is that it generates a series of partitions of gradually increasing detail by varying the number of clusters. This makes it possible to choose a suitable partition of a pair of stitched images from the generated series. On the pair of images from the selected partition of the stitched image, reference points of the extracted contours are located. A functional transformation is determined for these points, and after it is applied to the original images, the degree of correlation of the fused image is estimated. Both the positions of the contour reference points and the sought functional transformation itself are refined until the estimate of the fusion quality becomes acceptable. The type of functional transformation is selected on the color-reduced images and then applied to the original images. If the fusion-quality estimate is not acceptable, the process is repeated on clustered images of greater detail. The purpose of the present study is to develop a method for synthesizing a fused image of the land surface from images of different formats and different modalities. The paper presents the following features of the fusion method. The first feature is that a single composite image, assembled from a pair of stitched source images, is processed by the pixel-clustering algorithm, which makes it possible to extract identical areas in its different parts in a consistent way. The second feature is that the functional transformation is determined from the extracted contour reference points on the processed pair of clustered images and is then applied to the original images to fuse them. The paper presents results of fused-image synthesis both for homogeneous (optical) images and for heterogeneous (radar and optical) images. A distinctive property of the proposed method is the improved quality of synthesis and the increased accuracy and information content of the final fused image of the land surface.

    Normalized Cut-based Saliency Detection by Adaptive Multi-Level Region Merging

    No full text
    Existing salient object detection models favor over-segmented regions upon which saliency is computed. Such local regions are less effective at representing objects holistically and weaken the emphasis of entire salient objects. As a result, existing methods often fail to highlight an entire object against a complex background. Towards better grouping of objects and background, in this paper we consider graph cuts, more specifically the Normalized graph cut (Ncut), for saliency detection. Since the Ncut partitions a graph in a normalized energy-minimization fashion, the resulting eigenvectors of the Ncut contain good cluster information that can group visual content. Motivated by this, we directly induce saliency maps via eigenvectors of the Ncut, contributing to accurate saliency estimation of visual clusters. We implement the Ncut on a graph derived from a moderate number of superpixels. This graph captures both intrinsic color and edge information of the image data. Starting from the superpixels, an adaptive multi-level region-merging scheme is employed to seek such cluster information from the Ncut eigenvectors. With saliency measures developed for each merged region, encouraging performance is obtained after cross-level integration. Experiments comparing with 13 existing methods on four benchmark datasets, including MSRA-1000, SOD, SED and CSSD, show that the proposed method, Ncut saliency (NCS), yields uniform object enhancement and achieves performance comparable to or better than the state-of-the-art methods
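
As a rough sketch of the core computation this abstract describes, the code below extracts the smallest eigenvectors of the normalized Laplacian over a superpixel graph (the relaxed Ncut). Only color affinities are used here, whereas the paper's graph also encodes edge cues, and the region-merging and saliency measures are omitted; all parameters are hypothetical.

```python
# Sketch: Ncut-style eigenvectors over a SLIC superpixel graph.
# Assumes an RGB image scaled to [0, 1].
import numpy as np
from skimage.segmentation import slic
from scipy.sparse.linalg import eigsh

def ncut_eigenvectors(image, n_superpixels=200, sigma=0.1, k=4):
    labels = slic(image, n_segments=n_superpixels, compactness=10, start_label=0)
    n = labels.max() + 1
    means = np.array([image[labels == i].mean(axis=0) for i in range(n)])  # mean colors
    d2 = ((means[:, None, :] - means[None, :, :]) ** 2).sum(-1)  # pairwise distances
    W = np.exp(-d2 / (2 * sigma ** 2))            # color-affinity matrix
    D_inv_sqrt = np.diag(W.sum(axis=1) ** -0.5)
    L = np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    vals, vecs = eigsh(L, k=k, which='SM')        # smallest eigenvectors = Ncut relaxation
    return labels, vecs  # vecs[labels] maps cluster cues back to pixels
```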

    Visual saliency prediction based on deep learning

    Get PDF
    The Human Visual System (HVS) has the ability to focus on specific parts of a scene rather than the whole image. Human eye movement is also one of the primary functions used in our daily lives that helps us understand our surroundings. This phenomenon is one of the most active research topics in the computer vision and neuroscience fields. The outcomes achieved by neural network methods in a variety of tasks have highlighted their ability to predict visual saliency. In particular, deep learning models have been used for visual saliency prediction. In this thesis, a deep learning method based on a transfer learning strategy is proposed (Chapter 2), wherein visual features in the convolutional layers are extracted from raw images to predict visual saliency (i.e., a saliency map). Specifically, the proposed model uses the VGG-16 network (a pre-trained CNN model) for semantic segmentation. The proposed model is applied to several datasets, including TORONTO, MIT300, MIT1003, and DUT-OMRON, to illustrate its efficiency. The results of the proposed model are then quantitatively and qualitatively compared to classic and state-of-the-art deep learning models. In Chapter 3, I specifically investigate the performance of five state-of-the-art deep neural networks (VGG-16, ResNet-50, Xception, InceptionResNet-v2, and MobileNet-v2) for the task of visual saliency prediction. The five deep learning models were trained on the SALICON dataset and used to predict visual saliency maps on four standard datasets, namely TORONTO, MIT300, MIT1003, and DUT-OMRON. The results indicate that the ResNet-50 model outperforms the other four and provides a visual saliency map that is very close to human performance. In Chapter 4, a novel deep learning model based on a Fully Convolutional Network (FCN) architecture is proposed. The proposed model is trained in an end-to-end style and designed to predict visual saliency. The model is based on an encoder-decoder structure and includes two types of modules. The first has three stages of inception modules to improve multi-scale derivation and enhance contextual information. The second includes one stage of a residual module to provide more accurate recovery of information and to simplify optimization. The entire model is trained from scratch to extract distinguishing features, and a data augmentation technique is used to create variations in the images. The proposed model is evaluated on several benchmark datasets, including MIT300, MIT1003, TORONTO, and DUT-OMRON. The quantitative and qualitative analyses demonstrate that the proposed model achieves superior performance in predicting visual saliency. In Chapter 5, I study the possibility of using deep learning techniques for Salient Object Detection (SOD), since this task is related to the problem of visual saliency prediction. In this work, the capabilities of ten well-known pre-trained models for semantic segmentation, including FCNs, VGGs, ResNets, MobileNet-v2, Xception, and InceptionResNet-v2, are investigated. These models were trained on the ImageNet dataset, fine-tuned on the MSRA-10K dataset, and evaluated using other public datasets, such as ECSSD, MSRA-B, DUTS, and THUR15k. The results illustrate the superiority of ResNet50 and ResNet18, which achieve Mean Absolute Errors (MAE) of approximately 0.93 and 0.92, respectively, compared to other well-known FCN models.
Finally, conclusions are drawn and possible future work is discussed in Chapter 6
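
The thesis' exact architectures are not given in this abstract; the sketch below illustrates the Chapter 2 idea of transfer learning for saliency prediction: a pre-trained, frozen VGG-16 encoder with a small decoder that regresses a saliency map. The decoder widths, loss, and training call are hypothetical.

```python
# Sketch: transfer-learning saliency predictor with a frozen VGG-16 encoder.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

def build_saliency_model(input_shape=(224, 224, 3)):
    encoder = VGG16(include_top=False, weights='imagenet', input_shape=input_shape)
    encoder.trainable = False               # reuse ImageNet features unchanged
    x = encoder.output                      # 7x7x512 feature map for 224x224 input
    for filters in (256, 128, 64, 32, 16):  # hypothetical decoder widths, 7 -> 224
        x = layers.Conv2DTranspose(filters, 3, strides=2, padding='same',
                                   activation='relu')(x)
    saliency = layers.Conv2D(1, 1, activation='sigmoid')(x)  # per-pixel saliency
    return Model(encoder.input, saliency)

model = build_saliency_model()
model.compile(optimizer='adam', loss='binary_crossentropy')
# model.fit(images, fixation_maps, ...)  # e.g., trained on SALICON-style data
```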