
    COMPARING ROTATION-ROBUST MECHANISMS IN LOCAL FEATURE MATCHING: HAND-CRAFTED VS. DEEP LEARNING ALGORITHMS

    The objective of this research is to compare hand-crafted feature matching algorithms with deep learning-based counterparts under rotational variance. Hand-crafted algorithms were tested using FLANN (Fast Library for Approximate Nearest Neighbors) as the matcher and RANSAC (Random Sample Consensus) for outlier detection and elimination, improving the accuracy of the results. Surprisingly, the experiments revealed that hand-crafted algorithms can yield comparable or superior results to deep learning-based algorithms when exposed to rotational variance. Notably, on horizontally flipped images the deep learning-based algorithms held a distinct advantage, producing significantly better results than their hand-crafted counterparts. Despite the technological advances behind deep learning-based algorithms, the study found that hand-crafted algorithms such as AKAZE and AKAZE-SIFT compete effectively with them, particularly under rotational variance. The same competitiveness was not observed in the horizontally flipped cases, where hand-crafted algorithms produced suboptimal results; deep learning algorithms such as DELF achieved superior results and accuracy there. The research underscores that the choice between hand-crafted and deep learning-based algorithms depends on the specific use case: hand-crafted algorithms are competitive under rotational variance, while deep learning-based algorithms, exemplified by DELF, excel on horizontally flipped images, showcasing the distinct advantages each approach holds in different contexts.
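    As a rough illustration of the kind of pipeline evaluated here, the following OpenCV sketch detects AKAZE features, matches them with FLANN, and rejects outliers with RANSAC. The file names, the LSH index parameters, and the 0.7 ratio-test threshold are illustrative assumptions, not values taken from the study.

    import cv2
    import numpy as np

    # Two views of the same scene; "scene_rotated.jpg" is a placeholder
    # standing in for a rotated copy of the reference image.
    img1 = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("scene_rotated.jpg", cv2.IMREAD_GRAYSCALE)

    # AKAZE keypoints with binary descriptors.
    akaze = cv2.AKAZE_create()
    kp1, des1 = akaze.detectAndCompute(img1, None)
    kp2, des2 = akaze.detectAndCompute(img2, None)

    # FLANN with an LSH index, suitable for binary descriptors.
    index_params = dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1)
    flann = cv2.FlannBasedMatcher(index_params, dict(checks=50))
    matches = flann.knnMatch(des1, des2, k=2)

    # Lowe's ratio test keeps only distinctive matches (LSH can return
    # fewer than two neighbors, so incomplete pairs are skipped).
    good = [m for m, n in (p for p in matches if len(p) == 2)
            if m.distance < 0.7 * n.distance]

    # RANSAC rejects remaining outliers while estimating a homography
    # (requires at least four good matches).
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(f"{int(mask.sum())} inliers out of {len(good)} ratio-test matches")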

    MedGAN: Medical Image Translation using GANs

    Image-to-image translation is considered a new frontier in the field of medical image analysis, with numerous potential applications. However, many recent approaches offer individualized solutions based on specialized task-specific architectures or require refinement through non-end-to-end training. In this paper, we propose a new framework, named MedGAN, for medical image-to-image translation which operates on the image level in an end-to-end manner. MedGAN builds upon recent advances in the field of generative adversarial networks (GANs) by merging the adversarial framework with a new combination of non-adversarial losses. We utilize a discriminator network as a trainable feature extractor which penalizes the discrepancy between the translated medical images and the desired modalities. Moreover, style-transfer losses are utilized to match the textures and fine structures of the desired target images to the translated images. Additionally, we present a new generator architecture, titled CasNet, which enhances the sharpness of the translated medical outputs through progressive refinement via encoder-decoder pairs. Without any application-specific modifications, we apply MedGAN to three different tasks: PET-CT translation, correction of MR motion artefacts, and PET image denoising. Perceptual analysis by radiologists and quantitative evaluations illustrate that MedGAN outperforms other existing translation approaches.
    Comment: 16 pages, 8 figures
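    As a minimal sketch of how such a combined objective can be assembled (an illustration under stated assumptions, not the authors' implementation: the discriminator feature lists, the loss weights l_perc and l_style, and the BCE adversarial form are all assumptions), in PyTorch:

    import torch
    import torch.nn.functional as F

    def gram(feat):
        # Gram matrix of a feature map, the usual basis of a style loss.
        b, c, h, w = feat.shape
        f = feat.view(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def combined_loss(disc_feats_fake, disc_feats_real, adv_logits,
                      l_perc=1.0, l_style=1.0):
        # Adversarial term: the generator wants translated images
        # scored as real by the discriminator.
        adv = F.binary_cross_entropy_with_logits(
            adv_logits, torch.ones_like(adv_logits))
        # Perceptual term: the discriminator acts as a trainable feature
        # extractor; penalize feature discrepancies layer by layer.
        perc = sum(F.l1_loss(f, r)
                   for f, r in zip(disc_feats_fake, disc_feats_real))
        # Style term: match Gram matrices to align textures and
        # fine structures of target and translated images.
        style = sum(F.l1_loss(gram(f), gram(r))
                    for f, r in zip(disc_feats_fake, disc_feats_real))
        return adv + l_perc * perc + l_style * style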