
    Visible and Infrared Image Registration Employing Line-Based Geometric Analysis

    Abstract. We present a new method to register a pair of visible (ViS) and infrared (IR) images. Unlike most existing systems, which align interest points of the two images, we align lines derived from edge pixels: the interest points extracted from the two images are not always identical, but most major edges detected in one image do appear in the other. To solve the feature-matching problem, we emphasize geometric structure alignment of the features (lines) instead of descriptor-based individual feature matching. This is because the image properties and patch statistics of corresponding features can be quite different, especially when comparing a ViS image with a long-wave IR (thermal) image; however, the spatial layout of features remains consistent across both images. The last step of our algorithm computes the image transform matrix, given a minimum of 4 pairs of line correspondences. A comparative evaluation demonstrates the higher accuracy attained by our method compared to state-of-the-art approaches.
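The final estimation step, recovering the transform matrix from at least four line correspondences, can be sketched as a direct linear transform (DLT) over lines. This is a minimal sketch under stated assumptions, not the paper's implementation: under a homography mapping source points as x_dst = H x_src, lines transform as l_src ~ H^T l_dst, so the standard point DLT applies with the roles swapped. Function names are illustrative.

```python
import numpy as np

def line_from_points(p, q):
    """Homogeneous line (a, b, c) through two 2D points, via the cross
    product of their homogeneous coordinates."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def homography_from_lines(src_lines, dst_lines):
    """Estimate the homography H mapping source points to destination points
    from >= 4 line correspondences (lines in general position).

    Lines transform contravariantly: if x_dst = H x_src, then
    l_src ~ H^T l_dst. So we run a point-style DLT for M = H^T, with the
    destination lines as inputs, and transpose the result."""
    A = []
    for l_src, l_dst in zip(src_lines, dst_lines):
        s = np.asarray(l_dst, dtype=float)   # DLT "input" vector
        d = np.asarray(l_src, dtype=float)   # DLT "output" vector, d ~ M s
        z = np.zeros(3)
        # Two independent rows of the cross-product constraint d x (M s) = 0,
        # with M vectorized row-major.
        A.append(np.concatenate([z, -d[2] * s, d[1] * s]))
        A.append(np.concatenate([d[2] * s, z, -d[0] * s]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    M = Vt[-1].reshape(3, 3)                 # null vector ~ vec(H^T)
    H = M.T
    return H / H[2, 2]                       # fix scale and sign
```

With exact synthetic correspondences this recovers H up to numerical precision; in practice the paper's setting would require a robust loop (e.g. RANSAC-style sampling) around this solver to reject mismatched lines.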

    Multi-Atlas based Segmentation of Multi-Modal Brain Images

    Brain image analysis plays a fundamental role in clinical and population-based epidemiological studies. Several brain-disorder studies involve quantitative interpretation of brain scans and in particular require accurate measurement and delineation of tissue volumes in the scans. Automatic segmentation methods have been proposed to provide reliable and accurate labelling within a fully automated procedure. Taking advantage of prior information about the brain's anatomy, provided by an atlas serving as a reference model, can help simplify the labelling process. Atlas-based segmentation becomes problematic if the atlas and the target image are not accurately aligned, or if the atlas does not appropriately represent the anatomical structure/region. The accuracy of the segmentation can be improved by utilising a group of atlases. Employing multiple atlases, however, raises considerable issues in segmenting a new subject's brain image. Registering multiple atlases to the target scan, and fusing labels from the registered atlases, are challenging tasks for a population obtained from different modalities: image-intensity comparisons may no longer be valid, since image brightness can have highly differing meanings in different modalities. The focus is on the problem of multi-modality, and methods are designed and developed to deal with this issue specifically in image registration and label fusion. To deal with multi-modal image registration, two independent approaches are followed. First, a similarity measure is proposed based upon comparing the self-similarity of each of the images to be aligned. Second, two methods are proposed to reduce the multi-modal problem to a mono-modal one by constructing representations that do not rely on the image intensities. These structural representations are based on an undecimated complex wavelet representation in one method, and on a modified entropy-based approach in the other.
To handle cross-modality label fusion, a method is proposed to weight atlases based on atlas-target similarity. The atlas-target similarity is measured by a scale-based comparison, taking advantage of structural features captured from undecimated complex wavelet coefficients. The proposed methods are assessed using simulated and real brain data from computed tomography images and different modes of magnetic resonance images. Experimental results demonstrate the superiority of the proposed methods over classical and state-of-the-art methods.
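The similarity-weighted label fusion described above can be sketched as weighted voting over propagated atlas label maps. This is a minimal illustration, not the thesis implementation: the wavelet-feature similarity is replaced here by a plain normalized cross-correlation as an assumed stand-in, and all function names are illustrative.

```python
import numpy as np

def atlas_weight(atlas_img, target_img):
    """Illustrative atlas-target similarity weight. The thesis derives weights
    from a scale-based comparison of undecimated complex wavelet features;
    a simple normalized cross-correlation stands in as a proxy here."""
    a = atlas_img - atlas_img.mean()
    t = target_img - target_img.mean()
    return float(np.abs((a * t).sum()) /
                 (np.linalg.norm(a) * np.linalg.norm(t) + 1e-12))

def weighted_label_fusion(atlas_labels, weights, n_labels):
    """Fuse label maps (already registered to the target) by weighted voting:
    each atlas casts a vote for its label at every voxel, scaled by that
    atlas's weight, and the label with the largest total vote wins."""
    votes = np.zeros((n_labels,) + atlas_labels[0].shape)
    for labels, w in zip(atlas_labels, weights):
        for lab in range(n_labels):
            votes[lab] += w * (labels == lab)
    return np.argmax(votes, axis=0)
```

Majority voting is the special case of equal weights; similarity-derived weights let atlases that resemble the target, under whatever similarity measure is chosen, dominate the vote.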