
    Automated Image Registration Using Morphological Region of Interest Feature Extraction

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.
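
    As a rough, hedged illustration of the landmark-extraction step described above, the sketch below computes the symmetric spectral information divergence between two pixel spectra. The function name and the example spectra are illustrative assumptions, not taken from the paper, and the full method additionally relies on the scale-orientation morphological profiles and the hierarchical RFM refinement.

```python
import numpy as np

def spectral_information_divergence(x, y, eps=1e-12):
    """Symmetric spectral information divergence between two non-negative spectra.

    The spectra are normalized to behave like probability distributions and the
    two relative entropies D(p||q) + D(q||p) are summed; smaller values mean
    more similar spectral shapes."""
    p = np.asarray(x, dtype=float) + eps
    q = np.asarray(y, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Illustrative comparison of a candidate landmark chip's mean spectrum
# against a reference spectrum (values are made up)
reference = np.array([0.12, 0.30, 0.45, 0.13])
candidate = np.array([0.10, 0.28, 0.50, 0.12])
print(spectral_information_divergence(reference, candidate))
```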

    Patch similarity under non Gaussian noise

    Many tasks in computer vision require matching image parts. While higher-level methods consider image features such as edges or robust descriptors, low-level approaches compare groups of pixels (patches) and provide dense matching. Patch similarity is a key ingredient of many techniques for image registration, stereo vision, change detection or denoising. A fundamental difficulty when comparing two patches from "real" data is to decide whether the differences should be ascribed to noise or to intrinsic dissimilarity. The Gaussian noise assumption leads to the classical definition of patch similarity based on squared intensity differences. When the noise departs from the Gaussian distribution, several similarity criteria have been proposed in the literature. We review seven of these criteria taken from the fields of image processing, detection theory and machine learning. We discuss their theoretical grounding and provide a numerical comparison of their performance under Gamma and Poisson noise.
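
    As a hedged illustration of the contrast drawn above, the sketch below compares the classical Gaussian criterion (negative sum of squared differences) with a generalized likelihood ratio dissimilarity for Poisson-distributed patches. Whether this particular GLR form coincides with one of the seven criteria reviewed in the paper is an assumption, and the function names are illustrative.

```python
import numpy as np

def gaussian_similarity(p1, p2):
    """Classical criterion under Gaussian noise: negative sum of squared differences."""
    d = np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float)
    return -float(np.sum(d * d))

def poisson_glr_dissimilarity(p1, p2):
    """Generalized likelihood ratio statistic for two Poisson-distributed patches.

    Tests whether both patches share the same underlying intensity (same scene
    content) against the alternative that they do not; smaller values indicate
    the differences are more plausibly explained by noise alone.
    Uses the convention 0 * log(0) = 0."""
    x = np.asarray(p1, dtype=float)
    y = np.asarray(p2, dtype=float)
    def xlogx(v):
        return np.where(v > 0, v * np.log(np.where(v > 0, v, 1.0)), 0.0)
    return float(np.sum(xlogx(x) + xlogx(y) - xlogx(x + y) + (x + y) * np.log(2.0)))

# Two noisy observations of the same underlying scene should score
# low dissimilarity under the Poisson criterion
rng = np.random.default_rng(0)
scene = rng.uniform(5, 50, size=(8, 8))
a, b = rng.poisson(scene), rng.poisson(scene)
print(gaussian_similarity(a, b), poisson_glr_dissimilarity(a, b))
```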

    Anatomical landmark based registration of contrast enhanced T1-weighted MR images

    In many problems involving the analysis of multiple images, an image registration step is required. One such problem appears in brain tumor imaging, where baseline and follow-up image volumes from a tumor patient often have to be compared. The registration needed for change detection in brain tumor growth analysis is usually rigid or affine. Contrast-enhanced T1-weighted MR images are widely used in clinical practice for monitoring brain tumors; in this modality, the contours of active tumor cells and the borders and margins of the whole tumor are visually enhanced. In this study, a new technique to register serial contrast-enhanced T1-weighted MR images is presented. The proposed fully automated method is based on five anatomical landmarks: the eyeballs, the nose, the confluence of the sagittal sinus, and the apex of the superior sagittal sinus. After extraction of the anatomical landmarks from the fixed and moving volumes, an affine transformation is estimated by minimizing the sum of squared distances between the landmark coordinates. The result is refined with a surface registration based on head masks confined to the surface of the scalp, as well as to a plane constructed from three of the extracted features. The overall registration is not intensity based and depends only on these invariant structures. Validation studies were performed using both synthetically transformed MRI data and real MRI scans that included several markers placed over the patient's head. In addition, comparison studies against manual landmarks marked by a radiologist, as well as against the results of a typical mutual-information-based method, were carried out to demonstrate the effectiveness of the proposed method.
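
    The landmark-based affine estimation described above can be sketched with an ordinary least-squares fit in homogeneous coordinates. The code below is a minimal illustration of that idea with made-up landmark coordinates; it is not the authors' implementation and omits the surface-registration refinement.

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares 3-D affine transform mapping src landmarks onto dst.

    src, dst: (N, 3) arrays of corresponding landmark coordinates, N >= 4.
    Returns a 4x4 homogeneous matrix minimizing the sum of squared distances
    between the transformed src points and dst."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_h = np.hstack([src, np.ones((len(src), 1))])        # homogeneous (N, 4)
    params, *_ = np.linalg.lstsq(src_h, dst, rcond=None)    # (4, 3) affine parameters
    T = np.eye(4)
    T[:3, :] = params.T
    return T

# Five made-up landmark positions (mm) in the moving volume and their
# counterparts in the fixed volume (an exact affine mapping, for illustration)
moving = np.array([[30, 42, 10], [70, 42, 10], [50, 20, 5],
                   [50, 95, 60], [50, 85, 120]], dtype=float)
A = np.array([[1.00, 0.02, 0.00], [-0.02, 1.00, 0.00], [0.00, 0.00, 1.01]])
fixed = moving @ A.T + np.array([2.0, -1.5, 0.5])

T = fit_affine_3d(moving, fixed)
aligned = np.hstack([moving, np.ones((5, 1))]) @ T.T
print(np.abs(aligned[:, :3] - fixed).max())   # ~0: landmarks brought into correspondence
```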

    Subspace-Based Holistic Registration for Low-Resolution Facial Images

    Subspace-based holistic registration is introduced as an alternative to landmark-based face registration, which performs poorly on low-resolution images such as those obtained in camera surveillance applications. The proposed registration method finds the alignment by maximizing the similarity score between a probe and a gallery image. We use a novel probabilistic framework for both user-independent and user-specific face registration. The similarity is calculated using the probability that the face image is correctly aligned in a face subspace, but we additionally take into account the probability that the face is misaligned, based on the residual error in the dimensions perpendicular to the face subspace. We perform extensive experiments on the FRGCv2 database to evaluate the impact that face registration methods have on face recognition. Subspace-based holistic registration on low-resolution images can improve face recognition in comparison with landmark-based registration on high-resolution images. The performance of the tested face recognition methods after subspace-based holistic registration on a low-resolution version of the FRGC database is similar to that after manual registration.
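
    A minimal sketch of the kind of score described above, assuming a PCA face subspace: an in-subspace Mahalanobis term models how well the crop matches the face model, while the residual term accounts for the energy in the dimensions perpendicular to the subspace. The names and the residual-variance parameter are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def alignment_score(crop, mean, components, variances, resid_var):
    """Score a candidate face crop against a PCA face subspace.

    crop:       flattened candidate crop, shape (d,)
    mean:       subspace mean, shape (d,)
    components: orthonormal principal axes, shape (k, d)
    variances:  variances along the k axes, shape (k,)
    resid_var:  assumed variance of the residual off the subspace

    Returns a negated penalty combining the Mahalanobis distance inside the
    subspace with the reconstruction error perpendicular to it; the candidate
    alignment that maximizes this score is taken as the registration result."""
    x = np.asarray(crop, dtype=float) - mean
    coeffs = components @ x                          # projection into the subspace
    in_subspace = np.sum(coeffs**2 / variances)      # distance within the subspace
    residual = np.sum(x**2) - np.sum(coeffs**2)      # energy perpendicular to it
    return -(in_subspace + residual / resid_var)

# Registration then amounts to searching over candidate alignments, e.g.:
# best_crop = max(candidate_crops, key=lambda c: alignment_score(c, mu, P, lam, s2))
```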

    Saliency-guided integration of multiple scans

    We present a novel method...

    Learning to Detect and Track Cells for Quantitative Analysis of Time-Lapse Microscopic Image Sequences

    © 2015 IEEE. Studying the behaviour of cells using time-lapse microscopic imaging requires automated processing pipelines that enable quantitative analysis of a large number of cells. We propose a pipeline based on state-of-the-art methods for background motion compensation, cell detection, and tracking, which are integrated into a novel semi-automated, learning-based analysis tool. Motion compensation is performed by employing an efficient nonlinear registration method based on powerful discrete graph optimisation. Robust detection and tracking of cells is based on classifier learning, which only requires a small number of manual annotations. Cell motion trajectories are generated using a recent global data association method and linear programming. Our approach is robust to the presence of significant motion and imaging artifacts. Promising results are presented on different sets of in-vivo fluorescent microscopic image sequences.
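
    As a simplified stand-in for the linking step described above, the sketch below matches cell detections between two consecutive frames by minimizing the total squared centroid distance with the Hungarian algorithm. The paper itself uses a global data association method with linear programming, so this only illustrates the basic linking idea; names and the distance gate are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_detections(prev_centroids, curr_centroids, max_dist=25.0):
    """Frame-to-frame linking of cell detections by centroid proximity.

    Solves a one-to-one assignment minimizing the total squared distance and
    rejects links longer than max_dist (a crude gate on plausible cell motion).
    Returns a list of (prev_index, curr_index) matches."""
    prev_c = np.asarray(prev_centroids, dtype=float)
    curr_c = np.asarray(curr_centroids, dtype=float)
    cost = np.sum((prev_c[:, None, :] - curr_c[None, :, :]) ** 2, axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist**2]

# Three cells tracked from frame t to frame t+1; the third detection has
# jumped too far and is left unmatched (e.g. a new cell or a false detection)
print(link_detections([[10, 10], [40, 12], [70, 30]],
                      [[12, 11], [38, 15], [90, 90]]))
```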