
    M\"obius Invariants of Shapes and Images

    Identifying when different images are of the same object despite changes caused by imaging technologies, or processes such as growth, has many applications in fields such as computer vision and biological image analysis. One approach to this problem is to identify the group of possible transformations of the object and to find invariants to the action of that group, meaning that the object has the same values of the invariants despite the action of the group. In this paper we study the invariants of planar shapes and images under the Möbius group PSL(2, ℂ), which arises in the conformal camera model of vision and may also correspond to neurological aspects of vision, such as grouping of lines and circles. We survey properties of invariants that are important in applications, and the known Möbius invariants, and then develop an algorithm by which shapes can be recognised that is Möbius- and reparametrization-invariant, numerically stable, and robust to noise. We demonstrate the efficacy of this new invariant approach on sets of curves, and then develop a Möbius-invariant signature of grey-scale images.
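    The paper's own invariant construction is not reproduced here, but the group action it studies is easy to state: a Möbius transformation acts on the plane, viewed as the complex numbers, by z -> (az + b)/(cz + d) with ad - bc nonzero, and maps circles and lines to circles and lines. A minimal sketch of that action on a sampled planar curve follows; parameter values are illustrative only.

```python
# A minimal sketch of the Mobius group action on a planar curve represented as
# complex numbers; this illustrates the transformation group, not the paper's
# invariant signature. Parameter values below are illustrative.
import numpy as np

def mobius(points, a, b, c, d):
    """Apply z -> (a*z + b) / (c*z + d) to an array of complex points."""
    if np.isclose(a * d - b * c, 0):
        raise ValueError("a*d - b*c must be nonzero for a Mobius transformation")
    return (a * points + b) / (c * points + d)

# Example: a unit circle sampled at 200 points; its image is again a circle (or a line).
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.exp(1j * t)
warped = mobius(circle, a=1.0, b=0.5j, c=0.2, d=1.0)
```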

    Geometric and photometric affine invariant image registration

    This thesis aims to present a solution to the correspondence problem for the registration of wide-baseline images taken from uncalibrated cameras. We propose an affine invariant descriptor that combines the geometry and photometry of the scene to find correspondences between both views. The geometric affine invariant component of the descriptor is based on the affine arc-length metric, whereas the photometry is analysed by invariant colour moments. A graph structure represents the spatial distribution of the primitive features: nodes correspond to detected high-curvature points, whereas arcs represent connectivity established by extracted contours. After matching, we refine the search for correspondences by using a maximum likelihood robust algorithm. We have evaluated the system over synthetic and real data. The method is susceptible to the propagation of errors introduced by approximations in the system.
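    The affine arc-length mentioned above has a simple closed form, ds = |x'y'' - y'x''|^(1/3) dt, which is preserved by equi-affine maps of the plane. A minimal numerical sketch for a sampled contour follows; this shows only the invariant metric, not the thesis's full geometric-photometric descriptor.

```python
# A minimal sketch of the affine arc-length ds = |x'y'' - y'x''|^(1/3) dt for a
# discretely sampled planar contour. Illustrative only; not the thesis's descriptor.
import numpy as np

def affine_arc_length(x, y):
    """Cumulative affine arc-length of a sampled curve (x(t), y(t))."""
    dx, dy = np.gradient(x), np.gradient(y)      # first derivatives w.r.t. the parameter
    ddx, ddy = np.gradient(dx), np.gradient(dy)  # second derivatives
    density = np.abs(dx * ddy - dy * ddx) ** (1.0 / 3.0)
    return np.cumsum(density)                    # cumulative sum approximates the integral

# Equi-affine maps (determinant 1) leave this quantity unchanged up to sampling error;
# normalising by the total affine length extends the invariance to general affine maps.
t = np.linspace(0, 2 * np.pi, 400)
sigma = affine_arc_length(2 * np.cos(t), np.sin(t))   # an ellipse as a test curve
```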

    A New Affine Invariant Image Transform Based on Ridgelets


    Shape localization, quantification and correspondence using Region Matching Algorithm

    We propose a method for local, region-based matching of planar shapes, especially shapes that change over time. This problem is fundamental to medical imaging, specifically the comparison of mammograms over time. The method is based on the non-emergence and non-enhancement of maxima, as well as the causality principle of integral invariant scale space. The core idea of our Region Matching Algorithm (RMA) is to divide a shape into a number of “salient” regions and then to compare all such regions for local similarity in order to quantitatively identify new growths or partial/complete occlusions. The algorithm has several advantages over commonly used methods for shape comparison of segmented regions. First, it provides improved key-point alignment for optimal shape correspondence. Second, it identifies localized changes such as new growths as well as complete/partial occlusion in corresponding regions by dividing the segmented region into sub-regions based upon the extrema that persist over a sufficient range of scales. Third, the algorithm does not depend upon the spatial locations of mammographic features and eliminates the need for registration to identify salient changes over time. Finally, the algorithm is fast to compute and requires no human intervention. We apply the method to temporal pairs of mammograms in order to detect potentially important differences between them.
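    One common form of integral invariant underlying such scale-space constructions is the local area invariant: at each boundary point, the fraction of a disc of radius r that lies inside the shape. A minimal sketch follows; function and parameter names are illustrative, not those of the Region Matching Algorithm.

```python
# A minimal sketch of a circular-kernel (local area) integral invariant for a binary
# shape mask; sampling it along the boundary at several radii gives a multi-scale
# signature whose persistent extrema can be used to split the contour into regions.
import numpy as np
from scipy.ndimage import convolve

def local_area_invariant(mask, radius):
    """Fraction of a disc of the given radius covered by the shape, at every pixel."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disc = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)
    return convolve(mask.astype(float), disc / disc.sum(), mode="constant")
```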

    Shape description and matching using integral invariants on eccentricity transformed images

    Matching occluded and noisy shapes is a problem frequently encountered in medical image analysis and more generally in computer vision. To keep track of changes inside the breast, for example, it is important for a computer aided detection system to establish correspondences between regions of interest. Shape transformations, computed both with integral invariants (II) and with geodesic distance, yield signatures that are invariant to isometric deformations, such as bending and articulations. Integral invariants describe the boundaries of planar shapes. However, they provide no information about where a particular feature lies on the boundary with regard to the overall shape structure. Conversely, eccentricity transforms (Ecc) can match shapes by signatures of geodesic distance histograms based on information from inside the shape, but they ignore the boundary information. We describe a method that combines the boundary signature of a shape obtained from II with structural information from the Ecc to yield results that improve on either used separately.
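    A rough sketch of the eccentricity transform referred to above: for each pixel inside a binary shape, the length of the longest geodesic (inside-the-shape) path to any other shape pixel. The brute-force BFS below is illustrative only and scales poorly; it is not the method of the paper.

```python
# A minimal, brute-force sketch of the eccentricity transform (Ecc) on a boolean mask.
import numpy as np
from collections import deque

def geodesic_distances(mask, start):
    """4-connected BFS distances inside `mask` from `start`; unreachable pixels stay inf."""
    dist = np.full(mask.shape, np.inf)
    dist[start] = 0
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]:
                if mask[nr, nc] and np.isinf(dist[nr, nc]):
                    dist[nr, nc] = dist[r, c] + 1
                    queue.append((nr, nc))
    return dist

def eccentricity_transform(mask):
    """For every shape pixel, the largest geodesic distance to any other shape pixel."""
    ecc = np.zeros(mask.shape)
    for r, c in zip(*np.nonzero(mask)):
        d = geodesic_distances(mask, (r, c))
        ecc[r, c] = d[mask & np.isfinite(d)].max()
    return ecc

# A histogram of the eccentricity values then serves as the Ecc-based shape signature.
```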

    Medical Image Registration Using Artificial Neural Network

    Image registration is the transformation of different sets of images into one coordinate system in order to align and overlay multiple images. Image registration is used in many fields such as medical imaging, remote sensing, and computer vision. It is very important in medical research, where multiple images are acquired from different sensors at various points in time. This allows doctors to monitor the effects of treatments on a certain region of interest in patients over time. In this thesis, artificial neural networks with curvelet keypoints are used to estimate the parameters of registration. Simulations show that the curvelet keypoints provide more accurate rotation and scale parameter estimates than Discrete Cosine Transform (DCT) coefficients and Scale Invariant Feature Transform (SIFT) keypoints.
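    The thesis's curvelet feature extraction is not reproduced here, but the regression idea can be sketched: a small feed-forward network maps a fixed-length feature vector (here a synthetic stand-in for curvelet keypoint features) to the rotation and scale parameters. All names and data below are illustrative.

```python
# A minimal sketch of regressing registration parameters with a small neural network.
# The feature vectors are synthetic placeholders for curvelet keypoint features.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_pairs, n_features = 500, 64
true_params = np.column_stack([rng.uniform(-30, 30, n_pairs),    # rotation (degrees)
                               rng.uniform(0.8, 1.2, n_pairs)])  # scale factor
features = rng.normal(size=(n_pairs, n_features))
features[:, :2] += true_params        # inject a learnable dependence on the parameters

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(features, true_params)
estimated = model.predict(features[:5])   # estimated (rotation, scale) pairs
```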

    Variational methods for shape and image registrations.

    Estimation and analysis of deformation, either rigid or non-rigid, is an active area of research in various medical imaging and computer vision applications. Its importance stems from the inherent inter- and intra-variability in biological and biomedical object shapes and from the dynamic nature of the scenes usually dealt with in computer vision research. For instance, quantifying the growth of a tumor, recognizing a person's face, tracking a facial expression, or retrieving an object inside a database requires the estimation of some sort of motion or deformation undergone by the object of interest. To solve these problems, and other similar problems, registration comes into play. This is the process of bringing into correspondence two or more data sets. Depending on the application at hand, these data sets can be, for instance, gray scale/color images or objects' outlines. In the latter case one talks about shape registration, while in the former case one talks about image/volume registration. In some situations, combinations of different types of data can be used complementarily to establish point correspondences. One of the most important image analysis tools that greatly benefits from the process of registration, and which will be addressed in this dissertation, is image segmentation. This process consists of localizing objects in images. Several challenges are encountered in image segmentation, including noise, gray scale inhomogeneities, and occlusions. To cope with such issues, the shape information is often incorporated as a statistical model into the segmentation process. Building such statistical models requires a good and accurate shape alignment approach. In addition, segmenting anatomical structures can be accurately solved through the registration of the input data set with a predefined anatomical atlas. Variational approaches for shape/image registration and segmentation have received huge interest in the past few years. Unlike traditional discrete approaches, variational methods are based on continuous modelling of the input data through the use of Partial Differential Equations (PDEs). This brings to bear the extensive literature on theory and numerical methods proposed to solve PDEs. This dissertation addresses the registration problem from a variational point of view, with more focus on shape registration. First, a novel variational framework for global-to-local shape registration is proposed. The input shapes are implicitly represented through their signed distance maps. A new Sum-of-Squared-Differences (SSD) criterion, which measures the disparity between the implicit representations of the input shapes, is introduced to recover the global alignment parameters. This new criterion has advantages over some existing ones in accurately handling scale variations. In addition, the proposed alignment model is less expensive computationally. Complementary to the global registration field, the local deformation field is explicitly established between the two globally aligned shapes by minimizing a new energy functional. This functional incrementally and simultaneously updates the displacement field while keeping the corresponding implicit representation of the globally warped source shape as close to a signed distance function as possible. This is done under regularization constraints that enforce the smoothness of the recovered deformations. The overall process leads to a coupled set of equations that are simultaneously solved through a gradient descent scheme. Several applications, where the developed tools play a major role, are addressed throughout this dissertation. For instance, some insight is given as to how one can solve the challenging problem of three-dimensional face recognition in the presence of facial expressions. Statistical modelling of shapes will be presented as a way of benefiting from the proposed shape registration framework. Second, this dissertation will visit th
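    The global part of such a formulation can be sketched compactly: represent each shape by its signed distance map and measure an SSD energy between the target's map and the map of the source warped by a candidate similarity transform. The sketch below evaluates that energy only; all names and conventions are illustrative, and it omits the dissertation's scale-handling refinements and the local deformation field. A full scheme would minimise the energy over the transform parameters by gradient descent.

```python
# A minimal sketch of an SSD energy between signed distance maps under a similarity
# transform (scale, rotation theta, translation tx/ty). Illustrative only.
import numpy as np
from scipy.ndimage import distance_transform_edt, affine_transform

def signed_distance(mask):
    """Signed distance map of a boolean mask: positive outside the shape, negative inside."""
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

def ssd_energy(phi_target, mask_source, scale, theta, tx, ty):
    """SSD between the target's signed distance map and that of the warped source."""
    c, s = np.cos(theta), np.sin(theta)
    A = np.array([[c, s], [-s, c]]) / scale               # inverse rotation and scaling
    centre = np.array(mask_source.shape) / 2.0
    offset = centre - A @ (centre + np.array([ty, tx]))   # warp about the image centre
    warped = affine_transform(mask_source.astype(float), A, offset=offset, order=1) > 0.5
    return np.sum((phi_target - signed_distance(warped)) ** 2)
```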

    Purposive three-dimensional reconstruction by means of a controlled environment

    Retrieving 3D data using imaging devices is a relevant task for many applications in medical imaging, surveillance, industrial quality control, and others. As soon as we gain procedural control over parameters of the imaging device, we encounter the necessity of well-defined reconstruction goals and we need methods to achieve them. Hence, we enter next-best-view planning. In this work, we present a formalization of the abstract view planning problem and deal with different planning aspects, where we focus on using an intensity camera without active illumination. As one aspect of view planning, employing a controlled environment also provides the planning and reconstruction methods with additional information. We incorporate the additional knowledge of camera parameters into the Kanade-Lucas-Tomasi method used for feature tracking. The resulting Guided KLT tracking method benefits from a constrained optimization space and yields improved accuracy while accounting for the uncertainty of the additional input. To serve other planning tasks dealing with known objects, we propose a method for coarse registration of 3D surface triangulations. By means of exact surface moments of surface triangulations we establish invariant surface descriptors based on moment invariants. These descriptors make it possible to tackle tasks of surface registration, classification, retrieval, and clustering, which are also relevant to view planning. In the main part of this work, we present a modular, online approach to view planning for 3D reconstruction. Based on the outcome of the Guided KLT tracking, we design a planning module for accuracy optimization with respect to an extended E-criterion. Further planning modules provide non-discrete surface estimation and visibility analysis. The modular nature of the proposed planning system makes it possible to address a wide range of specific instances of view planning. The theoretical findings in this work are supported by experiments evaluating the relevant terms.
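    The moment-invariant descriptors mentioned above can be illustrated on sampled surface points (the work itself uses exact moments of surface triangulations): the eigenvalues of the second-order central moment matrix are unchanged by rotation and translation and so give a coarse, pose-invariant signature.

```python
# A minimal sketch of a rigid-motion-invariant moment descriptor computed from sampled
# surface points; the original work uses exact surface moments of triangulations.
import numpy as np

def moment_invariants(points):
    """points: (N, 3) surface samples -> 3 values invariant to rotation and translation."""
    centred = points - points.mean(axis=0)        # removes translation
    cov = centred.T @ centred / len(points)       # second-order central moments
    return np.sort(np.linalg.eigvalsh(cov))       # eigenvalues are unchanged by rotation

# Two surfaces can be coarsely compared (for registration, retrieval, or clustering)
# by the distance between their invariant vectors before any fine alignment.
```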

    Panoramic Image Generation Algorithm based on Hu’s Moment Invariants

    The construction of large, high-resolution images is an active area of research in the fields of computer vision, image processing, and computer graphics. Panorama generation is the process of combining multiple images with overlapping fields of view into a single wide-view image, making it possible to produce a complete view of an area or location that cannot fit in a single shot. In this paper a high-performance method for generating panoramic images is introduced. The process of generating a panoramic view can be divided into three main components: image acquisition, image registration, and merging. Geometric moment invariants produce a set of feature vectors that are invariant under shifting, scaling, and rotation; the technique is widely used to extract global features for pattern recognition due to its discrimination power and robustness. In this paper, moment invariants are used to determine the locations at which the images are merged to produce the panorama. The final step is adjusting the colors of the merged images. Experiments were conducted on images taken by a camera and on test images downloaded from the internet. The results show that the proposed algorithm is fast and efficient.
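    The matching step can be sketched as follows: compute the seven Hu moment invariants of a window in each image and pick the column offset where the signatures agree best. Window size, stride, and the matching criterion below are illustrative choices, not the paper's.

```python
# A minimal sketch of locating the overlap between two images with Hu's moment
# invariants; windowing and matching choices below are illustrative only.
import cv2
import numpy as np

def hu_signature(patch):
    """Seven Hu moment invariants of a grey-scale patch, log-scaled for stability."""
    hu = cv2.HuMoments(cv2.moments(patch.astype(np.float32))).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def best_match_column(left, right, win=64, stride=16):
    """Column in `left` whose window best matches the leading window of `right`."""
    target = hu_signature(right[:, :win])
    scores = [(np.linalg.norm(hu_signature(left[:, x:x + win]) - target), x)
              for x in range(0, left.shape[1] - win, stride)]
    return min(scores)[1]   # candidate column at which to merge the two images
```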