
    Locally Orderless Registration

    Image registration is an important tool for medical image analysis, used to bring images into the same reference frame by warping the coordinate field of one image so that some measure of dissimilarity is minimized. We study similarity in image registration in the context of Locally Orderless Images (LOI), which is the natural way to study density estimates and reveals the three fundamental scales: the measurement scale, the intensity scale, and the integration scale. This paper has three main contributions. Firstly, we rephrase a large set of popular similarity measures into a common framework, which we refer to as Locally Orderless Registration, and which makes full use of the features of local histograms. Secondly, we extend the theoretical understanding of local histograms. Thirdly, we use our framework to compare two state-of-the-art intensity density estimators for image registration, the Parzen Window (PW) and the Generalized Partial Volume (GPV), and demonstrate their differences on a popular similarity measure, Normalized Mutual Information (NMI). We conclude that complicated similarity measures such as NMI may be evaluated almost as fast as simple measures such as Sum of Squared Distances (SSD), regardless of the choice between PW and GPV. Furthermore, GPV is an asymmetric measure, and PW is our preferred choice.
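    The Parzen-window idea above can be sketched in a few lines: smooth the joint intensity histogram of the two images with a Gaussian kernel and read NMI off the resulting density. This is a minimal illustration, not the paper's Locally Orderless Registration framework; the bin count and kernel width are arbitrary illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def parzen_joint_histogram(fixed, moving, bins=32, sigma=1.0):
    """Joint intensity histogram smoothed with a Gaussian Parzen window.
    `bins` and `sigma` are illustrative choices, not values from the paper."""
    hist, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    # Separable Gaussian smoothing over both intensity axes.
    hist = gaussian_filter(hist, sigma=sigma)
    return hist / hist.sum()

def normalized_mutual_information(fixed, moving, bins=32, sigma=1.0):
    """NMI = (H(F) + H(M)) / H(F, M), computed from the Parzen joint density."""
    p_fm = parzen_joint_histogram(fixed, moving, bins, sigma)
    p_f, p_m = p_fm.sum(axis=1), p_fm.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())
    return (entropy(p_f) + entropy(p_m)) / entropy(p_fm.ravel())
```

    A registration loop would maximize this quantity over warp parameters; identical images score higher than independent ones.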

    Interpolating point spread function anisotropy

    Planned wide-field weak lensing surveys are expected to reduce the statistical errors on the shear field to unprecedented levels. In contrast, systematic errors like those induced by the convolution with the point spread function (PSF) will not benefit from that scaling effect and will require very accurate modeling and correction. While numerous methods have been devised to carry out the PSF correction itself, modeling of the PSF shape and its spatial variations across the instrument field of view has, so far, attracted much less attention. This step is nevertheless crucial because the PSF is only known at star positions while the correction has to be performed at any position on the sky. A reliable interpolation scheme is therefore mandatory and a popular approach has been to use low-order bivariate polynomials. In the present paper, we evaluate four other classical spatial interpolation methods based on splines (B-splines), inverse distance weighting (IDW), radial basis functions (RBF) and ordinary Kriging (OK). These methods are tested on the Star-challenge part of the GRavitational lEnsing Accuracy Testing 2010 (GREAT10) simulated data and are compared with the classical polynomial fitting (Polyfit). We also test all our interpolation methods independently of the way the PSF is modeled, by interpolating the GREAT10 star fields themselves (i.e., the PSF parameters are known exactly at star positions). We find in that case RBF to be the clear winner, closely followed by the other local methods, IDW and OK. The global methods, Polyfit and B-splines, are largely behind, especially in fields with (ground-based) turbulent PSFs. In fields with non-turbulent PSFs, all interpolators reach a variance on PSF systematics σ_sys^2 better than the 1×10^{-7} upper bound expected by future space-based surveys, with the local interpolators performing better than the global ones.
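    Of the local schemes compared above, IDW is the simplest to state: the PSF parameter at an arbitrary position is a distance-weighted average of the values measured at star positions. A minimal sketch, assuming vectorized star and query positions; the exponent `power=2` is the conventional IDW choice, not a tuned GREAT10 setting.

```python
import numpy as np

def idw_interpolate(star_xy, star_vals, query_xy, power=2.0, eps=1e-12):
    """Inverse distance weighting (IDW): estimate a PSF parameter anywhere in
    the field of view from values measured at star positions."""
    star_xy = np.asarray(star_xy, dtype=float)
    star_vals = np.asarray(star_vals, dtype=float)
    query_xy = np.asarray(query_xy, dtype=float)
    # Pairwise distances, shape (n_query, n_star).
    d = np.linalg.norm(query_xy[:, None, :] - star_xy[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)        # nearby stars dominate the estimate
    w /= w.sum(axis=1, keepdims=True)   # normalize weights per query point
    return w @ star_vals
```

    RBF and Kriging replace these fixed weights with kernel- or covariance-derived ones, which is why they can outperform IDW while remaining local.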

    Multi-scale active shape description in medical imaging

    Shape description in medical imaging has become an increasingly important research field in recent years. Fast and high-resolution image acquisition methods like Magnetic Resonance (MR) imaging produce very detailed cross-sectional images of the human body - shape description is then a post-processing operation which abstracts quantitative descriptions of anatomically relevant object shapes. This task is usually performed by clinicians and other experts by first segmenting the shapes of interest, and then making volumetric and other quantitative measurements. The high demand on expert time, together with inter- and intra-observer variability, creates a clinical need to automate this process. Furthermore, recent studies in clinical neurology on the correspondence between disease status and degree of shape deformation necessitate the use of more sophisticated, higher-level shape description techniques. In this work a new hierarchical tool for shape description has been developed, combining two recently developed and powerful techniques in image processing: differential invariants in scale-space, and active contour models. This tool enables quantitative and qualitative shape studies at multiple levels of image detail, exploiting the extra degree of freedom offered by image scale. Using scale-space continuity, the global object shape can be detected at a coarse level of image detail, and finer shape characteristics can be found at higher levels of detail or scales. New methods for active shape evolution and focusing have been developed for the extraction of shapes at a large set of scales using an active contour model whose energy function is regularized with respect to scale and geometric differential image invariants. The resulting set of shapes is formulated as a multiscale shape stack which is analysed and described at each scale level with a large set of shape descriptors, in order to obtain and analyse shape changes across scales.
    This shape stack leads naturally to several questions regarding variable sampling and the appropriate level of detail at which to investigate an image. The relationship between active contour sampling precision and scale-space is addressed. After a thorough review of modern shape description, multi-scale image processing and active contour model techniques, the novel framework for multi-scale active shape description is presented and tested on synthetic and medical images. An interesting result is the recovery of the fractal dimension of a known fractal boundary using this framework. The medical applications addressed are grey-matter deformations in patients with epilepsy, spinal cord atrophy in patients with Multiple Sclerosis, and cortical impairment in neonates. Extensions to non-linear scale-spaces, comparisons to binary curve and curvature evolution schemes, as well as other hierarchical shape descriptors, are discussed.
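    The multiscale shape stack rests on linear (Gaussian) scale-space: blurring the image at increasing scales leaves only the global object shape, while small scales retain fine boundary detail. A minimal sketch of building such a stack; the thesis couples this with active contours and differential invariants rather than using blurring alone, and the scale values here are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space_stack(image, sigmas):
    """Build a linear (Gaussian) scale-space stack of an image.
    Small sigmas keep fine detail; large sigmas keep only coarse structure."""
    return np.stack([gaussian_filter(image, sigma=s) for s in sigmas])
```

    A coarse-to-fine shape analysis would detect the global contour at the largest sigma and then track it down the stack towards the original image.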

    Numerical simulations of dwarf galaxies in the Fornax Cluster

    I have carried out simulations of the evolution of dwarf galaxies falling into a Fornax-like cluster using the Moving Box technique. I am interested in following the journey of the galaxies into the cluster and characterizing their size, star formation rate, gas and dark matter content, stellar dynamics, and evolution, depending on the orbit and the initial mass at the time of orbital injection. Some of the galaxies are effectively transformed into Ultra Diffuse Galaxies (UDGs), while some others can briefly be identified as "jellyfish". Serendipitously, I realized that these simulations produce galaxies whose morphology is similar to a particular galaxy in the Fornax Cluster, NGC1427A. I found that the gaseous and stellar tails of this galaxy may be explained by the different environmental effects they are subject to (ram-pressure stripping and tidal forces). I was also able to provide some falsifiable predictions on the position of the galaxy with respect to the center of the cluster and its projected orbital direction. Finally, I have contributed to the development of a technique to study low-dimensional manifolds in the simulations. In particular, I concentrated on the analysis of the gaseous tails of simulated jellyfish galaxies, with the aim of investigating regions of recent star formation and mixing between the galactic gaseous material and the hot gas of the cluster.

    Tanner Graph Based Image Interpolation

    This paper interprets image interpolation as a channel decoding problem and proposes a Tanner-graph-based interpolation framework, which regards each pixel in an image as a variable node and the local image structure around each pixel as a check node. The pixels available from the low-resolution image are 'received', whereas the missing pixels of the high-resolution image are 'erased', through an imaginary channel. Local image structures exhibited by the low-resolution image provide information on the joint distribution of pixels in a small neighborhood, and thus play the same role as parity symbols in classic channel coding scenarios. We develop an efficient solution for the sum-product algorithm of belief propagation in this framework, based on a Gaussian auto-regressive image model. Initial experiments show gains of up to 3 dB over other methods with the same image model. The proposed framework is flexible in message processing at each node and provides much room for incorporating more sophisticated image modelling techniques.
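    The erased-pixel view above can be illustrated with a toy analogue: repeatedly re-predict each missing pixel from its neighbours, as if fusing check-node predictions on the graph. The uniform four-neighbour weights below are an assumption for illustration, not the paper's trained Gaussian auto-regressive coefficients or its sum-product message schedule.

```python
import numpy as np

def ar_fill_erased(img, mask, n_iter=200):
    """Fill 'erased' pixels (mask == True) by iterating a local prediction:
    each erased pixel takes the mean of its four neighbours."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()       # crude initialisation of erased pixels
    for _ in range(n_iter):
        # One round of "message passing": neighbour-average prediction.
        nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
              + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = nb[mask]
    return out
```

    On smooth image regions this converges quickly, which is the intuition behind the paper's observation that local structure acts like parity information.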

    Bayesian galaxy shape measurement for weak lensing surveys – III. Application to the Canada–France–Hawaii Telescope Lensing Survey

    A likelihood-based method for measuring weak gravitational lensing shear in deep galaxy surveys is described and applied to the Canada–France–Hawaii Telescope (CFHT) Lensing Survey (CFHTLenS). CFHTLenS comprises 154 deg^2 of multi-colour optical data from the CFHT Legacy Survey, with lensing measurements being made in the i′ band to a depth i′_(AB) < 24.7, for galaxies with signal-to-noise ratio ν_(SN) ≳ 10. The method is based on the lensfit algorithm described in earlier papers, but here we describe a full analysis pipeline that takes into account the properties of real surveys. The method creates pixel-based models of the varying point spread function (PSF) in individual image exposures. It fits PSF-convolved two-component (disc plus bulge) models to measure the ellipticity of each galaxy, with Bayesian marginalization over model nuisance parameters of galaxy position, size, brightness and bulge fraction. The method allows optimal joint measurement of multiple, dithered image exposures, taking into account imaging distortion and the alignment of the multiple measurements. We discuss the effects of noise bias on the likelihood distribution of galaxy ellipticity. Two sets of image simulations that mirror the observed properties of CFHTLenS have been created to establish the method's accuracy and to derive an empirical correction for the effects of noise bias.

    A practical algorithm for tanner graph based image interpolation

    This paper interprets image interpolation as a decoding problem on a Tanner graph and proposes a practical belief propagation algorithm based on a Gaussian auto-regressive image model. The algorithm regards belief propagation as a way to generate and fuse predictions from various check nodes. A low-complexity implementation of the algorithm measures and distributes the departure of the current interpolation result from the image model. The convergence speed of the proposed algorithm is discussed. Experimental results show that good interpolation results can be obtained within a very small number of iterations.

    New aperture photometry of QSO 0957+561; application to time delay and microlensing

    We present a re-reduction of archival CCD frames of the doubly imaged quasar 0957+561 using a new photometry code. Aperture photometry, with corrections for both cross-contamination between the quasar images and galaxy contamination, is performed on about 2650 R-band images from a five-year period (1992-1997). From the brightness data a time delay of 424.9 ± 1.2 days is derived using two different statistical techniques. The amount of gravitational microlensing in the quasar light curves is briefly investigated, and we find unambiguous evidence of both long-term and short-term microlensing. We also note an unusual circumstance regarding time delay estimates for this gravitational lens: estimates by different observers, from different data sets or even from the same data set, give lags that typically differ by 8 days, with error bars of only a day or two. This probably indicates several complexities, where the result of each estimate depends upon the details of the calculation.
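    The simplest form of a time-delay estimate is a grid search: shift one light curve against the other over trial lags and minimise the mismatch over the overlapping interval. A toy stand-in for the statistical techniques used in the paper; real quasar light curves are irregularly sampled and contaminated by microlensing, which is exactly why more careful estimators are needed.

```python
import numpy as np

def time_delay_grid(t, a, b, lags):
    """Grid-search time delay: the lag minimising the mean squared difference
    between curve `a` and curve `b` shifted by that lag."""
    def mismatch(lag):
        valid = (t + lag >= t[0]) & (t + lag <= t[-1])  # overlap region only
        b_shifted = np.interp(t + lag, t, b)            # b evaluated at t + lag
        return np.mean((a[valid] - b_shifted[valid]) ** 2)
    chi = np.array([mismatch(l) for l in lags])
    return lags[np.argmin(chi)]
```

    The paper's finding that different estimators on the same data can disagree by days reflects how sensitive this minimum is to sampling, windowing and microlensing distortions.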