
    Gaussian Process Morphable Models

    Statistical shape models (SSMs) represent a class of shapes as a normal distribution of point variations, whose parameters are estimated from example shapes. Principal component analysis (PCA) is applied to obtain a low-dimensional representation of the shape variation in terms of the leading principal components. In this paper, we propose a generalization of SSMs, called Gaussian Process Morphable Models (GPMMs). We model the shape variations with a Gaussian process, which we represent using the leading components of its Karhunen-Loève expansion. To compute the expansion, we make use of an approximation scheme based on the Nyström method. The resulting model can be seen as a continuous analogue of an SSM. However, while for SSMs the shape variation is restricted to the span of the example data, with GPMMs we can define the shape variation using any Gaussian process. For example, we can build shape models that correspond to classical spline models, and thus do not require any example data. Furthermore, Gaussian processes make it possible to combine different models. For example, an SSM can be extended with a spline model, to obtain a model that incorporates learned shape characteristics but is flexible enough to explain shapes that cannot be represented by the SSM. We introduce a simple algorithm for fitting a GPMM to a surface or image. This results in a non-rigid registration approach whose regularization properties are defined by a GPMM. We show how we can obtain different registration schemes, including methods for multi-scale, spatially-varying or hybrid registration, by constructing an appropriate GPMM. As our approach strictly separates modelling from the fitting process, this is all achieved without changes to the fitting algorithm. We show the applicability and versatility of GPMMs on a clinical use case, where the goal is the model-based segmentation of 3D forearm images.
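
    The Karhunen-Loève expansion of the Gaussian process is what makes GPMMs computationally tractable. As a rough, hedged illustration of the Nyström-style low-rank construction mentioned in the abstract, the sketch below samples smooth 1-D deformation fields from a squared-exponential Gaussian process; the kernel, grid, and number of inducing points are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, assuming a 1-D domain, a squared-exponential kernel, and
# 30 inducing points; it samples random deformation fields from a GP via a
# Nystrom-style low-rank (approximate Karhunen-Loeve) factorisation.
import numpy as np

def sq_exp_kernel(x, y, scale=1.0, length=0.2):
    """Squared-exponential covariance between two 1-D point sets."""
    d2 = (x[:, None] - y[None, :]) ** 2
    return scale * np.exp(-0.5 * d2 / length ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 500)        # dense evaluation points on the "shape"
z = np.linspace(0.0, 1.0, 30)         # inducing points for the Nystrom step

K_mm = sq_exp_kernel(z, z) + 1e-8 * np.eye(z.size)   # small jitter for stability
K_nm = sq_exp_kernel(x, z)

# Eigendecompose the small matrix; K is approximated as K_nm K_mm^{-1} K_nm^T.
evals, evecs = np.linalg.eigh(K_mm)
K_mm_inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T

L = K_nm @ K_mm_inv_sqrt              # low-rank factor with L @ L.T ~= K
alpha = rng.standard_normal(z.size)   # low-dimensional shape coefficients
deformation = L @ alpha               # one random deformation field over x
print(deformation.shape)              # (500,)
```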

    Recent Progress in Image Deblurring

    This paper comprehensively reviews recent developments in image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the same objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally estimate an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how they handle the ill-posedness that is central to deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite this progress, image deblurring, especially in the blind case, remains limited by complex application conditions that make the blur kernel hard to estimate and often spatially variant. This review provides a holistic understanding of and deep insight into image deblurring. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented.
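
    Non-blind deblurring with a known kernel is the simplest instance of the inverse problem discussed above. The sketch below is a minimal frequency-domain Wiener deconvolution, one classical baseline among the families surveyed; the box PSF, noise level, and regularisation constant k are illustrative assumptions.

```python
# A minimal sketch, assuming a known 5x5 box PSF and a hand-picked
# regularisation constant k; it restores a synthetically blurred image with a
# frequency-domain Wiener filter.
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-2):
    """Wiener filter: F_hat = conj(H) * G / (|H|^2 + k)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
sharp = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)  # bright disk
psf = np.ones((5, 5)) / 25.0                                       # box blur

# forward model: circular convolution with the PSF plus white noise
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf, s=sharp.shape)))
blurred += 0.01 * rng.standard_normal(sharp.shape)

restored = wiener_deconvolve(blurred, psf)
print(np.abs(blurred - sharp).mean(), np.abs(restored - sharp).mean())
```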

    Improving Computation for Hierarchical Bayesian Spatial Gaussian Mixture Models with Application to the Analysis of THz image of Breast Tumor

    In the first chapter of this dissertation we give a brief introduction to Markov chain Monte Carlo (MCMC) methods and their application in Bayesian inference. In particular, we discuss the Metropolis-Hastings and conjugate Gibbs algorithms and explore the computational underpinnings of these methods. The second chapter discusses how to incorporate spatial autocorrelation in a linear regression model, with an emphasis on the computational framework for estimating the spatial correlation patterns. The third chapter starts with an overview of Gaussian mixture models (GMMs). Because the GMM framework assumes the observations are independent, GMMs are less effective when the mixture data exhibit spatial autocorrelation. To improve the performance of GMMs on spatially correlated mixture data, the third chapter describes a spatially correlated model that uses Gaussian process priors to account for the autocorrelation in the classifications. The inclusion of spatially correlated Gaussian processes introduces a computational burden, which is resolved by applying a Pólya-gamma data augmentation scheme that yields an improved fit of the GMM in spatially correlated mixtures. The third chapter then compares the performance of the GMM and spatial GMM on simulated data with and without spatial autocorrelation in the class labels. Both qualitative and quantitative model evaluation results support our assumption that the spatial GMM performs better when observations are spatially autocorrelated. Chapter four applies the spatial Gaussian mixture model from chapter three to data obtained from ongoing work that aims to improve the accuracy of breast cancer margin assessment using THz imaging technology. In particular, the Bayesian estimate of uncertainty in the posterior probability from the spatial GMM shows promise in addressing the primary clinical question of determining the cancerous tumor margins.
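
    Since the first chapter centers on Metropolis-Hastings, a minimal random-walk Metropolis-Hastings sampler may help fix ideas; the standard-normal toy target and the step size are illustrative assumptions, not the dissertation's models.

```python
# A minimal sketch, assuming a 1-D standard-normal toy target; it runs a
# random-walk Metropolis-Hastings chain and reports the sample mean/std.
import numpy as np

def metropolis_hastings(log_post, x0, n_samples=5000, step=0.5, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x, lp = x0, log_post(x0)
    samples = np.empty(n_samples)
    for t in range(n_samples):
        prop = x + step * rng.standard_normal()        # symmetric random walk
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:        # accept w.p. min(1, ratio)
            x, lp = prop, lp_prop
        samples[t] = x
    return samples

draws = metropolis_hastings(lambda t: -0.5 * t ** 2, x0=0.0)  # toy log-density
print(draws.mean(), draws.std())                              # roughly 0 and 1
```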

    Stel component analysis: Modeling spatial correlations in image class structure


    Stel Component Analysis: Joint Segmentation, Modeling and Recognition of Objects Classes

    Models that capture the common structure of an object class appeared a few years ago in the literature (Jojic and Caspi in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 212-219, 2004; Winn and Jojic in Proceedings of International Conference on Computer Vision (ICCV), pp. 756-763, 2005); they are often referred to as "stel models." Their main characteristic is to segment objects into clear, often semantic, parts as a consequence of the modeling constraint which forces the regions belonging to a single segment to have a tight distribution over local measurements, such as color or texture. This self-similarity within a region of a single image is typical of many meaningful image parts, even when, across different images of similar objects, the corresponding parts do not have similar local measurements. Moreover, the segmentation itself is expected to be consistent within a class, although still flexible. These models have been applied mostly to segmentation scenarios. In this paper, we extend those ideas by (1) proposing to capture correlations that exist in structural elements of an image class due to global effects, (2) exploiting the segmentations to capture feature co-occurrences, and (3) allowing the use of multiple, possibly sparse, observations of different natures. In this way we obtain richer models that are more suitable for recognition tasks. We accomplish these requirements using a novel approach we dub stel component analysis. Experimental results show the flexibility of the model as it deals successfully with image/video segmentation and object recognition, where, in particular, it can be used as an alternative to, or in conjunction with, bag-of-features and related classifiers, with stel inference providing a meaningful spatial partition of features.
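
    To make the "tight per-segment distribution" constraint concrete, the following is a deliberately simplified, hypothetical EM sketch in the spirit of stel models: a segmentation prior over S parts is shared across images, while each image keeps its own per-part Gaussian over pixel intensity. It omits the component-analysis and feature co-occurrence extensions that the paper actually proposes.

```python
# A minimal sketch, assuming grayscale images flattened to a (T, N) array, S
# "stels", and Gaussian per-part intensity models; the shared prior over the
# segmentation is what ties the images of a class together.
import numpy as np

def stel_em(images, S=3, iters=20, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    T, N = images.shape
    prior = np.full((N, S), 1.0 / S)                   # shared spatial prior
    mu = rng.uniform(images.min(), images.max(), (T, S))
    var = np.full((T, S), images.var() + 1e-6)
    for _ in range(iters):
        # E-step: posterior over the part (stel) of every pixel in every image
        diff = images[:, :, None] - mu[:, None, :]
        log_lik = -0.5 * diff ** 2 / var[:, None, :] - 0.5 * np.log(2 * np.pi * var[:, None, :])
        log_q = np.log(prior[None]) + log_lik
        q = np.exp(log_q - log_q.max(axis=2, keepdims=True))
        q /= q.sum(axis=2, keepdims=True)              # shape (T, N, S)
        # M-step: per-image intensity models and the shared segmentation prior
        w = q.sum(axis=1) + 1e-9                       # (T, S)
        mu = (q * images[:, :, None]).sum(axis=1) / w
        var = (q * (images[:, :, None] - mu[:, None, :]) ** 2).sum(axis=1) / w + 1e-6
        prior = q.mean(axis=0)
    return prior.argmax(axis=1), q                     # shared segmentation, posteriors

rng = np.random.default_rng(3)
imgs = rng.normal(size=(5, 100)) + np.linspace(0.0, 3.0, 100)  # 5 toy 1-D "images"
segmentation, posteriors = stel_em(imgs, S=2, rng=rng)
```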

    Collaborative sparse regression using spatially correlated supports - Application to hyperspectral unmixing

    This paper presents a new Bayesian collaborative sparse regression method for linear unmixing of hyperspectral images. Our contribution is twofold: first, we propose a new Bayesian model for structured sparse regression in which the supports of the sparse abundance vectors are a priori spatially correlated across pixels (i.e., materials are spatially organised rather than randomly distributed at a pixel level). This prior information is encoded in the model through a truncated multivariate Ising Markov random field, which also takes into consideration the facts that pixels cannot be empty (i.e., there is at least one material present in each pixel) and that different materials may exhibit different degrees of spatial regularity. Second, we propose an advanced Markov chain Monte Carlo algorithm to estimate the posterior probabilities that materials are present or absent in each pixel and, conditionally on the maximum marginal a posteriori configuration of the support, compute the MMSE estimates of the abundance vectors. A remarkable property of this algorithm is that it self-adjusts the values of the parameters of the Markov random field, thus relieving practitioners from setting regularisation parameters by cross-validation. The performance of the proposed methodology is finally demonstrated through a series of experiments with synthetic and real data and comparisons with other algorithms from the literature.
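
    For readers unfamiliar with the underlying regression step, the sketch below shows plain per-pixel non-negative abundance estimation under the linear mixing model using SciPy's NNLS; it ignores the spatially correlated support prior and the MCMC machinery that constitute the paper's actual contribution, and the endmember matrix and noise level are made up for illustration.

```python
# A minimal sketch, assuming a random endmember matrix E, Dirichlet abundances,
# and additive Gaussian noise; it estimates per-pixel abundances with NNLS and
# a crude sum-to-one renormalisation.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
L, R, P = 50, 4, 100                        # bands, endmembers, pixels
E = rng.uniform(0.0, 1.0, (L, R))           # endmember spectra (columns)
A_true = rng.dirichlet(np.ones(R), P).T     # true abundances, sum to one per pixel
Y = E @ A_true + 0.01 * rng.standard_normal((L, P))

A_hat = np.column_stack([nnls(E, Y[:, p])[0] for p in range(P)])
A_hat /= A_hat.sum(axis=0, keepdims=True) + 1e-12
print(np.abs(A_hat - A_true).mean())        # mean absolute abundance error
```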

    Combining spatial priors and anatomical information for fMRI detection

    In this paper, we analyze the Markov random field (MRF) as a spatial regularizer in fMRI detection. The low signal-to-noise ratio (SNR) in fMRI images presents a serious challenge for detection algorithms, making regularization necessary to achieve good detection accuracy. Gaussian smoothing, traditionally employed to boost SNR, often produces over-smoothed activation maps. Recently, the use of MRF priors has been suggested as an alternative regularization approach. However, solving for an optimal configuration of the MRF is NP-hard in general. In this work, we investigate fast inference algorithms based on the mean field approximation applied to MRF priors for fMRI detection. Furthermore, we propose a novel way to incorporate anatomical information into the MRF-based detection framework and into the traditional smoothing methods. Intuitively speaking, the anatomical evidence increases the likelihood of activation in the gray matter and improves the spatial coherency of the resulting activation maps within each tissue type. Validation using receiver operating characteristic (ROC) analysis and confusion matrix analysis on simulated data illustrates a substantial improvement in detection accuracy when using the anatomically guided MRF spatial regularizer. We further demonstrate the potential benefits of the proposed method on real fMRI signals of reduced length. The anatomically guided MRF regularizer enables a significant reduction of the scan length while maintaining the quality of the resulting activation maps.
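
    As a toy illustration of mean-field inference for a binary activation MRF with an anatomical bias, consider the sketch below; the data-term logits, the gray-matter probability map, and the way the anatomical prior enters the update are assumptions made for illustration, not the paper's exact formulation.

```python
# A minimal sketch, assuming a 2-D voxel grid, an Ising-style pairwise term of
# strength beta, and a gray-matter probability map entering as a log-prior on
# the activation state.
import numpy as np

def mean_field_mrf(logit_data, gm_prob, beta=0.8, iters=50):
    """Mean-field estimate of q(active) per voxel on a 4-connected grid."""
    cnt = np.zeros_like(logit_data)                    # number of neighbours
    cnt[1:, :] += 1; cnt[:-1, :] += 1; cnt[:, 1:] += 1; cnt[:, :-1] += 1
    q = 1.0 / (1.0 + np.exp(-logit_data))              # initialise from the data term
    for _ in range(iters):
        nb = np.zeros_like(q)                          # sum of neighbouring q values
        nb[1:, :] += q[:-1, :]; nb[:-1, :] += q[1:, :]
        nb[:, 1:] += q[:, :-1]; nb[:, :-1] += q[:, 1:]
        # smoothness term plus an anatomical bias towards gray matter
        logits = logit_data + beta * (2.0 * nb - cnt) + np.log(gm_prob + 1e-6)
        q = 1.0 / (1.0 + np.exp(-logits))
    return q

rng = np.random.default_rng(0)
q = mean_field_mrf(rng.standard_normal((32, 32)),               # toy evidence map
                   np.clip(rng.random((32, 32)), 0.05, 1.0))    # toy gray-matter map
print(q.shape)
```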

    Segmentation of skin lesions in 2D and 3D ultrasound images using a spatially coherent generalized Rayleigh mixture model

    This paper addresses the problem of jointly estimating the statistical distribution and segmenting lesions in multiple-tissue high-frequency skin ultrasound images. The distribution of multiple-tissue images is modeled as a spatially coherent finite mixture of heavy-tailed Rayleigh distributions. The spatial coherence inherent to biological tissues is modeled by enforcing local dependence between the mixture components. An original Bayesian algorithm combined with a Markov chain Monte Carlo method is then proposed to jointly estimate the mixture parameters and a label vector associating each voxel with a tissue. More precisely, a hybrid Metropolis-within-Gibbs sampler is used to draw samples that are asymptotically distributed according to the posterior distribution of the Bayesian model. The Bayesian estimators of the model parameters are then computed from the generated samples. Simulations are conducted on synthetic data to illustrate the performance of the proposed estimation strategy. The method is then successfully applied to the segmentation of in vivo skin tumors in high-frequency 2-D and 3-D ultrasound images.
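
    A single spatially coherent Gibbs sweep over the label field conveys the flavour of such a sampler: each voxel's label is redrawn from the product of its class likelihood and a Potts-style prior over the 4-neighbourhood. The plain Rayleigh likelihood and the coupling constant beta below are simplifications of the paper's heavy-tailed model, used only for illustration.

```python
# A minimal sketch, assuming a plain Rayleigh likelihood per class and a Potts
# coupling beta over the 4-neighbourhood; it performs one Gibbs sweep over the
# label field of a spatially coherent mixture.
import numpy as np

def gibbs_label_sweep(img, labels, sigmas, beta=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    H, W = img.shape
    K = len(sigmas)
    for i in range(H):
        for j in range(W):
            x = img[i, j]
            # Rayleigh log-likelihood of the envelope sample under each class
            loglik = np.array([np.log(x / s ** 2 + 1e-12) - x ** 2 / (2 * s ** 2)
                               for s in sigmas])
            # Potts prior: reward labels shared with the 4-connected neighbours
            nbrs = [labels[a, b] for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= a < H and 0 <= b < W]
            prior = np.array([beta * sum(1 for l in nbrs if l == k) for k in range(K)])
            p = np.exp(loglik + prior - (loglik + prior).max())
            labels[i, j] = rng.choice(K, p=p / p.sum())
    return labels

rng = np.random.default_rng(2)
img = rng.rayleigh(scale=1.0, size=(20, 20))
labels = rng.integers(0, 2, size=(20, 20))
labels = gibbs_label_sweep(img, labels, sigmas=[0.8, 2.0], rng=rng)
```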

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered within their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in the search for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally.
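
    As one concrete example of the geometrical viewpoint surveyed here, the sketch below implements a crude pixel-purity-style search: under the linear mixing model the data lie in a simplex whose vertices are the endmembers, so pixels that repeatedly land at the extremes of random projections are endmember candidates. The synthetic data and the number of random "skewers" are illustrative assumptions.

```python
# A minimal sketch, assuming synthetic data generated from three hypothetical
# endmembers; it scores every pixel by how often it is an extreme point of a
# random projection, a crude pixel-purity-style endmember search.
import numpy as np

def pixel_purity_candidates(Y, n_skewers=500, rng=None):
    """Y: (bands, pixels). Returns, per pixel, how often it was an extreme."""
    rng = np.random.default_rng() if rng is None else rng
    bands, pixels = Y.shape
    counts = np.zeros(pixels, dtype=int)
    for _ in range(n_skewers):
        d = rng.standard_normal(bands)     # random direction ("skewer")
        proj = d @ Y
        counts[proj.argmax()] += 1
        counts[proj.argmin()] += 1
    return counts

rng = np.random.default_rng(4)
E = rng.uniform(size=(50, 3))                          # 3 hypothetical endmembers
A = rng.dirichlet(np.ones(3), 500).T                   # abundances for 500 pixels
Y = E @ A + 0.005 * rng.standard_normal((50, 500))
scores = pixel_purity_candidates(Y, rng=rng)
print(scores.argsort()[-5:])                           # indices of the purest pixels
```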