
    Identifiability of multivariate logistic mixture models

    Mixture models have been widely used to model continuous observations. Identifiability is a necessary condition for the parameters of a mixture model to be estimated consistently from observations of the mixture. In this study, we give some results on the identifiability of multivariate logistic mixture models.

    Gaussian Mixture Model and RJMCMC Based RS Image Segmentation


    A variational approach to Bayesian computation in inverse problems in imaging

    In an unsupervised Bayesian estimation approach to inverse problems in imaging systems, one tries to jointly estimate the unknown image pixels f and the hyperparameters θ given the observed data g and a model M linking these quantities. This is, in general, done through the joint posterior law p(f, θ | g; M). The expression of this joint law is often very complex, and its exploration through sampling and computation of point estimators such as the MAP and posterior means requires either optimization or integration of multivariate probability laws. In either case, we need approximations. The Laplace approximation and MCMC sampling are two approximation methods, analytical and numerical respectively, which have been used with success for this task. In this paper, we explore the possibility of approximating this joint law by one that is separable in f and in θ. This makes it possible to develop iterative algorithms with more reasonable computational cost, in particular when the approximating laws are chosen in the conjugate exponential families. The main objective of this paper is to detail the different algorithms obtained with different choices of these families. To illustrate the approach in more detail, we consider image restoration by simple or myopic deconvolution with separable, simple Markovian, or hidden Markovian models. Comment: 31 pages, 2 figures; had been submitted to "Revue Traitement du signal" but not accepted
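    The separable (mean-field) approximation the abstract describes can be sketched on a toy denoising model. The model below (scalar pixels, known prior precision, conjugate Gamma prior on the noise precision θ) and all hyperparameter values are illustrative assumptions, not the paper's actual setup; it only shows the alternating closed-form updates of q(f) and q(θ) that conjugate exponential families make possible.

```python
import numpy as np

def vb_denoise(g, alpha=1.0, a0=1e-3, b0=1e-3, iters=50):
    """Mean-field VB for g_i = f_i + e_i, e_i ~ N(0, 1/theta),
    prior f_i ~ N(0, 1/alpha), theta ~ Gamma(a0, b0).
    Approximates p(f, theta | g) by a separable law q(f) q(theta)."""
    n = g.size
    e_theta = 1.0                            # initial guess for E[theta]
    for _ in range(iters):
        # q(f) is Gaussian by conjugacy: precision alpha + E[theta]
        v = 1.0 / (alpha + e_theta)          # per-pixel posterior variance
        m = e_theta * g * v                  # posterior mean (shrunk data)
        # q(theta) is Gamma(a, b) by conjugacy
        a = a0 + 0.5 * n
        b = b0 + 0.5 * np.sum((g - m) ** 2 + v)
        e_theta = a / b                      # E[theta] under q(theta)
    return m, e_theta

rng = np.random.default_rng(0)
f_true = rng.normal(0.0, 1.0, size=5000)      # drawn from the prior (alpha = 1)
g = f_true + rng.normal(0.0, 0.5, size=5000)  # true noise precision = 1/0.25 = 4
m, theta_hat = vb_denoise(g)
```

Each iteration costs only elementwise operations, which is the "more reasonable computational cost" the abstract contrasts with full MCMC exploration of the joint law.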

    Unsupervised classification of multilook polarimetric SAR data using spatially variant Wishart mixture model with double constraints

    This paper addresses the unsupervised classification problem for multilook polarimetric synthetic aperture radar (PolSAR) images by proposing a patch-level spatially variant Wishart mixture model (SVWMM) with double constraints. We construct this model by jointly modeling the pixels in a patch (rather than an individual pixel) so as to effectively capture the local correlation in the PolSAR images. More importantly, a responsibility parameter is introduced to the proposed model, providing not only the possibility to represent the importance of different pixels within a patch but also additional flexibility for incorporating spatial information. As such, double constraints are further imposed by simultaneously utilizing the similarities of the neighboring pixels, defined on two different parameter spaces (i.e., the hyperparameter in the posterior distribution of mixing coefficients and the responsibility parameter). Furthermore, a variational inference algorithm is developed to achieve effective learning of the proposed SVWMM with closed-form updates, facilitating automatic determination of the cluster number. Experimental results on several PolSAR data sets from both airborne and spaceborne sensors demonstrate that the proposed method is effective and achieves better unsupervised classification performance than conventional methods.

    A Physical Model for Microstructural Characterization and Segmentation of 3D Tomography Data

    We present a novel method for characterizing the microstructure of a material from volumetric datasets such as 3D image data from computed tomography (CT). The method is based on a new statistical model for the distribution of voxel intensities and gradient magnitudes, incorporating prior knowledge about the physical nature of the imaging process. It allows for direct quantification of parameters of the imaged sample, such as volume fractions, interface areas and material density, and parameters related to the imaging process, such as image resolution and noise levels. Existing methods for characterization from 3D images often require segmentation of the data, a procedure where each voxel is labeled according to the best guess of which material it represents. Through our approach, the segmentation step is circumvented so that errors and computational costs related to this part of the image processing pipeline are avoided. Instead, the material parameters are quantified through their known relation to parameters of our model, which is fitted directly to the raw, unsegmented data. We present an automated model fitting procedure that gives reproducible results without human bias and enables automatic analysis of large sets of tomograms. For more complex structure analysis questions, a segmentation is still beneficial. We show that our model can be used as input to existing probabilistic methods, providing a segmentation that is based on the physics of the imaged sample. Because our model accounts for mixed-material voxels stemming from blurring inherent to the imaging technique, we reduce the errors that other methods can create at interfaces between materials. Comment: Manuscript accepted for publication in Materials Characterization
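    A heavily simplified 1D analogue of the segmentation-free idea: fit a two-component intensity model directly to raw voxel values and read volume fractions off the mixture weights. The paper's model additionally uses gradient magnitudes and physically motivated mixed-voxel terms; this sketch, with invented intensity values and a plain Gaussian mixture fitted by EM, only demonstrates that material fractions can be recovered without labeling any voxel.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic voxel intensities: 30% pore phase, 70% solid phase (illustrative)
vox = np.concatenate([rng.normal(50, 5, 30_000), rng.normal(120, 8, 70_000)])

# EM for a two-component 1D Gaussian mixture fitted to the raw values
w = np.array([0.5, 0.5])
mu = np.array([40.0, 140.0])
sd = np.array([10.0, 10.0])
for _ in range(100):
    # E-step: responsibility of each component for each voxel
    dens = (w / (sd * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * ((vox[:, None] - mu) / sd) ** 2))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted parameter updates
    nk = r.sum(axis=0)
    w = nk / vox.size
    mu = (r * vox[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (vox[:, None] - mu) ** 2).sum(axis=0) / nk)

order = np.argsort(mu)              # sort components by mean intensity
volume_fractions = w[order]         # recovered without segmenting any voxel
```

No voxel is ever assigned a label; the volume fractions come directly from the fitted model parameters, which is the point the abstract makes about avoiding the segmentation step.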

    Estimating Gaussian mixtures using sparse polynomial moment systems

    The method of moments is a statistical technique for density estimation that solves a system of moment equations to estimate the parameters of an unknown distribution. A fundamental question, critical to understanding identifiability, asks how many moment equations are needed to get finitely many solutions and how many solutions there are. We answer this question for classes of Gaussian mixture models using the tools of polyhedral geometry. Using these results, we present an algorithm that performs parameter recovery, and therefore density estimation, for high dimensional Gaussian mixture models that scales linearly in the dimension. Comment: 30 pages
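    A minimal univariate toy version of the moment approach (the paper treats high-dimensional mixtures and counts solutions via polyhedral geometry; this only shows a moment system with finitely many solutions). For an equal-weight mixture 0.5 N(μ1, 1) + 0.5 N(μ2, 1), the first two moments determine the means up to relabeling, as the roots of a quadratic:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 100_000), rng.normal(2, 1, 100_000)])

m1, m2 = x.mean(), (x ** 2).mean()
# For 0.5 N(mu1, 1) + 0.5 N(mu2, 1):
#   m1 = (mu1 + mu2) / 2            ->  mu1 + mu2     = 2 m1
#   m2 = 1 + (mu1^2 + mu2^2) / 2    ->  mu1^2 + mu2^2 = 2 (m2 - 1)
s = 2 * m1                          # sum of the means
p = (s ** 2 - 2 * (m2 - 1)) / 2     # product of the means
means = np.sort(np.roots([1.0, -s, p]))   # roots of t^2 - s t + p
```

The two roots are the two solutions of the moment system (the means in either order), illustrating the "finitely many solutions" phenomenon the paper quantifies for general Gaussian mixture classes.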

    Proportional Data Modeling using Unsupervised Learning and Applications

    In this thesis, we propose using Aitchison's distance in the K-means clustering algorithm, which is employed to initialize Dirichlet and generalized Dirichlet mixture models. Model parameters are then estimated with the Expectation-Maximization (EM) algorithm. The method is further applied to intrusion detection, where we statistically analyze the entire NSL-KDD data set. In addition, we present an unsupervised learning algorithm for finite mixture models, based on Dirichlet and generalized Dirichlet distributions, that integrates spatial information through a Markov random field (MRF): the MRF incorporates dependencies between neighboring pixels into the mixture model. This segmentation model is also learned by the EM algorithm using a Newton-Raphson approach. The results obtained on real image data sets are more encouraging than those obtained with similar approaches.
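    Aitchison's distance between compositions reduces to ordinary Euclidean distance after the centered log-ratio (clr) transform, so K-means with Aitchison's distance amounts to standard K-means on clr coordinates. A minimal sketch (function names and example compositions are illustrative, not from the thesis):

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform: maps a composition to Euclidean space,
    where Aitchison distance becomes ordinary Euclidean distance."""
    logx = np.log(x)
    return logx - logx.mean(axis=-1, keepdims=True)

def aitchison_dist(x, y):
    """Aitchison distance between two compositions."""
    return np.linalg.norm(clr(np.asarray(x, float)) - clr(np.asarray(y, float)))

a = np.array([0.2, 0.3, 0.5])
b = np.array([0.1, 0.6, 0.3])
d_ab = aitchison_dist(a, b)
# Aitchison distance is invariant to rescaling a composition, so clustering
# depends only on the relative parts, as appropriate for proportional data
d_scale = aitchison_dist(a, 10 * a)
```

Running ordinary K-means on `clr(data)` then gives the Aitchison-distance clustering used for initializing the Dirichlet mixtures.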