
    Bayesian source separation with mixture of Gaussians prior for sources and Gaussian prior for mixture coefficients

    In this contribution, we present new algorithms for source separation in the case of a noisy instantaneous linear mixture, within the Bayesian statistical framework. The prior distribution of the sources is modeled by a mixture of Gaussians [Moulines97], and the distributions of the mixing matrix elements by a Gaussian [Djafari99a]. We model the mixture of Gaussians hierarchically by means of hidden variables representing the labels of the mixture. We then consider the joint a posteriori distribution of the sources, the mixing matrix elements, the mixture labels and the other mixture parameters, with appropriate prior probability laws chosen to eliminate the degeneracy of the likelihood function with respect to the variance parameters, and we propose two iterative algorithms for the joint estimation of the sources, the mixing matrix and the hyperparameters: a joint MAP (maximum a posteriori) algorithm and a penalized EM algorithm. The illustrative example is taken from [Macchi99] for comparison with other algorithms proposed in the literature.
    Keywords: source separation, Gaussian mixture, classification, JMAP algorithm, penalized EM algorithm.
    Comment: Presented at MaxEnt00. Appeared in Bayesian Inference and Maximum Entropy Methods, Ali Mohammad-Djafari (Ed.), AIP Proceedings (http://proceedings.aip.org/proceedings/confproceed/568.jsp).
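The hidden-label mixture model with a penalized M-step can be sketched in a few lines. This is a minimal 1-D illustration, not the paper's algorithm: the actual JMAP and penalized EM schemes also estimate the mixing matrix, and here the variance prior is replaced by a simple variance floor that plays the same anti-degeneracy role; all parameter choices are illustrative.

```python
import numpy as np

def penalized_em_mog(x, K=2, n_iter=100, var_floor=1e-3):
    """EM for a 1-D K-component Gaussian mixture with hidden labels.

    The variance floor is a crude stand-in for the variance prior used
    in the paper: it keeps component variances away from the degenerate
    zero boundary where the likelihood blows up.
    """
    n = len(x)
    w = np.full(K, 1.0 / K)                        # mixture weights
    mu = np.quantile(x, (np.arange(K) + 0.5) / K)  # spread-out initial means
    var = np.full(K, np.var(x))
    for _ in range(n_iter):
        # E-step: posterior probabilities of the hidden labels
        dens = w / np.sqrt(2 * np.pi * var) * np.exp(
            -0.5 * (x[:, None] - mu) ** 2 / var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = resp.sum(axis=0)
        w = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        var = np.maximum(var, var_floor)           # penalization step
    return w, mu, var
```

On a balanced two-component sample the estimated means converge to the component centres; without the floor, a component collapsing onto a single data point would drive its variance to zero.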

    MCMC joint separation and segmentation of hidden Markov fields

    In this contribution, we consider the problem of blind separation of noisy, instantaneously mixed images. The images are modeled by hidden Markov fields with unknown parameters. Given the observed images, we give a Bayesian formulation and propose to solve the resulting data-augmentation problem by implementing a Markov chain Monte Carlo (MCMC) procedure. We separate the unknown variables into two categories: 1. the parameters of interest, which are the mixing matrix, the noise covariance and the parameters of the source distributions; 2. the hidden variables, which are the unobserved sources and the unobserved pixel classification labels. In its stationary regime, the proposed algorithm provides samples drawn from the posterior distributions of all the variables involved in the problem, which allows flexibility in the choice of the cost function. We discuss and characterize some non-identifiability problems and degeneracies of the parameter likelihood, and the behavior of the MCMC algorithm in these cases. Finally, we show results for both synthetic and real data to illustrate the feasibility of the proposed solution.
    Keywords: MCMC, blind source separation, hidden Markov fields, segmentation, Bayesian approach.
    Comment: Presented at NNSP2002, IEEE workshop Neural Networks for Signal Processing XII, Sept. 2002, pp. 485--49
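The data-augmentation idea behind the sampler, i.e. alternating between the hidden variables and the parameters of interest, can be shown on a toy problem. This sketch is vastly simplified relative to the paper: no mixing matrix, no Markov field on the labels, just a two-component 1-D Gaussian mixture with known variance, and all settings are illustrative.

```python
import numpy as np

def gibbs_two_means(x, n_sweep=500, sigma=1.0, seed=1):
    """Toy data-augmentation Gibbs sampler for a two-component Gaussian
    mixture with known, equal variances.

    Each sweep alternates between (1) sampling the hidden labels given
    the current means and (2) sampling the component means given the
    labelled data (conjugate normal step, flat prior on the means).
    """
    rng = np.random.default_rng(seed)
    mu = np.array([x.min(), x.max()], dtype=float)  # ordered init avoids label switching
    trace = []
    for _ in range(n_sweep):
        # 1. sample hidden labels given the current means
        logp = -0.5 * (x[:, None] - mu) ** 2 / sigma ** 2
        p1 = 1.0 / (1.0 + np.exp(logp[:, 0] - logp[:, 1]))
        z = rng.random(len(x)) < p1              # True -> component 1
        # 2. sample each mean given its labelled data
        for k, mask in enumerate((~z, z)):
            if mask.any():
                mu[k] = rng.normal(x[mask].mean(),
                                   sigma / np.sqrt(mask.sum()))
        trace.append(mu.copy())
    return np.array(trace)
```

Averaging the trace after a burn-in period gives posterior-mean estimates of the two component means, which is exactly the "samples from the posterior of all variables" flexibility the abstract refers to.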

    Penalized maximum likelihood for multivariate Gaussian mixture

    In this paper, we first consider the estimation of the parameters of a multivariate random process distribution using a multivariate Gaussian mixture law. The labels of the mixture are allowed to follow a general probability law, which makes it possible to model a temporal structure of the process under study. We generalize the univariate Gaussian mixture case of [Ridolfi99] and show that the likelihood is unbounded: it goes to infinity when one of the covariance matrices approaches the boundary of singularity of the set of non-negative definite matrices. We characterize the parameter set of these singularities. As a solution to this degeneracy problem, we show that penalizing the likelihood with an Inverse Wishart prior on the covariance matrices results in a penalized, or maximum a posteriori, criterion which is bounded. The existence of positive definite matrices optimizing this criterion can then be guaranteed. We also show that, with a modified EM procedure or with a Bayesian sampling scheme, we can constrain the covariance matrices to belong to a particular subclass of covariance matrices. Finally, we study degeneracies in the source separation problem, where the characterization of the parameter singularity set is more complex. We show, however, that the Inverse Wishart prior on the covariance matrices eliminates the degeneracies in this case too.
    Comment: Presented at MaxEnt01. To appear in Bayesian Inference and Maximum Entropy Methods, B. Fry (Ed.), AIP Proceedings. 11 pages, 3 Postscript figures.
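The boundedness argument can be checked numerically in one dimension, where the Inverse Wishart prior reduces to an inverse-gamma prior on the variance. The setup below is a deliberately degenerate mixture of my own construction (component 0 centred exactly on a data point, with illustrative hyperparameters a and b), not the paper's multivariate criterion.

```python
import numpy as np

def mixture_criteria(x, var0, a=2.0, b=0.5):
    """Plain vs penalized log-likelihood of a degenerate two-component
    1-D mixture: component 0 is N(x[0], var0), component 1 is N(0, 1),
    equal weights.

    The inverse-gamma log-prior on the variances is the 1-D analogue of
    the Inverse Wishart penalty: its -b/var term dominates as var -> 0.
    """
    mu = np.array([x[0], 0.0])
    var = np.array([var0, 1.0])
    dens = (0.5 / np.sqrt(2 * np.pi * var)
            * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)).sum(axis=1)
    loglik = np.log(dens).sum()
    # inverse-gamma log-prior, up to an additive constant
    log_prior = (-(a + 1) * np.log(var) - b / var).sum()
    return loglik, loglik + log_prior
```

As var0 shrinks toward the singular boundary, the plain log-likelihood grows without bound, while the -b/var0 term drives the penalized criterion to minus infinity, so the MAP optimum stays strictly inside the positive-definite region.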

    Impact of Returns Time Dependency on the Estimation of Extreme Market Risk

    Value-at-Risk estimation generally uses models that assume independence. However, financial returns tend to occur in clusters and exhibit time dependency. In this paper we study the impact of neglecting return dependency in market risk assessment. The main methods that take return dependency into account when assessing market risk are declustering, the extremal index, and the combination of time series models with Extreme Value Theory. Results show an important reduction of the estimation error under the dependency assumption. For real data, the methods that take return dependency into account generally perform best.
    Keywords: Value-at-Risk, market risk, dependency, declustering, extremal index, time series-EVT combination.
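The extremal index, which quantifies how strongly extremes cluster, has a simple runs estimator that also performs the declustering. A minimal sketch, with an illustrative threshold and run length; the paper's full procedure would go on to fit an extreme-value model to the cluster maxima.

```python
import numpy as np

def extremal_index_runs(returns, u, run_length=5):
    """Runs estimator of the extremal index theta.

    Exceedances of the threshold u separated by at most run_length
    non-exceeding observations are merged into one cluster; theta is
    estimated as (number of clusters) / (number of exceedances).
    theta = 1 corresponds to independent extremes; theta < 1 means
    clustered extremes, so the effective number of independent
    exceedances available for risk estimation shrinks accordingly.
    """
    idx = np.flatnonzero(returns > u)        # positions of exceedances
    if idx.size == 0:
        return 1.0
    n_clusters = 1 + int(np.sum(np.diff(idx) > run_length))
    return n_clusters / idx.size
```

For a series with two bursts of exceedances, one of three consecutive hits and one of two, the estimator reports 2 clusters out of 5 exceedances, i.e. theta = 0.4.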

    Limits of tangent spaces to a normal surface


    Educating the King: The Art of Governance in Early Arab Literature

    This paper examines the early Arab literary tradition of the education of kings, first defining the notion of education in this type of literature. The conflict between power and education is then presented in terms of the opposition between the educator and the king. The architectural model that structures the conception of educating and ruling is then described, before the royal virtues and functions that exemplify the ideal king are presented. The paper ends with a note on the role of religion in this genre of advice to royalty.

    Separation of mixed hidden Markov model sources
