Clustering is a widely used method for extracting useful information from gene expression data, where unknown correlation structures among genes are believed to persist even after normalization. Such correlation structures pose a great challenge to conventional clustering methods, such as the Gaussian mixture (GM) model, k-means (KM), and partitioning around medoids (PAM), which are not robust against general dependence within data. Here we use the exponential power mixture model to increase the robustness of clustering against general dependence and nonnormality of the data. An expectation–conditional maximization algorithm is developed to calculate the maximum likelihood estimators (MLEs) of the unknown parameters in these mixtures. The Bayesian information criterion is then employed to determine the number of components of the mixture. The MLEs are shown to be consistent under sparse dependence. Our numerical results indicate that the proposed procedure outperforms GM, KM, and PAM when there are strong correlations or non-Gaussian components in the data.
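As a minimal illustration of the ideas summarized above (not the authors' implementation), the sketch below uses SciPy's `gennorm`, whose density is the exponential power (generalized normal) distribution: shape parameters β < 2 give heavier-than-Gaussian tails, which is what makes exponential power mixtures more robust to outlying expression values, and the BIC used for choosing the number of components is the standard k·ln(n) − 2·ln(L) formula. The helper name `bic` is our own, hypothetical.

```python
import numpy as np
from scipy.stats import gennorm

# Exponential power (generalized normal) density: for beta < 2 the tails
# are heavier than the Gaussian case, which gennorm recovers at beta = 2
# (up to a scale reparameterization).
x = np.linspace(-5.0, 5.0, 11)
heavy = gennorm.pdf(x, beta=1.0)   # Laplace-like tails
gauss = gennorm.pdf(x, beta=2.0)   # Gaussian-shaped density

# Standard BIC for a fitted mixture with n_params free parameters:
# BIC = k * ln(n) - 2 * ln(L); the model minimizing BIC is selected.
def bic(loglik, n_params, n_obs):
    return n_params * np.log(n_obs) - 2.0 * loglik
```

In practice one would fit mixtures with 1, 2, 3, … components via the ECM algorithm described in the paper, compute `bic` for each fitted model, and keep the component count that minimizes it.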