
    Latent class model with conditional dependency per modes to cluster categorical data

    We propose a parsimonious extension of the classical latent class model for clustering categorical data that relaxes the class-conditional independence assumption. Under this new mixture model, named the Conditional Modes Model, variables are grouped into conditionally independent blocks. The corresponding block distribution is a parsimonious multinomial distribution in which the few free parameters correspond to the most likely modality crossings, while the remaining probability mass is spread uniformly over the other modality crossings. The proposed model thus brings out the intra-class dependency between variables and summarizes each class by a few characteristic modality crossings. Model selection is performed via a Metropolis-within-Gibbs sampler to overcome the computational intractability of the block structure search. As this approach involves computing the integrated complete-data likelihood, we propose a new method (exact for the continuous parameters and approximate for the discrete ones) that avoids the biases of the BIC criterion pointed out by our experiments. Finally, the parameters are estimated only for the best model, via an MCMC algorithm. The characteristics of the new model are illustrated on simulated data and on two biological data sets. These results strengthen the idea that this simple model reduces the biases induced by the conditional independence assumption and yields meaningful parameters. Both applications were performed with the R package CoMode.
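    The class-conditional independence assumption that this abstract relaxes can be illustrated with a minimal EM sketch for the classical latent class model: each class has an independent categorical distribution per variable. This is a generic illustration, not the paper's CoMode implementation, and all names (`lcm_em`, `n_cat`, etc.) are hypothetical:

```python
import numpy as np

def lcm_em(X, K, n_cat, n_iter=100, seed=0):
    """EM for a classical latent class model on categorical data.
    Variables are assumed conditionally independent given the class
    (the very assumption the Conditional Modes Model relaxes).
    X: (n, d) integer array; K: number of classes; n_cat[j]: levels of var j."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)  # class proportions
    # theta[k][j]: categorical distribution of variable j within class k
    theta = [[rng.dirichlet(np.ones(n_cat[j])) for j in range(d)] for k in range(K)]
    for _ in range(n_iter):
        # E-step: per-class log-likelihood, then normalized responsibilities
        logp = np.tile(np.log(pi), (n, 1))
        for k in range(K):
            for j in range(d):
                logp[:, k] += np.log(theta[k][j][X[:, j]])
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted proportions and smoothed per-variable counts
        pi = r.mean(axis=0)
        for k in range(K):
            for j in range(d):
                counts = np.zeros(n_cat[j])
                np.add.at(counts, X[:, j], r[:, k])
                theta[k][j] = (counts + 1e-6) / (counts.sum() + 1e-6 * n_cat[j])
    return pi, theta, r
```

    Under the Conditional Modes Model, the per-variable factors `theta[k][j]` would instead be replaced by joint distributions over blocks of variables, parameterized by a few dominant modality crossings.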

    Automatic Bayesian Density Analysis

    Making sense of a dataset in an automatic and unsupervised fashion is a challenging problem in statistics and AI. Classical approaches for exploratory data analysis are usually not flexible enough to deal with the uncertainty inherent to real-world data: they are often restricted to fixed latent interaction models and homogeneous likelihoods; they are sensitive to missing, corrupt and anomalous data; moreover, their expressiveness generally comes at the price of intractable inference. As a result, supervision from statisticians is usually needed to find the right model for the data. However, since domain experts are not necessarily also experts in statistics, we propose Automatic Bayesian Density Analysis (ABDA) to make exploratory data analysis accessible at large. Specifically, ABDA allows for automatic and efficient missing value estimation, statistical data type and likelihood discovery, anomaly detection and dependency structure mining, on top of providing accurate density estimation. Extensive empirical evidence shows that ABDA is a suitable tool for automatic exploratory analysis of mixed continuous and discrete tabular data. (In proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, AAAI-19.)
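    The "statistical data type and likelihood discovery" mentioned above can be illustrated, in a much simpler form, by scoring a few candidate parametric distributions on a single feature by fitted log-likelihood. This is only a toy sketch in the spirit of that idea, not ABDA's actual Bayesian inference machinery; the function name and candidate set are hypothetical:

```python
import numpy as np
from scipy import stats

def guess_likelihood(col):
    """Toy per-feature likelihood discovery: fit a couple of candidate
    distributions and return the name of the best-scoring one.
    (Illustrative only; ABDA itself performs joint Bayesian inference.)"""
    col = np.asarray(col, dtype=float)
    candidates = {}
    # Poisson is only a candidate for non-negative integer-valued data
    if np.allclose(col, np.round(col)) and col.min() >= 0:
        lam = col.mean()
        candidates["poisson"] = stats.poisson.logpmf(col.astype(int), lam).sum()
    # Gaussian candidate, fitted by moments (guard against zero variance)
    mu, sd = col.mean(), col.std(ddof=0) or 1.0
    candidates["gaussian"] = stats.norm.logpdf(col, mu, sd).sum()
    return max(candidates, key=candidates.get)
```

    A full system would also handle missingness and score many more types (categorical, ordinal, heavy-tailed continuous) per feature, which is what makes the automatic discovery problem hard.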