Clustering of categorical variables around latent variables
In the framework of clustering, the usual aim is to cluster observations, not variables. However, the issue of variable clustering clearly arises for dimension reduction, variable selection, or in some case studies (sensory analysis, biochemistry, marketing, etc.). Clustering of variables is then studied as a way to arrange variables into homogeneous clusters, thereby organizing data into meaningful structures. Once the variables are clustered into groups of mutually similar variables, a subset of variables can be selected. Several specific methods have been developed for the clustering of numerical variables, but far fewer methods have been proposed for categorical variables. In this paper we extend the criterion used by Vigneau and Qannari (2003) in their Clustering around Latent Variables approach for numerical variables to the case of categorical data. The homogeneity criterion of a cluster of categorical variables is defined as the sum of the correlation ratios between the categorical variables and a latent variable, which in this case is a numerical variable. We show that the latent variable maximizing the homogeneity of a cluster can be obtained with Multiple Correspondence Analysis. Different algorithms for the clustering of categorical variables are proposed: an iterative relocation algorithm, and agglomerative and divisive hierarchical clustering. The proposed methodology is illustrated by a real data application to the satisfaction of pleasure craft operators.
Keywords: clustering of categorical variables, correlation ratio, iterative relocation algorithm, hierarchical clustering
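The homogeneity criterion described in this abstract can be sketched in a few lines: the correlation ratio (eta squared) of a categorical variable against a numerical latent variable, summed over the variables in a cluster. The function names are illustrative, not the authors' code, and the latent variable would in practice come from Multiple Correspondence Analysis.

```python
def correlation_ratio(categories, y):
    """Correlation ratio eta^2 between a categorical variable and a
    numerical variable y: between-group variance over total variance."""
    n = len(y)
    grand_mean = sum(y) / n
    total = sum((v - grand_mean) ** 2 for v in y)
    if total == 0.0:  # constant latent variable: no association
        return 0.0
    between = 0.0
    for level in set(categories):
        group = [v for c, v in zip(categories, y) if c == level]
        group_mean = sum(group) / len(group)
        between += len(group) * (group_mean - grand_mean) ** 2
    return between / total

def cluster_homogeneity(cluster_vars, latent):
    """Homogeneity of a cluster of categorical variables: the sum of
    their correlation ratios with a shared numerical latent variable."""
    return sum(correlation_ratio(x, latent) for x in cluster_vars)
```

A variable perfectly aligned with the latent variable contributes 1 to the homogeneity; an independent one contributes 0.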
Empirical analysis of rough set categorical clustering techniques based on rough purity and value set
Clustering a set of objects into homogeneous groups is a fundamental operation
in data mining. Recently, attention has been put on categorical data clustering,
where data objects are made up of non-numerical attributes. The implementation of
several existing categorical clustering techniques is challenging as some are unable
to handle uncertainty and others have stability issues. In the process of dealing
with categorical data and handling uncertainty, rough set theory has become
a well-established mechanism in a wide variety of applications, including databases.
The recent techniques such as Information-Theoretic Dependency Roughness (ITDR),
Maximum Dependency Attribute (MDA) and Maximum Significance Attribute (MSA)
outperformed their predecessor approaches like Bi-Clustering (BC), Total Roughness
(TR), Min-Min Roughness (MMR), and standard-deviation roughness (SDR). This
work explores the limitations and issues of ITDR, MDA and MSA techniques on
data sets where they fail to select, or face difficulty in selecting, their
best clustering attribute. Accordingly, two alternative techniques, named the Rough Purity
Approach (RPA) and the Maximum Value Attribute (MVA), are proposed. The novelty
of the proposed approaches is that RPA introduces a new uncertainty definition
based on the purity of a rough relational database, whereas MVA, unlike other rough
set theory techniques, uses domain knowledge such as the value set combined with
the number of clusters (NoC). To establish the significance and the mathematical and
theoretical basis of the proposed approaches, several propositions are presented. Moreover, the
recent rough categorical techniques MDA, MSA, and ITDR, and the classical clustering
technique K-means, are used for comparison, and the results are presented
in tabular and graphical forms. For experiments, data sets from previous
research cases, a real supply base management (SBM) data set, and the UCI repository
are used. The results reveal significant improvements by the proposed techniques for
categorical clustering in terms of purity (21%), entropy (9%), accuracy (16%), rough
accuracy (11%), iterations (99%) and time (93%).
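As background for the roughness-based measures this abstract compares, the basic rough set quantities (equivalence classes of the indiscernibility relation, lower and upper approximations, and rough accuracy) can be sketched minimally. This is generic rough set machinery, not the paper's RPA or MVA definitions, and the function names are assumed for illustration.

```python
from collections import defaultdict

def equivalence_classes(rows, attr):
    """Partition object indices by their value on one attribute:
    the indiscernibility relation of rough set theory."""
    classes = defaultdict(set)
    for i, row in enumerate(rows):
        classes[row[attr]].add(i)
    return list(classes.values())

def rough_accuracy(target, classes):
    """|lower approximation| / |upper approximation| of a target set.
    1.0 means the target is exactly describable by the attribute."""
    lower, upper = set(), set()
    for c in classes:
        if c <= target:   # class lies entirely inside the target
            lower |= c
        if c & target:    # class overlaps the target
            upper |= c
    return len(lower) / len(upper) if upper else 1.0
```

Attribute-selection techniques of this family score each candidate attribute by such approximation-based measures and pick the best one to split the data on.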
Clustering and variable selection for categorical multivariate data
This article investigates unsupervised classification techniques for
categorical multivariate data. The study employs multivariate multinomial
mixture modeling, which is a type of model particularly applicable to
multilocus genotypic data. A model selection procedure is used to
simultaneously select the number of components and the relevant variables. A
non-asymptotic oracle inequality is obtained, leading to the proposal of a new
penalized maximum likelihood criterion. The selected model proves to be
asymptotically consistent under weak assumptions on the true probability
underlying the observations. The main theoretical result obtained in this study
suggests a penalty function defined to within a multiplicative parameter. In
practice, the data-driven calibration of the penalty function is made possible
by slope heuristics. Based on simulated data, this procedure is found to
improve the performance of the selection procedure with respect to classical
criteria such as BIC and AIC. The new criterion provides an answer to the
question "Which criterion for which sample size?". Examples of real data set
applications are also provided.
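The general shape of such a penalized criterion can be illustrated schematically. The paper's penalty is defined only up to a multiplicative constant calibrated by slope heuristics, so the function name, the `kappa` parameter, and the defaults below are assumptions for illustration, not the paper's exact criterion.

```python
import math

def penalized_criterion(loglik, n_free_params, n_obs, kappa=None):
    """Schematic penalized model-selection criterion:
    crit(m) = -log-likelihood + kappa * dim(m).
    kappa = log(n)/2 gives a BIC-type penalty; kappa = 1 an AIC-type
    one. In slope heuristics, kappa is instead calibrated from data."""
    if kappa is None:
        kappa = math.log(n_obs) / 2  # BIC-type default
    return -loglik + kappa * n_free_params
```

Model selection then amounts to fitting each candidate model (here, a choice of component number and variable subset) and keeping the one with the smallest criterion value.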
Significance-Based Categorical Data Clustering
Although numerous algorithms have been proposed to solve the categorical data
clustering problem, how to assess the statistical significance of a set of
categorical clusters remains unaddressed. To fill this void, we employ the
likelihood ratio test to derive a test statistic that can serve as a
significance-based objective function in categorical data clustering.
Consequently, a new clustering algorithm is proposed in which the
significance-based objective function is optimized via a Monte Carlo search
procedure. As a by-product, we can further calculate an empirical p-value to
assess the statistical significance of a set of clusters and develop an
improved gap statistic for estimating the cluster number. Extensive
experimental studies suggest that our method is able to achieve comparable
performance to state-of-the-art categorical data clustering algorithms.
Moreover, the effectiveness of such a significance-based formulation on
statistical cluster validation and cluster number estimation is demonstrated
through comprehensive empirical results.
Comment: 36 pages, 6 figures
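One standard likelihood-ratio construction for a single categorical attribute is a G-type statistic comparing within-cluster category frequencies to the pooled frequencies. The paper's actual test statistic, its Monte Carlo optimization, and the multi-attribute case are more involved, so the sketch below is only an assumed illustration of the idea.

```python
import math
from collections import Counter

def lr_statistic(values, labels):
    """G-type likelihood ratio statistic for one categorical attribute:
    2 * sum over clusters and categories of n_obs * log(n_obs / n_exp),
    where n_exp assumes the pooled category distribution holds in every
    cluster. Larger values mean the clusters differ more."""
    n = len(values)
    pooled = Counter(values)
    stat = 0.0
    for k in set(labels):
        members = [v for v, lab in zip(values, labels) if lab == k]
        observed = Counter(members)
        for cat, obs in observed.items():
            expected = pooled[cat] * len(members) / n
            stat += 2.0 * obs * math.log(obs / expected)
    return stat
```

Under the null hypothesis of no cluster structure the statistic is near zero, which is what makes it usable both as an objective function and for significance assessment.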
Data Reduction Method for Categorical Data Clustering
Categorical data clustering constitutes an important part of
data mining; its relevance has recently drawn attention from several researchers.
As a step in data mining, however, clustering encounters the
problem of the large amount of data to be processed. This article offers a solution for categorical clustering algorithms working with high volumes of data: a method that summarizes the database. This is
done using a structure called the CM-tree. To test our method, the K-Modes and Click clustering algorithms were used with several databases.
Experiments demonstrate that the proposed summarization method improves
execution time without losing clustering quality.
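The abstract does not spell out the CM-tree structure itself, but the core summarization idea, feeding a clustering algorithm weighted distinct tuples instead of raw rows, can be illustrated minimally (function name assumed; the actual CM-tree is a richer structure than this flat counter):

```python
from collections import Counter

def summarize(records):
    """Collapse a categorical data set into (distinct tuple, count)
    pairs, so a clustering algorithm touches each distinct record once
    and weights it by its frequency instead of rescanning duplicates."""
    counts = Counter(tuple(r) for r in records)
    return list(counts.items())
```

On categorical data with many repeated rows, this kind of summary is what lets the downstream algorithm's running time depend on the number of distinct records rather than the raw database size.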