
    Non-convex Regularized Self-representation for Unsupervised Feature Selection

    5th International Conference, IScIDE 2015, Suzhou, China, June 14-16, 2015

    Feature selection aims to select a subset of features so as to decrease time complexity, reduce storage burden, and improve the generalization ability of classification or clustering. For the vast amounts of unlabeled high-dimensional data, unsupervised feature selection is effective in alleviating the curse of dimensionality and finds applications in various fields. In this paper, we propose a non-convex regularized self-representation (RSR) model in which each feature can be represented as a linear combination of the other features, and we propose to impose L2,p-norm (0 < p < 1) regularization on the self-representation coefficients for unsupervised feature selection. Compared with the conventional L2,1-norm regularization, taking p < 1 yields a much sparser solution for the self-representation coefficients and is more effective at selecting salient features. To solve the non-convex RSR model, we further propose an efficient iterative reweighted least squares (IRLS) algorithm with guaranteed convergence to a fixed point. Extensive experimental results on nine datasets show that our feature selection method is more effective with small p: it mostly outperforms the features selected at p = 1, as well as other state-of-the-art unsupervised feature selection methods, in terms of classification accuracy and clustering results.
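    The IRLS scheme outlined in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: it minimizes a smoothed surrogate of ||X - XW||_F^2 + lam * ||W||_{2,p}^p (the paper's RSR residual term is itself an L2,1 norm, simplified here to plain least squares), and the function names, the epsilon smoothing, and the ridge warm start are illustrative choices.

    import numpy as np

    def rsr_irls(X, lam=1.0, p=0.5, n_iter=50, eps=1e-6):
        """IRLS sketch for non-convex regularized self-representation.

        Row i of W holds the coefficients with which feature i helps
        reconstruct every feature, so a small row norm marks a redundant
        feature. With the diagonal matrix D fixed, the update below solves
        the weighted least-squares problem (X^T X + lam * D) W = X^T X.
        """
        n, d = X.shape
        G = X.T @ X                                  # d x d Gram matrix, reused each iteration
        W = np.linalg.solve(G + lam * np.eye(d), G)  # ridge warm start
        for _ in range(n_iter):
            # Reweighting step: d_i = (p/2) * (||w_i||^2 + eps)^{(p-2)/2};
            # eps keeps the weight finite when a row of W collapses to zero.
            row_norm_sq = np.sum(W * W, axis=1)
            dvec = (p / 2.0) * (row_norm_sq + eps) ** ((p - 2.0) / 2.0)
            W = np.linalg.solve(G + lam * np.diag(dvec), G)
        return W

    def select_features(X, k, **kw):
        """Rank features by the L2 norms of their rows in W; keep the top k."""
        W = rsr_irls(X, **kw)
        scores = np.linalg.norm(W, axis=1)
        return np.argsort(scores)[::-1][:k]

    As p shrinks toward 0, the reweighting penalizes small rows ever more aggressively, which is what drives the sparser, more selective solutions the abstract reports relative to p = 1.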

    Using Feature Weighting as a Tool for Clustering Applications

    The weighted variant of k-Means (Wk-Means), which assigns values to features based on their relevance, is a well-known approach to addressing the shortcoming of k-Means on data containing noisy and irrelevant features. This research aims, first, to explore how feature weighting can be used for feature selection; second, to investigate the performance of Minkowski weighted k-Means (MWk-Means), and its intelligent variant, on datasets defined in different p-norms; and third, to address the problem of missing values with a weighted variant of k-Means. A partial-distance approach is used to handle missing values in the weighted variants of k-Means.

    Anomalous clustering has been successfully used to detect natural clusters and initialize centroids in k-Means-type algorithms. Similarly, extensive work has been carried out on using feature weights to rescale features under Minkowski Lp metrics for p ≥ 1. In this thesis, aspects of both approaches enable feature weights to be detected from the natural clusters present in the training data, without restricting the clusters to spherical shapes. Two methods, mean-FSFW and max-FSFW, are developed as further extensions of the intelligent Minkowski Weighted k-Means (iMWk-Means), in which feature weights serve as indices for feature selection with no user-specified parameters required. The proposed feature selection methods significantly reduce the number of noisy features. They are further extended to mean-FSFWextPD and max-FSFWextPD to handle missing values and are found to be better alternatives than existing imputation methods.

    The effect of feature weighting on the clustering of datasets defined in varying p-norms is explored further in the thesis. An algorithm that translates a dataset into different p-norms is proposed, and the capability of MWk-Means to recover the true shapes of clusters defined in different p-norms is examined. To address the problem of missing feature values in the weighted variants of k-Means, different missing-value imputation methods are tested, and MWk-Means and its intelligent variant are extended to incorporate the partial-distance approach.

    All of these methods are tested on both synthetic and real-world datasets against three models of noise, where applicable: added noisy features, feature blurring, and cluster-wise feature blurring. The noise is generated from Gaussian and uniform distributions at three strengths: no noise, half noise, and full noise. Overall, the results demonstrate that feature weighting can improve feature selection, that the partial-distance approach with feature weights is effective at ignoring missing values, and that cluster retrieval in various p-norm spaces is effective.
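    As a concrete illustration of two ingredients described above, the sketch below combines a weighted Minkowski dissimilarity with the partial-distance treatment of missing values, alongside the published MWk-Means feature-weight update. It is a hedged sketch, not the thesis code: the NaN encoding of missing values, the rescaling factor, and the function names are assumptions, and the weight rule w_v = 1 / sum_u (D_v / D_u)^{1/(p-1)} is the standard MWk-Means update rather than anything specific to the FSFW extensions named above.

    import numpy as np

    def partial_minkowski(x, center, w, p):
        """Weighted Minkowski p-dissimilarity that skips missing values (NaNs).

        Sums (w_v * |x_v - c_v|)^p over observed features only, then rescales
        by n_features / n_observed so that entities with many missing entries
        are not artificially close to every center (the partial-distance idea).
        """
        observed = ~np.isnan(x)
        if not observed.any():
            return np.inf                            # nothing to compare against
        diffs = np.abs(x[observed] - center[observed]) * w[observed]
        return np.sum(diffs ** p) * (x.size / observed.sum())

    def update_weights(X, centers, labels, p, eps=1e-9):
        """MWk-Means weight update: w_v = 1 / sum_u (D_v / D_u)^{1/(p-1)},
        where D_v is feature v's within-cluster Minkowski dispersion. Noisy
        features get small weights, which is what lets the weights double as
        feature-selection indices. Missing entries contribute zero dispersion.
        """
        disp = np.full(X.shape[1], eps)              # eps avoids division by zero
        for j, c in enumerate(centers):
            diffs = np.abs(np.nan_to_num(X[labels == j] - c))
            disp += np.sum(diffs ** p, axis=0)
        exponent = 1.0 / (p - 1.0)
        return 1.0 / np.sum((disp[:, None] / disp[None, :]) ** exponent, axis=1)

    At p = 2 the update reduces to familiar normalized inverse-dispersion weighting, while the exponent 1/(p-1) diverges as p approaches 1, which is one reason the behaviour of the weights across different p-norms merits the empirical study undertaken in the thesis.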