17 research outputs found

    A New Paradigm for Development of Data Imputation Approach for Missing Value Estimation

    Missing data values are a common issue in data analysis and a challenging problem in many real-world applications such as wireless sensor networks, medical applications and the psychological domain. Learning and prediction in the presence of missing values can be treacherous in machine learning, data mining and statistical analysis, yet a missing value can also signify important information about the dataset in the mining process, so handling missing values is a challenging task for the data mining process. In this paper, we propose a new paradigm for the development of a data imputation method for missing value estimation based on centroids and nearest neighbours. Firstly, clusters are identified with the k-means algorithm and their centroids and nearest-neighbour data records are calculated. Secondly, the distances of both complete and incomplete records from the centroids are computed and the nearest data record is estimated, a step that is prone to the curse of dimensionality. Finally, the missing value is imputed from the nearest-neighbour record using the z-score statistical measure. The experimental study demonstrates the strength of the proposed paradigm for the estimation of missing data values. Tests have been run on different types of datasets to validate our approach, and the results are compared with other imputation methods such as KNNI, SVMI, WKNNI, KMI and FKNNI. The proposed approach is geared towards maximizing the utility of imputation with respect to missing value estimation.
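
    Read literally, the pipeline above can be sketched in a few lines. The function below is a minimal illustration in Python, assuming k-means over z-scored complete records, nearest-centroid lookup on the observed features only, and a single donor record per incomplete row; the paper's exact procedure may differ.

        # Minimal sketch of the centroid / nearest-neighbour imputation idea.
        # Details (z-scoring, single donor record, distance on observed features
        # only) are assumptions, not the paper's exact algorithm.
        import numpy as np
        from sklearn.cluster import KMeans

        def impute_by_centroid_neighbour(X, k=3):
            """X: 2-D float array with np.nan marking missing entries."""
            X = np.asarray(X, dtype=float)
            complete = X[~np.isnan(X).any(axis=1)]             # records with no missing values
            mu, sigma = complete.mean(axis=0), complete.std(axis=0) + 1e-9
            Z = (complete - mu) / sigma                        # z-score so distances are scale-free
            km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Z)

            X_imp = X.copy()
            for i, row in enumerate(X):
                miss = np.isnan(row)
                if not miss.any():
                    continue
                z_row = (row - mu) / sigma
                # nearest centroid, judged only on the observed features
                d_cent = np.linalg.norm(km.cluster_centers_[:, ~miss] - z_row[~miss], axis=1)
                members = Z[km.labels_ == d_cent.argmin()]
                # nearest complete record inside that cluster donates the missing values
                d_rec = np.linalg.norm(members[:, ~miss] - z_row[~miss], axis=1)
                donor = members[d_rec.argmin()] * sigma + mu   # undo the z-score
                X_imp[i, miss] = donor[miss]
            return X_imp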

    Finding Similar Documents Using Different Clustering Techniques

    Text clustering is an important application of data mining. It is concerned with grouping similar text documents together. In this paper, several models are built to cluster capstone project documents using three clustering techniques: k-means, k-means fast, and k-medoids. Our dataset is obtained from the library of the College of Computer and Information Sciences, King Saud University, Riyadh. Three similarity measures are tested: cosine similarity, Jaccard similarity, and the correlation coefficient. The quality of the obtained models is evaluated and compared. The results indicate that the best performance is achieved using k-means and k-medoids combined with cosine similarity. We observe variation in the quality of clustering depending on the evaluation measure used. In addition, as the value of k increases, the quality of the resulting clusters improves. Finally, we reveal the categories of graduation projects offered in the Information Technology department for female students.
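
    The configuration the study reports as best, k-means over TF-IDF vectors with cosine similarity, can be outlined as follows; the example documents and the value of k are placeholders.

        # Illustrative sketch: k-means on L2-normalised TF-IDF vectors, which makes
        # Euclidean k-means behave like cosine-similarity (spherical) clustering.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.preprocessing import normalize
        from sklearn.cluster import KMeans

        docs = ["capstone project on image retrieval",
                "web portal for student services",
                "sensor network simulation study",
                "mobile app for library search"]

        X = normalize(TfidfVectorizer(stop_words="english").fit_transform(docs))
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        print(labels)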

    Clustering Software Components for Program Restructuring and Component Reuse Using Hybrid XNOR Similarity Function

    Component-based software development has gained a lot of practical importance in the field of software engineering, both among academic researchers and from an industry perspective. Finding components for efficient software reuse is one of the important problems addressed by researchers. Clustering reduces the search space of components by grouping similar entities together, thus reducing time complexity since it shortens the search time for component retrieval. In this research, we present a generalized approach for clustering a given set of documents or software components by defining a similarity function, called the hybrid XNOR function, that measures the degree of similarity between two document sets or software components. A similarity matrix is obtained for a given set of documents or components by applying the hybrid XNOR function. We define and design a clustering algorithm that takes the similarity matrix as input and produces a set of clusters as output. The output is a set of highly cohesive pattern groups or components.
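
    The abstract does not define the hybrid XNOR function precisely, so the sketch below is only one plausible reading: on binary term-presence vectors, two items score high when they agree position by position (term present in both or absent in both); the "hybrid" weighting itself is not reproduced.

        # Guessed XNOR-style similarity on binary term-presence vectors.
        import numpy as np

        def xnor_similarity(a, b):
            a, b = np.asarray(a, bool), np.asarray(b, bool)
            return np.mean(a == b)            # fraction of agreeing positions

        def similarity_matrix(B):
            """B: n_items x n_terms binary incidence matrix."""
            n = len(B)
            S = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    S[i, j] = xnor_similarity(B[i], B[j])
            return S

        B = np.array([[1, 0, 1, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 0]])
        print(similarity_matrix(B))           # input to the clustering step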

    Document Grouping by Using Meronyms and Type-2 Fuzzy Association Rule Mining

    The number of textual documents in the digital world, especially on the World Wide Web, is growing incredibly fast. This causes an accumulation of information, so efficient organization is needed to manage textual documents. One way to accurately group documents is to use fuzzy association rules. The quality of document clustering is affected by the key-term extraction phase and by the type of fuzzy logic system (FLS) used for clustering. Using meronyms in the extraction of key terms helps produce meaningful cluster labels, and the ambiguities and uncertainties that occur in the rules of type-1 fuzzy logic systems can be overcome by using type-2 fuzzy sets. This study proposes a method of key-term extraction based on meronyms with cluster initialization using fuzzy association rule mining for document clustering. The method consists of four stages: preprocessing of the documents, extraction of key terms with meronyms, extraction of candidate clusters, and cluster tree construction. The method was tested with three different datasets: classic, Reuters, and 20 Newsgroups. Testing was done by comparing the overall F-measure of the method with and without meronyms. Based on the testing, the method with meronyms in the extraction of keywords produced an overall F-measure of 0.5753 for the classic dataset, 0.3984 for the Reuters dataset, and 0.6285 for the 20 Newsgroups dataset.
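
    The meronym lookup that feeds the key-term extraction can be illustrated with NLTK's WordNet interface; this shows only the lexical-relation step, not the fuzzy association rule mining or the type-2 fuzzy sets.

        # Collect WordNet meronyms (part / member / substance) for a noun.
        # Requires: nltk.download('wordnet')
        from nltk.corpus import wordnet as wn

        def meronyms(word):
            terms = set()
            for syn in wn.synsets(word, pos=wn.NOUN):
                for rel in (syn.part_meronyms, syn.member_meronyms, syn.substance_meronyms):
                    terms.update(l.name() for m in rel() for l in m.lemmas())
            return terms

        print(meronyms("car"))    # e.g. {'air_bag', 'accelerator', 'car_door', ...}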

    Document clustering based on firefly algorithm

    Document clustering is widely used in information retrieval; however, existing clustering techniques suffer from the local optima problem in determining the number of clusters k. Various efforts have been made to address this drawback, including the use of swarm-based algorithms such as Particle Swarm Optimization and Ant Colony Optimization. This study explores the adaptation of another swarm algorithm, the Firefly Algorithm (FA), to text clustering. We present two variants of FA: the Weight-based Firefly Algorithm (WFA) and the Weight-based Firefly Algorithm II (WFAII). The difference between the two is that WFAII includes a more restrictive condition for determining the members of a cluster. The proposed FA methods are then evaluated using the 20 Newsgroups dataset. Experimental results on the quality of clustering produced by the two FA variants are presented and compared against those of Particle Swarm Optimization, k-means, and the hybrid of FA and k-means. The results demonstrate that WFAII outperformed WFA, PSO, k-means and FA-Kmeans, indicating that better clustering can be obtained once the exploitation of a search solution is improved.
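
    For orientation, the basic firefly movement that such weight-based variants build on is sketched below: each firefly encodes a candidate solution (e.g. cluster centroids) and moves toward brighter, better-scoring fireflies. Parameter values and the specific weighting of WFA/WFAII are not taken from the paper.

        # One iteration of the standard firefly movement rule.
        import numpy as np

        def firefly_step(pop, fitness, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
            """pop: (n_fireflies, dim) float positions; fitness: higher = brighter."""
            rng = rng or np.random.default_rng(0)
            pop = np.asarray(pop, float)
            new = pop.copy()
            for i in range(len(pop)):
                for j in range(len(pop)):
                    if fitness[j] > fitness[i]:        # j is brighter, so i moves toward j
                        r2 = np.sum((pop[i] - pop[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        new[i] += beta * (pop[j] - pop[i]) \
                                  + alpha * (rng.random(pop.shape[1]) - 0.5)
            return new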

    Semisupervised Community Detection by Voltage Drops

    Many applications show that semisupervised community detection is an important topic that has attracted considerable attention in the study of complex networks. In this paper, a simple and fast semisupervised community detection algorithm is proposed based on the notion of voltage drops and discrete potential theory. Label propagation through discrete potential transmission is accomplished by using voltage drops. The complexity of the proposal is O(|V| + |E|) for a sparse network with |V| vertices and |E| edges. The voltage value obtained for a vertex clearly reflects the relationship between the vertex and its community. Experimental results on four real networks and three benchmarks indicate that the proposed algorithm is effective and flexible. Furthermore, the algorithm is easily applied to graph-based machine learning methods.
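
    The voltage-drop idea can be sketched for the two-community case: pin a few labelled vertices to potentials 1 and 0, solve the discrete Laplace equation for the remaining vertices, and split on the resulting voltages. The dense solve below ignores the paper's sparse O(|V|+|E|) propagation and its multi-community handling.

        # Two-community split by discrete potentials (voltage drops).
        import numpy as np
        import networkx as nx

        def voltage_partition(G, sources, sinks):
            nodes = list(G.nodes)
            idx = {v: i for i, v in enumerate(nodes)}
            L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
            v = np.zeros(len(nodes))
            fixed = {idx[s]: 1.0 for s in sources} | {idx[t]: 0.0 for t in sinks}
            free = [i for i in range(len(nodes)) if i not in fixed]
            for i, val in fixed.items():
                v[i] = val
            # solve L_ff v_f = -L_fb v_b  (Dirichlet boundary at the labelled vertices)
            rhs = -L[np.ix_(free, list(fixed))] @ np.array(list(fixed.values()))
            v[free] = np.linalg.solve(L[np.ix_(free, free)], rhs)
            return {n: bool(v[idx[n]] >= 0.5) for n in nodes}

        G = nx.karate_club_graph()
        print(voltage_partition(G, sources=[0], sinks=[33]))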

    Adaptive firefly algorithm for hierarchical text clustering

    Text clustering is essentially used by search engines to increase recall and precision in information retrieval. As search engines operate on Internet content that is constantly being updated, there is a need for a clustering algorithm that offers automatic grouping of items without prior knowledge of the collection. Existing clustering methods have problems in determining the optimal number of clusters and in producing compact clusters. In this research, an adaptive hierarchical text clustering algorithm based on the Firefly Algorithm is proposed. The proposed Adaptive Firefly Algorithm (AFA) consists of three components: document clustering, cluster refining, and cluster merging. The first component introduces the Weight-based Firefly Algorithm (WFA), which automatically identifies initial centers and their clusters for any given text collection. To refine the obtained clusters, a second algorithm, termed the Weight-based Firefly Algorithm with Relocate (WFAR), is proposed. Such an approach allows the relocation of a pre-assigned document into a newly created cluster. The third component, the Weight-based Firefly Algorithm with Relocate and Merging (WFARM), aims to reduce the number of produced clusters by merging non-pure clusters into pure ones. Experiments were conducted to compare the proposed algorithms against seven existing methods. The percentage of success in obtaining the optimal number of clusters with AFA is 100%, with purity and F-measure of 83%, higher than the benchmarked methods. As for the entropy measure, AFA produced the lowest value (0.78) when compared to existing methods. The results indicate that the Adaptive Firefly Algorithm can produce compact clusters. This research contributes to the text mining domain, as hierarchical text clustering facilitates the indexing of documents and information retrieval processes.
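
    The purity and entropy figures quoted above are standard external clustering measures; a plain formulation (not code from the paper) is shown below.

        # Purity and (weighted) entropy of a clustering against gold labels.
        import numpy as np
        from collections import Counter

        def purity_entropy(clusters, labels):
            """clusters, labels: parallel lists of cluster ids and true class ids."""
            n = len(labels)
            pur, ent = 0.0, 0.0
            for c in set(clusters):
                members = [labels[i] for i in range(n) if clusters[i] == c]
                counts = np.array(list(Counter(members).values()), float)
                p = counts / counts.sum()
                pur += counts.max() / n                              # majority-class share
                ent += (len(members) / n) * -(p * np.log2(p)).sum()  # size-weighted entropy
            return pur, ent

        print(purity_entropy([0, 0, 1, 1, 1], ["a", "a", "a", "b", "b"]))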

    Optimasi Suffix Tree Clustering dengan Wordnet dan Named Entity Recognition untuk Pengelompokan Dokumen

    The rapidly increasing number of text documents in the digital world, especially on the World Wide Web, leads to an accumulation of information and causes difficulties in the information retrieval process. Document clustering is an important field of text mining and can be used to make text management and text summarization more efficient. However, several problems arise in clustering text documents, especially news documents, such as ambiguity in content, overlapping clusters, and the unique structure of news documents. This research proposes a new method for document clustering: optimization of Suffix Tree Clustering (STC) with WordNet and Named Entity Recognition (NER). The method has several stages. The first stage is document preprocessing with named entity extraction and synonym detection based on WordNet. The second stage is term weighting with tfidf and nerfidf. The final stage is document clustering using Suffix Tree Clustering. Based on the testing, the method achieved an average precision of 79.83%, recall of 77.25%, and F-measure of 78.30%. Keywords: Document Clustering, Named Entity Recognition, Suffix Tree Clustering, WordNet
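
    The preprocessing stage, named entity extraction alongside ordinary term weighting, can be outlined as below; spaCy stands in here for whichever NER component the authors used, and the nerfidf weighting itself is not reproduced.

        # Extract named entities and ordinary tf-idf terms as inputs to clustering.
        # Requires: python -m spacy download en_core_web_sm
        import spacy
        from sklearn.feature_extraction.text import TfidfVectorizer

        nlp = spacy.load("en_core_web_sm")
        docs = ["Google opened a new office in Jakarta last week.",
                "The city of Jakarta hosted a technology conference."]

        entities = [[ent.text for ent in nlp(d).ents] for d in docs]   # terms to boost
        tfidf = TfidfVectorizer().fit(docs)
        print(entities)
        print(tfidf.get_feature_names_out())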