
    Fuzzy clustering with balance constraint

    We study fuzzy clustering algorithms in which an equality (balance) constraint is added to the existing model. Balance requirements arise in various areas, such as districting (zonal or political) and industry (e.g., distribution companies). We focus on the wireless sensor network problem. Existing protocols pay little attention to the cluster-head selection step and to equalizing the workload of the clusters. These two issues have a significant effect on the energy consumption of a network, where increasing network lifetime is critical. A solution approach based on Lagrangean relaxation is developed, and the proposed algorithm is compared with the popular LEACH protocol. Results show that, in the same simulated environment, our algorithm performs better.
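    The balance idea above can be sketched as a soft constraint handled by Lagrangian relaxation: one multiplier per cluster penalizes clusters whose total membership drifts from the balanced target n/k. All names and parameter values below (`balanced_fcm`, `lam_step`, the subgradient step) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def balanced_fcm(X, k, m=2.0, lam_step=0.05, iters=50, seed=0):
    """Sketch of fuzzy c-means with a balance constraint relaxed into
    the objective via one Lagrange multiplier per cluster."""
    rng = np.random.default_rng(seed)
    n = len(X)
    centers = X[rng.choice(n, k, replace=False)]
    lam = np.zeros(k)                  # one multiplier per cluster
    target = n / k                     # balanced workload target
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        # relaxed cost adds the multipliers; clip to keep it positive
        cost = np.maximum(d2 + lam[None, :], 1e-9)
        u = cost ** (-1.0 / (m - 1))
        u /= u.sum(axis=1, keepdims=True)   # standard FCM membership update
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        # subgradient step: raise lam for overfull clusters, lower for underfull
        lam += lam_step * (u.sum(axis=0) - target)
    return u, centers
```

The multiplier update is the usual subgradient ascent on the dual: a cluster attracting more than its share of membership becomes more "expensive", steering points toward underfull clusters.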

    A hybrid interval type-2 semi-supervised possibilistic fuzzy c-means clustering and particle swarm optimization for satellite image analysis

    Although satellite images can provide more information about the earth's surface in a relatively short time and over a large scale, they are affected by observation conditions and the accuracy of the image-acquisition equipment. The objects in the images are often unclear and uncertain, especially at their borders. Type-1 fuzzy-set-based fuzzy clustering allows each data pattern to belong to many different clusters through membership function (MF) values, which handles data patterns with unclear and uncertain boundaries well. However, this technique is quite sensitive to noise and outliers and is limited in handling uncertainties. To overcome these disadvantages, we propose a hybrid method combining interval type-2 semi-supervised possibilistic fuzzy c-means clustering (IT2SPFCM) and Particle Swarm Optimization (PSO) to form the proposed IT2SPFCM-PSO. We experimented on several satellite images to demonstrate the effectiveness of the proposed method. Experimental results show that the IT2SPFCM-PSO algorithm achieves accuracy from 98.8% to 99.39%, higher than that of competing algorithms including SFCM, SMKFCM, SIIT2FCM, PFCM, SPFCM-W, SPFCM-SS, and IT2SPFCM. Analysis of the results by the indices PC-I, CE-I, D-I, XB-I, t-I, and MSE also shows that the proposed method gives better results in most experiments.
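    The interval type-2 ingredient can be illustrated in miniature: two fuzzifiers m1 < m2 give each point a lower and an upper membership in every cluster, forming the footprint of uncertainty that makes the clustering less sensitive to noisy boundaries. The function name and fuzzifier values below are assumptions for illustration; the paper's IT2SPFCM additionally uses semi-supervision and possibilistic typicalities not shown here.

```python
import numpy as np

def it2_memberships(d2, m1=1.5, m2=2.5):
    """Sketch of the interval type-2 idea: run the FCM membership
    formula with two fuzzifiers and take pointwise min/max to get
    lower and upper membership bounds.  d2 holds squared distances,
    one row per point, one column per cluster."""
    def fcm_u(d2, m):
        u = d2 ** (-1.0 / (m - 1))
        return u / u.sum(axis=1, keepdims=True)
    ua, ub = fcm_u(d2, m1), fcm_u(d2, m2)
    return np.minimum(ua, ub), np.maximum(ua, ub)  # lower, upper
```

A type-reduction step (not shown) would collapse each [lower, upper] interval back to a single membership before defuzzification.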

    Robust techniques and applications in fuzzy clustering

    This dissertation addresses issues central to fuzzy classification. The issue of sensitivity to noise and outliers in least-squares-minimization-based clustering techniques, such as Fuzzy c-Means (FCM) and its variants, is addressed. In this work, two novel and robust clustering schemes are presented and analyzed in detail. They approach the problem of robustness from different perspectives. The first scheme scales down the FCM memberships of data points based on the distance of the points from the cluster centers. Scaling applied to outliers reduces their membership in true clusters. This scheme, known as Mega-clustering, defines a conceptual mega-cluster, which is a collective cluster of all data points but views outliers and good points differently (as opposed to the concept of Dave's Noise cluster). The scheme is presented and validated with experiments, and similarities with Noise Clustering (NC) are also presented. The other scheme is based on the feasible solution algorithm that implements the Least Trimmed Squares (LTS) estimator. The LTS estimator is known to be resistant to noise and has a high breakdown point. The feasible solution approach also guarantees convergence of the solution set to a global optimum. Experiments show the practicability of the proposed schemes in terms of computational requirements and the attractiveness of their simple frameworks. The issue of validation of clustering results has often received less attention than clustering itself. Fuzzy and non-fuzzy cluster validation schemes are reviewed, and a novel methodology for cluster validity using a test of the random position hypothesis is developed. The random position hypothesis is tested against an alternative clustered hypothesis on every cluster produced by the partitioning algorithm. The Hopkins statistic is used as a basis to accept or reject the random position hypothesis, which is also the null hypothesis in this case.
    The Hopkins statistic is known to be a fair estimator of randomness in a data set. The concept is borrowed from the clustering-tendency domain, and its applicability to validating clusters is shown here. A unique feature selection procedure for use with large, high-dimensional molecular conformational datasets is also developed. The intelligent feature extraction scheme not only helps in reducing the dimensionality of the feature space but also helps eliminate contentious issues such as those associated with labeling symmetric atoms in the molecule. The feature vector is converted to a proximity matrix and used as input to the fuzzy relational clustering (FRC) algorithm, with very promising results. Results are also validated using several cluster validity measures from the literature. Another application of fuzzy clustering considered here is image segmentation. Image analysis on extremely noisy images is carried out as a precursor to the development of an automated real-time condition-state monitoring system for underground pipelines. A two-stage FCM with intelligent feature selection is implemented as the segmentation procedure, and results on a test image are presented. A conceptual framework for automated condition-state assessment is also developed.
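    The Hopkins statistic used above admits a compact sketch: compare nearest-neighbor distances from uniform probe points to the data against nearest-neighbor distances within the data itself. Values near 0.5 suggest random (uniform) positions; values near 1 suggest clustering. The sampling fraction and function name are illustrative choices, not the dissertation's exact procedure.

```python
import numpy as np

def hopkins(X, m=None, seed=0):
    """Minimal Hopkins statistic: H = sum(u) / (sum(u) + sum(w)),
    where u are probe-to-data and w are data-to-data NN distances."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    m = m or max(1, n // 10)                 # sample ~10% of the points
    lo, hi = X.min(axis=0), X.max(axis=0)
    U = rng.uniform(lo, hi, size=(m, d))     # uniform probes in the bounding box
    idx = rng.choice(n, m, replace=False)    # sampled data points

    def nn(a, b, exclude_self=False):
        dist = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
        if exclude_self:
            dist[dist == 0] = np.inf         # ignore a point's zero self-distance
        return dist.min(axis=1)

    u = nn(U, X)
    w = nn(X[idx], X, exclude_self=True)
    return u.sum() / (u.sum() + w.sum())
```

In clustered data, probes tend to land in empty space (large u) while data points have close neighbors (small w), driving H toward 1.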

    Proof that the Quantum Measure and the Possibility Measure Are Generalizations of Measure that Do Not Generalize Each Other

    Since Planck and Zadeh introduced quantum theory and possibility theory, respectively, both theories have been studied continuously to the present day. From the mathematical side, the notions in these two theories that are directly relevant and foundational to many theoretical and applied studies are the quantum measure and the possibility measure. Although in much of the literature the quantum measure and the possibility measure are treated as generalizations of measure, this is not proved from the definitions, so the substance of the generalization is not directly apparent. Moreover, the literature also contains no discussion of the relationship between quantum measures and possibility measures. Therefore, in this study we prove from the definitions both that the quantum measure and the possibility measure are generalizations of measure, and that neither generalizes the other, so that measures form the intersection of the two.
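    For reference, the standard defining conditions at play (the abstract itself does not state them) are, for pairwise disjoint sets:

```latex
% Classical (additive) measure, for disjoint A, B:
\mu(A \cup B) = \mu(A) + \mu(B)
% Possibility measure (Zadeh):
\operatorname{Pos}(A \cup B) = \max\bigl(\operatorname{Pos}(A), \operatorname{Pos}(B)\bigr)
% Quantum measure (grade-2 additivity), for mutually disjoint A, B, C:
\mu(A \cup B \cup C) = \mu(A \cup B) + \mu(A \cup C) + \mu(B \cup C)
                       - \mu(A) - \mu(B) - \mu(C)
```

An additive measure satisfies grade-2 additivity (expand each pairwise term), which is one half of the "intersection" claim the paper proves from the definitions.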

    Reconstructability Theory for General Systems and Its Application to Automated Rule Learning.

    The two problems in reconstructability analysis (RA) are referred to as the reconstructability problem and the identification problem. The former relates to the process of reconstructing a given system, under a given criterion, from knowledge of its subsystems and, during this process, identifying those subsystems that are important in the reconstruction. The latter allows the identification of an unknown system from knowledge of its subsystems. The advent of RA has intensified research efforts in systems studies. The objective of this research is to study the process of system reconstruction for general systems and to apply it in the context of automated knowledge acquisition from databases. First, we describe basic concepts in reconstructability theory and machine learning. We then modify existing results in reconstructability theory for probabilistic and selection systems in order to generate better algorithms for determining the unbiased reconstruction and the reconstruction families, in the wake of new developments such as k-systems and the use of independent information. Further, we extend the RA methodology to possibilistic systems using only partial information. An algorithm is proposed to compute the unbiased reconstruction, and the reconstruction families are identified as the solution set of max-min fuzzy relation equations. Furthermore, we define a new measure of the cognitive content of a rule, referred to as the K-measure. Based on the K-measure, we introduce a new approach for automated knowledge acquisition from databases. Based on RA, the reconstructability approach to generalized rule induction from databases should work for most data covered by the framework of RA and k-systems. In particular, this approach is appropriate for expert-system-like domains where the data is intrinsically nominal. Finally, we summarize our results and discuss the potential for further research.
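    The max-min fuzzy relation equations mentioned above are built on max-min composition, which is simple enough to show directly. The function name is an illustrative assumption; the composition rule itself is the standard one.

```python
import numpy as np

def maxmin_compose(R, S):
    """Max-min composition of fuzzy relations:
    (R o S)[i, k] = max over j of min(R[i, j], S[j, k]).
    R has shape (a, b), S has shape (b, c); result has shape (a, c)."""
    # broadcast to (a, b, c), take elementwise min, then max over the shared axis
    return np.maximum.reduce(np.minimum(R[:, :, None], S[None, :, :]), axis=1)
```

It is ordinary matrix multiplication with (+, *) replaced by (max, min), which is why reconstruction families can be characterized as solution sets of such equations.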

    Building environmentally-aware classifiers on streaming data

    The three biggest challenges currently faced in machine learning, in our estimation, are the staggering quantity of data we wish to analyze, the incredibly small proportion of these data that are labeled, and the apparent lack of interest in creating algorithms that continually learn during inference. An unsupervised streaming approach addresses all three of these challenges, storing only a finite amount of information to model an unbounded dataset and adapting to new structures as they arise. Specifically, we are motivated by automated target recognition (ATR) in synthetic aperture sonar (SAS) imagery, the problem of finding explosive hazards on the seafloor. It has been shown that the performance of ATR can be improved by, instead of using a single classifier for the entire ATR task, creating several specialized classifiers and fusing their predictions [44]. The prevailing opinion seems to be that one should have different classifiers for varying complexity of seafloor [74], but we hypothesize that fusing classifiers based on sea-bottom type will yield higher accuracy and better lend itself to making explainable classification decisions. The first step in building such a system is developing a robust framework for online texture classification, the topic of this research. In this work, we improve upon StreamSoNG [85], an existing algorithm for streaming data analysis (SDA) that models each structure in the data with a neural gas [69] and detects new structures by clustering an outlier list with the possibilistic 1-means (P1M) algorithm [62]. We call the modified algorithm StreamSoNGv2, denoting that it is the second version, or verse, if you will, of StreamSoNG. Notable improvements include detection of arbitrarily shaped clusters by using DBSCAN [37] instead of P1M, using growing neural gas [43] to model each structure with an adaptive number of prototypes, and an automated approach to estimate the n parameters.
    Furthermore, we propose a novel algorithm called single-pass possibilistic clustering (SPC) for solving the same task. SPC maintains a fixed number of structures to model the data stream. These structures can be updated and merged based only on their "footprints", that is, summary statistics that contain all of the information from the stream needed by the algorithm without directly maintaining the entire stream. SPC is built on a damped-window framework, allowing the user to balance the weight between old and new points in the stream with a decay-factor parameter. We evaluate the two algorithms under consideration against four state-of-the-art SDA algorithms from the literature on several synthetic datasets and two texture datasets: one real (KTH-TIPS2b [68]) and one simulated. The simulated dataset, a significant research effort in itself, is of our own construction in Unreal Engine and contains on the order of 6,000 images at 720 × 720 resolution from six different texture types. Our hope is that the methodology developed here will yield effective texture classifiers for use not only in underwater scene understanding, but also in improving the performance of ATR algorithms by providing a context in which the potential target is embedded. Includes bibliographical references.
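    The damped-window "footprint" idea can be sketched as a small summary structure: a decayed count, linear sum, and squared sum per dimension, updated in one pass and mergeable by addition. The class and field names below are illustrative assumptions, not SPC's exact design.

```python
from dataclasses import dataclass

@dataclass
class Footprint:
    """Summary statistics standing in for a structure in the stream,
    so the stream itself never needs to be stored."""
    w: float = 0.0       # decayed point count
    ls: list = None      # decayed linear sum per dimension
    ss: list = None      # decayed sum of squares per dimension

    def add(self, x, decay=0.99):
        if self.ls is None:
            self.ls = [0.0] * len(x)
            self.ss = [0.0] * len(x)
        # old evidence fades by the decay factor; the new point adds weight 1
        self.w = self.w * decay + 1.0
        for j, v in enumerate(x):
            self.ls[j] = self.ls[j] * decay + v
            self.ss[j] = self.ss[j] * decay + v * v

    def merge(self, other):
        # two footprints combine by simply adding their summaries
        self.w += other.w
        self.ls = [a + b for a, b in zip(self.ls, other.ls)]
        self.ss = [a + b for a, b in zip(self.ss, other.ss)]

    def centroid(self):
        return [s / self.w for s in self.ls]
```

A decay near 1 weights old and new points almost equally; a smaller decay makes the footprint forget old structure quickly, which is the trade-off the decay-factor parameter exposes.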