Certainty of outlier and boundary points processing in data mining
Data certainty is one of the issues in real-world applications, caused by
unwanted noise in the data. Recently, more attention has been paid to
overcoming this problem. We propose a new method based on neutrosophic set
(NS) theory to detect boundary and outlier points, which are challenging for
clustering methods. First, a certainty value is assigned to each data point
based on the proposed definition in NS. Then, a certainty set is built for the
proposed cost function in the NS domain by considering a set of main clusters
and a noise cluster. After that, the proposed cost function is minimized by
the gradient descent method. Data points are clustered based on their
membership degrees: outlier points are assigned to the noise cluster, while
boundary points are assigned to main clusters with nearly equal membership
degrees. To show the
effectiveness of the proposed method, two types of datasets are used: 3
scatter-type datasets and 4 UCI datasets. The results demonstrate that the
proposed cost function handles boundary and outlier points with more accurate
membership degrees and outperforms existing state-of-the-art clustering
methods.
Comment: Conference paper, 6 pages
A survey of kernel and spectral methods for clustering
Clustering algorithms are a useful tool to explore data structures and have been employed in many disciplines. The focus of this paper is the partitioning clustering problem, with a special interest in two recent approaches: kernel and spectral methods. The aim of this paper is to present a survey of kernel and spectral clustering methods, two approaches able to produce nonlinear separating hypersurfaces between clusters. The presented kernel clustering methods are kernel versions of many classical clustering algorithms, e.g., K-means, SOM and neural gas. Spectral clustering arises from concepts in spectral graph theory, and the clustering problem is configured as a graph cut problem where an appropriate objective function has to be optimized. Since these two seemingly different approaches have been proven to share the same mathematical foundation, an explicit proof that the two paradigms have the same objective is reported. Besides, fuzzy kernel clustering methods are presented as extensions of the kernel K-means clustering algorithm. (C) 2007 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
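The graph-cut view of spectral clustering described above can be sketched in a few lines. This is a generic illustration, not any of the survey's exact algorithms: an RBF affinity matrix, the unnormalized graph Laplacian L = D - W, and a two-way split by the sign of the Fiedler vector. The bandwidth `sigma` and the toy data are assumptions.

```python
import numpy as np

def spectral_bipartition(X, sigma=0.5):
    """Spectral clustering for k = 2: build an RBF affinity graph, form
    the unnormalized Laplacian L = D - W, and split by the sign of the
    Fiedler vector (eigenvector of the second-smallest eigenvalue).
    A cut-based sketch, not a specific published algorithm."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))       # RBF affinities
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W                # graph Laplacian
    vals, vecs = np.linalg.eigh(L)           # eigh: symmetric, sorted ascending
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)

# two concentric circles: not linearly separable, but the affinity graph
# has almost no edges between the rings, so the cheapest cut separates them
t = np.linspace(0, 2 * np.pi, 30, endpoint=False)
inner = np.c_[np.cos(t), np.sin(t)]
outer = 3 * np.c_[np.cos(t), np.sin(t)]
labels = spectral_bipartition(np.vstack([inner, outer]))
print(labels[0] != labels[30])   # True: the two circles get different labels
```

The nonlinear separating hypersurface mentioned in the abstract comes for free here: the partition is found in the graph's spectral embedding, not in the original coordinates.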
Performance characterization of clustering algorithms for colour image segmentation
This paper details the implementation of three traditional clustering techniques (K-Means clustering, Fuzzy C-Means clustering and Adaptive K-Means clustering) that are applied to extract the colour information used in the image segmentation process. The aim of this paper is to evaluate the performance of the analysed colour clustering techniques for the extraction of optimal features from colour spaces, and to investigate which method returns the most consistent results when applied to a large suite of mosaic images.
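As a rough illustration of the colour-clustering step, the sketch below runs a generic K-means on per-pixel RGB vectors; it is not the paper's implementation, and the toy image, initialization, and parameters are assumptions.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain K-means; here X holds one RGB vector per pixel."""
    # deterministic init: k evenly spaced pixels (illustrative choice)
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # assign each pixel to its nearest colour centre
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

# toy 4x4 "image": left half red-ish, right half blue-ish, plus noise
img = np.zeros((4, 4, 3))
img[:, :2] = [0.9, 0.1, 0.1]
img[:, 2:] = [0.1, 0.1, 0.9]
img += np.random.default_rng(0).normal(0, 0.02, img.shape)
seg = kmeans(img.reshape(-1, 3), k=2).reshape(4, 4)
# seg now holds one cluster label per pixel: the segmentation map
```

Reshaping the H x W x 3 image to an (H*W) x 3 matrix, clustering, and reshaping the labels back is the basic pattern common to all three techniques the paper compares.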
Gray Image extraction using Fuzzy Logic
Fuzzy systems provide a fundamental methodology to represent and process
uncertainty and imprecision in linguistic information. Fuzzy systems that use
fuzzy rules to represent the domain knowledge of the problem are known as
Fuzzy Rule Base Systems (FRBS). Image segmentation, and the subsequent
extraction of objects from a noise-affected background with the help of
various soft computing methods, is relatively new and quite popular for
various reasons. These methods include various Artificial Neural Network (ANN)
models (primarily supervised in nature), Genetic Algorithm (GA) based
techniques, intensity-histogram based methods, etc. Providing an extraction
solution that works in unsupervised mode is an even more interesting problem,
and the literature suggests that effort in this respect is still quite
rudimentary. In the present article, we propose a novel fuzzy-rule-guided
technique that functions without any external intervention during execution.
Experimental results suggest that this approach is more efficient than other
techniques extensively addressed in the literature. To justify the superior
performance of our proposed technique with respect to its competitors, we use
effective metrics such as Mean Squared Error (MSE), Mean Absolute Error (MAE)
and Peak Signal to Noise Ratio (PSNR).
Comment: 8 pages, 5 figures. Keywords: Fuzzy Rule Base, Image Extraction,
Fuzzy Inference System (FIS), Membership Functions, Membership Values, Image
Coding and Processing, Soft Computing, Computer Vision. Accepted and published
in IEEE. arXiv admin note: text overlap with arXiv:1206.363
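The evaluation metrics named in this abstract have standard definitions, sketched below; these are the generic formulas, not values or settings from the paper's experiments.

```python
import numpy as np

def mse(a, b):
    """Mean Squared Error between two images of the same shape."""
    return float(np.mean((a - b) ** 2))

def mae(a, b):
    """Mean Absolute Error between two images of the same shape."""
    return float(np.mean(np.abs(a - b)))

def psnr(a, b, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(peak^2 / MSE)."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10 * np.log10(peak ** 2 / m)

ref = np.full((8, 8), 100.0)       # toy 8-bit reference image
out = ref.copy()
out[0, 0] = 110.0                  # one pixel off by 10
print(mse(ref, out))               # 100/64 = 1.5625
print(mae(ref, out))               # 10/64  = 0.15625
print(round(psnr(ref, out), 2))    # 46.19
```

Lower MSE/MAE and higher PSNR indicate an extracted image closer to the reference, which is how such metrics rank competing extraction techniques.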
Semi-supervised cross-entropy clustering with information bottleneck constraint
In this paper, we propose a semi-supervised clustering method, CEC-IB, that
models data with a set of Gaussian distributions and that retrieves clusters
based on a partial labeling provided by the user (partition-level side
information). By combining the ideas from cross-entropy clustering (CEC) with
those from the information bottleneck method (IB), our method trades off
three conflicting goals: the accuracy with which the data set is modeled, the
simplicity of the model, and the consistency of the clustering with side
information. Experiments demonstrate that CEC-IB has a performance comparable
to Gaussian mixture models (GMM) in a classical semi-supervised scenario, but
is faster, more robust to noisy labels, automatically determines the optimal
number of clusters, and performs well when not all classes are present in the
side information. Moreover, in contrast to other semi-supervised models, it can
be successfully applied in discovering natural subgroups if the partition-level
side information is derived from the top levels of a hierarchical clustering
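The three-goal trade-off can be made concrete with a toy objective. The sketch below is NOT the CEC-IB cost function: it is an illustrative combination of a Gaussian fit term, a cluster-count penalty, and a pairwise-consistency score on partial labels, with the weights `alpha` and `beta` and all data assumed.

```python
import numpy as np

def gaussian_nll(Xc):
    """Average negative log-likelihood of a cluster's points under a
    Gaussian fitted to that cluster (diagonal covariance, for simplicity)."""
    mu, var = Xc.mean(0), Xc.var(0) + 1e-6
    return float(np.mean(0.5 * (np.log(2 * np.pi * var) + (Xc - mu) ** 2 / var)))

def toy_cecib_cost(X, assign, labels, alpha=1.0, beta=1.0):
    """Toy three-term objective in the spirit of CEC-IB (not the paper's
    exact cost): model fit + model complexity - label consistency.
    `labels` holds partition-level side information (-1 = unlabeled)."""
    ks = np.unique(assign)
    fit = sum(gaussian_nll(X[assign == k]) for k in ks) / len(ks)
    complexity = alpha * len(ks)            # fewer clusters = simpler model
    # consistency: labeled points should share a cluster iff they share a label
    idx = np.where(labels >= 0)[0]
    agree = [(assign[i] == assign[j]) == (labels[i] == labels[j])
             for a, i in enumerate(idx) for j in idx[a + 1:]]
    consistency = beta * (np.mean(agree) if agree else 0.0)
    return fit + complexity - consistency

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
truth = np.repeat([0, 1], 20)
side = np.full(40, -1)
side[::5] = truth[::5]                      # label every 5th point only
merged = np.zeros(40, dtype=int)            # everything in one cluster
print(toy_cecib_cost(X, truth, side) < toy_cecib_cost(X, merged, side))  # True
```

Even this crude objective prefers the two-cluster partition: the better fit and full consistency with the sparse labels outweigh the extra cluster's complexity penalty, which mirrors the trade-off the abstract describes.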