42,123 research outputs found
Seismic data clustering management system
This is the abstract of the paper given at the conference. Copyright © 2011 The Authors. Over recent years, seismic images have played an increasingly vital role in the study of earthquakes. The large volume of seismic data that has accumulated has created the need for sophisticated systems to manage this kind of data. Seismic interpretation can play a much more active role in the evaluation of large volumes of data by providing, at an early stage, vital information relating to the framework of potential producing levels [1]. This work presents a novel method to manage and analyse seismic data. The data is first turned into clustering maps using clustering techniques [2][3][4][5][6] so that it can be analysed on the platform. These clustering maps can then be examined through the user-friendly interface of Seismic 1, which is built on the .NET Framework architecture [7]. This allows the application to be ported to any Windows-based computer as well as to many Linux-based environments via the Mono project [8], and to be run using No-Touch Deployment [7]. The platform supports two ways of processing seismic data. First, a fast multifunctional version of the classical region-growing segmentation algorithm [9], [10] is applied to areas of interest, permitting their precise definition and labelling. This algorithm also automatically allocates a new earthquake to an existing cluster according to its distance from the clusters' centres of gravity, or creates a new cluster if the distance to every centre of gravity exceeds a user-defined upper threshold. Second, a visual technique records the behaviour of a cluster of earthquakes in a designated area, so the system functions as a dynamic temporal simulator that depicts sequences of earthquakes on a map [11].
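The assignment rule described above amounts to a nearest-centre test with a distance threshold. The following is a minimal sketch of that rule; the function name, data layout, and the use of Euclidean distance are illustrative assumptions, not the Seismic 1 implementation.

import math

def assign_event(event, clusters, threshold):
    # Assign a new earthquake (a coordinate tuple) to the nearest existing
    # cluster, or open a new cluster when every centre of gravity is farther
    # away than the user-defined upper threshold. Sketch only; the distance
    # metric and data structures are assumptions.
    best_idx, best_dist = None, float("inf")
    for idx, cluster in enumerate(clusters):
        d = math.dist(event, cluster["centre"])   # Euclidean distance (assumed)
        if d < best_dist:
            best_idx, best_dist = idx, d
    if best_idx is None or best_dist > threshold:
        clusters.append({"centre": list(event), "members": [event]})  # new cluster
        return len(clusters) - 1
    clusters[best_idx]["members"].append(event)
    return best_idx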
A New Partitioning Around Medoids Algorithm
Kaufman & Rousseeuw (1990) proposed a clustering algorithm Partitioning Around Medoids (PAM) which maps a distance matrix into a specified number of clusters. A particularly nice property is that PAM allows clustering with respect to any specified distance metric. In addition, the medoids are robust representations of the cluster centers, which is particularly important in the common context that many elements do not belong well to any cluster. Based on our experience in clustering gene expression data, we have noticed that PAM does have problems recognizing relatively small clusters in situations where good partitions around medoids clearly exist. In this note, we propose to partition around medoids by maximizing a criteria Average Silhouette\u27\u27 defined by Kaufman & Rousseeuw. We also propose a fast-to-compute approximation of Average Silhouette\u27\u27. We implement these two new partitioning around medoids algorithms and illustrate their performance relative to existing partitioning methods in simulations
Fast k-means algorithm clustering
k-means has recently been recognized as one of the best algorithms for clustering unsupervised data. Since k-means depends mainly on distance calculation between all data points and the centers, the time cost is high when the dataset is large (for example, more than 500 million points). We propose a two-stage algorithm to reduce the time cost of distance calculation for huge datasets. The first stage is a fast distance calculation using only a small portion of the data to produce the best possible location of the centers. The second stage is a slow distance calculation in which the initial centers used are taken from the first stage. The fast and slow stages represent the speed of the movement of the centers. In the slow stage, the whole dataset can be used to get the exact location of the centers. The time cost of the distance calculation for the fast stage is very low due to the small size of the training data chosen. The time cost of the distance calculation for the slow stage is also minimized due to the small number of iterations. Different initial locations of the clusters have been used during the test of the proposed algorithms. For large datasets, experiments show that the two-stage clustering method achieves a better speed-up (1-9 times).
Comment: 16 pages, Wimo2011; International Journal of Computer Networks & Communications (IJCNC) Vol. 3, No. 4, July 2011
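A minimal sketch of the two-stage idea, assuming scikit-learn's KMeans is used for both stages and a 10% random sample for the fast stage; neither choice is taken from the paper.

import numpy as np
from sklearn.cluster import KMeans

def two_stage_kmeans(X, k, sample_frac=0.1, seed=0):
    # Fast stage: cluster a small random sample to roughly locate the centers.
    # Slow stage: refine on the full dataset, starting from those centers, so
    # only a few iterations over all points are needed.
    rng = np.random.default_rng(seed)
    n_sample = max(k, int(sample_frac * len(X)))
    idx = rng.choice(len(X), size=n_sample, replace=False)
    fast = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X[idx])
    slow = KMeans(n_clusters=k, init=fast.cluster_centers_, n_init=1,
                  random_state=seed).fit(X)
    return slow.cluster_centers_, slow.labels_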
Fast Approximate k-Means via Cluster Closures
k-means, a simple and effective clustering algorithm, is one of the most widely used algorithms in the multimedia and computer vision community. Traditional k-means is an iterative algorithm: in each iteration new cluster centers are computed and each data point is re-assigned to its nearest center. The cluster re-assignment step becomes prohibitively expensive when the numbers of data points and cluster centers are large.
In this paper, we propose a novel approximate k-means algorithm to greatly reduce the computational complexity of the assignment step. Our approach is motivated by the observation that most active points, i.e. those changing their cluster assignment at each iteration, are located on or near cluster boundaries. The idea is to efficiently identify those active points by pre-assembling the data into groups of neighboring points using multiple random spatial partition trees, and to use the neighborhood information to construct a closure for each cluster, such that only a small number of candidate clusters need to be considered when assigning a data point to its nearest cluster. Using complexity analysis, image data clustering, and applications to image retrieval, we show that our approach outperforms state-of-the-art approximate k-means algorithms in terms of both clustering quality and efficiency.
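The restricted assignment step can be illustrated roughly as follows; a k-d tree neighborhood stands in here for the paper's random partition trees and cluster closures, so this is only a sketch of the idea of limiting each point to a few candidate clusters.

import numpy as np
from scipy.spatial import cKDTree

def closure_style_assign(X, centers, labels, n_neighbors=10):
    # Each point considers only the clusters that its spatial neighbors
    # currently belong to, instead of computing distances to all centers.
    # X: (n, d) points, centers: (k, d), labels: (n,) integer array.
    tree = cKDTree(X)
    _, nbrs = tree.query(X, k=n_neighbors + 1)        # neighbor indices (incl. the point)
    new_labels = labels.copy()
    for i, nb in enumerate(nbrs):
        cand = np.unique(labels[nb])                  # candidate clusters from the neighborhood
        d = np.linalg.norm(centers[cand] - X[i], axis=1)
        new_labels[i] = cand[np.argmin(d)]
    return new_labels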
BigFCM: Fast, Precise and Scalable FCM on Hadoop
Clustering plays an important role in mining big data, both as a modeling technique and as a preprocessing step in many data mining process implementations. Fuzzy clustering provides more flexibility than non-fuzzy methods by allowing each data record to belong to more than one cluster to some degree. However, a serious challenge in fuzzy clustering is the lack of scalability. Massive datasets in emerging fields such as geosciences, biology and networking require parallel and distributed computation with high performance to solve real-world problems. Although some clustering methods have already been adapted to run on big data platforms, their execution time grows sharply for large datasets. In this paper, a scalable Fuzzy C-Means (FCM) clustering algorithm named BigFCM is proposed and designed for the Hadoop distributed data platform. Based on the map-reduce programming model, it exploits several mechanisms, including an efficient caching design, to achieve several orders of magnitude reduction in execution time. Extensive evaluation over multi-gigabyte datasets shows that BigFCM is scalable while preserving the quality of clustering.
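As a single-node sketch of the fuzzy c-means updates that a BigFCM-style system would distribute over map-reduce; the fuzzifier value m=2 and the plain NumPy formulation are assumptions, not the paper's implementation.

import numpy as np

def fcm_step(X, centers, m=2.0, eps=1e-12):
    # One fuzzy c-means iteration: update memberships, then recompute centers.
    # In a map-reduce setting the per-record weighted sums would be emitted by
    # mappers and aggregated in reducers; here everything runs in memory.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps  # (n, c) distances
    u = 1.0 / (d ** (2.0 / (m - 1.0)))
    u /= u.sum(axis=1, keepdims=True)                                      # memberships sum to 1
    w = u ** m
    new_centers = (w.T @ X) / w.sum(axis=0)[:, None]                       # weighted means
    return u, new_centers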
Fast Color Quantization Using Weighted Sort-Means Clustering
Color quantization is an important operation with numerous applications in graphics and image processing. Most quantization methods are essentially based on data clustering algorithms. However, despite its popularity as a general-purpose clustering algorithm, k-means has not received much respect in the color quantization literature because of its high computational requirements and sensitivity to initialization. In this paper, a fast color quantization method based on k-means is presented. The method involves several modifications to the conventional (batch) k-means algorithm, including data reduction, sample weighting, and the use of the triangle inequality to speed up the nearest-neighbor search. Experiments on a diverse set of images demonstrate that, with the proposed modifications, k-means becomes very competitive with state-of-the-art color quantization methods in terms of both effectiveness and efficiency.
Comment: 30 pages, 2 figures, 4 tables
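A minimal sketch of the sample-weighting idea, where each distinct color is weighted by its pixel count (the data-reduction step); the triangle-inequality acceleration is omitted and the names are illustrative, not the paper's code.

import numpy as np

def weighted_kmeans_step(colors, weights, centers):
    # One batch update of weighted k-means: assign each distinct color to its
    # nearest center, then recompute centers as weighted means, where the
    # weight of a color is the number of pixels having that color.
    d = np.linalg.norm(colors[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    new_centers = centers.copy()
    for k in range(len(centers)):
        mask = labels == k
        if mask.any():
            w = weights[mask][:, None]
            new_centers[k] = (w * colors[mask]).sum(axis=0) / w.sum()
    return labels, new_centers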