Image segmentation using fuzzy LVQ clustering networks
In this note we formulate image segmentation as a clustering problem. Feature vectors extracted from a raw image are clustered into subregions, thereby segmenting the image. A fuzzy generalization of Kohonen's learning vector quantization (LVQ), which integrates the fuzzy c-means (FCM) model with the learning-rate and updating strategies of LVQ, is used for this task. This network, which segments images in an unsupervised manner, is thus related to the FCM optimization problem. Numerical examples on photographic and magnetic resonance images are given to illustrate this approach to image segmentation.
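As an illustrative sketch (not code from the paper), the batch fuzzy c-means updates that such fuzzy LVQ networks build on can be written in a few lines of NumPy; the toy "image" features and all parameter values below are hypothetical:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy c-means: alternate membership and center updates.

    X: (n, d) feature vectors; c: number of clusters; m > 1: fuzzifier.
    Returns (U, V): memberships (n, c) and cluster centers (c, d).
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # rows sum to 1
    for _ in range(iters):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]  # membership-weighted centers
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12
        U = 1.0 / (d2 ** (1.0 / (m - 1)))       # FCM membership formula
        U /= U.sum(axis=1, keepdims=True)
    return U, V

# Hypothetical 1-D intensity features from two image regions
X = np.concatenate([np.full((50, 1), 0.1), np.full((50, 1), 0.9)])
X = X + 0.01 * np.random.default_rng(1).standard_normal(X.shape)
U, V = fcm(X)
labels = U.argmax(axis=1)   # hard segmentation from fuzzy memberships
```

Taking the argmax of the membership matrix, as in the last line, is one common way to turn the fuzzy partition into a hard segmentation map.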
Forecasting the CATS benchmark with the Double Vector Quantization method
The Double Vector Quantization method, a long-term forecasting method based
on the SOM algorithm, has been used to predict the 100 missing values of the
CATS competition data set. An analysis of the proposed time series is provided
to estimate the dimension of the auto-regressive part of this nonlinear
auto-regressive forecasting method. Based on this analysis, experimental results
using the Double Vector Quantization (DVQ) method are presented and discussed.
As one of the features of the DVQ method is its ability to predict scalars as
well as vectors of values, the number of iterative predictions needed to reach
the prediction horizon is further observed. The method's stability over the long
term allows reliable values to be obtained for a rather long forecasting
horizon.
Comment: Accepted for publication in Neurocomputing, Elsevier.
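A rough sketch of the double-quantization idea, with heavy caveats: the two codebooks below are fitted with plain k-means rather than SOMs, and the regressor dimension, codebook sizes, and transition rule are simplifying assumptions, not the paper's exact procedure. One codebook quantizes the regressor vectors x_t, the other their deformations y_t = x_{t+1} - x_t, and forecasts are iterated to the horizon:

```python
import numpy as np

def dvq_forecast(series, p=3, k=8, horizon=10, seed=0):
    """Double Vector Quantization sketch (k-means stands in for SOMs)."""
    rng = np.random.default_rng(seed)

    def kmeans(Z, k):
        C = Z[rng.choice(len(Z), k, replace=False)]
        for _ in range(20):
            lab = ((Z[:, None] - C[None]) ** 2).sum(-1).argmin(1)
            for j in range(k):
                if (lab == j).any():
                    C[j] = Z[lab == j].mean(0)
        return C

    # Regressor vectors and their deformations
    X = np.array([series[t:t + p] for t in range(len(series) - p)])
    Y = X[1:] - X[:-1]
    Cx, Cy = kmeans(X[:-1], k), kmeans(Y, k)
    ix = ((X[:-1][:, None] - Cx[None]) ** 2).sum(-1).argmin(1)
    iy = ((Y[:, None] - Cy[None]) ** 2).sum(-1).argmin(1)

    # Iterate: project current regressor, apply an average deformation
    preds = []
    x = np.array(series[-p:], dtype=float)
    for _ in range(horizon):
        j = ((x - Cx) ** 2).sum(-1).argmin()
        cand = iy[ix == j]
        y = Cy[cand].mean(0) if len(cand) else Cy.mean(0)
        x = x + y
        preds.append(x[-1])
    return np.array(preds)

# Toy usage on a sine wave (hypothetical data, arbitrary parameters)
preds = dvq_forecast(np.sin(0.1 * np.arange(400)), p=4, k=10, horizon=20)
```

Because each step predicts a whole deformation vector, the number of iterations needed to reach a given horizon depends on whether scalars or vectors of values are predicted, which is the trade-off the abstract refers to.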
On the use of self-organizing maps to accelerate vector quantization
Self-organizing maps (SOM) are widely used for their topology preservation
property: neighboring input vectors are quantized (or classified) either on
the same location or on neighbor ones on a predefined grid. SOM are also widely
used for their more classical vector quantization property. We show in this
paper that using SOM instead of the more classical Simple Competitive Learning
(SCL) algorithm drastically increases the speed of convergence of the vector
quantization process. This fact is demonstrated through extensive simulations
on artificial and real examples, with specific SOM (fixed and decreasing
neighborhoods) and SCL algorithms.
Comment: Following the ESANN 199 conference.
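To make the SCL/SOM contrast concrete, here is a minimal sketch under assumptions not taken from the paper (1-D data, a linear map grid, Gaussian neighborhood, linearly decreasing rates). Setting sigma0 = 0 recovers winner-only SCL; sigma0 > 0 gives a SOM with a decreasing neighborhood:

```python
import numpy as np

def train_vq(data, k=10, epochs=5, sigma0=0.0, seed=0):
    """Online vector quantization on a 1-D codebook 'grid'.

    sigma0 = 0: Simple Competitive Learning (only the winner is updated).
    sigma0 > 0: SOM-style update with a shrinking Gaussian neighborhood.
    """
    rng = np.random.default_rng(seed)
    W = rng.choice(data, size=k).astype(float)     # codebook init from data
    grid = np.arange(k)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = 0.5 * (1 - step / n_steps)        # decreasing learning rate
            win = np.abs(W - x).argmin()           # best-matching unit
            if sigma0 > 0:
                sigma = sigma0 * (1 - step / n_steps) + 1e-3
                h = np.exp(-((grid - win) ** 2) / (2 * sigma ** 2))
            else:
                h = (grid == win).astype(float)    # SCL: winner only
            W += lr * h * (x - W)                  # move units toward x
            step += 1
    return W

data = np.random.default_rng(1).uniform(0, 1, 500)
W_scl = train_vq(data, sigma0=0.0)   # SCL codebook
W_som = train_vq(data, sigma0=3.0)   # SOM codebook, decreasing neighborhood
```

Comparing the mean quantization error of the two codebooks over training runs is the kind of experiment the paper reports at much larger scale.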
Two generalizations of Kohonen clustering
The relationship between the sequential hard c-means (SHCM), learning vector quantization (LVQ), and fuzzy c-means (FCM) clustering algorithms is discussed. LVQ and SHCM suffer from several major problems. For example, they depend heavily on initialization. If the initial values of the cluster centers are outside the convex hull of the input data, such algorithms, even if they terminate, may not produce meaningful results in terms of prototypes for cluster representation. This is due in part to the fact that they update only the winning prototype for every input vector. The impact and interaction of these two families with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but which often lends ideas to clustering algorithms, is discussed. Then two generalizations of LVQ that are explicitly designed as clustering algorithms are presented; these algorithms are referred to as generalized LVQ (GLVQ) and fuzzy LVQ (FLVQ). Learning rules are derived to optimize an objective function whose goal is to produce 'good clusters'. GLVQ and FLVQ may update every node in the clustering net for each input vector. Neither GLVQ nor FLVQ depends upon a choice for the update neighborhood or learning-rate distribution; these are taken care of automatically. Segmentation of a gray-tone image is used as a typical application of these algorithms to illustrate the performance of GLVQ and FLVQ.
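A minimal sketch of the "update every node" idea, assuming FCM-style memberships act as per-prototype learning rates (a simplified stand-in, not the paper's derived GLVQ/FLVQ learning rules):

```python
import numpy as np

def fuzzy_lvq_step(V, x, m=2.0, lr=0.05):
    """One online step: unlike winner-take-all LVQ/SHCM, EVERY prototype
    V[i] moves toward the input x, weighted by its fuzzy membership
    raised to the fuzzifier m. No neighborhood or learning-rate
    schedule is chosen by hand; the memberships set the rates."""
    d2 = ((V - x) ** 2).sum(-1) + 1e-12
    u = d2 ** (-1.0 / (m - 1))        # FCM-style memberships
    u = u / u.sum()
    return V + lr * (u ** m)[:, None] * (x - V)

V = np.array([[0.0], [1.0]])            # two prototypes
V_new = fuzzy_lvq_step(V, np.array([0.9]))
```

In this toy step both prototypes move toward the input, with the nearer one moving far more, which is the qualitative behavior the abstract contrasts with winner-only updates.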
Conditional quantile estimation through optimal quantization
In this paper, we use quantization to construct a nonparametric estimator of conditional quantiles of a scalar response Y given a d-dimensional vector of covariates X. First we focus on the population level and show how optimal quantization of X, which consists in discretizing X by projecting it on an appropriate grid of N points, allows us to approximate conditional quantiles of Y given X. We show that this approximation is arbitrarily good as N goes to infinity and provide a rate of convergence for the approximation error. Then we turn to the sample case and define an estimator of conditional quantiles based on quantization ideas. We prove that this estimator is consistent for its fixed-N population counterpart. The results are illustrated on a numerical example. Dominance of our estimators over local constant/linear ones and nearest-neighbor ones is demonstrated through extensive simulations in the companion paper Charlier et al. (2014b).
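A toy sketch of the quantization idea (assuming d = 1 and a few Lloyd iterations standing in for optimal quantization; the function name and parameters are hypothetical): project x0 onto an N-point grid fitted to the covariates, then take the empirical quantile of the responses in that cell:

```python
import numpy as np

def quantized_cond_quantile(X, Y, x0, alpha=0.5, N=10, seed=0):
    """Estimate the alpha-quantile of Y given X = x0 by quantizing X
    on an N-point grid, then taking the empirical quantile of Y over
    the cell onto which x0 projects."""
    rng = np.random.default_rng(seed)
    grid = X[rng.choice(len(X), N, replace=False)]
    for _ in range(20):                              # Lloyd iterations
        cell = np.abs(X[:, None] - grid[None]).argmin(1)
        for j in range(N):
            if (cell == j).any():
                grid[j] = X[cell == j].mean()
    cell = np.abs(X[:, None] - grid[None]).argmin(1)
    j0 = np.abs(grid - x0).argmin()                  # project x0 on the grid
    return np.quantile(Y[cell == j0], alpha)
```

Consistency for the fixed-N population counterpart corresponds here to the cell-wise empirical quantile converging as the sample grows, with the approximation error then controlled by letting N grow.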