    Two generalizations of Kohonen clustering

    The relationship between the sequential hard c-means (SHCM), learning vector quantization (LVQ), and fuzzy c-means (FCM) clustering algorithms is discussed. LVQ and SHCM suffer from several major problems; for example, they depend heavily on initialization. If the initial values of the cluster centers are outside the convex hull of the input data, such algorithms, even if they terminate, may not produce meaningful results in terms of prototypes for cluster representation. This is due in part to the fact that they update only the winning prototype for each input vector. The impact and interaction of these two families with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but often lends ideas to clustering algorithms, is discussed. Two generalizations of LVQ that are explicitly designed as clustering algorithms are then presented; these are referred to as generalized LVQ (GLVQ) and fuzzy LVQ (FLVQ). Learning rules are derived to optimize an objective function whose goal is to produce 'good clusters'. GLVQ/FLVQ may update every node in the clustering net for each input vector. Neither GLVQ nor FLVQ depends on a choice of update neighborhood or learning-rate distribution; these are handled automatically. Segmentation of a gray-tone image is used as a typical application to illustrate the performance of GLVQ/FLVQ.
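    The contrast between the winner-only update criticized above and an all-prototype, membership-weighted update can be made concrete. The following is a minimal numpy sketch, not the authors' exact formulation: `lvq_winner_update` moves only the winning prototype (the SHCM/LVQ behaviour), while `flvq_style_update` moves every prototype with an FCM-like membership weight; the fuzzifier `m`, learning rates, and toy data are illustrative assumptions.

```python
import numpy as np

def lvq_winner_update(X, prototypes, lr=0.1):
    """Sequential hard update: only the winning prototype moves toward each input."""
    P = prototypes.copy()
    for x in X:
        d = np.linalg.norm(P - x, axis=1)      # distances to all prototypes
        w = np.argmin(d)                       # index of the winner
        P[w] += lr * (x - P[w])                # move only the winner
    return P

def flvq_style_update(X, prototypes, m=2.0, lr=0.1, eps=1e-12):
    """Fuzzy-style update: every prototype moves, weighted by an FCM-like membership."""
    P = prototypes.copy()
    for x in X:
        d2 = np.sum((P - x) ** 2, axis=1) + eps
        # FCM-style memberships: u_i proportional to (1 / d_i^2)^(1/(m-1)), normalized
        u = (1.0 / d2) ** (1.0 / (m - 1.0))
        u /= u.sum()
        P += lr * (u ** m)[:, None] * (x - P)  # all prototypes updated, winner most strongly
    return P

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) + np.repeat([[0, 0], [4, 4]], 100, axis=0)
P0 = rng.normal(size=(2, 2))                   # deliberately poor initialization
print(lvq_winner_update(X, P0))
print(flvq_style_update(X, P0))
```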

    AUTOMATIC ARRHYTHMIAS DETECTION USING VARIOUS TYPES OF ARTIFICIAL NEURAL NETWORK BASED LEARNING VECTOR QUANTIZATION (LVQ)

    Abstract: An automatic arrhythmia detection system is urgently required due to the small number of cardiologists in Indonesia. This paper discusses the study and implementation of such a system. We use several kinds of signal processing methods to recognize arrhythmias from the ECG signal. The core of the system is the classification stage. Our classifiers are LVQ-based artificial neural networks, including LVQ1, LVQ2, LVQ2.1, FNLVQ, FNLVQ-MSA, FNLVQ-PSO, GLVQ, and FNGLVQ. Experimental results show that, for the non-round-robin dataset, the system reached accuracies of 94.07%, 92.54%, 88.09%, 86.55%, 83.66%, 82.29%, 82.25%, and 74.62% for FNGLVQ, FNLVQ-PSO, GLVQ, LVQ2.1, FNLVQ-MSA, LVQ2, FNLVQ, and LVQ1, respectively. For the round-robin dataset, the system reached accuracies of 98.12%, 98.04%, 94.31%, 90.43%, 86.75%, 86.12%, 84.50%, and 74.78% for GLVQ, LVQ2.1, FNGLVQ, FNLVQ-PSO, LVQ2, FNLVQ-MSA, FNLVQ, and LVQ1, respectively.
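    As an illustration of one of the compared variants, the following is a generic sketch of the LVQ2.1 update rule, not the paper's implementation: the two nearest prototypes are adjusted only when they carry different class labels and the input falls inside a window around their decision boundary. The window width, learning rate, and toy data are illustrative assumptions.

```python
import numpy as np

def lvq21_step(x, y, protos, proto_labels, lr=0.05, window=0.3):
    """One LVQ2.1 update: adjust the two nearest prototypes when they disagree
    on the class and the input lies inside the window around their midplane."""
    d = np.linalg.norm(protos - x, axis=1)
    i, j = np.argsort(d)[:2]                   # indices of the two nearest prototypes
    di, dj = d[i], d[j]
    s = (1.0 - window) / (1.0 + window)
    in_window = min(di / dj, dj / di) > s
    correct_i = proto_labels[i] == y
    correct_j = proto_labels[j] == y
    if in_window and (correct_i != correct_j):
        if correct_i:
            protos[i] += lr * (x - protos[i])  # pull the correct prototype closer
            protos[j] -= lr * (x - protos[j])  # push the wrong prototype away
        else:
            protos[j] += lr * (x - protos[j])
            protos[i] -= lr * (x - protos[i])
    return protos

rng = np.random.default_rng(0)
protos = rng.normal(size=(4, 8))               # e.g. 4 prototypes over 8 beat features
labels = np.array([0, 0, 1, 1])                # two classes: normal vs. arrhythmic beat
x, y = rng.normal(size=8), 1
protos = lvq21_step(x, y, protos, labels)
```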

    Recognition and Classification of Ancient Dwellings based on Elastic Grid and GLCM

    A rectangle algorithm is designed to extract ancient dwellings from village satellite images according to their pixel features and shape features. Objects that remain unrecognized are distinguished by further extracting their texture features. To obtain standardized samples, three pre-processing operations, namely rotation, scaling, and clipping, are applied to unify their sizes and directions.
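    A minimal sketch of the GLCM texture-feature step is given below, using scikit-image (`graycomatrix`/`graycoprops`, the names used in skimage >= 0.19); the quantization level, distances, angles, and chosen properties are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch, levels=64):
    """Compute a few GLCM texture statistics for a standardized grayscale patch."""
    # Quantize the 8-bit patch to `levels` gray levels to keep the co-occurrence matrix small.
    q = (patch.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(q,
                        distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    # Average each property over the distances and angles into one feature vector.
    return np.array([graycoprops(glcm, p).mean() for p in props])

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in for a rectified dwelling patch
print(glcm_features(patch))
```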

    Differentiable Kernels in Generalized Matrix Learning Vector Quantization

    In the present paper we investigate the application of differentiable kernels to generalized matrix learning vector quantization (GMLVQ) as an alternative kernel-based classifier, which additionally provides classification-dependent data visualization. We show that the concept of differentiable kernels allows a prototype description in the data space, but equipped with the kernel metric. Moreover, using the visualization properties of the original matrix learning vector quantization, we are able to optimize the class visualization by inherent learning of the visualization mapping in this new kernel-metric data space.
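    The key construction, a kernel-induced distance between a data point and a prototype that still lives in the data space, can be sketched as follows for a differentiable RBF kernel. The gamma value and the plain gradient step are illustrative assumptions; the relevance-matrix learning of GMLVQ is omitted here.

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    """Differentiable RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_distance(x, w, gamma=0.5):
    """Squared distance in the kernel feature space between a data point x and a
    prototype w kept in the original data space:
    d_k(x, w) = k(x, x) - 2 k(x, w) + k(w, w)."""
    return rbf_kernel(x, x, gamma) - 2 * rbf_kernel(x, w, gamma) + rbf_kernel(w, w, gamma)

def kernel_distance_grad_w(x, w, gamma=0.5):
    """Gradient of d_k with respect to the prototype w (k(w, w) is constant for
    the RBF kernel), which is what makes gradient-based prototype updates possible."""
    return -4.0 * gamma * (x - w) * rbf_kernel(x, w, gamma)

x = np.array([1.0, 2.0])
w = np.array([0.0, 0.0])
w -= 0.1 * kernel_distance_grad_w(x, w)        # one gradient step pulls w toward x
print(kernel_distance(x, w), w)
```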

    Dynamic Optimal Training for Competitive Neural Networks

    This paper introduces an unsupervised learning algorithm for optimal training of competitive neural networks. The learning rule of this algorithm is derived from the minimization of a new objective criterion using the gradient descent technique. Its learning rate and competition difficulty are dynamically adjusted throughout the iterations. Numerical results that illustrate the performance of this algorithm in unsupervised pattern classification and image compression are also presented, discussed, and compared to those provided by other well-known algorithms on several examples of real test data.
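    The following is a generic sketch of competitive prototype learning with an iteration-dependent learning rate and a progressively sharpened soft competition, in the spirit of the dynamic adjustment described above; the particular schedules for the learning rate and the competition sharpness are illustrative assumptions, not the paper's objective criterion.

```python
import numpy as np

def soft_competitive_training(X, n_protos=3, epochs=50, seed=0):
    """Competitive learning in which both the learning rate and the sharpness of
    the competition (inverse temperature beta) are adjusted over the iterations."""
    rng = np.random.default_rng(seed)
    P = X[rng.choice(len(X), n_protos, replace=False)].astype(float)
    for t in range(epochs):
        lr = 0.5 / (1.0 + t)          # learning rate decays with the iteration count
        beta = 1.0 + t                # competition sharpens: soft -> nearly winner-take-all
        for x in X:
            d2 = np.sum((P - x) ** 2, axis=1)
            a = np.exp(-beta * (d2 - d2.min()))   # soft assignment, numerically stable
            a /= a.sum()
            P += lr * a[:, None] * (x - P)        # every prototype moves, weighted by a
    return P

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(60, 2)) for c in ([0, 0], [3, 0], [0, 3])])
print(soft_competitive_training(X))
```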