
    Cellular Self-Organising Maps - CSOM

    This paper presents CSOM, a Cellular Self-Organising Map which performs weight updates in a cellular manner. Instead of updating weights directly towards new input vectors, it uses a signal propagated from the best matching unit to every other neuron in the network. Interactions between neurons are thus local and distributed. We present performance results showing that CSOM obtains faster and better quantisation than the classical SOM on high-dimensional vectors. We also present an application to video compression based on vector quantisation, in which CSOM outperforms SOM.
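    The abstract does not give the exact propagation rule, so the sketch below is only an illustration in Python/NumPy: a classical SOM step, where every neuron moves towards the input, next to a cellular-style step, where only the best matching unit sees the input and every other neuron is pulled towards an already-updated neighbour closer to the BMU. The function names and the distance-decay rule are assumptions, not the paper's algorithm.

        import numpy as np

        def som_update(weights, x, bmu, lr, sigma):
            """Classical SOM step: every neuron moves towards the input x,
            weighted by a Gaussian of its grid distance to the best matching unit."""
            rows, cols, _ = weights.shape
            grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
            d2 = ((grid - np.asarray(bmu)) ** 2).sum(axis=-1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            return weights

        def cellular_update(weights, x, bmu, lr):
            """Cellular-style sketch (illustrative rule, not the paper's): the BMU is
            updated towards x, then the correction propagates outwards over the grid,
            each neuron moving towards an already-updated neighbour closer to the BMU."""
            rows, cols, _ = weights.shape
            bi, bj = bmu
            weights[bi, bj] += lr * (x - weights[bi, bj])
            order = sorted((max(abs(i - bi), abs(j - bj)), i, j)
                           for i in range(rows) for j in range(cols))
            for d, i, j in order:
                if d == 0:
                    continue
                ni = i - int(np.sign(i - bi))   # grid neighbour one step closer to the BMU
                nj = j - int(np.sign(j - bj))
                weights[i, j] += (lr / (d + 1)) * (weights[ni, nj] - weights[i, j])
            return weights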

    Novelty detection with self-organizing maps for autonomous extraction of salient tracking features

    In the image processing field, many tracking algorithms rely on prior knowledge such as color or shape, or even need a database of the objects to be tracked. This can be a problem for real-world applications that cannot fulfil those prerequisites. Building on image compression techniques, we propose to use Self-Organizing Maps to robustly detect novelty in the input video stream and to produce a saliency map that outlines unusual objects in the visual environment. This saliency map is then processed by a Dynamic Neural Field to extract a robust and continuous estimate of the tracked object's position. Our approach relies solely on unsupervised neural networks and requires no prior knowledge; it therefore adapts well to different inputs and remains robust in noisy environments.
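    A rough sketch of the saliency stage, under the assumption that novelty is scored by how poorly the SOM codebook reconstructs each image patch; the patch size is arbitrary here and the Dynamic Neural Field tracking stage is not sketched.

        import numpy as np

        def saliency_from_som(frame, codebook, patch=8):
            """Split the frame into non-overlapping patches, quantize each one with the
            SOM codebook, and use the distance to its best matching unit as a novelty
            score: regions the map compresses poorly are marked as salient."""
            h, w = frame.shape
            sal = np.zeros((h // patch, w // patch))
            for i in range(0, h - patch + 1, patch):
                for j in range(0, w - patch + 1, patch):
                    block = frame[i:i + patch, j:j + patch].reshape(-1)
                    dists = np.linalg.norm(codebook - block, axis=1)
                    sal[i // patch, j // patch] = dists.min()
            return sal / (sal.max() + 1e-9)

        # Toy usage with a random codebook standing in for a trained SOM.
        rng = np.random.default_rng(0)
        codebook = rng.random((256, 64))        # 16x16 map, 8x8 grey-level patches
        frame = rng.random((64, 64))
        print(saliency_from_som(frame, codebook).shape)   # (8, 8)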

    Image compression by self-organized Kohonen map


    Systems Analysis Modelling Simulation

    The purpose of this article is to use the second property of Kohonen's neural network, self-organization, in order to achieve higher compression rates without any further decrease in image quality. First, we describe each step of the compression scheme, in particular the differential coding, a coding technique that makes the ordering (self-organization) property very effective. Then, simulation results for image compression with this new method are shown, including comparisons with other compression schemes. Both lossy and lossless compression are considered.
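    One possible reading of the differential coding step, sketched in Python/NumPy: because the self-organized codebook preserves topology, similar consecutive image blocks fall on nearby codebook indices, so coding index differences instead of raw indices yields small values that are cheap to entropy-code. The entropy coder is omitted and the exact scheme used in the article may differ.

        import numpy as np

        def encode_indices(blocks, codebook):
            """Quantize each image block to its best matching codebook index, then keep
            only the difference between consecutive indices (the first index is kept
            as-is). On an ordered map these differences are mostly small integers."""
            idx = np.array([int(np.argmin(np.linalg.norm(codebook - b, axis=1))) for b in blocks])
            return np.diff(idx, prepend=0)

        def decode_indices(diffs, codebook):
            """Undo the differential coding and rebuild the blocks from the codebook."""
            idx = np.cumsum(diffs)
            return codebook[idx]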

    Self-Organizing Maps and Ancient Documents

    This paper presents how Self-Organizing Maps, and especially Kohonen maps, can be applied to digital images of ancient collections for the purpose of valorization and diffusion. As an illustration, a scheme for reducing transparency (show-through) in the digitized Gutenberg Bible is presented. In this two-step method, the Kohonen map is trained to generate a set of vectors that are then used to train a classical feed-forward network in a supervised manner. The testing step then consists in classifying each pixel into one of four classes by feeding it directly to the feed-forward network. The pixels belonging to the transparency class are then removed.
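    A minimal sketch of the two-step pipeline with placeholder data: prototypes from a trained Kohonen map act as the labelled training set of a feed-forward classifier, which then labels every pixel so that the transparency (show-through) class can be removed. The codebook, the four class labels and the MLP size below are stand-ins, not values from the paper.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        # Placeholder stand-ins: a trained Kohonen codebook of pixel colours and the
        # class assigned to each prototype (0=ink, 1=paper, 2=decoration, 3=transparency);
        # in the paper the labels come from the map, here they are random for illustration.
        rng = np.random.default_rng(1)
        codebook = rng.random((64, 3))                 # 8x8 map of RGB prototypes
        proto_labels = rng.integers(0, 4, size=64)

        # Step 2: the labelled prototypes train a classical feed-forward classifier.
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
        clf.fit(codebook, proto_labels)

        # Testing: classify every pixel directly and blank out the transparency class.
        image = rng.random((32, 32, 3))
        pixel_classes = clf.predict(image.reshape(-1, 3)).reshape(32, 32)
        cleaned = image.copy()
        cleaned[pixel_classes == 3] = 1.0              # replace show-through pixels with white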

    A fast algorithm to find Best Matching Units in Self-Organizing Maps

    Self-Organizing Maps (SOM) are well-known unsupervised neural networks able to perform vector quantization while mapping an underlying regular neighbourhood structure onto the codebook. They are used in a wide range of applications. As with most properly trained neural network models, increasing the number of neurons in a SOM leads to better results or new emergent properties. Highly efficient algorithms for learning and evaluation are therefore key to improving the performance of such models. In this paper, we propose a faster alternative for computing the Winner-Takes-All component of a SOM, one that scales better with a large number of neurons. We present our algorithm for finding the so-called best matching unit (BMU) in a SOM and theoretically analyse its computational complexity. Statistical results on various synthetic and real-world datasets confirm this analysis and show an even more significant improvement in computing time, with minimal degradation of performance. With our method, we explore a new approach to optimizing SOMs that can be combined with other optimization methods commonly used in these models, for even faster computation in both the learning and recall phases.
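    The abstract does not describe the fast algorithm itself. The sketch below only shows the baseline exhaustive winner-takes-all step whose linear cost the paper targets, plus one generic shortcut (searching a window around the previous BMU when consecutive inputs are correlated), which is explicitly not the paper's method.

        import numpy as np

        def bmu_exhaustive(weights, x):
            """Baseline winner-takes-all: compare x against every neuron.
            Cost grows linearly with the number of neurons in the map."""
            flat = weights.reshape(-1, weights.shape[-1])
            best = np.argmin(np.linalg.norm(flat - x, axis=1))
            return np.unravel_index(best, weights.shape[:2])

        def bmu_local(weights, x, previous_bmu, radius=2):
            """Generic shortcut (not the paper's algorithm): search only a small grid
            window around the previous BMU, exact when inputs drift slowly."""
            rows, cols, _ = weights.shape
            pi, pj = previous_bmu
            i0, i1 = max(0, pi - radius), min(rows, pi + radius + 1)
            j0, j1 = max(0, pj - radius), min(cols, pj + radius + 1)
            window = weights[i0:i1, j0:j1]
            d = np.linalg.norm(window - x, axis=-1)
            wi, wj = np.unravel_index(np.argmin(d), d.shape)
            return i0 + wi, j0 + wj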