
    Video data compression using artificial neural network differential vector quantization

    An artificial neural network vector quantizer is developed for data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as frequency-sensitive competitive learning (FSCL), is used to design the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. Because vector quantization produces fixed-length codes, the need for Huffman coding is eliminated, giving better robustness to channel bit errors than methods that use variable-length codes.
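
    As a rough illustration of the codebook-design step, here is a minimal FSCL sketch in Python/NumPy. The function name, the win-count fairness term, and all parameter defaults are assumptions for illustration, not the paper's implementation.

        import numpy as np

        def fscl_codebook(data, k, epochs=10, lr=0.05, beta=1.0, seed=0):
            # Frequency-sensitive competitive learning (illustrative sketch):
            # each codeword's distortion is scaled by its win count, so
            # rarely-winning codewords become competitive again over time.
            data = np.asarray(data, dtype=float)
            rng = np.random.default_rng(seed)
            codebook = data[rng.choice(len(data), size=k, replace=False)].copy()
            wins = np.ones(k)                                # win counts
            for _ in range(epochs):
                for x in rng.permutation(data):
                    d = np.sum((codebook - x) ** 2, axis=1)  # squared distances
                    j = np.argmin(wins ** beta * d)          # frequency-weighted winner
                    wins[j] += 1
                    codebook[j] += lr * (x - codebook[j])    # move winner toward input
            return codebook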

    Image compression using a stochastic competitive learning algorithm (scola)

    We introduce a new stochastic competitive learning algorithm (SCoLA) and apply it to vector quantization for image compression. In competitive learning, training presents an input vector simultaneously to each competing neuron; each neuron compares the input vector to its own weight vector, and a winner is declared according to some deterministic distortion measure. Here a stochastic criterion is used to select the winning neuron, whose weights are then updated to become more like the input vector. The performance of the new algorithm is compared to that of frequency-sensitive competitive learning (FSCL); SCoLA was found to achieve higher peak signal-to-noise ratios (PSNR) than FSCL.
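
    The abstract does not specify the stochastic criterion; one common choice is a softmax (Gibbs) draw over distances, sketched below. The function name, the temperature parameter, and the softmax choice itself are assumptions, not SCoLA's actual rule.

        import numpy as np

        def stochastic_cl_step(codebook, x, lr=0.05, temperature=1.0, rng=None):
            # One training step with a stochastic winner (illustrative):
            # instead of argmin, draw the winner from a softmax over
            # negative squared distances (an assumed criterion).
            rng = rng or np.random.default_rng()
            d = np.sum((codebook - x) ** 2, axis=1)
            p = np.exp(-(d - d.min()) / temperature)   # shift for numerical safety
            p /= p.sum()
            j = rng.choice(len(codebook), p=p)         # stochastic winner
            codebook[j] += lr * (x - codebook[j])      # winner moves toward input
            return j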

    Competitive learning/reflected residual vector quantization for coding angiogram images

    Medical images need to be compressed for storage and transmission of large volumes of medical data. Reflected residual vector quantization (RRVQ) has recently emerged as a computationally cheap lossy compression algorithm, introduced as an alternative design algorithm for the residual vector quantization (RVQ) structure (a structure known for providing progressive quantization). However, RRVQ is not guaranteed to reach a global minimum, and it was found more likely to diverge on non-Gaussian, non-Laplacian image sources such as angiograms. By employing a competitive learning neural network in the codebook design process, we aim to obtain a stable, convergent algorithm. This paper applies competitive learning to the RRVQ design algorithm, yielding a competitive learning RRVQ algorithm for the RVQ structure. Simulation results indicate that the proposed algorithm converges with high probability and provides a peak signal-to-noise ratio (PSNR) of approximately 32 dB for angiogram images at an average encoding bit rate of 0.25 bits per pixel.
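
    For context on the underlying RVQ structure, here is a minimal multistage residual encode/decode sketch. Function names and the per-stage codebook layout are assumptions; the paper's reflected and competitive-learning design steps are not shown.

        import numpy as np

        def rvq_encode(x, stage_codebooks):
            # Multistage residual VQ: each stage quantizes the residual left
            # by the previous stages, giving progressive refinement.
            indices, residual = [], np.asarray(x, dtype=float)
            for codebook in stage_codebooks:
                d = np.sum((codebook - residual) ** 2, axis=1)
                j = int(np.argmin(d))
                indices.append(j)
                residual = residual - codebook[j]      # pass residual onward
            return indices

        def rvq_decode(indices, stage_codebooks):
            # Reconstruction is the sum of the selected stage codewords.
            return sum(cb[j] for cb, j in zip(stage_codebooks, indices))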

    S-TREE: Self-Organizing Trees for Data Clustering and Online Vector Quantization

    This paper introduces S-TREE (Self-Organizing Tree), a family of models that use unsupervised learning to construct hierarchical representations of data and online tree-structured vector quantizers. The S-TREE1 model, which features a new tree-building algorithm, can be implemented with various cost functions. An alternative implementation, S-TREE2, which uses a new double-path search procedure, is also developed. S-TREE2 implements an online procedure that approximates an optimal (unstructured) clustering solution while imposing a tree-structure constraint. The performance of the S-TREE algorithms is illustrated with data clustering and vector quantization examples, including a Gauss-Markov source benchmark and an image compression application. S-TREE performance on these tasks is compared with the standard tree-structured vector quantizer (TSVQ) and the generalized Lloyd algorithm (GLA). The image reconstruction quality with S-TREE2 approaches that of GLA while taking less than 10% of the computation time. S-TREE1 and S-TREE2 also compare favorably with the standard TSVQ in both the time needed to create the codebook and the quality of image reconstruction.
    Office of Naval Research (N00014-95-10409, N00014-95-0G57)
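
    To show why tree-structured search is fast, here is a minimal single-path TSVQ descent in Python. Class and function names are illustrative; S-TREE2's double-path search, which tracks two candidate paths per level, is not reproduced here.

        import numpy as np

        class Node:
            def __init__(self, codeword, left=None, right=None):
                self.codeword = np.asarray(codeword, dtype=float)
                self.left, self.right = left, right    # both None at a leaf

        def tsvq_encode(x, root):
            # Greedy descent: pick the nearer child at each level, so encoding
            # costs O(depth) distance tests instead of one test per codeword.
            node, bits = root, []
            while node.left is not None:
                dl = np.sum((node.left.codeword - x) ** 2)
                dr = np.sum((node.right.codeword - x) ** 2)
                bit = 0 if dl <= dr else 1
                node = node.left if bit == 0 else node.right
                bits.append(bit)
            return bits, node.codeword                 # path bits, leaf codeword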

    On the use of self-organizing maps to accelerate vector quantization

    Self-organizing maps (SOM) are widely used for their topology-preservation property: neighboring input vectors are quantized (or classified) either at the same location or at neighboring ones on a predefined grid. SOM are also widely used for their more classical vector quantization property. We show in this paper that using SOM instead of the more classical simple competitive learning (SCL) algorithm drastically increases the speed of convergence of the vector quantization process. This fact is demonstrated through extensive simulations on artificial and real examples, with specific SOM (fixed and decreasing neighborhoods) and SCL algorithms.
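
    A minimal SOM update step, sketched below in Python/NumPy with assumed parameter names, makes the comparison concrete: as the neighborhood width sigma is driven to zero, the rule reduces to SCL, where only the winning unit moves.

        import numpy as np

        def som_step(weights, grid, x, lr=0.1, sigma=1.0):
            # weights: (n_units, dim) codebook; grid: (n_units, 2) map coordinates.
            d = np.sum((weights - x) ** 2, axis=1)
            win = np.argmin(d)                                 # best-matching unit
            g2 = np.sum((grid - grid[win]) ** 2, axis=1)       # distances on the grid
            h = np.exp(-g2 / (2 * sigma ** 2))                 # neighborhood kernel
            weights += lr * h[:, None] * (x - weights)         # neighbors move too
            return win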

    Magnification Control in Self-Organizing Maps and Neural Gas

    We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches to magnification control in vector quantization, we then concentrate on approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner-relaxing learning. The approach of concave-convex learning in SOM is extended to a more general description, whereas concave-convex learning for NG is new. In general, the control mechanisms produce only slightly different behavior in the two neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas the SOM results hold only for the one-dimensional case.
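
    As one concrete instance, localized learning scales the step size by a power of the local input density; the sketch below applies this idea to a neural-gas update. The density_est helper, the exponent m, and all defaults are hypothetical illustrations, not the paper's formulation.

        import numpy as np

        def ng_step_localized(weights, x, density_est, lr0=0.1, lam=2.0, m=0.5):
            # Neural-gas update with a localized learning rate: scaling the
            # step by density_est(x) ** m shifts the magnification exponent
            # (density_est is an assumed plug-in density estimator).
            d = np.sum((weights - x) ** 2, axis=1)
            ranks = np.argsort(np.argsort(d))          # rank of each unit by distance
            h = np.exp(-ranks / lam)                   # neural-gas neighborhood
            lr = lr0 * density_est(x) ** m             # density-modulated step size
            weights += lr * h[:, None] * (x - weights)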