8 research outputs found

    Video data compression using artificial neural network differential vector quantization

    An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as frequency-sensitive competitive learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application-Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative-memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, yielding better robustness to channel bit errors than methods that use variable-length codes.
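
    The abstract does not spell out the frequency-sensitive update, so the following is a minimal numpy sketch of one common FSCL formulation, in which each codeword's distance to the input is scaled by its win count so that under-used codewords stay competitive. All names and parameter values here are illustrative, not taken from the paper.

```python
import numpy as np

def fscl_codebook(data, k, epochs=10, lr=0.1, seed=0):
    """Train a vector-quantizer codebook with frequency-sensitive
    competitive learning: each codeword's squared distance is scaled
    by how often it has already won, which discourages any single
    codeword from dominating (the neuron-underutilization problem)."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), k, replace=False)].astype(float)
    counts = np.ones(k)  # win counts (start at 1 to avoid zero scaling)
    for _ in range(epochs):
        for x in rng.permutation(data):
            d = np.sum((codebook - x) ** 2, axis=1)
            winner = np.argmin(counts * d)  # frequency-sensitive distortion
            codebook[winner] += lr * (x - codebook[winner])  # move winner toward input
            counts[winner] += 1
    return codebook, counts
```

    Even if both codewords are initialized inside the same cluster, the count scaling eventually pushes one of them toward the other cluster, which is the underutilization fix the second result below is about.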

    A class of competitive learning models which avoids neuron underutilization problem

    Deep Neural Networks for End-to-End Optimized Speech Coding

    Modern compression algorithms are the result of years of research; industry standards such as MP3, JPEG, and G.722.1 required complex hand-engineered compression pipelines, often with much manual tuning on the part of the engineers who created them. Recently, deep neural networks have shown a sophisticated ability to learn directly from data, achieving striking success over traditional hand-engineered features in many areas. Our aim is to extend these "deep learning" methods into the domain of compression. We present a novel deep neural network model and train it to optimize all the steps of a wideband speech-coding pipeline (compression, quantization, entropy coding, and decompression) end-to-end directly from raw speech data, with no manual feature engineering necessary. In testing, our learned speech coder performs on par with or better than current standards at a variety of bitrates (~9 kbps up to ~24 kbps). It also runs in real time on an Intel i7-4790K CPU.
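
    The learned networks themselves are not reproduced here; as a sketch of the stage ordering the abstract describes (analysis transform, quantization, entropy cost, synthesis transform), the following toy codec substitutes a fixed orthonormal DCT for the trained encoder/decoder networks. It is purely illustrative of the pipeline, not the paper's model.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis; stands in for the learned analysis transform.
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(2)
    return M * np.sqrt(2 / n)

def code_decode(frame, step):
    """Run one frame through the four pipeline stages:
    transform -> uniform quantization -> entropy estimate -> inverse transform."""
    T = dct_matrix(len(frame))
    coeffs = T @ frame                      # analysis ("compression")
    q = np.round(coeffs / step)             # quantization to integer symbols
    # entropy-coding stage: estimate the bit cost from symbol frequencies
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    bits = -counts @ np.log2(p)
    recon = T.T @ (q * step)                # synthesis ("decompression")
    return recon, bits
```

    The end-to-end idea in the paper is that all four stages are differentiable and trained jointly; here the quantizer step size plays the role of the rate-distortion knob.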

    Image compression techniques using vector quantization

    Automatic facial recognition based on facial feature analysis

    On the Synthesis of Fuzzy Neural Systems

    by Chung, Fu Lai. Thesis (Ph.D.), Chinese University of Hong Kong, 1995. Includes bibliographical references (leaves 166-174).

    Contents: Acknowledgement; Abstract
    1. Introduction
        1.1 Integration of Fuzzy Systems and Neural Networks
        1.2 Objectives of the Research
            1.2.1 Fuzzification of Competitive Learning Algorithms
            1.2.2 Capacity Analysis of FAM and FRNS Models
            1.2.3 Structure and Parameter Identifications of FRNS
        1.3 Outline of the Thesis
    2. A Fuzzy System Primer
        2.1 Basic Concepts of Fuzzy Sets
        2.2 Fuzzy Set-Theoretic Operators
        2.3 Linguistic Variable, Fuzzy Rule and Fuzzy Inference
        2.4 Basic Structure of a Fuzzy System
            2.4.1 Fuzzifier
            2.4.2 Fuzzy Knowledge Base
            2.4.3 Fuzzy Inference Engine
            2.4.4 Defuzzifier
        2.5 Concluding Remarks
    3. Categories of Fuzzy Neural Systems
        3.1 Introduction
        3.2 Fuzzification of Neural Networks
            3.2.1 Fuzzy Membership Driven Models
            3.2.2 Fuzzy Operator Driven Models
            3.2.3 Fuzzy Arithmetic Driven Models
        3.3 Layered Network Implementation of Fuzzy Systems
            3.3.1 Mamdani's Fuzzy Systems
            3.3.2 Takagi and Sugeno's Fuzzy Systems
            3.3.3 Fuzzy Relation Based Fuzzy Systems
        3.4 Concluding Remarks
    4. Fuzzification of Competitive Learning Networks
        4.1 Introduction
        4.2 Crisp Competitive Learning
            4.2.1 Unsupervised Competitive Learning Algorithm
            4.2.2 Learning Vector Quantization Algorithm
            4.2.3 Frequency Sensitive Competitive Learning Algorithm
        4.3 Fuzzy Competitive Learning
            4.3.1 Unsupervised Fuzzy Competitive Learning Algorithm
            4.3.2 Fuzzy Learning Vector Quantization Algorithm
            4.3.3 Fuzzy Frequency Sensitive Competitive Learning Algorithm
        4.4 Stability of Fuzzy Competitive Learning
        4.5 Controlling the Fuzziness of Fuzzy Competitive Learning
        4.6 Interpretations of Fuzzy Competitive Learning Networks
        4.7 Simulation Results
            4.7.1 Performance of Fuzzy Competitive Learning Algorithms
            4.7.2 Performance of Monotonically Decreasing Fuzziness Control Scheme
            4.7.3 Interpretation of Trained Networks
        4.8 Concluding Remarks
    5. Capacity Analysis of Fuzzy Associative Memories
        5.1 Introduction
        5.2 Fuzzy Associative Memories (FAMs)
        5.3 Storing Multiple Rules in FAMs
        5.4 A High Capacity Encoding Scheme for FAMs
        5.5 Memory Capacity
        5.6 Rule Modification
        5.7 Inference Performance
        5.8 Concluding Remarks
    6. Capacity Analysis of Fuzzy Relational Neural Systems
        6.1 Introduction
        6.2 Fuzzy Relational Equations and Fuzzy Relational Neural Systems
        6.3 Solving a System of Fuzzy Relational Equations
        6.4 New Solvable Conditions
            6.4.1 Max-t Fuzzy Relational Equations
            6.4.2 Min-s Fuzzy Relational Equations
        6.5 Approximate Resolution
        6.6 System Capacity
        6.7 Inference Performance
        6.8 Concluding Remarks
    7. Structure and Parameter Identifications of Fuzzy Relational Neural Systems
        7.1 Introduction
        7.2 Modelling Nonlinear Dynamic Systems by Fuzzy Relational Equations
        7.3 A General FRNS Identification Algorithm
        7.4 An Evolutionary Computation Approach to Structure and Parameter Identifications
            7.4.1 Guided Evolutionary Simulated Annealing
            7.4.2 An Evolutionary Identification (EVIDENT) Algorithm
        7.5 Simulation Results
        7.6 Concluding Remarks
    8. Conclusions
        8.1 Summary of Contributions
            8.1.1 Fuzzy Competitive Learning
            8.1.2 Capacity Analysis of FAM and FRNS
            8.1.3 Numerical Identification of FRNS
        8.2 Further Investigations
    Appendix A: Publication List of the Candidate
    Bibliography
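
    The thesis's exact fuzzy competitive learning rules are not reproduced in this listing; as a sketch of the general idea behind fuzzifying competitive learning, the following shows one common formulation (FCM-style memberships with fuzziness exponent m), in which every prototype moves toward the input in proportion to its fuzzy membership instead of a single crisp winner taking the whole update. Names and the membership formula are assumptions for illustration.

```python
import numpy as np

def fuzzy_cl_step(protos, x, lr=0.05, m=2.0, eps=1e-9):
    """One online fuzzy competitive learning update: all prototypes
    share the update, weighted by fuzzy memberships derived from
    inverse squared distance (larger m = fuzzier competition)."""
    d = np.sum((protos - x) ** 2, axis=1) + eps   # squared distances to input
    u = (1.0 / d) ** (1.0 / (m - 1.0))            # unnormalized memberships
    u /= u.sum()                                  # memberships sum to 1
    protos += lr * u[:, None] * (x - protos)      # membership-weighted moves
    return u
```

    Crisp competitive learning is recovered in the limit m -> 1, where the membership vector collapses to a one-hot winner.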

    Autour de la quantification fonctionnelle de processus gaussiens

    The purpose of this thesis is to study the theory of functional quantization for Gaussian processes, investigating general asymptotic properties of the quantization error and of related quantities such as the maximal radius of the optimal quantizer. Following Sagna's results on the maximal radius of the optimal quantizer in finite dimension, we study the asymptotics of the maximal radius in infinite dimension, specifically for Brownian motion. We also present a new finite-dimensional stochastic algorithm for finding stationary quantizers, based on Competitive Learning Vector Quantization (CLVQ); we examine its convergence and present numerical results on its behaviour. Finally, we propose a new estimation method for the Hurst parameter of fractional Gaussian processes, based on their Karhunen-Loève decomposition, that is more robust for numerical computation than maximum likelihood.
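
    The Karhunen-Loève expansion of Brownian motion that underlies this kind of functional quantization is classical: W(t) = sum_k Z_k * sqrt(2) * sin((k - 1/2)*pi*t) / ((k - 1/2)*pi) on [0, 1], with Z_k i.i.d. standard normal. A minimal numpy sketch of sampling paths from the truncated expansion (function name and parameters are illustrative; the thesis's quantizer construction itself is not reproduced):

```python
import numpy as np

def brownian_kl_paths(n_paths, n_terms, t, seed=0):
    """Sample Brownian-motion paths on [0, 1] from a truncated
    Karhunen-Loeve expansion. Functional quantization schemes
    then quantize the leading coefficients Z_k, whose variances
    decay like 1 / ((k - 1/2) * pi)^2."""
    rng = np.random.default_rng(seed)
    k = np.arange(1, n_terms + 1) - 0.5                     # k - 1/2
    Z = rng.standard_normal((n_paths, n_terms))             # i.i.d. N(0, 1) coefficients
    basis = np.sqrt(2) * np.sin(np.outer(t, k * np.pi)) / (k * np.pi)
    return Z @ basis.T                                      # shape (n_paths, len(t))
```

    A quick sanity check on the truncation: the sampled paths start at 0 and have Var W(t) close to t, which is what a product quantizer built on the leading coefficients relies on.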