
    Multiresolution vector quantization

    Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher-resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
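The embedded-decoding idea can be illustrated with a toy two-stage scalar quantizer (the codebooks below are chosen for illustration and are not taken from the paper): decoding only the first-stage index gives a coarse reproduction, and decoding the refinement index as well gives a finer one.

```python
# Hypothetical two-stage sketch of an embedded (multiresolution) quantizer:
# stage 1 gives a coarse reproduction, stage 2 refines it.  Decoding only
# the stage-1 index yields a low-resolution value; decoding both indices
# yields a higher-resolution one.

COARSE = [-2.0, 0.0, 2.0]   # stage-1 codebook (assumed for illustration)
REFINE = [-0.5, 0.0, 0.5]   # stage-2 residual codebook (assumed)

def nearest(codebook, x):
    """Index of the codeword closest to x."""
    return min(range(len(codebook)), key=lambda i: abs(codebook[i] - x))

def encode(x):
    """Return an embedded description (i1, i2); i1 alone is decodable."""
    i1 = nearest(COARSE, x)
    i2 = nearest(REFINE, x - COARSE[i1])
    return i1, i2

def decode(i1, i2=None):
    """Decode a prefix of the description: coarse only, or coarse + refinement."""
    y = COARSE[i1]
    if i2 is not None:
        y += REFINE[i2]
    return y

x = 1.3
i1, i2 = encode(x)
low = decode(i1)        # low-resolution reproduction: 2.0
high = decode(i1, i2)   # higher-resolution reproduction: 1.5
assert abs(x - high) <= abs(x - low)
```

Real multiresolution vector quantizers work on blocks rather than scalars and optimize all stages jointly, but the prefix-decodable structure is the same.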

    Vector quantization

    During the past ten years, Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments are made on the state of the art and current research efforts.
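The basic design idea can be sketched in a few lines (this is a generic illustration, not code from the survey; the training data and codebook size are assumed): a codebook is refined with a few Lloyd iterations, and vectors are then coded as the index of their nearest codeword.

```python
# Minimal vector-quantizer sketch: Lloyd training plus nearest-codeword coding.

def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest(codebook, v):
    """Encoding: index of the nearest codeword."""
    return min(range(len(codebook)), key=lambda i: dist2(codebook[i], v))

def lloyd(train, codebook, iters=10):
    """Assign vectors to nearest-codeword cells, move codewords to centroids."""
    for _ in range(iters):
        cells = [[] for _ in codebook]
        for v in train:
            cells[nearest(codebook, v)].append(v)
        codebook = [
            [sum(c) / len(cell) for c in zip(*cell)] if cell else cw
            for cell, cw in zip(cells, codebook)
        ]
    return codebook

# Assumed toy training set with two well-separated clusters.
train = [(0.0, 0.1), (0.1, 0.0), (4.0, 4.1), (3.9, 4.0)]
cb = lloyd(train, [list(train[0]), list(train[2])])

# Decoding is just a table lookup of the transmitted index.
assert nearest(cb, (0.05, 0.05)) == 0
assert nearest(cb, (4.0, 4.0)) == 1
```

Sending only the index is what buys the compression: at codebook size K and block dimension k, the rate is (log2 K)/k bits per sample.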

    Perceptually-Driven Video Coding with the Daala Video Codec

    The Daala project is a royalty-free video codec that attempts to compete with the best patent-encumbered codecs. Part of our strategy is to replace core tools of traditional video codecs with alternative approaches, many of them designed to take perceptual aspects into account, rather than optimizing for simple metrics like PSNR. This paper documents some of our experiences with these tools, which ones worked and which did not. We evaluate which tools are easy to integrate into a more traditional codec design, and show results in the context of the codec being developed by the Alliance for Open Media.
    Comment: 19 pages, Proceedings of SPIE Workshop on Applications of Digital Image Processing (ADIP), 201

    Semi-automatic parameterization of dynamic models using plant data

    The aim of this thesis was to develop a new methodology for estimating parameters of NAPCON ProsDS dynamic simulator models to better represent data containing several operating points. Before this thesis, no known methodology had existed for combining operating-point identification with parameter estimation of NAPCON ProsDS simulator models. The methodology was designed by assessing and selecting suitable methods for operating-space partitioning, parameter estimation and parameter scheduling. Previously implemented clustering algorithms were utilized for the operating-space partition. Parameter estimation was implemented as a new tool in the NAPCON ProsDS dynamic simulator, and iterative parameter estimation methods were applied. Finally, lookup tables were applied for tuning the model parameters according to the operating state. The methodology was tested by tuning a heat exchanger model to several operating points based on plant process data. The results indicated that the developed methodology was able to tune the simulator model to better represent several operating states. However, more testing with different models is required to verify the general applicability of the methodology.
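The cluster-then-schedule workflow can be sketched as follows. All names and data here are illustrative, not the NAPCON ProsDS API: operating points are partitioned by nearest-centroid assignment, one parameter value is estimated per cluster (a per-cluster mean stands in for the iterative estimation used in the thesis), and a lookup table schedules the parameter by operating state.

```python
# Hedged sketch of operating-point clustering + lookup-table parameter
# scheduling.  Data, centroids, and the "estimated" parameter are assumed.

def assign_cluster(centroids, load):
    """Operating-space partition: nearest-centroid assignment on a load signal."""
    return min(range(len(centroids)), key=lambda i: abs(centroids[i] - load))

# Assumed plant data: (load, observed heat-transfer coefficient) pairs.
data = [(10, 0.81), (12, 0.79), (48, 0.62), (52, 0.60)]
centroids = [11.0, 50.0]   # two identified operating points

# "Parameter estimation": per-cluster mean of the observed coefficient.
table = {}
for c in range(len(centroids)):
    vals = [y for load, y in data if assign_cluster(centroids, load) == c]
    table[c] = sum(vals) / len(vals)

def scheduled_parameter(load):
    """Lookup-table parameter scheduling by operating state."""
    return table[assign_cluster(centroids, load)]

assert abs(scheduled_parameter(11) - 0.80) < 1e-9   # low-load operating point
assert abs(scheduled_parameter(50) - 0.61) < 1e-9   # high-load operating point
```

In the thesis the per-cluster estimate comes from iterative optimization against the simulator rather than a simple mean, and the lookup table can interpolate between operating points instead of switching hard.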

    A Unifying Review of Linear Gaussian Models

    Factor analysis, principal component analysis, mixtures of Gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of Gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
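The static case of the basic generative model can be sketched directly (the dimensions, loading matrix, and noise variances below are assumed for illustration): a hidden Gaussian state is mapped linearly to the observations and corrupted by independent Gaussian noise. Constraining that noise differently is what distinguishes the family members, e.g. diagonal noise gives factor analysis while isotropic noise gives sensible PCA.

```python
import random

# Sketch of the static linear Gaussian generative model:
#   x ~ N(0, I),  y = C x + v,  v ~ N(0, diag(R))
# C (loading matrix) and R (noise variances) are assumed for illustration.

random.seed(0)

C = [[1.0, 0.0],
     [0.5, 0.5],
     [0.0, 1.0]]        # 3 observed dimensions, 2 latent dimensions
R = [0.1, 0.1, 0.1]     # diagonal observation-noise variances

def sample():
    """Draw one (latent, observed) pair from the generative model."""
    x = [random.gauss(0, 1) for _ in range(len(C[0]))]
    y = [sum(C[i][j] * x[j] for j in range(len(x))) + random.gauss(0, R[i] ** 0.5)
         for i in range(len(C))]
    return x, y

x, y = sample()
assert len(x) == 2 and len(y) == 3
```

The dynamic members of the family (Kalman filters, HMMs) arise when the hidden state additionally evolves over time, continuously or over a discrete set, under the same linear-Gaussian machinery.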