272 research outputs found

    Lossy Compression of Exponential and Laplacian Sources using Expansion Coding

    A general method of source coding via expansion is proposed in this paper, which reduces the problem of compressing an analog (continuous-valued) source to a set of much simpler problems: compressing discrete sources. Specifically, the focus is on lossy compression of exponential and Laplacian sources, which are expanded over a finite alphabet prior to quantization. Due to the decomposability property of such sources, the random variables resulting from the expansion are independent and discrete. Thus, each expanded level corresponds to an independent discrete source coding problem, and the original problem reduces to coding over these parallel sources under a total distortion constraint. Any feasible solution to the resulting optimization problem is an achievable rate-distortion pair for the original continuous-valued source compression problem. Although solving this optimization problem at every distortion level is hard, we show that our expansion coding scheme provides a good solution in the low-distortion regime. Further, by adopting low-complexity codes designed for discrete source coding, the total coding complexity can be made tractable in practice. Comment: 8 pages, 3 figures
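    The decomposability property referred to above can be checked numerically: the binary digits of an Exp(lambda) variable are independent Bernoulli, with level-i digit taking value 1 with probability 1/(1 + e^(lambda*2^i)). A minimal simulation sketch (the rate lambda, sample size, and levels are illustrative choices, not from the paper):

    ```python
    import math
    import random

    random.seed(0)
    lam = 1.0            # rate of the exponential source (illustrative choice)
    N = 200_000
    levels = [-1, 0, 1]  # bit positions: X = sum_i B_i * 2**i

    samples = [random.expovariate(lam) for _ in range(N)]

    def bit(x, i):
        """i-th binary digit of x, i.e., the coefficient of 2**i in its expansion."""
        return int(x / 2 ** i) % 2

    # Empirical vs. theoretical marginal probability of a 1 at each level.
    emp = {i: sum(bit(x, i) for x in samples) / N for i in levels}
    theory = {i: 1.0 / (1.0 + math.exp(lam * 2 ** i)) for i in levels}

    # Independence across levels: the joint probability should factor.
    joint11 = sum(bit(x, 0) & bit(x, 1) for x in samples) / N
    ```

    With these parameters, the empirical bit probabilities match the closed form at every level, and the joint probability of two levels factors into the product of the marginals, which is exactly what turns the analog problem into parallel discrete ones.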

    Incremental Refinements and Multiple Descriptions with Feedback

    It is well known that independent (separate) encoding of K correlated sources may incur a rate loss compared to joint encoding, even if the decoding is done jointly. This loss is particularly evident in the multiple descriptions problem, where the sources are repetitions of the same source but each description must be individually good. We observe that, under mild conditions on the source and distortion measure, the rate ratio R_independent(K)/R_joint goes to one in the limit of small rate/high distortion. Moreover, we consider the excess rate with respect to the rate-distortion function, R_independent(K, M) - R(D), in M rounds of K independent encodings with a final distortion level D. We provide two examples - a Gaussian source with mean-squared error and an exponential source with one-sided error - for which the excess rate vanishes in the limit as the number of rounds M goes to infinity, for any fixed D and K. This result has an interesting interpretation for a multi-round variant of the multiple descriptions problem, where after each round the encoder receives (block) feedback indicating which of the descriptions arrived: in the limit as the number of rounds M goes to infinity (i.e., many incremental rounds), the total rate of the received descriptions approaches the rate-distortion function. We provide theoretical and experimental evidence that this phenomenon is in fact more general than the two examples above. Comment: 62 pages. Accepted to the IEEE Transactions on Information Theory
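    The Gaussian case has a well-known single-description analogue that is easy to verify numerically: the quadratic-Gaussian source is successively refinable, so the incremental rates 0.5*log2(D[m-1]/D[m]) spent over M refinement rounds telescope to the rate-distortion function R(D_M) = 0.5*log2(sigma^2/D_M). A minimal check (the distortion schedule below is an arbitrary illustrative choice, not from the paper):

    ```python
    import math

    var = 1.0                  # source variance sigma^2
    D = [1.0, 0.5, 0.1, 0.01]  # distortion after each round; D[0] = var (nothing sent yet)

    # Incremental rate spent in round m to refine distortion D[m-1] -> D[m].
    inc_rates = [0.5 * math.log2(D[m - 1] / D[m]) for m in range(1, len(D))]
    total_rate = sum(inc_rates)

    # Gaussian rate-distortion function at the final distortion level.
    rd = 0.5 * math.log2(var / D[-1])
    ```

    The sum of incremental rates equals R(D) exactly here because the quadratic-Gaussian problem incurs no successive-refinement penalty; the abstract's feedback result concerns the harder multi-encoder setting, where the same no-excess-rate behavior emerges only in the limit of many rounds.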

    An Orthogonality Principle for Select-Maximum Estimation of Exponential Variables

    It was recently proposed to encode the one-sided exponential source X via K parallel channels, Y1, ..., YK, such that the error signals X - Yi, i = 1, ..., K, are one-sided exponential and mutually independent given X. Moreover, it was shown that the optimal estimator \hat{Y} of the source X with respect to the one-sided error criterion is simply the maximum of the outputs, i.e., \hat{Y} = max{Y1, ..., YK}. In this paper, we show that the distribution of the resulting estimation error X - \hat{Y} is equivalent to that of the optimum noise in the backward test channel of the one-sided exponential source, i.e., it is one-sided exponentially distributed and statistically independent of the joint output Y1, ..., YK. Comment: 5 pages. Submitted to ISIT
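    One construction consistent with the setup above takes Y_i = X - E_i with E_i i.i.d. one-sided exponential given X; then the select-maximum error X - max{Y_1, ..., Y_K} equals min{E_1, ..., E_K}, which is again one-sided exponential with K times the rate (mean 1/(K*mu)). A quick simulation of this min-of-exponentials identity (the parameters K, mu, and N are illustrative, not from the paper):

    ```python
    import random

    random.seed(1)
    K, mu, N = 4, 1.0, 200_000

    # Error of the select-maximum estimator: X - max_i Y_i = min_i E_i.
    errors = [min(random.expovariate(mu) for _ in range(K)) for _ in range(N)]
    mean_error = sum(errors) / N  # expect ~ 1/(K*mu) if the min is Exp(K*mu)
    ```

    The empirical mean error lands on 1/(K*mu), consistent with the error being one-sided exponential with a K-fold rate increase; the paper's contribution is the stronger statement that this error is also statistically independent of the joint output.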

    A stochastic algorithm for probabilistic independent component analysis

    The decomposition of a sample of images on a relevant subspace is a recurrent problem in many fields, from computer vision to medical image analysis. We propose in this paper a new learning principle and implementation of the generative decomposition model generally known as noisy ICA (independent component analysis), based on the SAEM algorithm, a versatile stochastic approximation of the standard EM algorithm. We demonstrate the applicability of the method on a large range of decomposition models and illustrate the developments with experimental results on various data sets. Comment: Published at http://dx.doi.org/10.1214/11-AOAS499 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
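    The SAEM idea can be made concrete on a toy model that is much simpler than the paper's ICA setting: for observations y_i = z_i + eps_i with latent z_i ~ N(theta, 1) and noise eps_i ~ N(0, 1), each iteration simulates the latent variables from their current posterior and updates the sufficient statistic with a decreasing step size, instead of computing the exact E-step expectation. A hedged sketch (model, step sizes, and parameters are illustrative assumptions; the EM fixed point for this model is theta = mean(y)):

    ```python
    import random

    random.seed(2)
    n = 1000
    theta_true = 2.0
    # Observations y_i = z_i + eps_i, with z_i ~ N(theta_true, 1), eps_i ~ N(0, 1).
    y = [theta_true + random.gauss(0, 1) + random.gauss(0, 1) for _ in range(n)]
    ybar = sum(y) / n  # EM fixed point (and MLE) for this toy model

    theta, s = 0.0, 0.0
    for k in range(1, 2001):
        # Simulation step: draw each z_i from its posterior N((theta + y_i)/2, 1/2).
        z = [random.gauss((theta + yi) / 2.0, 0.5 ** 0.5) for yi in y]
        # Stochastic approximation of the sufficient statistic, step size 1/k.
        s = s + (1.0 / k) * (sum(z) / n - s)
        # M-step: theta maximizes the complete-data likelihood given s.
        theta = s
    ```

    The decreasing step sizes average out the Monte Carlo noise of the simulation step, so the iterates settle at the EM fixed point; the paper applies the same stochastic-approximation mechanism to the intractable E-step of the noisy ICA model.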

    Approximate trigonometric expansions with applications to signal decomposition and coding

    Signal representation and data coding for multi-dimensional signals have recently received considerable attention due to their importance to several modern technologies. Many useful contributions employing wavelets and transform methods have been reported. For signal representation, it is always desirable that a signal be represented using a minimum number of parameters; transform efficiency and ease of implementation are, to a large extent, mutually incompatible. If a stationary process is not periodic, then the coefficients of its Fourier expansion are not uncorrelated; yet the value of expanding such a process as a superposition of exponentials, particularly in the study of linear systems, needs no elaboration. In this research, stationary non-periodic signals are represented using approximate trigonometric expansions. These expansions have a user-defined parameter that can turn the transformation into a signal decomposition tool, and it is shown that fast implementation of these expansions is possible using wavelets. The approximate trigonometric expansions are applied to multidimensional signals in a constrained environment in which the dominant coefficients of the expansion are retained and the insignificant ones are set to zero. The signal is then reconstructed from this limited set of coefficients, leading to compression. Sample results for representing multidimensional signals are given to illustrate the efficiency of the proposed method. It is verified that, for a given number of coefficients, the proposed technique yields a higher signal-to-noise ratio than conventional techniques employing the discrete cosine transform.
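    The retain-dominant-coefficients scheme described above can be sketched using the conventional DCT baseline the abstract compares against (the approximate trigonometric expansion itself is not specified here, so an orthonormal DCT-II stands in): transform, zero all but the k largest-magnitude coefficients, and invert. A self-contained 1-D sketch with an illustrative test signal:

    ```python
    import math

    def dct2(x):
        """Orthonormal DCT-II of a real sequence."""
        N = len(x)
        return [(math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
                * sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
                for k in range(N)]

    def idct2(c):
        """Inverse of the orthonormal DCT-II."""
        N = len(c)
        return [sum((math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
                    * c[k] * math.cos(math.pi * (n + 0.5) * k / N)
                    for k in range(N))
                for n in range(N)]

    def compress(x, keep):
        """Keep the `keep` largest-magnitude transform coefficients, zero the rest."""
        c = dct2(x)
        order = sorted(range(len(c)), key=lambda k: abs(c[k]), reverse=True)
        kept = set(order[:keep])
        return idct2([c[k] if k in kept else 0.0 for k in range(len(c))])

    N = 64
    x = [math.cos(2 * math.pi * n / N) + 0.3 * math.cos(6 * math.pi * n / N)
         for n in range(N)]

    full = compress(x, N)  # keeping every coefficient: lossless round trip
    err4 = sum((a - b) ** 2 for a, b in zip(x, compress(x, 4)))
    err8 = sum((a - b) ** 2 for a, b in zip(x, compress(x, 8)))
    ```

    Because the transform is orthonormal, the squared reconstruction error equals the energy of the discarded coefficients (Parseval), so keeping more of the dominant coefficients can only reduce the error; the thesis's claim is that its approximate trigonometric expansions concentrate more energy in fewer coefficients than the DCT, improving the SNR at a fixed coefficient budget.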

    Efficient compression of motion compensated residuals

    EThOS - Electronic Theses Online Service, United Kingdom