
    Suboptimality of the Karhunen-Loève transform for transform coding

    We examine the performance of the Karhunen-Loève transform (KLT) for transform coding applications. The KLT has long been viewed as the best available block transform for a system that orthogonally transforms a vector source, scalar quantizes the components of the transformed vector using optimal bit allocation, and then inverse transforms the vector. This paper treats fixed-rate and variable-rate transform codes of non-Gaussian sources. The fixed-rate approach uses an optimal fixed-rate scalar quantizer to describe the transform coefficients; the variable-rate approach uses a uniform scalar quantizer followed by an optimal entropy code, and each quantized component is encoded separately. Earlier work shows that for the variable-rate case there exist sources on which the KLT is not unique and the optimal quantization and coding stage matched to a "worst" KLT yields performance as much as 1.5 dB worse than the optimal quantization and coding stage matched to a "best" KLT. In this paper, we strengthen that result to show that in both the fixed-rate and the variable-rate coding frameworks there exist sources for which the performance penalty for using a "worst" KLT can be made arbitrarily large. Further, we demonstrate in both frameworks that there exist sources for which even a best KLT gives suboptimal performance. Finally, we show that even for vector sources where the KLT yields independent coefficients, the KLT can be suboptimal for fixed-rate coding.
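    The pipeline this abstract analyzes — orthogonal transform, scalar quantization of the coefficients, inverse transform — can be sketched as follows. The 2-D Gaussian source, its covariance, and the quantizer step size are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Correlated 2-D Gaussian source, N samples (illustrative covariance).
    N = 10_000
    cov = np.array([[2.0, 1.2],
                    [1.2, 1.0]])
    x = rng.multivariate_normal(np.zeros(2), cov, size=N)

    # The KLT basis is the eigenvector matrix of the (sample) covariance.
    C = np.cov(x, rowvar=False)
    eigvals, U = np.linalg.eigh(C)
    y = x @ U                        # transform coefficients

    # Uniform scalar quantization of each coefficient, then inverse transform.
    step = 0.5
    y_hat = step * np.round(y / step)
    x_hat = y_hat @ U.T              # U is orthogonal, so its inverse is U.T

    mse = np.mean((x - x_hat) ** 2)  # reconstruction error per component
    # The KLT decorrelates the coefficients (off-diagonal covariance ~ 0).
    offdiag = np.cov(y, rowvar=False)[0, 1]
    ```

    Because the transform is orthogonal, the mean-squared error in the coefficient domain equals the error in the signal domain, which is why the quantizer design can be studied coefficient by coefficient.
    
    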

    ICA based algorithms for computing optimal 1-D linear block transforms in variable high-rate source coding

    The Karhunen-Loève Transform (KLT) is optimal for transform coding of Gaussian sources; however, it is not optimal, in general, for non-Gaussian sources. Furthermore, under the high-resolution quantization hypothesis, nearly everything is known about the performance of a transform coding system with entropy-constrained scalar quantization and mean-square distortion. It is then straightforward to find a criterion that, when minimized, gives the optimal linear transform under the above conditions. However, computing the optimal transform is generally considered a difficult task, and the Gaussian assumption is therefore used to simplify the computation. In this paper, we present the abovementioned criterion as a contrast of independent component analysis modified by an additional term that penalizes non-orthogonality. We then adapt the icainf algorithm of Pham to compute the transform minimizing the criterion, either with no constraint or under an orthogonality constraint. Finally, experimental results show that the transforms we introduce can (1) outperform the KLT on synthetic signals, and (2) achieve slightly better PSNR at high rates, and better visual quality (preservation of lines and contours) at medium-to-low rates, than the KLT and the 2-D DCT on grayscale natural images.
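    A minimal illustration of the baseline these ICA-derived transforms compete against: under high-resolution quantization of a Gaussian source, the gain of an orthogonal transform over direct coding depends only on the geometric mean of the transformed-coefficient variances, which the KLT minimizes. This is a textbook sketch with an assumed covariance, not the paper's icainf adaptation:

    ```python
    import numpy as np

    # Illustrative source covariance (assumption, not from the paper).
    cov = np.array([[2.0, 1.2],
                    [1.2, 1.0]])

    def geo_mean_var(C, T):
        """Geometric mean of coefficient variances after transform y = T x."""
        Cy = T @ C @ T.T
        return np.prod(np.diag(Cy)) ** (1.0 / C.shape[0])

    eigvals, U = np.linalg.eigh(cov)
    identity_gm = geo_mean_var(cov, np.eye(2))  # no transform
    klt_gm = geo_mean_var(cov, U.T)             # KLT coefficients

    # High-rate coding gain (in dB) of the KLT over no transform.
    gain_db = 10 * np.log10(identity_gm / klt_gm)
    ```

    For a Gaussian source the KLT attains the smallest possible geometric mean (the square root of the covariance determinant here), which is exactly the optimality that fails to carry over to non-Gaussian sources.
    
    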

    Dimensionality Reduction for Distributed Estimation in the Infinite Dimensional Regime

    Distributed estimation of an unknown signal is a common task in sensor networks. The scenario usually envisioned consists of several nodes, each making an observation correlated with the signal of interest. The acquired data is then wirelessly transmitted to a central reconstruction point that aims at estimating the desired signal within a prescribed accuracy. Motivated by the obvious processing limitations inherent to such distributed infrastructures, we seek to find efficient compression schemes that account for limited available power and communication bandwidth. In this paper, we propose a transform-based approach to this problem where each sensor provides the central reconstruction point with a low-dimensional approximation of its local observation by means of a suitable linear transform. Under the mean-squared error criterion, we derive the optimal solution to apply at one sensor, assuming all else is fixed. This naturally leads to an iterative algorithm whose optimality properties are exemplified using a simple though illustrative correlation model. The stationarity issue is also investigated. Under restrictive assumptions, we then provide an asymptotic distortion analysis, as the size of the observed vectors becomes large. Our derivation relies on a variation of the Toeplitz distribution theorem, which allows us to give a reverse "water-filling" perspective on the problem of optimal dimensionality reduction. We illustrate, with a first-order Gauss-Markov model, how our findings allow us to compute analytical closed-form distortion formulas that provide an accurate estimation of the reconstruction error obtained in the finite-dimensional regime.
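    The reverse "water-filling" perspective invoked above can be sketched for the classical parallel-Gaussian case: given an eigenvalue spectrum and a total rate budget, a water level θ splits the rate so that each component pays distortion min(λ, θ). This is a textbook sketch with an assumed spectrum and rate, not the paper's infinite-dimensional derivation:

    ```python
    import numpy as np

    def reverse_waterfill(lam, R, tol=1e-10):
        """Find the water level theta for variances lam and total rate R (bits).

        Components with variance above theta are coded at rate
        0.5*log2(lam/theta); the rest are discarded. Returns (theta, distortion).
        """
        lo, hi = 0.0, max(lam)
        while hi - lo > tol:
            theta = 0.5 * (lo + hi)
            rate = sum(0.5 * np.log2(l / theta) for l in lam if l > theta)
            if rate > R:         # spending too much rate -> raise the water level
                lo = theta
            else:
                hi = theta
        theta = 0.5 * (lo + hi)
        distortion = sum(min(l, theta) for l in lam)
        return theta, distortion

    # Illustrative spectrum and rates (assumptions).
    theta3, D3 = reverse_waterfill([4.0, 2.0, 1.0], 3.0)
    _, D1 = reverse_waterfill([4.0, 2.0, 1.0], 1.0)
    ```

    As the rate grows, the water level sinks and more (smaller) eigen-directions are retained, which is the mechanism behind rate-dependent dimensionality reduction.
    
    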

    Entropy encoding, hilbert space, and karhunen-loĂšve transforms

    By introducing Hilbert space and operators, we show how probabilities, approximations, and entropy encoding from signal and image processing allow precise formulas and quantitative estimates. Our main results yield orthogonal bases which optimize distinct measures of data encoding.

    Constant-SNR, rate control and entropy coding for predictive lossy hyperspectral image compression

    Predictive lossy compression has been shown to represent a very flexible framework for lossless and lossy onboard compression of multispectral and hyperspectral images with quality and rate control. In this paper, we improve predictive lossy compression in several ways, using a standard issued by the Consultative Committee for Space Data Systems, namely CCSDS-123, as an example of application. First, exploiting the flexibility in the error control process, we propose a constant-signal-to-noise-ratio algorithm that bounds the maximum relative error between each pixel of the reconstructed image and the corresponding pixel of the original image. This is very useful to avoid low-energy areas of the image being affected by large errors. Second, we propose a new rate control algorithm that has very low complexity and provides performance equal to or better than existing work. Third, we investigate several entropy coding schemes that can speed up the hardware implementation of the algorithm and, at the same time, improve coding efficiency. These advances make predictive lossy compression an extremely appealing framework for onboard systems due to its simplicity, flexibility, and coding efficiency.
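    A toy sketch of the constant-SNR idea (not the CCSDS-123 algorithm itself): bound each pixel's quantization error by a fraction of its predicted magnitude, so bright pixels may take larger absolute errors while dark pixels stay accurate. The previous-pixel predictor and the `rel_err`/`min_abs_err` parameters are hypothetical simplifications:

    ```python
    import numpy as np

    def quantize_constant_snr(pixels, rel_err=0.05, min_abs_err=1.0):
        """Near-lossless predictive quantization with a magnitude-relative bound.

        The absolute error of each reconstructed pixel is at most
        max(rel_err * |prediction|, min_abs_err), since a uniform quantizer
        with step 2*e has maximum error e. The bound uses the prediction
        (known to the decoder), not the original pixel.
        """
        recon, prev = [], 0.0
        for p in pixels:
            pred = prev                               # trivial previous-pixel predictor
            max_err = max(rel_err * abs(pred), min_abs_err)
            step = 2.0 * max_err
            r_hat = step * round((p - pred) / step)   # quantized prediction residual
            prev = pred + r_hat                       # decoder-side reconstruction
            recon.append(prev)
        return np.array(recon)

    # Illustrative smooth scanline.
    x = np.linspace(100.0, 200.0, 50)
    x_hat = quantize_constant_snr(x)
    ```

    Because the error budget is derived from the prediction, which the decoder also has, no side information is needed to make the bound adaptive.
    
    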

    Rate-Constrained Collaborative Noise Reduction for Wireless Hearing Aids

    Hearing aids are electronic, battery-operated sensing devices which aim at compensating various kinds of hearing impairments. Recent advances in low-power electronics, coupled with progress made in digital signal processing, offer the potential for substantial improvements over state-of-the-art systems. Nevertheless, efficient noise reduction in complex listening scenarios remains a challenging task, partly due to the limited number of microphones that can be integrated on such devices. We investigate the noise reduction capability of hearing instruments that may exchange data by means of a rate-constrained wireless link and thus benefit from the signals recorded at both ears of the user. We provide the necessary theoretical results to analyze this collaboration mechanism under two different coding strategies. The first approach takes full benefit of the binaural correlation, while the second neglects it, since binaural statistics are difficult to estimate in a practical setting. The gain achieved by collaborating hearing aids as a function of the communication bit rate is then characterized, both in a monaural and a binaural configuration. The corresponding optimal rate allocation strategies are computed in closed form. While the analytical derivation is limited to a simple acoustic scenario, the latter is shown to capture many of the features of the general problem. In particular, it is observed that the loss incurred by coding schemes which do not consider the binaural correlation is rather negligible in a very noisy environment. Finally, numerical results obtained using real measurements corroborate the potential of our approach in a realistic scenario.

    Optimal Filter Banks for Multiple Description Coding: Analysis and Synthesis

    Multiple description (MD) coding is a source coding technique for information transmission over unreliable networks. In MD coding, the coder generates several different descriptions of the same signal and the decoder can produce a useful reconstruction of the source with any received subset of these descriptions. In this paper, we study the problem of MD coding of stationary Gaussian sources with memory. First, we compute an approximate MD rate distortion region for these sources, which we prove to be asymptotically tight at high rates. This region generalizes the MD rate distortion region of El Gamal and Cover (1982), and Ozarow (1980) for memoryless Gaussian sources. Then, we develop an algorithm for the design of optimal two-channel biorthogonal filter banks for MD coding of Gaussian sources. We show that optimal filters are obtained by allocating the redundancy over frequency with a reverse "water-filling" strategy. Finally, we present experimental results which show the effectiveness of our filter banks in the low-complexity, low-rate regime.

    Suboptimality of the Karhunen-Loève Transform for Fixed-Rate Transform Coding

    An open problem in source coding theory has been whether the Karhunen-Loeve transform (KLT) is optimal for a system that orthogonally transforms a vector source, scalar quantizes the components of the transformed vector using optimal bit allocation, and then inverse transforms the vector. Huang and Schultheiss proved in 1963 that for a Gaussian source the KLT is mean-squared optimal in the limit of high quantizer resolution. It is often assumed and stated in the literature that the KLT is also optimal in general for non-Gaussian sources. We disprove such assertions by demonstrating that the KLT is not optimal for certain nearly bimodal Gaussian and uniform sources. In addition, we show the unusual result that for vector sources with independent identically distributed Laplacian components, the distortion resulting from scalar quantizing the components can be reduced by including an orthogonal transform that adds intercomponent dependency.
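    The optimal bit allocation referred to above (Huang and Schultheiss, 1963) has a well-known high-resolution closed form: each coefficient receives the average rate plus half the log-ratio of its variance to the geometric mean of all variances. A sketch under those high-rate assumptions (the variances and budget are illustrative; fractional or negative allocations at low rates must be clipped in practice):

    ```python
    import numpy as np

    def optimal_bit_allocation(variances, B):
        """High-resolution optimal bit allocation for n coefficients, B total bits.

        b_i = B/n + 0.5 * log2(sigma_i^2 / geometric_mean(sigma^2)),
        which equalizes the per-coefficient distortion at high rates.
        """
        variances = np.asarray(variances, dtype=float)
        n = len(variances)
        gm = np.exp(np.mean(np.log(variances)))   # geometric mean of variances
        return B / n + 0.5 * np.log2(variances / gm)

    # Illustrative coefficient variances and a 9-bit budget.
    b = optimal_bit_allocation([4.0, 2.0, 1.0], B=9.0)
    ```

    The allocations sum exactly to the budget, and higher-variance coefficients receive more bits; the KLT's role in the classical argument is to produce the coefficient variances this rule is applied to.
    
    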

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. It is the aim of the seminar to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.