
    Statistically optimum pre- and postfiltering in quantization

    We consider the optimization of pre- and postfilters surrounding a quantization system. The goal is to optimize the filters so that the mean square error is minimized under the key constraint that the quantization noise variance is directly proportional to the variance of the quantization system input. Unlike some previous work, the postfilter is not restricted to be the inverse of the prefilter. With no order constraint on the filters, we present closed-form solutions for the optimum pre- and postfilters when the quantization system is a uniform quantizer. Using these optimum solutions, we obtain a coding gain expression for the system under study. The coding gain expression clearly indicates that, at high bit rates, there is no loss of generality in restricting the postfilter to be the inverse of the prefilter. We then repeat the same analysis with first-order pre- and postfilters of the form 1 + αz^(-1) and 1/(1 + γz^(-1)). Specifically, we study two cases: 1) FIR prefilter, IIR postfilter and 2) IIR prefilter, FIR postfilter. For each case, we obtain a mean square error expression, optimize the coefficients α and γ, and provide examples comparing the coding gain performance with the case α = γ. In the last section, we assume that the quantization system is an orthonormal perfect reconstruction filter bank. To apply the optimum pre- and postfilters derived earlier, the output of the filter bank must be wide-sense stationary (WSS), which, in general, is not true. We provide two theorems, each under a different set of assumptions, that guarantee the wide-sense stationarity of the filter bank output. We then propose a suboptimum procedure to increase the coding gain of the orthonormal filter bank.
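    As a rough illustration of the optimization described above, the sketch below evaluates the mean square error of the first-order FIR-prefilter/IIR-postfilter case numerically: a linear-distortion term from the prefilter/postfilter mismatch plus a noise term whose variance is proportional to the prefilter output variance. The AR(1) source, the proportionality constant c, and the grid search are illustrative assumptions; the paper derives closed-form optima instead.

```python
import numpy as np

def system_mse(a, g, rho=0.9, c=0.01, nfreq=2048):
    """MSE of prefilter 1 + a z^-1, quantizer, postfilter 1/(1 + g z^-1)."""
    w = np.linspace(0.0, np.pi, nfreq)
    Sx = (1 - rho**2) / np.abs(1 - rho * np.exp(-1j * w)) ** 2  # AR(1) PSD, unit variance
    F = 1 + a * np.exp(-1j * w)                                  # FIR prefilter
    G = 1.0 / (1 + g * np.exp(-1j * w))                          # IIR postfilter
    var_v = np.mean(np.abs(F) ** 2 * Sx)                         # quantizer input variance
    lin_err = np.mean(np.abs(F * G - 1) ** 2 * Sx)               # filter-mismatch distortion
    noise_err = c * var_v * np.mean(np.abs(G) ** 2)              # shaped quantization noise
    return lin_err + noise_err

# Grid search over the two first-order coefficients (closed forms would replace this).
coeffs = np.linspace(-0.95, 0.95, 96)
best = min((system_mse(a, g), a, g) for a in coeffs for g in coeffs)
mse_no_filters = 0.01 * 1.0                                      # direct quantization: c * var(x)
print("alpha, gamma =", best[1:], " coding gain =", mse_no_filters / best[0])
```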

    Channel Optimized Distributed Multiple Description Coding

    In this paper, channel optimized distributed multiple description vector quantization (CDMD) schemes are presented for distributed source coding in symmetric and asymmetric settings. The CDMD encoder is designed using a deterministic annealing approach over noisy channels with packet loss. A minimum mean squared error asymmetric CDMD decoder is proposed for effective reconstruction of a source, utilizing the side information (SI) and its corresponding received descriptions. The proposed iterative symmetric CDMD decoder jointly reconstructs the symbols of multiple correlated sources. Two types of symmetric CDMD decoders, namely the estimated-SI and the soft-SI decoders, are presented, which respectively exploit the reconstructed symbols and the a posteriori probabilities of other sources as SI across iterations. In a multiple-source CDMD setting, three methods are proposed to select another source as the SI for reconstructing a given source during decoding. The methods operate based on minimum physical distance (in a wireless sensor network setting), maximum mutual information, and minimum end-to-end distortion. The performance of the proposed systems and algorithms is evaluated and compared in detail. Comment: Submitted to the IEEE Transactions on Signal Processing.
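    One of the three SI-selection rules named above, maximum mutual information, can be illustrated with a short sketch: for a target source, estimate the mutual information against each candidate source from empirical histograms and pick the maximum. The correlated-source model and histogram estimator below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Estimate I(X;Y) in bits from two equal-length sample streams."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

rng = np.random.default_rng(0)
n = 20000
target = rng.standard_normal(n)
candidates = {                            # sources with different correlation to the target
    "A": 0.9 * target + 0.44 * rng.standard_normal(n),
    "B": 0.5 * target + 0.87 * rng.standard_normal(n),
    "C": rng.standard_normal(n),
}
best = max(candidates, key=lambda k: mutual_information(target, candidates[k]))
print("selected SI source:", best)        # expected: "A", the most correlated source
```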

    A constrained joint source/channel coder design and vector quantization of nonstationary sources

    The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period, a study was conducted on bridging network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM network. Two aspects of the network that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm; these are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years, and numerous compression techniques that incorporate VQ have been proposed. While LBG VQ provides excellent compression, there are several drawbacks to the use of LBG quantizers, including search complexity, memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.
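    A minimal sketch of a recursively indexed scalar quantizer, the building block the proposed technique extends, is given below: a uniform quantizer with a small index range reserves its extreme indices as "continue" symbols, so inputs outside the range are coded as several small indices and decoded by summation. The step size and index range are illustrative choices, not the RISQ parameters used in the report.

```python
def risq_encode(x, step=0.5, kmax=4):
    """Encode one sample as a list of indices in [-kmax, kmax]; any index with
    magnitude < kmax terminates the sample, so the stream stays decodable."""
    q = int(round(x / step))         # nearest uniform-quantizer index
    indices = []
    while q >= kmax:                 # too large: emit "continue" index and reduce
        indices.append(kmax)
        q -= kmax
    while q <= -kmax:                # too small: emit negative "continue" index
        indices.append(-kmax)
        q += kmax
    indices.append(q)                # terminating inner index
    return indices

def risq_decode(indices, step=0.5):
    """Reconstruct the sample by summing the decoded index contributions."""
    return step * sum(indices)

x = 3.7
code = risq_encode(x)
print(code, risq_decode(code))       # e.g. [4, 3] -> 3.5, within step/2 of the input
```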

    Optimal block cosine transform image coding for noisy channels

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a noisy memoryless channel, together with an algorithm for the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons against a reference system designed for no channel errors were made.
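    The bit-assignment step can be illustrated with a simple marginal-analysis (greedy) allocation, a stand-in for the steepest-descent algorithm mentioned above: each additional bit goes to the coefficient whose distortion drops the most. The per-coefficient distortion model sigma^2 * 2^(-2b) is a standard high-rate approximation, not the channel-optimized quantizer performance used in the paper.

```python
import numpy as np

def allocate_bits(variances, total_bits):
    """Greedy bit allocation across transform coefficients."""
    bits = np.zeros(len(variances), dtype=int)
    dist = np.array(variances, dtype=float)   # current per-coefficient distortion
    for _ in range(total_bits):
        gain = dist - dist / 4.0              # reduction from adding one bit (2^-2 factor)
        i = int(np.argmax(gain))              # coefficient with steepest distortion drop
        bits[i] += 1
        dist[i] /= 4.0
    return bits

variances = [16.0, 8.0, 4.0, 2.0, 1.0, 0.5, 0.25, 0.1]   # e.g. DCT coefficient variances
print(allocate_bits(variances, total_bits=16))
```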

    Oversampling PCM techniques and optimum noise shapers for quantizing a class of nonbandlimited signals

    We consider the efficient quantization of a class of nonbandlimited signals, namely, the class of discrete-time signals that can be recovered from their decimated versions. The signals are modeled as the output of a single FIR interpolation filter (single-band model) or, more generally, as the sum of the outputs of L FIR interpolation filters (multiband model). These nonbandlimited signals are oversampled, and it is therefore reasonable to expect that we can reap the same benefits of well-known efficient A/D techniques that apply only to bandlimited signals. We first show that we can obtain a great reduction in the quantization noise variance due to the oversampled nature of the signals. We can achieve a substantial decrease in bit rate by appropriately decimating the signals and then quantizing them. To further increase the effective quantizer resolution, noise shaping is introduced by optimizing prefilters and postfilters around the quantizer. We start with a scalar time-invariant quantizer and study two important cases of linear time-invariant (LTI) filters, namely, the case where the postfilter is the inverse of the prefilter and the more general case where the postfilter is independent of the prefilter. Closed-form expressions for the optimum filters and the average minimum mean square error are derived in each case for both the single-band and multiband models. The class of noise shaping filters and quantizers is then enlarged to include linear periodically time-varying (LPTV) filters and periodically time-varying quantizers, both of period M. We study two special cases in great detail.
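    The "decimate, then quantize" idea for the single-band model can be sketched as follows: since the signal is the output of an FIR interpolation filter driven by a low-rate sequence, quantizing only the low-rate driving samples (1/M as many) and re-interpolating recovers the full-rate signal up to quantization noise. The filter, decimation factor, and bit depth below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4
g = np.array([0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25])   # FIR interpolation filter
c = rng.standard_normal(256)                             # low-rate driving sequence

up = np.zeros(len(c) * M)
up[::M] = c
x = np.convolve(up, g)                                   # full-rate nonbandlimited signal

def quantize(v, bits, vmax=4.0):
    """Uniform mid-tread quantizer on [-vmax, vmax] with 2**bits levels."""
    step = 2 * vmax / (2 ** bits)
    return step * np.round(np.clip(v, -vmax, vmax) / step)

c_hat = quantize(c, bits=6)                              # quantize only the low-rate samples
up_hat = np.zeros_like(up)
up_hat[::M] = c_hat
x_hat = np.convolve(up_hat, g)                           # reconstruct through the same filter

print("samples quantized per output sample:", len(c) / len(x))
print("reconstruction MSE:", np.mean((x - x_hat) ** 2))
```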

    Oblivious data hiding: a practical approach

    This dissertation presents an in-depth study of oblivious data hiding with an emphasis on quantization based schemes. Three main issues are specifically addressed: (1) theoretical and practical aspects of embedder-detector design; (2) performance evaluation, and analysis of performance vs. complexity tradeoffs; and (3) some application-specific implementations. A communications framework based on channel adaptive encoding and channel independent decoding is proposed and interpreted in terms of the oblivious data hiding problem. The duality between the suggested encoding-decoding scheme and practical embedding-detection schemes is examined. With this perspective, a formal treatment of the processing employed in quantization based hiding methods is presented. In accordance with these results, the key aspects of the embedder-detector design problem for practical methods are laid out, and various embedding-detection schemes are compared in terms of probability of error, normalized correlation, and hiding rate performance merits, assuming AWGN attack scenarios and using the mean squared error distortion measure. The performance-complexity tradeoffs available for large and small embedding signal sizes (availability of high bandwidth and limitation of low bandwidth) are examined, and some novel insights are offered. A new codeword generation scheme is proposed to enhance the performance of low-bandwidth applications. Embedding-detection schemes are devised for the watermarking application of data hiding, where robustness against attacks is the main concern rather than the hiding rate or payload. In particular, cropping-resampling and lossy compression types of noninvertible attacks are considered in this dissertation work.
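    As a concrete example of the quantization based hiding schemes analyzed in the dissertation, the sketch below implements scalar quantization index modulation (QIM): each host sample is quantized to one of two interleaved lattices according to the message bit, and the detector picks the closer lattice. This is a representative scheme under an AWGN attack, not necessarily the exact embedder-detector pair studied here; step size and noise level are illustrative.

```python
import numpy as np

def qim_embed(host, bits, step=1.0):
    """Quantize each host sample to the even (bit 0) or shifted (bit 1) lattice."""
    offset = bits * (step / 2.0)
    return step * np.round((host - offset) / step) + offset

def qim_detect(received, step=1.0):
    """Pick the bit whose lattice is closest to each received sample."""
    d0 = np.abs(received - step * np.round(received / step))
    shifted = received - step / 2.0
    d1 = np.abs(shifted - step * np.round(shifted / step))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(2)
host = rng.standard_normal(1000)
bits = rng.integers(0, 2, size=1000)
watermarked = qim_embed(host, bits)
received = watermarked + 0.1 * rng.standard_normal(1000)   # AWGN attack
print("bit error rate:", np.mean(qim_detect(received) != bits))
```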

    Optimal Multiresolution Quantization for Broadcast Channels with Random Index Assignment

    Shannon's classical separation result holds only in the limit of infinite source code dimension and infinite channel code block length. In addition, Shannon theory does not address the design of good source codes when the probability of channel error is nonzero, which is inevitable for finite-length channel codes. Thus, for practical systems, a joint source and channel code design can improve performance for finite-dimension source codes and finite-block-length channel codes, as well as complexity and delay. Consider a multicast system over a broadcast channel, where different end users typically have different capacities. To support such user or capacity diversity, it is desirable to encode the source to be broadcast into a scalable bit stream along which multiple resolutions of the source can be reconstructed progressively from left to right. Such a source coding technique is called multiresolution source coding. In wireless communications, joint source channel coding (JSCC) has attracted wide attention due to its adaptivity to time-varying channels. However, there are few works on joint source channel coding for network multicast, especially on optimal source coding over broadcast channels. In this work, we aim at designing and analyzing the optimal multiresolution vector quantization (MRVQ) in conjunction with the subsequent broadcast channel over which the coded scalable bit stream is transmitted. By adopting random index assignment (RIA) to link MRVQ for the source with superposition coding for the broadcast channel, we establish a closed-form formula for the end-to-end distortion (EED) of a tandem system of MRVQ and a broadcast channel. From this formula we analyze the intrinsic structure of the EED in such a communication system and derive two necessary conditions for optimal multiresolution vector quantization over broadcast channels with random index assignment. Based on the two necessary conditions, we propose a greedy iterative algorithm that designs MRVQ jointly with the channel conditions; the algorithm depends on the channel only through several types of average channel error probabilities rather than complete knowledge of the channel. Experiments show that MRVQ designed by the proposed algorithm significantly outperforms conventional MRVQ designed without channel information. The closed-form formula for the weighted EED under RIA also keeps the computational complexity of the performance analysis manageable. In comparison with MRVQ design for a fixed index assignment, the computational complexity of the quantizer design is significantly reduced by using random index assignment. In addition, simulations indicate that the proposed algorithm is more robust against channel mismatch than MRVQ designed with a fixed index assignment, since it uses only average channel information. Therefore, we conclude that the proposed algorithm is better suited to wireless communications and to applications where complete knowledge of the channel is hard to obtain. Furthermore, we propose two novel algorithms for MRVQ over broadcast channels: one optimizes the two quantizers at the two layers alternately and iteratively, and the other applies under the constraint that each encoding cell is convex and contains its reconstruction point. Finally, we analyze the asymptotic performance of the weighted EED for the optimal joint MRVQ. The asymptotic result provides a theoretically achievable quantizer performance level and sheds light on the design of the optimal MRVQ over broadcast channels from a different perspective.
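    The role of random index assignment can be illustrated with a single-resolution scalar stand-in for MRVQ: averaged over random index assignments, the end-to-end distortion of a quantizer over a noisy channel depends on the channel only through the average probability that the transmitted index arrives intact, since a corrupted index lands on each of the other codevectors with equal probability on average. The source, quantizer, and channel values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(200000)

N = 8
edges = np.quantile(x, np.linspace(0, 1, N + 1)[1:-1])      # simple quantizer partition
idx = np.digitize(x, edges)
centroids = np.array([x[idx == i].mean() for i in range(N)])

p_correct = 0.9                                             # average "index intact" probability

# Closed-form average over random index assignments: a corrupted index is, on
# average, decoded to each of the other N-1 codevectors with equal probability.
d_intra = np.mean((x - centroids[idx]) ** 2)                # distortion with the correct codevector
d_cross = np.mean([(x - c) ** 2 for c in centroids], axis=0)
d_wrong = np.mean((N * d_cross - (x - centroids[idx]) ** 2) / (N - 1))
eed_formula = p_correct * d_intra + (1 - p_correct) * d_wrong

# Monte Carlo check: corrupt each index with probability 1 - p_correct,
# replacing it by a uniformly chosen different index.
received = np.where(rng.random(x.size) < p_correct, idx,
                    (idx + rng.integers(1, N, size=x.size)) % N)
eed_sim = np.mean((x - centroids[received]) ** 2)
print(eed_formula, eed_sim)                                  # the two values should agree closely
```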