
    Filter Bank Fusion Frames

    In this paper we characterize and construct novel oversampled filter banks implementing fusion frames. A fusion frame is a sequence of orthogonal projection operators whose sum can be inverted in a numerically stable way. When properly designed, fusion frames can provide redundant encodings of signals which are optimally robust against certain types of noise and erasures. However, up to this point, few implementable constructions of such frames were known; we show how to construct them using oversampled filter banks. In this work, we first provide polyphase-domain characterizations of filter bank fusion frames. We then use these characterizations to construct filter bank fusion frame versions of discrete wavelet and Gabor transforms, emphasizing those specific finite impulse response filters whose frequency responses are well-behaved. Keywords: filter banks, frames, tight, fusion, erasures, polyphase.
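
    For readers less familiar with the terminology, the following is the standard fusion frame condition the abstract alludes to; this is a reference reminder, not text from the paper, and the "numerically stable" invertibility corresponds to the frame bounds below.

```latex
% A family of closed subspaces W_i of a Hilbert space H, with weights v_i > 0,
% is a fusion frame if there exist constants 0 < A <= B < infinity such that
\[
    A\,\|x\|^{2} \;\le\; \sum_{i} v_i^{2}\,\bigl\| P_{W_i} x \bigr\|^{2} \;\le\; B\,\|x\|^{2}
    \qquad \text{for all } x \in H ,
\]
% where P_{W_i} is the orthogonal projection onto W_i. The fusion frame operator
% S = \sum_i v_i^2 P_{W_i} then satisfies A I <= S <= B I, so it is invertible with
% condition number at most B/A; this is the stable invertibility of the sum of
% projections mentioned in the abstract.
```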

    Multiple-Description Coding by Dithered Delta-Sigma Quantization

    We address the connection between the multiple-description (MD) problem and Delta-Sigma quantization. The inherent redundancy due to oversampling in Delta-Sigma quantization, and the simple linear-additive noise model resulting from dithered lattice quantization, allow us to construct a symmetric and time-invariant MD coding scheme. We show that the use of a noise shaping filter makes it possible to trade off central distortion for side distortion. Asymptotically, as the dimension of the lattice vector quantizer and the order of the noise shaping filter approach infinity, the entropy rate of the dithered Delta-Sigma quantization scheme approaches the symmetric two-channel MD rate-distortion function for a memoryless Gaussian source and MSE fidelity criterion, at any side-to-central distortion ratio and any resolution. In the optimal scheme, the infinite-order noise shaping filter must be minimum phase and have a piecewise flat power spectrum with a single jump discontinuity. An important advantage of the proposed design is that it is symmetric in rate and distortion by construction, so the coding rates of the descriptions are identical and there is no need for source splitting. Note: revised, restructured, and significantly shortened; minor typos have been fixed. Accepted for publication in the IEEE Transactions on Information Theory.
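
    The two building blocks mentioned above, subtractive dither (which yields an additive noise model) and an error-feedback loop (which shapes that noise), can be illustrated with a deliberately simplified scalar, first-order sketch. The signal, step size, and even/odd split into two descriptions are illustrative assumptions, not the lattice, high-order construction analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_delta_sigma(x, step=0.25):
    """Toy first-order Delta-Sigma quantizer with subtractive dither.

    A scalar, first-order illustration only: the paper uses dithered lattice
    quantization and lets the noise shaping filter order grow without bound.
    """
    y = np.empty_like(x)
    e = 0.0                                        # fed-back quantization error
    for i, xi in enumerate(x):
        d = rng.uniform(-step / 2, step / 2)       # subtractive dither
        u = xi - e                                 # first-order error feedback
        q = step * np.round((u + d) / step) - d    # dithered uniform quantizer
        e = q - u                                  # error, shaped by (1 - z^-1) at the output
        y[i] = q
    return y

# Oversampled (slowly varying) toy source; even/odd samples act as two descriptions.
t = np.arange(512)
x = np.sin(2 * np.pi * t / 128)
y = dithered_delta_sigma(x)
desc0, desc1 = y[0::2], y[1::2]                    # each side decoder sees one description
central = 0.5 * (desc0 + desc1)                    # crude central decoder using both
```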

    Parity-check matrix calculation for paraunitary oversampled DFT filter banks

    Oversampled filter banks, interpreted as error correction codes, were recently introduced in the literature. Here we present an efficient calculation and implementation of the parity-check polynomial matrices for oversampled DFT filter banks. If desired, the calculation of the parity-check polynomials can be performed as part of the prototype filter design procedure. We compare our method to those previously presented in the literature.
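
    In polyphase terms, the role of such a parity-check matrix can be summarized as follows; this is a generic statement of the standard convention, and the paper's notation and normalization may differ.

```latex
% For an oversampled filter bank with N subbands and downsampling factor M < N,
% let E(z) denote the N x M analysis polyphase matrix. A parity-check polynomial
% matrix is a full-rank (N - M) x N matrix P(z) annihilating the code, i.e.
\[
    P(z)\,E(z) = 0 ,
\]
% so that every error-free subband vector Y(z) = E(z)\,X(z) yields a zero
% syndrome S(z) = P(z)\,Y(z), while errors in the transmitted subbands produce
% a nonzero syndrome that can be used for detection and correction.
```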

    Consistent Reconstruction of the Input of an Oversampled Filter Bank From Noisy Subbands

    This paper introduces a reconstruction approach for the input signal of an oversampled filter bank (OFB) when the subbands generated at its output are quantized and transmitted over a noisy channel. This approach exploits the redundancy introduced by the OFB and the fact that the quantization noise is bounded. A maximum-likelihood estimate of the input signal is evaluated, which only considers the vectors of quantization indexes corresponding to subband signals that could have been generated by the OFB and that are compliant with the quantization errors. For an OFB with an oversampling ratio of 3/2 and transmission of the quantized subbands over an AWGN channel, the performance gains over a classical decoder are up to 9 dB in terms of SNR of the reconstructed signal and up to 3 dB in terms of channel SNR. European Signal Processing Conference (2011).
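
    The core idea, that the decoder should only accept reconstructions whose re-encoded and re-quantized subbands reproduce the received indexes, can be sketched generically with a plain frame matrix and alternating projections. This toy decoder ignores the channel noise model and the specific 3/2 OFB of the paper; the matrix F, the step size, and the iteration count below are illustrative assumptions.

```python
import numpy as np

# Generic sketch of consistent reconstruction from quantized redundant (frame)
# coefficients via alternating projections, exploiting only the redundancy of
# the analysis operator and the boundedness of the quantization noise; it is
# not the maximum-likelihood decoder of the paper.

rng = np.random.default_rng(1)
n, m = 8, 12                                  # signal dim n, coefficients m > n (redundant)
F = rng.standard_normal((m, n))               # stand-in analysis operator (not an actual OFB)
step = 0.5

x = rng.standard_normal(n)                    # unknown input
q = step * np.round(F @ x / step)             # received quantized subband coefficients

P_range = F @ np.linalg.pinv(F)               # orthogonal projector onto range(F)
y = q.copy()
for _ in range(200):                          # POCS between the two constraint sets
    y = P_range @ y                           # "could have been generated by the analysis operator"
    y = np.clip(y, q - step / 2, q + step / 2)  # "is compliant with the quantization cells"
x_hat = np.linalg.pinv(F) @ y                 # consistent estimate of the input

x_lin = np.linalg.pinv(F) @ q                 # classical linear (pseudo-inverse) decoder
print(np.linalg.norm(x - x_hat), np.linalg.norm(x - x_lin))
```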

    Systematic DFT Frames: Principle, Eigenvalues Structure, and Applications

    Motivated by a host of recent applications requiring some amount of redundancy, frames are becoming a standard tool in the signal processing toolbox. In this paper, we study a specific class of frames, known as discrete Fourier transform (DFT) codes, and introduce the notion of systematic frames for this class. This is prompted by a new application of frames, namely, distributed source coding that uses DFT codes for compression. Studying their extreme eigenvalues, we show that, unlike DFT frames, systematic DFT frames are not necessarily tight. We then derive conditions under which these frames can be tight. In either case, the best and worst systematic frames are established in the minimum mean-squared reconstruction error sense. Eigenvalues of DFT frames and their subframes play a pivotal role in this work. In particular, we derive bounds on the extreme eigenvalues of DFT subframes which are used to prove most of the results; these bounds are of independent interest.
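
    As a concrete reference point, a complex DFT frame can be taken to be the first M columns of an N-point DFT matrix, and a systematic version can be formed by inverting its top M x M block. The small numerical check below uses this construction (the paper's exact definitions and normalizations may differ) to reproduce the tight versus not-necessarily-tight contrast stated in the abstract.

```python
import numpy as np

# Minimal numerical illustration, assuming the construction sketched above.
N, M = 7, 4                                       # frame length N > signal dimension M
k = np.arange(N)[:, None]
n = np.arange(M)[None, :]
F = np.exp(2j * np.pi * k * n / N) / np.sqrt(N)   # N x M DFT frame (first M DFT columns)

print(np.allclose(F.conj().T @ F, np.eye(M)))     # True: the DFT frame is tight

# Systematic frame: make the first M rows the identity by right-multiplying
# with the inverse of the top M x M block (a Vandermonde matrix, hence invertible).
G = F @ np.linalg.inv(F[:M, :])
print(np.allclose(G[:M, :], np.eye(M)))           # True: systematic part is I
S = G.conj().T @ G                                # frame operator of the systematic frame
print(np.allclose(S, S[0, 0] * np.eye(M)))        # typically False: not necessarily tight
```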

    Quantization and erasures in frame representations

    Sc.D. thesis by Petros T. Boufounos, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 123-126).
    Frame representations, which are overcomplete generalizations of basis expansions, are often used in signal processing to provide robustness to errors. In this thesis, robustness is provided through the use of projections to compensate for errors in the representation coefficients, with specific focus on quantization and erasure errors. The projections are implemented by modifying the unaffected coefficients using an additive term that is linear in the error. This low-complexity implementation only assumes linear reconstruction using a pre-determined synthesis frame and makes no assumption on how the representation coefficients are generated. In the context of quantization, the limits of scalar quantization of frame representations are first examined, assuming the analysis is performed using inner products with the frame vectors. Bounds on the error and the bit efficiency are derived, demonstrating that scalar quantization of the coefficients is suboptimal. As an alternative to scalar quantization, a generalization of Sigma-Delta noise shaping to arbitrary frame representations is developed by reformulating noise shaping as a sequence of compensations for the quantization error using projections. The total error is quantified using both the additive noise model of quantization and a deterministic upper bound based on the triangle inequality, showing that the average and the worst-case error are reduced compared to scalar quantization of the coefficients. The projection principle is also used to provide robustness to erasures. Specifically, the case of a transmitter that is aware of the erasure occurrence is considered, which compensates for the erasure error by projecting it onto the subsequent frame vectors. It is further demonstrated that the transmitter can be split into a transmitter/receiver combination that performs the same compensation, but in which only the receiver is aware of the erasure occurrence. Furthermore, an algorithm to puncture dense representations in order to produce sparse approximate ones is introduced. In this algorithm, the error due to the puncturing is also projected onto the span of the remaining coefficients. The algorithm can be combined with quantization to produce quantized sparse representations approximating the original dense representation.
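
    The projection-based noise shaping idea (compensate the quantization error of one coefficient by projecting it onto the next frame vector) can be sketched as a first-order scheme on a simple unit-norm tight frame. The frame, step size, and dimensions below are illustrative assumptions; the thesis treats the general case and its error analysis.

```python
import numpy as np

# First-order noise shaping on a redundant frame expansion by projecting each
# quantization error onto the next frame vector, in the spirit of the
# projection-based compensation described above (a toy sketch, not the
# thesis' general formulation).

rng = np.random.default_rng(2)
n, m = 4, 32                                      # signal dimension n, m >> n coefficients
k = np.arange(m)[:, None]
ang = 2 * np.pi * k * np.array([[1, 2]]) / m
Fr = np.sqrt(2.0 / n) * np.hstack([np.cos(ang), np.sin(ang)])  # m x n unit-norm tight frame

x = rng.standard_normal(n)
x *= 0.5 / np.linalg.norm(x)                      # keep coefficients well inside quantizer range
c = Fr @ x                                        # analysis coefficients <x, f_k>
step = 0.05

q = np.empty(m)
e = 0.0
for i in range(m):
    comp = e * (Fr[i - 1] @ Fr[i]) if i > 0 else 0.0   # project previous error onto f_i
    u = c[i] + comp                                     # compensated coefficient
    q[i] = step * np.round(u / step)                    # scalar quantization
    e = u - q[i]                                        # error passed on to the next step

x_sd = (n / m) * (Fr.T @ q)                             # tight-frame linear synthesis
x_pcm = (n / m) * (Fr.T @ (step * np.round(c / step)))  # plain scalar quantization baseline
print(np.linalg.norm(x - x_sd), np.linalg.norm(x - x_pcm))
```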

    Introduction to frames

    This survey gives an introduction to redundant signal representations called frames. These representations have recently emerged as yet another powerful tool in the signal processing toolbox and have become popular through use in numerous applications. Our aim is to familiarize a general audience with the area while at the same time giving a snapshot of the current state of the art.
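
    For orientation, the defining inequality behind such redundant representations is recalled below; this is standard textbook material stated here for completeness, not text taken from the survey.

```latex
% A sequence {varphi_k} in a Hilbert space H is a frame if there exist constants
% 0 < A <= B < infinity such that
\[
    A\,\|x\|^{2} \;\le\; \sum_{k} \bigl|\langle x, \varphi_k \rangle\bigr|^{2} \;\le\; B\,\|x\|^{2}
    \qquad \text{for all } x \in H .
\]
% Redundancy comes from allowing more vectors than dimensions; any x can still be
% recovered linearly as x = \sum_k \langle x, \varphi_k \rangle \tilde{\varphi}_k,
% where {\tilde{\varphi}_k} is a dual frame (for instance the canonical dual
% S^{-1}\varphi_k, with S the frame operator).
```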