
    The role of the discrete-time Kalman-Yakubovitch-Popov lemma in designing statistically optimum FIR orthonormal filter banks

    We introduce a new approach to the design of FIR energy compaction filters of arbitrary order N. The optimization of such filters is important due to their close connection to the design of an M-channel orthonormal filter bank adapted to the input signal statistics. The novel procedure finds the optimum product filter Fopt(z) = Hopt(z)Hopt(z^-1) corresponding to the compaction filter Hopt(z). The idea is to express F(z) as D(z) + D(z^-1) and reformulate the compaction problem in terms of the state-space realization of the causal function D(z). For a fixed input power spectrum, the resulting filter Fopt(z) is guaranteed to be a global optimum due to the convexity of the new formulation. The new design problem can be solved quite efficiently and with great accuracy using recently developed interior point methods, and it is extremely general in the sense that it works for any chosen M and any arbitrary filter length N. Finally, obtaining Hopt(z) from Fopt(z) does not require an additional spectral factorization step: the minimum-phase spectral factor can be obtained automatically by relating the state-space realization of Dopt(z) to that of Hopt(z).
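
    As a rough illustration of the compaction problem (not the paper's KYP-lemma state-space SDP), the sketch below maximizes a frequency-sampled compaction objective over the product-filter coefficients, with the Nyquist(M) constraint imposed exactly and nonnegativity of F(e^jw) enforced only on a dense grid. The use of cvxpy, the AR(1) input spectrum, and the values of M, N, and the grid size are assumptions made for this example; the spectral factorization step that recovers Hopt(z) is omitted.

```python
# Grid-based sketch of FIR energy compaction filter design (order N, M channels).
# This is NOT the KYP-lemma/state-space SDP of the paper: nonnegativity of
# F(e^{jw}) is only enforced on a dense frequency grid, and the spectral
# factorization recovering H(z) from F(z) is omitted.
import numpy as np
import cvxpy as cp

M, N = 4, 16                       # number of channels, product-filter order (assumed)
num_grid = 512                     # multiple of M, so the grid is closed under 2*pi/M shifts
w = np.linspace(0.0, 2.0 * np.pi, num_grid, endpoint=False)

# Example input power spectrum: unit-variance AR(1) process with pole at 0.9 (assumption).
rho = 0.9
Sxx = (1 - rho**2) / (1 - 2 * rho * np.cos(w) + rho**2)

# F(e^{jw}) = f[0] + 2 * sum_k f[k] cos(k w), with f the product-filter coefficients.
C = np.ones((num_grid, N + 1))
for k in range(1, N + 1):
    C[:, k] = 2.0 * np.cos(k * w)

f = cp.Variable(N + 1)
F_w = C @ f
objective = cp.Maximize(Sxx @ F_w / num_grid)   # frequency-sampled compaction objective
constraints = [f[0] == 1,                       # Nyquist(M) normalization: f[0] = 1 ...
               F_w >= 0]                        # nonnegativity, on the grid only
constraints += [f[k] == 0 for k in range(M, N + 1, M)]   # ... and f[kM] = 0 for k != 0
cp.Problem(objective, constraints).solve()

print("optimal product-filter coefficients f[0..N]:", np.round(f.value, 4))
```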

    Efficient algorithm for solving semi-infinite programming problems and their applications to nonuniform filter bank designs

    An efficient algorithm for solving semi-infinite programming problems is proposed in this paper. The index set is constructed by adding only one of the most violated points in a refined set of grid points. By applying this algorithm to the design of optimum nonuniform symmetric/antisymmetric linear-phase finite-impulse-response (FIR) filter banks, the time required to obtain a globally optimal solution is much reduced compared with that of previously proposed algorithms.
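
    The one-point exchange idea can be shown on a much simpler semi-infinite program than the paper's nonuniform filter bank problem. The sketch below applies it to a minimax linear-phase FIR lowpass design: each iteration solves a finite linear program over the current index set and then adds only the single most violated frequency from a refined grid. The filter specification, tolerances, and the use of scipy.optimize.linprog are assumptions made for the illustration.

```python
# One-point exchange sketch for a semi-infinite program (minimax FIR lowpass design):
#   minimize delta  s.t.  |A(w)^T b - D(w)| <= delta  for ALL w in the bands.
# Each iteration solves an LP over a finite index set, then appends only the single
# most violated frequency taken from a refined grid.
import numpy as np
from scipy.optimize import linprog

M = 10                                     # amplitude A(w) = sum_{n=0}^{M} b[n] cos(n w)
wp, ws = 0.35 * np.pi, 0.55 * np.pi        # band edges (assumed)
fine = np.concatenate([np.linspace(0, wp, 2000), np.linspace(ws, np.pi, 2000)])
desired = np.where(fine <= wp, 1.0, 0.0)

def cos_matrix(w):
    return np.cos(np.outer(w, np.arange(M + 1)))

# Start from a coarse index set; LP variables are x = [b (M+1 coeffs), delta].
index_w = list(np.concatenate([np.linspace(0, wp, 8), np.linspace(ws, np.pi, 8)]))
index_d = list(np.where(np.array(index_w) <= wp, 1.0, 0.0))
c = np.r_[np.zeros(M + 1), 1.0]            # minimize delta

for it in range(50):
    Cw, D = cos_matrix(np.array(index_w)), np.array(index_d)
    # |Cw b - D| <= delta  ->  two one-sided linear constraints per index point.
    A_ub = np.vstack([np.c_[Cw, -np.ones(len(D))], np.c_[-Cw, -np.ones(len(D))]])
    b_ub = np.r_[D, -D]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (M + 2))
    b_coef, delta = res.x[:M + 1], res.x[M + 1]
    # Locate the single most violated point on the refined grid and add it.
    err = np.abs(cos_matrix(fine) @ b_coef - desired)
    worst = np.argmax(err)
    if err[worst] <= delta + 1e-8:
        break                              # no violation beyond delta: done
    index_w.append(fine[worst])
    index_d.append(desired[worst])

print(f"minimax error {delta:.4f} after {it + 1} LP solves, |index set| = {len(index_w)}")
```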

    On the eigenfilter design method and its applications: a tutorial

    The eigenfilter method for digital filter design involves the computation of filter coefficients as the eigenvector of an appropriate Hermitian matrix. Because of its low complexity compared to other methods, as well as its ability to easily incorporate various time- and frequency-domain constraints, the eigenfilter method has been found to be very useful. In this paper, we present a review of the eigenfilter design method for a wide variety of filters, including linear-phase finite impulse response (FIR) filters, nonlinear-phase FIR filters, all-pass infinite impulse response (IIR) filters, arbitrary-response IIR filters, and multidimensional filters. We also focus on applications of the eigenfilter method in multistage filter design, spectral/spatial beamforming, and the design of channel-shortening equalizers for communications applications.
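
    To make the core recipe concrete, here is a minimal sketch of a linear-phase FIR lowpass eigenfilter: a real symmetric matrix is assembled from a passband-deviation term and a stopband-energy term (approximated here by grid integration rather than closed-form integrals), and the filter is read off from the eigenvector associated with the smallest eigenvalue. The band edges, filter length, and weighting factor are assumptions for this example.

```python
# Minimal eigenfilter sketch: Type-I linear-phase FIR lowpass of length 2*Mh + 1.
# Amplitude response A(w) = b^T c(w), with c(w) = [1, cos w, ..., cos(Mh*w)]^T.
# Minimize  b^T (alpha*Pp + (1 - alpha)*Ps) b  subject to ||b|| = 1: the solution is
# the eigenvector of the smallest eigenvalue of the symmetric matrix P.
import numpy as np

Mh = 15
wp, ws, alpha = 0.3 * np.pi, 0.5 * np.pi, 0.5    # band edges and weighting (assumed)

def c(w):                                        # cosine basis evaluated on a grid
    return np.cos(np.outer(w, np.arange(Mh + 1)))

wg_p = np.linspace(0.0, wp, 400)                 # passband grid
wg_s = np.linspace(ws, np.pi, 400)               # stopband grid

# Passband term: deviation of A(w) from the reference value A(0).
Ep = c(np.zeros(1)) - c(wg_p)                    # rows are c(0) - c(w)
Pp = Ep.T @ Ep * (wg_p[1] - wg_p[0])
# Stopband term: energy of A(w) over the stopband.
Es = c(wg_s)
Ps = Es.T @ Es * (wg_s[1] - wg_s[0])

P = alpha * Pp + (1 - alpha) * Ps
eigvals, eigvecs = np.linalg.eigh(P)
b = eigvecs[:, 0].copy()                         # eigenvector of the smallest eigenvalue
b /= c(np.zeros(1)) @ b                          # rescale so that A(0) = 1

# Map b back to the symmetric impulse response h[n], n = 0..2*Mh.
h = np.zeros(2 * Mh + 1)
h[Mh] = b[0]
h[Mh + 1:] = b[1:] / 2
h[:Mh] = h[Mh + 1:][::-1]
print("first few taps:", np.round(h[:5], 4))
```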

    Linear Invariant Systems Theory for Signal Enhancement

    This paper discusses a linear time-invariant (LTI) systems approach to signal enhancement via projective subspace techniques. It provides closed-form expressions for the frequency response of data-adaptive finite impulse response eigenfilters. An illustrative example using speech enhancement is also presented.
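
    A minimal numerical sketch of the viewpoint, under assumed parameters (not the paper's closed-form expressions): frame a noisy signal, eigendecompose its sample covariance, keep the dominant subspace, and inspect the frequency responses of the resulting data-adaptive FIR eigenfilters with the FFT. The two-tone signal model, frame length, and subspace rank are invented for the illustration.

```python
# Sketch: projective-subspace enhancement viewed as a bank of data-adaptive FIR filters.
# Frame the noisy signal, eigendecompose the sample covariance, keep the p dominant
# eigenvectors, and look at the frequency response of each eigenfilter via the FFT.
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(8000)
clean = np.sin(0.2 * np.pi * n) + 0.5 * np.sin(0.35 * np.pi * n)
noisy = clean + 0.8 * rng.standard_normal(n.size)

L, p = 32, 4                                    # frame length and retained rank (assumed)
frames = np.lib.stride_tricks.sliding_window_view(noisy, L)   # shape (N - L + 1, L)
R = frames.T @ frames / frames.shape[0]         # sample covariance, L x L

eigvals, U = np.linalg.eigh(R)                  # ascending eigenvalues
Up = U[:, -p:]                                  # dominant p eigenvectors = FIR eigenfilters

# Frequency response of each data-adaptive eigenfilter (columns of Up act as FIR taps).
H = np.fft.rfft(Up, n=512, axis=0)
peak_bins = np.argmax(np.abs(H), axis=0)
print("eigenfilter peak frequencies (x pi rad/sample):",
      np.round(peak_bins / 512 * 2, 3))

# Rank-p enhancement of one frame: project it onto the dominant signal subspace.
enhanced_frame = Up @ (Up.T @ frames[0])
```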

    Results on principal component filter banks: colored noise suppression and existence issues

    We have made explicit the precise connection between the optimization of orthonormal filter banks (FBs) and the principal component property: the principal component filter bank (PCFB) is optimal whenever the minimization objective is a concave function of the subband variances of the FB. This explains PCFB optimality for compression, progressive transmission, and various hitherto unnoticed white-noise suppression applications such as subband Wiener filtering. The present work examines the nature of the FB optimization problems for such schemes when PCFBs do not exist. Using the geometry of the optimization search spaces, we explain exactly why these problems are usually analytically intractable. We show the relation between compaction filter design (i.e., variance maximization) and optimum FBs. A sequential maximization of subband variances produces a PCFB if one exists, but is otherwise suboptimal for several concave objectives. We then study PCFB optimality for colored noise suppression. Unlike the case when the noise is white, here the minimization objective is a function of both the signal and the noise subband variances. We show that for the transform coder class, if a common signal and noise PCFB (KLT) exists, it is optimal for a large class of concave objectives. Common PCFBs for general FB classes have a considerably more restricted optimality, as we show using the class of unconstrained orthonormal FBs. For this class, we also show how to find an optimum FB when the signal and noise spectra are both piecewise constant with all discontinuities at rational multiples of π.
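
    For the transform coder class, the PCFB is the Karhunen-Loeve transform (KLT) of the input. The sketch below, with an assumed AR(1) signal covariance, compares the subband variances produced by the KLT and by the DCT and evaluates a concave objective of those variances (their geometric mean, whose minimization corresponds to coding gain maximization); the KLT should never do worse. The block size M and the correlation coefficient are assumptions for this example.

```python
# Sketch: in the transform coder class, the PCFB is the KLT of the input covariance.
# Compare subband variances under the KLT and the DCT for an AR(1) input and evaluate
# a concave objective of the variances (geometric mean <-> coding gain).
import numpy as np
from scipy.fft import dct

M = 8
rho = 0.95
R = rho ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))   # AR(1) covariance

def subband_variances(T, R):
    """Diagonal of T R T^T: variances of the transform coefficients."""
    return np.diag(T @ R @ T.T)

# KLT: rows are eigenvectors of R (an orthonormal transform).
_, U = np.linalg.eigh(R)
klt = U.T
# Orthonormal DCT-II matrix for comparison.
dct_mat = dct(np.eye(M), norm="ortho", axis=0)

for name, T in [("KLT", klt), ("DCT", dct_mat)]:
    v = subband_variances(T, R)
    geo_mean = np.exp(np.mean(np.log(v)))      # concave objective to be minimized
    print(f"{name}: geometric mean of subband variances = {geo_mean:.4f}")
```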

    Filterbank optimization with convex objectives and the optimality of principal component forms

    This paper proposes a general framework for the optimization of orthonormal filterbanks (FBs) for given input statistics. It includes, as special cases, many previous results on FB optimization for compression, and it also solves problems that have not been considered thus far. FB optimization for coding gain maximization (for compression applications) has been well studied before. The optimum FB has been known to satisfy the principal component property, i.e., it minimizes the mean-square error caused by reconstruction after dropping the P weakest (lowest variance) subbands for any P. We point out a much stronger connection between this property and the optimality of the FB. The main result is that a principal component FB (PCFB) is optimum whenever the minimization objective is a concave function of the subband variances produced by the FB. This result has its grounding in majorization and convex function theory and, in particular, explains the optimality of PCFBs for compression. We use the result to show various other optimality properties of PCFBs, especially for noise-suppression applications. Suppose the FB input is a signal corrupted by additive white noise, the desired output is the pure signal, and the subbands of the FB are processed to minimize the output noise. If each subband processor is a zeroth-order Wiener filter for its input, we can show that the expected mean square value of the output noise is a concave function of the subband signal variances. Hence, a PCFB is optimum in the sense of minimizing this mean square error. The above-mentioned concavity of the error and, hence, PCFB optimality, continues to hold even with certain other subband processors such as subband hard thresholds and constant multipliers, although these are not of serious practical interest. We prove certain extensions of this PCFB optimality result to cases where the input noise is colored and where the FB optimization is over a larger class that includes biorthogonal FBs. We also show that PCFBs do not exist for the classes of DFT and cosine-modulated FBs.
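
    The noise-suppression argument can be checked numerically. The sketch below, under an assumed AR(1) signal in white noise, computes the output MSE of zeroth-order subband Wiener filters for the KLT (the PCFB of the transform coder class) and for the DCT; because the per-subband error sig*eta/(sig + eta) is concave in the signal variance sig, the PCFB should give the smaller total MSE. The block size, correlation coefficient, and noise variance are assumptions for the illustration.

```python
# Sketch: zeroth-order subband Wiener filtering after an orthonormal transform.
# Per-subband output MSE is sig*eta/(sig + eta), a concave function of the subband
# signal variance sig, so the PCFB (here the KLT) minimizes the total MSE over the class.
import numpy as np
from scipy.fft import dct

M, rho, eta = 8, 0.95, 0.5        # subbands, AR(1) coefficient, white-noise variance (assumed)
R = rho ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))

_, U = np.linalg.eigh(R)
klt = U.T
dct_mat = dct(np.eye(M), norm="ortho", axis=0)

def wiener_mse(T, R, eta):
    """Total MSE when each subband uses the scalar Wiener gain sig/(sig + eta)."""
    sig = np.diag(T @ R @ T.T)    # subband signal variances (white noise stays eta per subband)
    return np.sum(sig * eta / (sig + eta))

for name, T in [("KLT (PCFB)", klt), ("DCT", dct_mat)]:
    print(f"{name}: subband Wiener output MSE = {wiener_mse(T, R, eta):.4f}")
```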

    Computation of the para-pseudoinverse for oversampled filter banks: Forward and backward Greville formulas

    Frames and oversampled filter banks have been extensively studied over the past few years due to their increased design freedom and improved error resilience. In frame expansions, the least square signal reconstruction operator is called the dual frame, which can be obtained by choosing the synthesis filter bank as the para-pseudoinverse of the analysis bank. In this paper, we study the computation of the dual frame by exploiting the Greville formula, which was originally derived in 1960 to compute the pseudoinverse of a matrix when a new row is appended. Here, we first develop the backward Greville formula to handle the case of row deletion. Based on the forward Greville formula, we then study the computation of the para-pseudoinverse for extended filter banks and Laplacian pyramids. Through the backward Greville formula, we investigate frame-based error-resilient transmission over erasure channels. The necessary and sufficient condition for an oversampled filter bank to be robust to one erasure channel is derived. A postfiltering structure is also presented to implement the para-pseudoinverse when the transform coefficients in one subband are completely lost.
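
    As a constant-matrix analogue of the idea (the paper works with polynomial polyphase matrices, not ordinary matrices), the sketch below implements Greville's forward recursion for appending a column, handles an appended row through the transpose, and checks the result against numpy's pinv. The function names and test matrices are invented for the illustration.

```python
# Sketch of Greville's forward formula for a constant matrix: update the pseudoinverse
# when a column is appended. Appending a row to A is handled through the transpose,
# since pinv(A)^T = pinv(A^T). This is only the scalar-matrix analogue of the paper's
# para-pseudoinverse computation for polyphase matrices.
import numpy as np

def greville_append_column(A, A_pinv, a, tol=1e-12):
    """Return pinv([A, a]) given A and A_pinv = pinv(A)."""
    a = a.reshape(-1, 1)
    d = A_pinv @ a                      # coordinates of a within the range of A
    c = a - A @ d                       # component of a outside the range of A
    if np.linalg.norm(c) > tol:
        b = c.T / (c.T @ c).item()      # new column is linearly independent
    else:
        b = d.T @ A_pinv / (1.0 + (d.T @ d).item())
    return np.vstack([A_pinv - d @ b, b])

def greville_append_row(A, A_pinv, r):
    """Return pinv(np.vstack([A, r])) via the column version applied to A^T."""
    return greville_append_column(A.T, A_pinv.T, r).T

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))         # tall analysis matrix (oversampled case)
r = rng.standard_normal(3)              # one more row, e.g. an extra channel
P = greville_append_row(A, np.linalg.pinv(A), r)
print("matches numpy pinv:", np.allclose(P, np.linalg.pinv(np.vstack([A, r]))))
```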