    Cyclic LTI systems in digital signal processing

    Cyclic signal processing refers to situations where all the time indices are interpreted modulo some integer L. In such cases, the frequency domain is defined as a uniform discrete grid (as in the L-point DFT). This offers more freedom in theoretical as well as design aspects. While circular convolution has been the centerpiece of many algorithms in signal processing for decades, such freedom, especially from the viewpoint of linear system theory, has not been studied in the past. In this paper, we introduce the fundamentals of cyclic multirate systems and filter banks, presenting several important differences between the cyclic and noncyclic cases. Cyclic systems with allpass and paraunitary properties are studied. The paraunitary interpolation problem is introduced, and it is shown that the interpolation does not always succeed. State-space descriptions of cyclic LTI systems are introduced, and the notions of reachability and observability of state equations are revisited. It is shown that, unlike in traditional linear systems, these two notions are not related to system minimality in a simple way. Throughout the paper, a number of open problems are pointed out from the perspective of the signal processor as well as the system theorist.
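
    As a hedged illustration of the modulo-L setting described above (not code from the paper), the sketch below checks that circular convolution with time indices taken modulo L agrees with pointwise multiplication on the uniform L-point DFT grid.

```python
# Illustrative sketch only (not from the paper): circular convolution with
# all time indices interpreted modulo L, computed two ways.
import numpy as np

def circular_convolve(x, h, L):
    """Direct circular convolution: y[n] = sum_k x[k] * h[(n - k) mod L]."""
    y = np.zeros(L)
    for n in range(L):
        for k in range(L):
            y[n] += x[k] * h[(n - k) % L]
    return y

L = 8
rng = np.random.default_rng(0)
x = rng.standard_normal(L)
h = rng.standard_normal(L)

direct = circular_convolve(x, h, L)
# The frequency domain is the uniform L-point DFT grid: circular convolution
# in time corresponds to pointwise multiplication there.
via_dft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

assert np.allclose(direct, via_dft)
```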

    The role of integer matrices in multidimensional multirate systems

    The basic building blocks in a multidimensional (MD) multirate system are the decimation matrix M and the expansion matrix L. For the D-dimensional case these are D×D nonsingular integer matrices. When these matrices are diagonal, most of the one-dimensional (1D) results can be extended automatically. However, for the nondiagonal case, these extensions are nontrivial. Some of these extensions, e.g., polyphase decomposition and maximally decimated perfect reconstruction systems, have already been successfully made by some authors. However, there exist several 1D results in multirate processing for which the multidimensional extensions are even more difficult. An example is the development of polyphase representation for rational (rather than integer) sampling rate alterations. In the 1D case, this development relies on the commutativity of decimators and expanders, which is possible whenever M and L are relatively prime (coprime). The conditions for commutativity in the two-dimensional (2D) case have recently been developed successfully in [1]. In the MD case, the results are more involved. In this paper, we formulate and solve a number of problems of this nature. Our discussions are based on several key properties of integer matrices, including greatest common divisors and least common multiples, which we first review. These properties are analogous to those of polynomial matrices, some of which have been used in system-theoretic work (e.g., matrix fraction descriptions, coprime matrices, the Smith form, and so on).
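
    The 1D coprimality fact cited above can be checked with a short hedged sketch (illustrative only; the paper's subject is the nondiagonal multidimensional generalization): a decimator by M and an expander by L commute exactly when gcd(M, L) = 1.

```python
# Illustrative 1D sketch (not from the paper): decimation by M and
# expansion by L commute exactly when M and L are relatively prime.
import numpy as np
from math import gcd

def decimate(x, M):
    """Keep every M-th sample: y[n] = x[M n]."""
    return x[::M]

def expand(x, L):
    """Insert L - 1 zeros between samples: y[L n] = x[n]."""
    y = np.zeros(len(x) * L)
    y[::L] = x
    return y

x = np.arange(24, dtype=float)

for M, L in [(2, 3), (2, 2)]:
    a = expand(decimate(x, M), L)   # decimate, then expand
    b = decimate(expand(x, L), M)   # expand, then decimate
    n = min(len(a), len(b))
    print(f"M={M}, L={L}: coprime={gcd(M, L) == 1}, "
          f"orders agree on first {n} samples: {np.array_equal(a[:n], b[:n])}")
```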

    Passive cascaded-lattice structures for low-sensitivity FIR filter design, with applications to filter banks

    A class of nonrecursive cascaded-lattice structures is derived for the implementation of finite impulse response (FIR) digital filters. The building blocks are lossless, and the transfer function can be implemented as a sequence of planar rotations. The structures can be used for the synthesis of any scalar FIR transfer function H(z) with no restriction on the location of its zeros; at the same time, all the lattice coefficients have magnitude bounded above by unity. The structures have excellent passband sensitivity because of inherent passivity and are automatically internally scaled in an L_2 sense. The ideas are also extended to the realization of a bank of M FIR transfer functions as a cascaded lattice. Applications of these structures in subband coding and in multirate signal processing are outlined. Numerical design examples are included.
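
    The losslessness of the rotation-based building blocks can be illustrated with a minimal sketch, assuming nothing beyond standard multirate definitions (this is not the paper's synthesis procedure): a cascade of planar rotations separated by a delay on one branch yields an FIR transfer matrix that is paraunitary, i.e., unitary at every frequency, for any choice of rotation angles.

```python
# Hedged sketch (not the paper's design method): a cascade
#   E(z) = R_J diag(1, z^-1) R_{J-1} ... diag(1, z^-1) R_0
# of planar rotations R_k and single-branch delays is paraunitary.
import numpy as np

def padd(a, b):
    """Add two polynomial coefficient arrays (constant term first)."""
    out = np.zeros(max(len(a), len(b)))
    out[:len(a)] += a
    out[:len(b)] += b
    return out

def poly_matmul(A, B):
    """Multiply 2x2 matrices whose entries are polynomials in z^-1."""
    return [[padd(np.convolve(A[i][0], B[0][j]), np.convolve(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return [[np.array([c]), np.array([s])],
            [np.array([-s]), np.array([c])]]

DELAY = [[np.array([1.0]), np.array([0.0])],
         [np.array([0.0]), np.array([0.0, 1.0])]]   # diag(1, z^-1)

def cascade(thetas):
    E = rotation(thetas[0])
    for theta in thetas[1:]:
        E = poly_matmul(rotation(theta), poly_matmul(DELAY, E))
    return E

def evaluate(E, omega):
    """Evaluate the polynomial matrix at z = exp(j * omega)."""
    zi = np.exp(-1j * omega)
    return np.array([[np.polyval(E[i][j][::-1], zi) for j in range(2)] for i in range(2)])

E = cascade([0.3, -1.1, 0.7, 0.2])          # arbitrary rotation angles
for omega in np.linspace(0.0, np.pi, 7):
    Ew = evaluate(E, omega)
    assert np.allclose(Ew.conj().T @ Ew, np.eye(2))   # lossless at every frequency
```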

    Two-channel perfect-reconstruction FIR QMF structures which yield linear-phase analysis and synthesis filters

    Two perfect-reconstruction structures for the two-channel quadrature mirror filter (QMF) bank are described; they are free of aliasing and distortions of any kind, and the analysis filters have linear phase. The structure in the first case is related to the linear prediction lattice structure. For the second case, new structures are developed by propagating the perfect-reconstruction and linear-phase properties. Design examples, based on optimization of the parameters in the lattice structures, are presented for both cases.
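
    A hedged check of the perfect-reconstruction conditions mentioned above, using a standard linear-phase FIR pair (the LeGall 5/3 filters) rather than the lattices of the paper: with the alias-cancelling synthesis choice F0(z) = H1(-z) and F1(z) = -H0(-z), the distortion function T(z) = (1/2)[H0(z)F0(z) + H1(z)F1(z)] must reduce to a pure delay.

```python
# Hedged sketch (not the paper's lattice structures): alias cancellation
# and distortion-free reconstruction for a linear-phase FIR pair.
import numpy as np

h0 = np.array([-1, 2, 6, 2, -1]) / 8.0   # symmetric analysis lowpass (linear phase)
h1 = np.array([-1, 2, -1]) / 2.0         # symmetric analysis highpass (linear phase)

def alternate_sign(h):
    """Coefficients of H(-z): flip the sign of the odd-indexed taps."""
    return h * (-1.0) ** np.arange(len(h))

f0 = alternate_sign(h1)     # F0(z) =  H1(-z)
f1 = -alternate_sign(h0)    # F1(z) = -H0(-z)  -> aliasing cancels

T = 0.5 * (np.convolve(h0, f0) + np.convolve(h1, f1))

expected = np.zeros(7)
expected[3] = -1.0
assert np.allclose(T, expected)   # T(z) = -z^{-3}: perfect reconstruction up to sign and delay
```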

    Filterbank optimization with convex objectives and the optimality of principal component forms

    This paper proposes a general framework for the optimization of orthonormal filterbanks (FBs) for given input statistics. This includes, as special cases, many previous results on FB optimization for compression. It also solves problems that have not been considered thus far. FB optimization for coding-gain maximization (for compression applications) has been well studied before. The optimum FB has been known to satisfy the principal component property, i.e., it minimizes the mean-square error caused by reconstruction after dropping the P weakest (lowest variance) subbands, for any P. We point out a much stronger connection between this property and the optimality of the FB. The main result is that a principal component FB (PCFB) is optimum whenever the minimization objective is a concave function of the subband variances produced by the FB. This result has its grounding in majorization and convex function theory and, in particular, explains the optimality of PCFBs for compression. We use the result to show various other optimality properties of PCFBs, especially for noise-suppression applications. Suppose the FB input is a signal corrupted by additive white noise, the desired output is the pure signal, and the subbands of the FB are processed to minimize the output noise. If each subband processor is a zeroth-order Wiener filter for its input, we can show that the expected mean-square value of the output noise is a concave function of the subband signal variances. Hence, a PCFB is optimum in the sense of minimizing this mean-square error. The above-mentioned concavity of the error, and hence PCFB optimality, continues to hold even with certain other subband processors, such as subband hard thresholds and constant multipliers, although these are not of serious practical interest. We prove certain extensions of this PCFB optimality result to cases where the input noise is colored and where the FB optimization is over a larger class that includes biorthogonal FBs. We also show that PCFBs do not exist for the classes of DFT and cosine-modulated FBs.
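
    The concavity argument for the noise-suppression case can be sketched numerically (a hedged illustration, not the paper's proof): with additive white noise of variance noise_var on a subband whose signal variance is v, the zeroth-order Wiener multiplier is v / (v + noise_var), and its error power is f(v) = noise_var * v / (v + noise_var); the check below tests the midpoint-concavity inequality for f at random points.

```python
# Hedged numerical sketch (not from the paper): the per-subband error of a
# zeroth-order Wiener multiplier is a concave function of the subband
# signal variance, which is what makes a PCFB optimal for this objective.
import numpy as np

noise_var = 0.5   # assumed white-noise variance

def wiener_subband_error(v):
    """Output noise power when the subband is scaled by v / (v + noise_var)."""
    return noise_var * v / (v + noise_var)

rng = np.random.default_rng(1)
a, b = rng.uniform(0.0, 10.0, size=(2, 1000))   # random pairs of subband variances
midpoint = wiener_subband_error((a + b) / 2)
chord = (wiener_subband_error(a) + wiener_subband_error(b)) / 2
assert np.all(midpoint >= chord - 1e-12)   # concavity: f((a+b)/2) >= (f(a)+f(b))/2
```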