397 research outputs found

    Coding gain in paraunitary analysis/synthesis systems

    A formal proof that bit allocation results hold for the entire class of paraunitary subband coders is presented. The problem of finding an optimal paraunitary subband coder, so as to maximize the coding gain of the system, is discussed. The bit allocation problem is analyzed for the case of paraunitary tree-structured filter banks, such as those used for generating orthonormal wavelets. The even more general case of nonuniform filter banks is also considered. In all cases it is shown that under optimal bit allocation, the variances of the errors introduced by each of the quantizers have to be equal. Expressions for the coding gains of these systems are derived.
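
    For reference, a standard form of these results for a uniform M-channel paraunitary (orthonormal) coder, under the usual high-rate quantizer model sigma_{q,k}^2 = c * 2^{-2 b_k} * sigma_{x,k}^2 (notation assumed here, not taken from the paper), is the following sketch:

        % Optimal bit allocation at average rate b, and the resulting coding gain:
        \[
          b_k = b + \frac{1}{2}\log_2
                \frac{\sigma_{x,k}^2}{\bigl(\prod_{j=0}^{M-1}\sigma_{x,j}^2\bigr)^{1/M}},
          \qquad
          G_{\mathrm{SBC}}
            = \frac{\tfrac{1}{M}\sum_{k=0}^{M-1}\sigma_{x,k}^2}
                   {\bigl(\prod_{k=0}^{M-1}\sigma_{x,k}^2\bigr)^{1/M}} .
        \]

    Substituting the optimal b_k back into the quantizer model gives sigma_{q,k}^2 = c * 2^{-2b} * (prod_j sigma_{x,j}^2)^{1/M} for every k, which is the equal-error-variance condition stated in the abstract.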

    Orthonormal and biorthonormal filter banks as convolvers, and convolutional coding gain

    Convolution theorems for filter bank transformers are introduced. Both uniform and nonuniform decimation ratios are considered, and orthonormal as well as biorthonormal cases are addressed. All the theorems are such that the original convolution reduces to a sum of shorter, decoupled convolutions in the subbands; that is, there is no need for cross convolution between subbands. For the orthonormal case, expressions for optimal bit allocation and the optimized coding gain are derived. The contribution to coding gain comes partly from the nonuniformity of the signal spectrum and partly from the nonuniformity of the filter spectrum. With one of the convolved sequences taken to be the unit pulse function, the coding gain expressions reduce to those for traditional subband and transform coding. The filter-bank convolver has about the same computational complexity as a traditional convolver, if the analysis bank has small complexity compared to the convolution itself.
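
    The core identity behind such schemes, sketched here for the orthonormal (paraunitary) case with notation assumed for this summary: let x_k and s_k denote the decimated subband signals of x and s through an M-channel orthonormal analysis bank. Shifting x by a multiple of M shifts each of its subband signals by the corresponding integer, so the generalized Parseval relation gives

        \[
          \sum_{m} x[m + Ml]\, s[m]
            \;=\; \sum_{k=0}^{M-1} \sum_{m} x_k[m + l]\, s_k[m],
          \qquad l \in \mathbb{Z},
        \]

    i.e., the fullband correlation (and hence, with one sequence time-reversed, the convolution) evaluated at lags that are multiples of M decouples into M shorter subband correlations, with no cross terms between subbands.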

    Locally adaptive vector quantization: Data compression with feature preservation

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. The algorithm provides high-speed, one-pass compression, is fully adaptable to any data source, and does not require a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus the algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
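
    As an illustration of the one-pass, training-free idea (a hypothetical sketch only, not the published LAVQ update rules), a locally adaptive VQ encoder can grow and adapt its codebook while it codes:

        import numpy as np

        def adaptive_vq_encode(blocks, threshold=0.1, max_codebook=256, step=0.25):
            """One-pass adaptive VQ sketch (hypothetical; not the exact LAVQ rules).

            blocks: iterable of equal-length 1-D numpy vectors.
            Emits (is_new, payload) symbols: either a codebook index, or a raw
            vector that the decoder adds to its own codebook, so encoder and
            decoder stay in sync without any a priori source statistics.
            """
            codebook, stream = [], []
            for b in blocks:
                if codebook:
                    dists = [float(np.mean((b - c) ** 2)) for c in codebook]
                    j = int(np.argmin(dists))
                if codebook and dists[j] <= threshold:
                    stream.append((False, j))                              # send index only
                    codebook[j] = codebook[j] + step * (b - codebook[j])   # adapt codeword locally
                else:
                    stream.append((True, b.copy()))                        # send the block itself
                    codebook.append(b.copy())
                    if len(codebook) > max_codebook:
                        codebook.pop(0)                                    # evict the oldest entry
            return stream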

    Implementation issues in source coding

    An edge-preserving image coding scheme that can be operated in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars Observer spectral data. It can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.
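
    Since the scheme is described as a modification of DPCM, a minimal generic DPCM loop is sketched here (lossless when the quantizer step is zero, lossy otherwise; this is not the edge-preserving extension itself):

        import numpy as np

        def dpcm(x, step=0.0):
            """Minimal DPCM sketch with a previous-sample predictor.

            step == 0 keeps the prediction residuals exactly (lossless mode);
            step > 0 uniformly quantizes them (lossy mode). This is a generic
            illustration, not the edge-preserving scheme of the report.
            """
            x = np.asarray(x, dtype=float)
            symbols = np.zeros_like(x)   # residuals that would be entropy coded
            recon = np.zeros_like(x)     # decoder-side reconstruction
            pred = 0.0
            for n in range(len(x)):
                e = x[n] - pred
                q = e if step == 0 else step * np.round(e / step)
                symbols[n] = q
                recon[n] = pred + q
                pred = recon[n]          # closed loop: predict from reconstructed samples
            return symbols, recon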

    Study and simulation of low rate video coding schemes

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color-mapped images, a robust coding scheme for packet video, recursively indexed differential pulse code modulation, an image compression technique for use on token ring networks, and joint source/channel coder design.

    Sparse representation based hyperspectral image compression and classification

    This thesis presents research on applying sparse representation to lossy hyperspectral image compression and hyperspectral image classification. The proposed lossy hyperspectral image compression framework introduces two types of dictionaries, termed the sparse representation spectral dictionary (SRSD) and the multi-scale spectral dictionary (MSSD). The former is learnt in the spectral domain to exploit spectral correlations, and the latter in the wavelet multi-scale spectral domain to exploit both spatial and spectral correlations in hyperspectral images. To alleviate the computational demand of dictionary learning, either a base dictionary trained offline or an update of the base dictionary is employed in the compression framework. The proposed compression method is evaluated in terms of different objective metrics and compared to selected state-of-the-art hyperspectral image compression schemes, including JPEG 2000. The numerical results demonstrate the effectiveness and competitiveness of both the SRSD and MSSD approaches. For the proposed hyperspectral image classification method, we utilize the sparse coefficients for training support vector machine (SVM) and k-nearest neighbour (kNN) classifiers. In particular, the discriminative character of the sparse coefficients is enhanced by incorporating contextual information using local mean filters. The classification performance is evaluated and compared to a number of similar or representative methods; the results show that our approach can outperform other approaches based on SVM or sparse representation. This thesis makes the following contributions. It provides a relatively thorough investigation of applying sparse representation to lossy hyperspectral image compression. Specifically, it reveals the effectiveness of sparse representation for exploiting spectral correlations in hyperspectral images. In addition, it shows that the discriminative character of sparse coefficients can lead to superior performance in hyperspectral image classification.
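
    A minimal sketch of the per-pixel spectral sparse coding step (assuming a generic dictionary D and plain orthogonal matching pursuit; the SRSD/MSSD constructions and dictionary learning are not reproduced here):

        import numpy as np

        def omp(D, y, k):
            """Orthogonal matching pursuit: k-sparse code of y over dictionary D
            (columns of D assumed unit-norm)."""
            residual = np.asarray(y, dtype=float).copy()
            support, coeffs = [], np.zeros(0)
            for _ in range(k):
                j = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
                if j not in support:
                    support.append(j)
                coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coeffs        # re-fit on the current support
            x = np.zeros(D.shape[1])
            x[support] = coeffs
            return x

        # Example: code one hyperspectral pixel (a spectral vector) over a random
        # stand-in dictionary; the sparse coefficients are what would be quantized
        # for compression or fed to the SVM/kNN classifiers.
        rng = np.random.default_rng(0)
        D = rng.standard_normal((100, 256))
        D /= np.linalg.norm(D, axis=0)
        pixel = D[:, [3, 40, 200]] @ np.array([1.0, -0.5, 0.25])
        code = omp(D, pixel, k=3)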

    Vector space framework for unification of one- and multidimensional filter bank theory

    A number of results in filter bank theory can be viewed using vector space notation, which simplifies the proofs of many important results. In this paper, we first introduce the vector space framework and then use it to derive both known and new filter bank results. For example, the relation among the Hermitian image property, orthonormality, and the perfect reconstruction (PR) property is well known for one-dimensional (1-D) analysis/synthesis filter banks; the same result can be proved in a more general vector space setting. This vector space framework has the advantage that even the most general filter banks, namely multidimensional nonuniform filter banks with rational decimation matrices, become a special case. Many results in 1-D filter bank theory are hence extended to the multidimensional case with some algebraic manipulations of integer matrices. Examples include the equivalence of biorthonormality and the PR property, the interchangeability of analysis and synthesis filters, and the connection between analysis/synthesis filter banks and synthesis/analysis transmultiplexers. Furthermore, we obtain the subband convolution scheme by starting from the generalized Parseval's relation in vector space. Several theoretical results of the wavelet transform can also be derived using this framework; in particular, we derive the wavelet convolution theorem.
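
    The generalized Parseval relation that the subband convolution scheme starts from can be checked numerically in the simplest orthonormal case, a 2-channel Haar bank (a minimal sketch, not the paper's multidimensional setting):

        import numpy as np

        def haar_analysis(x):
            """2-channel orthonormal (paraunitary) Haar analysis bank, decimation by 2."""
            x = np.asarray(x, dtype=float)
            lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # lowpass subband
            hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # highpass subband
            return lo, hi

        rng = np.random.default_rng(0)
        x = rng.standard_normal(32)
        s = rng.standard_normal(32)

        x0, x1 = haar_analysis(x)
        s0, s1 = haar_analysis(s)

        # Generalized Parseval: the fullband inner product equals the sum of the
        # subband inner products for an orthonormal filter bank.
        assert np.isclose(np.dot(x, s), np.dot(x0, s0) + np.dot(x1, s1))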

    New Directions in Subband Coding

    Two very different subband coders are described. The first is a modified dynamic bit-allocation subband coder (D-SBC) designed for variable-rate coding situations and easily adaptable to noisy channel environments; it can operate at rates as low as 12 kb/s and still give good quality speech. The second is a 16-kb/s waveform coder based on a combination of subband coding and vector quantization (VQ-SBC). The key feature of this coder is its short coding delay, which makes it suitable for real-time communication networks. The speech quality of both coders has been enhanced by adaptive postfiltering. The coders have been implemented on a single AT&T DSP32 signal processor.