1,001 research outputs found

    Unified Theory for Biorthogonal Modulated Filter Banks

    Modulated filter banks (MFBs) are practical signal decomposition tools for M-channel multirate systems. They combine high subfilter selectivity with efficient realization based on polyphase filters and block transforms. Consequently, the O(M^2) computational burden of a general filter bank (FB) is reduced to O(M log2 M), a complexity order comparable to that of FFT-like transforms. Often hidden in plain sight, these versatile digital signal processing tools play an important role in many professional and everyday applications of information and communications technology, including audiovisual communications and media storage (e.g., audio codecs for low-energy music playback in portable devices, as well as communication waveform processing and channelization). Their algorithmic efficiency implies low cost, small size, and extended battery life. The main objective of this thesis is to formulate a generalized and unified approach to MFBs that covers the theoretical background of these banks, their design using appropriate optimization techniques, and their efficient algorithmic realizations. The FBs discussed in this thesis are discrete-time time-frequency decomposition/reconstruction, or equivalently, analysis-synthesis systems, in which the subfilters are generated through modulation of either one or two prototype filters. The perfect reconstruction (PR) property is a particularly important characteristic of MFBs, and it is the core theme of this thesis.
In the presented biorthogonal arbitrary-delay exponentially modulated filter bank (EMFB), the PR property can be maintained for complex-valued signals as well. The EMFB concept is quite flexible, since it can meet the various requirements placed on a subband processing system: low-delay PR prototype design, subfilters with symmetric impulse responses, efficient algorithms, and a definition that covers odd- and even-stacked cosine-modulated FBs as special cases. Oversampling schemes for the subsignals prove advantageous in subband processing problems that require phase information about the localized frequency components. In addition, the MFBs have strong connections with lapped transform (LT) theory, especially with the class of LTs based on parametric window functions.
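The modulation idea behind these banks can be sketched in a few lines: all M subfilters are generated from a single prototype lowpass filter. The sketch below uses a standard odd-stacked cosine modulation with a hypothetical windowed-sinc prototype; it illustrates the structure only and is not the thesis's exact biorthogonal EMFB definition.

```python
import numpy as np

def cmfb_analysis_filters(prototype, M):
    """Generate M cosine-modulated analysis filters from one prototype
    (textbook odd-stacked form; the thesis's EMFB uses exponential
    modulation and also handles complex-valued signals)."""
    N = len(prototype)
    n = np.arange(N)
    h = np.empty((M, N))
    for k in range(M):
        # Each subfilter is the prototype shifted to band k by modulation.
        h[k] = prototype * np.cos(
            (np.pi / M) * (k + 0.5) * (n - (N - 1) / 2) + (-1) ** k * np.pi / 4
        )
    return h

# Hypothetical prototype: Hamming-windowed sinc with cutoff pi/(2M).
M = 8
N = 4 * M
n = np.arange(N)
proto = np.hamming(N) * np.sinc((n - (N - 1) / 2) / (2 * M))
H = cmfb_analysis_filters(proto, M)  # M subfilters, one per channel
```

In a polyphase realization, the per-channel filtering above collapses into one prototype filtering stage followed by an M-point block transform, which is where the O(M log2 M) complexity comes from.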

    An FPGA Implementation of HW/SW Codesign Architecture for H.263 Video Coding

    Chapter 12 http://www.intechopen.com/download/pdf/pdfs_id/1574

    Zolotarev polynomials utilization in spectral analysis

    This thesis is focused on selected problems of symmetrical Zolotarev polynomials and their use in spectral analysis. Basic properties of symmetrical Zolotarev polynomials, including orthogonality, are described, and the numerical properties of algorithms generating even Zolotarev polynomials are explored. As regards the application of Zolotarev polynomials to spectral analysis, the Approximated Discrete Zolotarev Transform is implemented so that it enables computation of the spectrogram (zologram) in real time. Moreover, the Approximated Discrete Zolotarev Transform is modified to perform better in the analysis of damped exponential signals. Finally, a novel Discrete Zolotarev Transform implemented fully in the time domain is proposed. This transform also shows that some features observed with the Approximated Discrete Zolotarev Transform are a consequence of using Zolotarev polynomials.

    Efficient compression of motion compensated residuals

    EThOS - Electronic Theses Online Service, GB, United Kingdom

    Theory and realization of novel algorithms for random sampling in digital signal processing

    Random sampling is a technique that overcomes the alias problem of regular sampling. The randomization, however, destroys the symmetry property of the transform kernel of the discrete Fourier transform. Hence, when transforming a randomly sampled sequence to its frequency spectrum, the fast Fourier transform cannot be applied and the computational complexity is O(N^2). The objectives of this research project are: (1) To devise sampling methods for random sampling such that computation may be reduced while the anti-alias property of random sampling is maintained. Two methods of inserting limited regularities into the randomized sampling grids are proposed: parallel additive random sampling and hybrid additive random sampling, both of which save at least 75% of the multiplications required. The algorithms also lend themselves to implementation on a multiprocessor system, which further enhances the speed of the evaluation. (2) To study the auto-correlation sequence of a randomly sampled sequence as an alternative means to confirm its anti-alias property. The anti-alias property of the two proposed methods can be confirmed by using convolution in the frequency domain; the same conclusion is also reached by analysing, in the spatial domain, the auto-correlation of such sample sequences. A technique to evaluate the auto-correlation sequence of a randomly sampled sequence with a regular step size is proposed. The technique may also serve as an algorithm to convert a randomly sampled sequence to a regularly spaced sequence having a desired Nyquist frequency. (3) To provide a rapid spectral estimation using a coarse kernel. The approximate method proposed by Mason in 1980, which trades accuracy for speed of computation, is introduced to make random sampling more attractive.
(4) To suggest possible applications for random and pseudo-random sampling. To fully exploit its advantages, random sampling has been adopted in measurement instruments where computing a spectrum is either minimal or not required; such applications in instrumentation are easily found in the literature. In this thesis, two applications in digital signal processing are introduced. (5) To suggest an inverse transformation for random sampling, so as to complete a two-way process and broaden its scope of application. Apart from the above, a case study of realizing the prime factor algorithm with regular sampling on a transputer network is given in Chapter 2, and a rough estimation of the signal-to-noise ratio for a spectrum obtained from random sampling is given in Chapter 3. Although random sampling is alias-free, problems of computational complexity and noise prevent it from being adopted widely in engineering applications. In the conclusions, criteria for adopting random sampling are put forward and directions for its development are discussed.
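The two central observations, that an irregular grid defeats the FFT but also defeats aliasing, can be demonstrated with a small additive random sampling experiment. The scheme below is the basic textbook form (the thesis's parallel and hybrid variants add extra regularity to reduce the multiplication count); the signal frequency, jitter width, and sample count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def additive_random_times(n, mean_dt, jitter):
    """Additive random sampling: t_{k+1} = t_k + mean_dt + e_k,
    with independent uniform jitter e_k."""
    steps = mean_dt + rng.uniform(-jitter, jitter, n - 1)
    return np.concatenate(([0.0], np.cumsum(steps)))

def direct_spectrum(x, t, freqs):
    """Direct O(N*K) transform: on an irregular grid the FFT's kernel
    symmetry is lost, so each frequency costs a full length-N sum."""
    return np.array([np.sum(x * np.exp(-2j * np.pi * f * t))
                     for f in freqs]) / len(t)

# A 0.7 Hz tone sampled at an *average* rate of 1 Hz (Nyquist 0.5 Hz):
# on a regular grid it would alias to 0.3 Hz; with random sampling the
# true spectral peak survives and the alias is suppressed.
t = additive_random_times(512, 1.0, 0.4)
x = np.cos(2 * np.pi * 0.7 * t)
X = direct_spectrum(x, t, np.array([0.3, 0.7]))  # [alias bin, true bin]
```

The cumulative jitter decorrelates the alias term while leaving the true-frequency term coherent, so `|X[1]|` stays near the tone's half-amplitude of 0.5 while `|X[0]|` shrinks toward the noise floor.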

    Energy efficient hardware acceleration of multimedia processing tools

    The world of mobile devices is experiencing an ongoing trend of feature enhancement and general-purpose multimedia platform convergence. This trend poses many grand challenges, the most pressing being their limited battery life as a consequence of delivering computationally demanding features. The envisaged mobile application features can be considered to be accelerated by a set of underpinning hardware blocks. Based on the survey that this thesis presents on modern video compression standards and their associated enabling technologies, it is concluded that tight energy and throughput constraints can still be effectively tackled at the algorithmic level in order to design re-usable optimised hardware acceleration cores. To prove these conclusions, the work in this thesis is focused on two of the basic enabling technologies that support mobile video applications, namely the Shape Adaptive Discrete Cosine Transform (SA-DCT) and its inverse, the SA-IDCT. The hardware architectures presented in this work have been designed with energy efficiency in mind. This goal is achieved by employing high-level techniques such as redundant computation elimination, parallelism, and low-switching computation structures. Both architectures compare favourably against the relevant prior art in the literature. The SA-DCT/IDCT technologies are instances of a more general computation: both are Constant Matrix Multiplication (CMM) operations. Thus, this thesis also proposes an algorithm for the efficient hardware design of any general CMM-based enabling technology. The proposed algorithm leverages the effective solution search capability of genetic programming. A bonus feature of the proposed modelling approach is that it is further amenable to hardware acceleration. Another bonus feature is an early exit mechanism that achieves large search space reductions. Results show an improvement on state-of-the-art algorithms, with future potential for even greater savings.
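Both transforms are instances of constant matrix multiplication, and the shape-adaptive part amounts to selecting a different constant matrix per column length. A minimal sketch of the standard SA-DCT column stage follows (the generic algorithm, not the thesis's optimised hardware datapath):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: a fixed, constant matrix, so applying
    it is a Constant Matrix Multiplication (CMM)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2)
    return C

def sa_dct_columns(block, mask):
    """SA-DCT column pass: in each column, shift the opaque pixels to
    the top and transform them with the DCT matrix of the segment
    length (a row pass over the results completes the 2-D transform)."""
    out = np.zeros(block.shape, dtype=float)
    for c in range(block.shape[1]):
        seg = block[mask[:, c], c]   # opaque pixels of this column
        n = len(seg)
        if n:
            out[:n, c] = dct_matrix(n) @ seg
    return out

# Full-mask 4x4 block of ones: every column reduces to its DC term.
out = sa_dct_columns(np.ones((4, 4)), np.ones((4, 4), dtype=bool))
```

Because each `dct_matrix(n)` is constant, a hardware implementation can hard-wire the multiplications and share common subexpressions across them, which is exactly the search space the CMM design algorithm explores.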

    Hierarchical Variance Reduction Techniques for Monte Carlo Rendering

    Ever since the first three-dimensional computer graphics appeared half a century ago, the goal has been to model and simulate how light interacts with materials and objects to form an image. The ultimate goal is photorealistic rendering, where the created images reach a level of accuracy that makes them indistinguishable from photographs of the real world. There are many applications: visualization of products and architectural designs yet to be built, special effects, computer-generated films, virtual reality, and video games, to name a few. However, the problem has proven tremendously complex; the illumination at any point is described by a recursive integral to which a closed-form solution seldom exists. Instead, computer simulation and Monte Carlo methods are commonly used to statistically estimate the result. This introduces undesirable noise, or variance, and a large body of research has been devoted to finding ways to reduce the variance. I continue along this line of research and present several novel techniques for variance reduction in Monte Carlo rendering, as well as a few related tools. The research in this dissertation focuses on using importance sampling to pick a small set of well-distributed point samples. As the primary contribution, I have developed the first methods to explicitly draw samples from the product of distant high-frequency lighting and complex reflectance functions. By sampling the product, low-noise results can be achieved using a very small number of samples, which is important to minimize the rendering times. Several different hierarchical representations are explored to allow efficient product sampling. In the first publication, the key idea is to work in a compressed wavelet basis, which allows fast evaluation of the product. Many of the initial restrictions of this technique were removed in follow-up work, allowing higher-resolution uncompressed lighting and avoiding precomputation of reflectance functions.
My second main contribution is to present one of the first techniques to take the triple product of lighting, visibility and reflectance into account to further reduce the variance in Monte Carlo rendering. For this purpose, control variates are combined with importance sampling to solve the problem in a novel way. A large part of the technique also focuses on analysis and approximation of the visibility function. To further refine the above techniques, several useful tools are introduced. These include a fast, low-distortion map to represent (hemi)spherical functions, a method to create high-quality quasi-random points, and an optimizing compiler for analyzing shaders using interval arithmetic. The latter automatically extracts bounds for importance sampling of arbitrary shaders, as opposed to using a priori known reflectance functions. In summary, the work presented here takes the field of computer graphics one step further towards making photorealistic rendering practical for a wide range of uses. By introducing several novel Monte Carlo methods, more sophisticated lighting and materials can be used without increasing the computation times. The research is aimed at domain-specific solutions to the rendering problem, but I believe that much of the new theory is applicable in other parts of computer graphics, as well as in other fields.
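The variance-reduction effect of importance sampling can be seen in a toy one-dimensional version of the rendering integral. The "lighting" and "reflectance" factors below are hypothetical, and the density is proportional to the reflectance factor only, as a simple stand-in for the full product sampling developed in the dissertation:

```python
import numpy as np

rng = np.random.default_rng(1)

def L(x):  # hypothetical distant lighting, peaked around x = 0.7
    return np.exp(-8.0 * (x - 0.7) ** 2)

def R(x):  # hypothetical reflectance factor
    return x ** 2

n = 100_000

# Uniform sampling: estimate the integral of L*R with x ~ U(0, 1).
xu = rng.uniform(0.0, 1.0, n)
w_uniform = L(xu) * R(xu)

# Importance sampling proportional to the reflectance factor:
# p(x) = 3x^2 on [0, 1], sampled by the inverse CDF x = u^(1/3).
xi = rng.uniform(0.0, 1.0, n) ** (1.0 / 3.0)
w_importance = L(xi) * R(xi) / (3.0 * xi ** 2)

# Both weight sets average to the same integral, but the importance-
# sampled weights fluctuate far less, so the estimate converges faster.
```

Matching the sampling density to one factor already cancels that factor's variation out of the weights; sampling the full product, as in the dissertation, removes the remaining variation from the lighting term as well.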

    Three dimensional DCT based video compression.

    by Chan Kwong Wing Raymond. Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. Includes bibliographical references (leaves 115-123).
    Contents: Acknowledgments; Table of Contents; List of Tables; List of Figures; Abstract.
    Chapter 1: Introduction (an introduction to video compression; overview of problems: analog and digital video, low bit-rate applications, real-time video compression, source coding and channel coding, bit-rate and quality; organization of the thesis).
    Chapter 2: Background and Related Work (analog video, digital video, color theory; video coding: predictive coding, vector quantization, subband coding, transform coding, hybrid coding; transform coding in detail: the discrete cosine transform with 1-D, 2-D and multidimensional fast algorithms; quantization; entropy coding with Huffman and arithmetic coding).
    Chapter 3: Existing Compression Schemes (Motion JPEG; MPEG; H.261; other techniques: fractals and wavelets; proposed solution; summary).
    Chapter 4: Fast 3D-DCT Algorithms (motivation and potential of the 3D DCT; forward and inverse 3D-DCT; the 3-D fast cosine transform (3-D FCT): partitioning and rearrangement of the data cube, matrix representations, simplification of the calculation steps, decomposition and reconstruction; the fast algorithm; an example using a 4x4x4 IFCT; complexity comparison for multiplications and additions; implementation issues; summary).
    Chapter 5: Quantization (dynamic ranges of 3D-DCT coefficients; distribution of 3D-DCT AC coefficients; the shifted complement hyperboloid and the quantization volume; scan order for quantized 3D-DCT coefficients; finding parameter values; experimental results using the proposed quantization values; summary).
    Chapter 6: Entropy Coding (Huffman coding and arithmetic coding; zero run-length encoding; variable-length coding in JPEG for the DC and AC coefficients; run-level encoding of the quantized 3D-DCT coefficients; frequency analysis of the run-length patterns for the DC and AC coefficients; Huffman table design; implementation issues: category lookup, Huffman encode/decode, PutBits, GetBits).
    Chapter 7: Contributions, Concluding Remarks and Future Work (contributions; the advantages of the 3D DCT codec; experimental results; future work: integer discrete cosine transform algorithms, adaptive quantization volume, adaptive Huffman tables).
    Appendices: A - detailed steps in the simplification of Equation 4.29; B - program listing of the fast DCT algorithms; C - tables illustrating the reordering of the quantized coefficients; D - sample values of the quantization volume; E - a 16-bit VLC table for AC run-level pairs. References.
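The separable structure that makes fast 3D-DCT algorithms possible can be sketched with plain matrix products: the 3-D transform of a video cube is just the 1-D transform applied along each axis in turn. This is a reference implementation for illustration, not the thesis's fast 4x4x4 factorization:

```python
import numpy as np

def dct_mat(n):
    """Orthonormal DCT-II matrix."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2)
    return C

def dct3(cube):
    """Separable 3-D DCT of a (time, height, width) video cube:
    transform along time, then rows, then columns."""
    T, H, W = cube.shape
    out = np.einsum('it,thw->ihw', dct_mat(T), cube)
    out = np.einsum('jh,ihw->ijw', dct_mat(H), out)
    out = np.einsum('kw,ijw->ijk', dct_mat(W), out)
    return out

# A "static" 4-frame cube: with no motion, all temporal energy
# collapses into the first temporal plane, which is what makes the
# 3D-DCT attractive for compressing low-motion video.
frame = np.arange(16.0).reshape(4, 4)
cube = np.tile(frame, (4, 1, 1))
coeffs = dct3(cube)
```

Because the per-axis matrices are orthonormal, the transform preserves energy, and quantization can then discard the small high-frequency spatio-temporal coefficients.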

    Real-time scalable video coding for surveillance applications on embedded architectures
