9 research outputs found
A new convolution structure for the realisation of the discrete cosine transform
Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for DCTs and DSTs
This paper presents a systematic methodology based on the algebraic theory of
signal processing to classify and derive fast algorithms for linear transforms.
Instead of manipulating the entries of transform matrices, our approach derives
the algorithms by stepwise decomposition of the associated signal models, or
polynomial algebras. This decomposition is based on two generic methods or
algebraic principles that generalize the well-known Cooley-Tukey FFT and make
the algorithms' derivations concise and transparent. Application to the 16
discrete cosine and sine transforms yields a large class of fast algorithms,
many of which have not been found before. Comment: 31 pages, more information at http://www.ece.cmu.edu/~smar
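The "fast" in such algorithms refers to reducing the naive O(n²) matrix-vector product implied by the transform definition to O(n log n). A minimal numerical sketch of that gap (using SciPy's FFT-based DCT as the fast implementation, not the paper's algebraic derivation) is:

```python
import numpy as np
from scipy.fft import dct

n = 16
rng = np.random.default_rng(0)
x = rng.standard_normal(n)

# Naive O(n^2) DCT-II from the definition:
# y[k] = 2 * sum_j x[j] * cos(pi * k * (2j+1) / (2n))
j = np.arange(n)
k = j[:, None]
C = 2.0 * np.cos(np.pi * k * (2 * j + 1) / (2 * n))
y_naive = C @ x

# Fast O(n log n) implementation (SciPy's unnormalized type-2 DCT)
y_fast = dct(x, type=2, norm=None)

assert np.allclose(y_naive, y_fast)
```

Fast algorithms of the kind derived in the paper realize exactly this equivalence, but obtain the factorization systematically from the polynomial-algebra decomposition rather than from FFT identities.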
DCT and DST Filtering with Sparse Graph Operators
Graph filtering is a fundamental tool in graph signal processing. Polynomial
graph filters (PGFs), defined as polynomials of a fundamental graph operator,
can be implemented in the vertex domain, and usually have a lower complexity
than frequency domain filter implementations. In this paper, we focus on the
design of filters for graphs with graph Fourier transform (GFT) corresponding
to a discrete trigonometric transform (DTT), i.e., one of 8 types of discrete
cosine transforms (DCT) and 8 discrete sine transforms (DST). In this case, we
show that multiple sparse graph operators can be identified, which allows us to
propose a generalization of PGF design: multivariate polynomial graph filter
(MPGF). First, for the widely used DCT-II (type-2 DCT), we characterize a set
of sparse graph operators that share the DCT-II matrix as their common
eigenvector matrix. This set includes the operator associated with the well-known connected line graph.
These sparse operators can be viewed as graph filters operating in the DCT
domain, which allows us to approximate any DCT graph filter by an MPGF, leading
to a design with more degrees of freedom than the conventional PGF approach.
Then, we extend those results to all of the 16 DTTs as well as their 2D
versions, and show how their associated sets of multiple graph operators can be
determined. We demonstrate experimentally that ideal low-pass and exponential
DCT/DST filters can be approximated with higher accuracy at similar runtime
complexity. Finally, we apply our method to transform-type selection in a video
codec, AV1, where we demonstrate significant encoding time savings with
negligible compression loss. Comment: 16 pages, 11 figures, 5 tables
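The fact this abstract builds on, that a sparse graph operator (here the path/line-graph Laplacian) has the DCT-II matrix as its eigenvector matrix, so a polynomial of that operator acts as a pointwise filter in the DCT domain, can be checked numerically. This is only an illustration of the PGF baseline with the single standard operator, not the paper's multi-operator MPGF design:

```python
import numpy as np

n = 8
# Laplacian of the n-node path (line) graph
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Orthonormal DCT-II basis: U[j, k] proportional to cos(pi*k*(2j+1)/(2n))
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
U = np.cos(np.pi * k * (2 * j + 1) / (2 * n))
U[:, 0] *= 1 / np.sqrt(2)
U *= np.sqrt(2 / n)

lam = 2 - 2 * np.cos(np.pi * np.arange(n) / n)  # known Laplacian eigenvalues
assert np.allclose(U.T @ U, np.eye(n))          # basis is orthonormal
assert np.allclose(L @ U, U * lam)              # columns are eigenvectors of L

# A degree-2 polynomial graph filter applied in the vertex domain ...
coef = [0.5, -0.3, 0.1]
H_vertex = coef[0] * np.eye(n) + coef[1] * L + coef[2] * (L @ L)
# ... equals pointwise spectral filtering in the DCT-II domain
H_spec = U @ np.diag(np.polyval(coef[::-1], lam)) @ U.T
assert np.allclose(H_vertex, H_spec)
```

The paper's contribution is to identify *multiple* sparse operators sharing this eigenvector matrix, so that multivariate polynomials in them give more design degrees of freedom than the univariate polynomial in `L` shown here.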
Blockwise Transform Image Coding Enhancement and Edge Detection
The goal of this thesis is high-quality image coding, enhancement and edge detection. A unified approach using novel fast transforms is developed to achieve all three objectives. The requirements are low bit rate, low implementation complexity and parallel processing. The last requirement is achieved by processing the image in small blocks so that all blocks can be processed simultaneously, similarly to biological vision. A major issue is to minimize the resulting block effects; this is done by using proper transforms and possibly an overlap-save technique. The bit rate in image coding is minimized by developing new results in optimal adaptive multistage transform coding. Newly developed fast trigonometric transforms are also utilized and compared for transform coding, image enhancement and edge detection. Both image enhancement and edge detection involve generalised bandpass filtering with fast transforms. The algorithms have been developed with special attention to the properties of biological vision systems.
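The blockwise processing described above can be sketched in a toy coder: each 8x8 block is transformed independently (and so could be processed in parallel), most coefficients are discarded, and the block is inverse-transformed. This is a generic sketch using the standard 2-D DCT; the thesis's own fast transforms, adaptive bit allocation and overlap-save deblocking are not reproduced here:

```python
import numpy as np
from scipy.fft import dctn, idctn

def code_blocks(img, block=8, keep=10):
    """Toy blockwise transform coder: per block, keep only the `keep`
    largest-magnitude 2-D DCT coefficients, then inverse-transform."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for r in range(0, h, block):
        for c in range(0, w, block):
            b = img[r:r + block, c:c + block].astype(float)
            coeffs = dctn(b, norm="ortho")
            # zero all but the `keep` largest-magnitude coefficients
            thresh = np.sort(np.abs(coeffs).ravel())[-keep]
            coeffs[np.abs(coeffs) < thresh] = 0.0
            out[r:r + block, c:c + block] = idctn(coeffs, norm="ortho")
    return out

# smooth synthetic test image; each block is handled independently
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img = np.sin(4 * x) + np.cos(3 * y)
rec = code_blocks(img, keep=10)
# a smooth image is well approximated by few DCT coefficients per block
assert np.mean((img - rec) ** 2) < 1e-3
```

The block-effect problem the thesis addresses arises precisely because each block here is coded with no knowledge of its neighbours.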
Orthogonal transforms in digital image coding.
by Lo Kwok Tung. Thesis (M.Phil.)--Chinese University of Hong Kong, 1989. Bibliography: leaves [71-74
Improved data compression techniques to analyze big data in structural health monitoring
With a growing number of complex and slender structures worldwide, long-term structural
health monitoring (SHM) has been intensively pursued to retrofit and control
these structures under extreme climatic events. Modern sensing technology, including
wireless sensors and high-quality data acquisition, has improved the capability of
SHM: a relatively enormous amount of data can be measured remotely and
sent wirelessly over long periods of time. Unlike wired vibration sensors, wireless
sensors are inexpensive and easier to install, with a less labour-intensive process, thereby
leading to significant cost savings for the infrastructure owner. However, modern
sensing technology and remote data acquisition have several limitations due to
limited bandwidth, time synchronization and inadequate sampling issues. The
large amount of data collected from structural systems often causes missing data,
network jams or packet loss while the big data is transmitted.
In this research, the theory of compressive sampling (CS) is implemented as a
promising data compression technique that can recover undersampled vibration signals
of dynamical systems, thereby reducing the overall burden of analyzing big data in
SHM. l1-norm minimization (LNM) and the discrete cosine transform (DCT) are
exploited to perform data compression and enhance recovery of the compressed
big data. A novel time-frequency blind source separation method is integrated with the data
compression technique to evaluate the accuracy of the proposed method in modal
identification. The results of the proposed data compression techniques are verified
using a suite of numerical, experimental and full-scale studies. The results reveal that the DCT can be considered a powerful data compression tool even for vibration
data containing damage signatures, low-energy modes and low signal-to-noise ratios.
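The CS recovery step described above (DCT sparsity plus l1-norm minimization) can be sketched on a toy signal. This is not the thesis's SHM pipeline; the signal length, sampling pattern and sparsity level below are illustrative assumptions, and basis pursuit is posed as a linear program:

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 64, 32

# Signal that is 3-sparse in the DCT domain (a few vibration "modes")
c_true = np.zeros(n)
c_true[[3, 7, 12]] = [1.0, -0.8, 0.5]
signal = idct(c_true, norm="ortho")

# Undersample: keep only m random time samples
idx = rng.choice(n, size=m, replace=False)
b = signal[idx]

# Sensing matrix: sampled rows of the inverse-DCT (synthesis) matrix
Psi = idct(np.eye(n), axis=0, norm="ortho")   # signal = Psi @ c
A = Psi[idx, :]

# l1-norm minimization (basis pursuit) as a linear program:
# minimize 1^T (u + v)  s.t.  A (u - v) = b,  u, v >= 0,  with c = u - v
res = linprog(c=np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=(0, None), method="highs")
c_rec = res.x[:n] - res.x[n:]
assert np.allclose(c_rec, c_true, atol=1e-3)   # sparse DCT coefficients recovered
```

The recovered coefficients reconstruct the full vibration record from half the samples, which is the mechanism by which CS reduces transmission load in wireless SHM.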
Sparse Fast Trigonometric Transforms
Trigonometric transforms like the Fourier transform or the discrete cosine transform (DCT) are of immense importance in signal and image processing, physics, engineering, and data processing. The research of past decades has provided us with runtime-optimal algorithms for these transforms. Significant runtime improvements are only possible if there is additional a priori information about the sparsity of the signal. In the first part of this thesis we develop sublinear algorithms for the fast Fourier transform for frequency-sparse periodic functions. We investigate three classes of sparsity: short frequency support, polynomially structured sparsity and block sparsity. For all three classes we present new deterministic, sublinear algorithms that recover the Fourier coefficients of periodic functions from samples. We prove theoretical runtime and sampling bounds for all algorithms and also investigate their performance in numerical experiments.
In the second part of this thesis we focus on the reconstruction of vectors with short support from their DCT of type II. We present two new deterministic, sublinear algorithms for this problem. The first method is based on inverse discrete Fourier transforms and uses complex arithmetic, whereas the second one utilizes properties of the DCT and employs only real arithmetic. We show theoretical runtime and sampling bounds for both algorithms and compare them numerically in experiments. Furthermore, we generalize the techniques for recovering vectors with short support from their DCT of type II using only real arithmetic to the two-dimensional setting of recovering matrices with block support, also providing theoretical runtime and sampling complexities for the obtained new two-dimensional algorithm.
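The first method's reliance on inverse discrete Fourier transforms rests on the classical identity expressing a length-n DCT-II through a length-2n FFT with complex arithmetic. The full-length (non-sublinear) version of that identity, which the sparse algorithms refine, can be checked as follows:

```python
import numpy as np
from scipy.fft import fft, dct

def dct2_via_fft(x):
    """DCT-II via a length-2n FFT of the evenly extended vector:
    y[k] = Re( exp(-i*pi*k/(2n)) * FFT([x, reversed(x)])[k] ),  k = 0..n-1."""
    n = len(x)
    w = np.concatenate([x, x[::-1]])    # even-symmetric extension, length 2n
    W = fft(w)[:n]
    phase = np.exp(-1j * np.pi * np.arange(n) / (2 * n))
    return np.real(phase * W)

x = np.random.default_rng(2).standard_normal(16)
# matches SciPy's unnormalized type-2 DCT
assert np.allclose(dct2_via_fft(x), dct(x, type=2, norm=None))
```

The sublinear algorithms avoid forming the full length-2n FFT by exploiting the short support of `x`; the identity above is only the dense starting point.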
High efficiency block coding techniques for image data.
by Lo Kwok-tung. Thesis (Ph.D.)--Chinese University of Hong Kong, 1992. Includes bibliographical references.
Contents:
Chapter 1 - Introduction: background and the need for image compression; an overview of image compression (predictive coding/DPCM, sub-band coding, transform coding, vector quantization, block truncation coding); block-based image coding techniques; goal of the work; organization of the thesis.
Chapter 2 - Block-Based Image Coding Techniques: one- and two-dimensional statistical models of images; objective and subjective image fidelity criteria; transform coding theory (transformation, quantization, coding, the JPEG international standard); vector quantization theory (codebook design and the LBG clustering algorithm); block truncation coding theory (optimal MSE block truncation coding).
Chapter 3 - Development of New Orthogonal Transforms: the weighted cosine transform (WCT) and the determination of α and β; the simplified cosine transform (SCT); fast computational algorithms for the WCT and SCT and their computational requirements; performance evaluation using a statistical model and real images; note on publications.
Chapter 4 - Pruning in Transform Coding of Images: direct fast algorithms for the DCT, WCT and SCT; pruning in the direct fast algorithms; operations saved by pruning; a generalized pruning algorithm for the DCT; note on publications.
Chapter 5 - Efficient Encoding of the DC Coefficient in Transform Coding Systems: the minimum edge difference (MED) predictor; performance evaluation; simulation results; note on publications.
Chapter 6 - Efficient Encoding Algorithms for Vector Quantization of Images: the sub-codebook searching algorithm (SCS), including formation of the sub-codebook and premature exit conditions in the searching process; the predictive sub-codebook searching algorithm (PSCS); simulation results; note on publications.
Chapter 7 - Predictive Classified Address Vector Quantization of Images: optimal three-level block truncation coding; predictive classified address vector quantization (classification of images using three-level BTC, predictive mean removal, simplified address VQ, the PCAVQ encoding process); simulation results; note on publications.
Chapter 8 - Recapitulation and Topics for Future Investigation.
Appendices: statistics of monochrome and color test images; Fortran program listing for the pruned fast DCT algorithm; training-set images for building the codebook of the standard VQ scheme; list of publications.