11 research outputs found

    Discrete Wavelet Transforms

    Get PDF
    The discrete wavelet transform (DWT) algorithms have a firm position in signal processing across several areas of research and industry. Because the DWT provides both octave-scale frequency information and spatial timing of the analyzed signal, it is increasingly used to solve more and more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in discrete wavelet transform algorithms and applications. The book covers a wide range of methods (e.g. lifting, shift invariance, multi-scale analysis) for constructing DWTs. The book chapters are organized into four major parts. Part I describes progress in hardware implementations of DWT algorithms. Applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach to edge detection, low-bit-rate image compression, a low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on DWT-based watermarking algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters of the present book consist of both tutorial and highly advanced material. Therefore, the book is intended as a reference text for graduate students and researchers seeking state-of-the-art knowledge on specific applications.
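    Since the abstract names the lifting scheme as one way of constructing DWTs, a minimal illustrative sketch may help; it implements one level of the Haar wavelet via split/predict/update lifting steps in Python. The function names and signal values are ours, not the book's.

```python
import numpy as np

def haar_lifting_forward(x):
    """One level of the Haar DWT via lifting: split, predict, update.

    Assumes len(x) is even. Returns (approximation, detail) coefficients.
    """
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict odd samples from their even neighbours
    approx = even + 0.5 * detail   # update so the low-pass band keeps the local mean
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Undo the lifting steps in reverse order to recover the original samples."""
    even = approx - 0.5 * detail
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([2.0, 4.0, 6.0, 8.0, 5.0, 1.0, 0.0, 3.0])
a, d = haar_lifting_forward(signal)
assert np.allclose(haar_lifting_inverse(a, d), signal)  # perfect reconstruction
```

    Essentially any FIR biorthogonal wavelet filter bank can be factored into a cascade of such predict/update steps, which is one reason lifting is attractive for the hardware implementations discussed in Part I.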

    Combined Industry, Space and Earth Science Data Compression Workshop

    Get PDF
    The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah, in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements and the constraints imposed by the data collection, transmission, distribution, and archival systems.

    Motion Scalability for Video Coding with Flexible Spatio-Temporal Decompositions

    Get PDF
    PhD thesis. The research presented in this thesis aims to extend the scalability range of wavelet-based video coding systems in order to achieve fully scalable coding with a wide range of available decoding points. Since temporal redundancy regularly comprises the main portion of the overall redundancy in a video sequence, the techniques that can generally be termed motion decorrelation techniques play a central role in the overall compression performance. For this reason scalable motion modelling and coding are of utmost importance, and in this thesis possible solutions are identified and analysed. The main contributions of the presented research are grouped into two interrelated and complementary topics. Firstly, a flexible motion model with a rate-optimised estimation technique is introduced. The proposed motion model is based on tree structures and provides the high adaptability needed for layered motion coding. The flexible structure for motion compensation allows for optimisation at different stages of the adaptive spatio-temporal decomposition, which is crucial for scalable coding that targets decoding at different resolutions. By utilising an adaptive choice of wavelet filterbank, the model enables high compression based on efficient mode selection. Secondly, solutions for scalable motion modelling and coding are developed. These solutions are based on precision limiting of motion vectors and on the creation of a layered motion structure that describes hierarchically coded motion. The solution based on precision limiting relies on layered bit-plane coding of motion vector values. The second solution builds on recently established techniques that impose scalability on a motion structure. The new approach is based on two major improvements: the evaluation of distortion in temporal subbands, and motion search in temporal subbands that finds the optimal motion vectors for the layered motion structure. Exhaustive tests of the rate-distortion performance in demanding scalable video coding scenarios show the benefits of applying both the developed flexible motion model and the various solutions for scalable motion coding.
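    As an illustration of the precision-limiting idea described above, the sketch below splits integer motion-vector components into bit-plane layers so that a decoder receiving only the most significant planes reconstructs coarser vectors. It is a toy Python example under assumed integer motion-vector units and does not reproduce the thesis's actual layered motion coder.

```python
import numpy as np

def motion_vector_bitplanes(mv, num_planes=8):
    """Split signed integer motion-vector components into bit-plane layers.

    Returns the sign array plus a list of bit-planes, most significant first,
    so a decoder that stops after k planes gets precision-limited vectors.
    """
    sign = np.sign(mv)
    mag = np.abs(mv).astype(np.uint32)
    return sign, [((mag >> b) & 1).astype(np.uint8)
                  for b in range(num_planes - 1, -1, -1)]

def reconstruct(sign, planes, planes_received):
    """Rebuild motion vectors from the first `planes_received` bit-planes only."""
    num_planes = len(planes)
    mag = np.zeros(planes[0].shape, dtype=np.uint32)
    for i in range(planes_received):
        mag |= planes[i].astype(np.uint32) << (num_planes - 1 - i)
    return sign * mag.astype(np.int64)

mv = np.array([13, -7, 42, 0, -128], dtype=np.int64)  # e.g. quarter-pel units
sign, planes = motion_vector_bitplanes(mv)
print(reconstruct(sign, planes, planes_received=8))  # full precision
print(reconstruct(sign, planes, planes_received=4))  # coarse motion layer
```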

    The 1993 Space and Earth Science Data Compression Workshop

    Get PDF
    The Earth Observing System Data and Information System (EOSDIS) is described in terms of its data volume, data rate, and data distribution requirements. Opportunities for data compression in EOSDIS are discussed.

    Lifting transforms on graphs and their application to video coding

    Get PDF
    Compact representations of data are very useful in many applications such as coding, denoising, or feature extraction. “Classical” transforms such as Discrete Cosine Transforms (DCT) or Discrete Wavelet Transforms (DWT) provide sparse approximations of smooth signals, but lose efficiency when they are applied to signals with large discontinuities. In such cases, directional transforms, which are able to adapt their basis functions to the underlying signal structure, improve the performance of “classical” transforms. In this PhD thesis we describe a general class of lifting transforms on graphs that can be seen as N-dimensional directional transforms. Graphs are constructed so that every node corresponds to a specific sample point of a discrete N-dimensional signal and links between nodes represent correlation between samples. Therefore, uncorrelated samples (e.g., samples across a large discontinuity in the signal) should not be linked. We propose a lifting-based directional transform that can be applied to any undirected graph. In this transform, filtering operations are performed following high-correlation directions (indicated by the links between nodes), thus avoiding filtering across large discontinuities that would give rise to large high-pass coefficients at those locations. In this way, the transform efficiently exploits the correlation that exists between data on the graph, leading to a more compact representation. We mainly focus on the design and optimization of these lifting transforms on graphs, studying and discussing the three main steps required to obtain an invertible and critically sampled transform: (i) graph construction, (ii) design of “good” graph bipartitions, and (iii) filter design. We also explain how to extend the transform to J levels of decomposition, obtaining a multiresolution analysis of the original N-dimensional signal. The proposed transform has many desirable properties, such as perfect reconstruction, critical sampling, easy generalization to N-dimensional domains, non-separable yet one-dimensional filtering operations, localization in frequency and in the original domain, and the ability to choose any filtering direction. As an application, we develop a graph-based video encoder where the goal is to obtain a compact representation of the original video sequence. To this end, we first propose a graph representation of the video sequence and then design a 3-dimensional (spatio-temporal) non-separable directional transform. This can be viewed as an extension of wavelet transform-based video encoders that operate independently in the spatial and temporal domains. Our transform yields better compaction ability (in terms of non-linear approximation) than a state-of-the-art motion-compensated temporal filtering transform (which can be interpreted as a temporal wavelet transform) and a comparable hybrid Discrete Cosine Transform (DCT)-based video encoder (which is the basis of the latest video coding standards). In order to obtain a complete video encoder, the transform coefficients and the side information (needed to obtain an invertible scheme) must be entropy coded and sent to the decoder. Therefore, we also propose a coefficient-reordering method based on the graph information, which improves the compression ability of the entropy encoder.
    Furthermore, we design two different low-cost approaches that aim to reduce the extensive computational complexity of the proposed system without causing a significant loss of compression performance. The complete proposed system leads to an efficient encoder that significantly outperforms a comparable hybrid DCT-based encoder in rate-distortion terms. Finally, we investigate how rate-distortion optimization can be applied to the proposed coding scheme.
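    A minimal sketch of the kind of predict/update lifting on a graph bipartition described above is given below; the averaging filters, the 0.5 update weight, and the example graph are placeholder choices made for illustration and are not the filters designed in the thesis.

```python
import numpy as np

def graph_lifting_level(signal, adjacency, update_set, predict_set):
    """One lifting level on an undirected graph (illustrative sketch).

    signal:      1-D array, one sample per graph node.
    adjacency:   0/1 matrix; adjacency[i, j] == 1 links nodes i and j.
    update_set:  node indices kept as low-pass ("even") samples.
    predict_set: node indices replaced by high-pass ("odd") detail coefficients.
    """
    s = signal.astype(float)
    # Predict step: detail = sample minus the mean of its linked low-pass neighbours.
    for p in predict_set:
        nbrs = [u for u in update_set if adjacency[p, u]]
        if nbrs:
            s[p] -= np.mean([s[u] for u in nbrs])
    # Update step: smooth each low-pass node with the mean detail of its neighbours.
    for u in update_set:
        nbrs = [p for p in predict_set if adjacency[u, p]]
        if nbrs:
            s[u] += 0.5 * np.mean([s[p] for p in nbrs])
    return s  # low-pass values at update_set, high-pass details at predict_set

# A 4-node path graph 0-1-2-3 with the even/odd bipartition of the 1-D lifting case.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
x = np.array([2.0, 4.0, 6.0, 8.0])
print(graph_lifting_level(x, A, update_set=[0, 2], predict_set=[1, 3]))
```

    Because the predict step reads only update-set samples and the update step reads only the newly computed details, each step can be undone in reverse order, which is what guarantees the perfect-reconstruction and critical-sampling properties mentioned in the abstract.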

    Toward sparse and geometry adapted video approximations

    Get PDF
    Video signals are sequences of natural images, where images are often modeled as piecewise-smooth signals. Hence, video can be seen as a 3D piecewise-smooth signal made of piecewise-smooth regions that move through time. Based on the piecewise-smooth model and on related theoretical work on the rate-distortion performance of wavelet- and oracle-based coding schemes, one can better analyze the appropriate coding strategies that adaptive video codecs need to implement in order to be efficient. Efficient video representations for coding purposes require the use of adaptive signal decompositions able to appropriately capture the structure and redundancy appearing in video signals. Adaptivity needs to be such that it allows for proper modeling of signals in order to represent them with the lowest possible coding cost. Video is a very structured signal with high geometric content. This includes temporal geometry (normally represented by motion information) as well as spatial geometry. Clearly, most past and present strategies used to represent video signals do not properly exploit its spatial geometry. As in the case of images, a very promising approach is the decomposition of video using large over-complete libraries of basis functions able to represent salient geometric features of the signal. In the framework of video, these features should model 2D geometric video components as well as their temporal evolution, forming spatio-temporal 3D geometric primitives. Throughout this PhD dissertation, different aspects of the use of adaptivity in video representation are studied, with a view to exploiting both aspects of video: its piecewise-smooth nature and its geometry. The first part of this work studies the use of localized temporal adaptivity in subband video coding. This is done considering two transformation schemes used for video coding: 3D wavelet representations and motion-compensated temporal filtering. A theoretical R-D analysis as well as empirical results demonstrate how temporal adaptivity improves the coding performance of moving edges in 3D-transform-based video coding (without motion compensation). At the same time, adaptivity makes it possible to exploit redundancy equally well in non-moving video areas. The analogy between motion-compensated video and 1D piecewise-smooth signals is studied as well. This motivates the introduction of local length adaptivity within frame-adaptive motion-compensated lifted wavelet decompositions. This allows optimal rate-distortion performance when video motion trajectories are shorter than the transformation "Group Of Pictures", or when efficient motion compensation cannot be ensured. After studying temporal adaptivity, the second part of this thesis is dedicated to understanding the fundamentals of how temporal and spatial geometry can be jointly exploited. This work builds on some previous results that considered the representation of spatial geometry in video (but not temporal, i.e., without motion). In order to obtain flexible and efficient (sparse) signal representations using redundant dictionaries, highly non-linear decomposition algorithms such as Matching Pursuit are required. General signal representation using these techniques is still quite unexplored. For this reason, prior to the study of video representation, some aspects of non-linear decomposition algorithms and the efficient decomposition of images using Matching Pursuit and a geometric dictionary are investigated.
Part of this investigation concerns the influence of using a priori models within non-linear approximation algorithms. Dictionaries with high internal coherence have difficulty producing optimally sparse signal representations when used with Matching Pursuit. It is proved, theoretically and empirically, that inserting a priori models into this algorithm improves its capacity to obtain sparse signal approximations, particularly when coherent dictionaries are used. Another point discussed in this preliminary study on the use of Matching Pursuit concerns the approach used in this work for the decomposition of video frames and images. The technique proposed in this thesis improves on previous work, where the authors had to resort to sub-optimal Matching Pursuit strategies (using Genetic Algorithms) given the size of the function library. In this work the use of full-search strategies is made possible, while approximation efficiency is significantly improved and computational complexity is reduced. Finally, a priori based Matching Pursuit geometric decompositions are investigated for geometric video representations. Regularity constraints are taken into account to recover the temporal evolution of spatial geometric signal components. The results obtained for coding and multi-modal (audio-visual) signal analysis clarify many unknowns and are promising, encouraging further research on the subject.
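    A bare-bones version of the greedy Matching Pursuit decomposition discussed above is sketched below; the toy random dictionary stands in for the geometric, over-complete dictionaries used in the thesis, and all names and sizes are illustrative.

```python
import numpy as np

def matching_pursuit(signal, dictionary, num_atoms):
    """Greedy Matching Pursuit over a (possibly redundant) dictionary.

    dictionary: (n_samples, n_atoms) matrix with unit-norm columns.
    Returns the coefficient vector and the residual after num_atoms picks.
    """
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(num_atoms):
        correlations = dictionary.T @ residual
        k = np.argmax(np.abs(correlations))        # best-matching atom
        coeffs[k] += correlations[k]               # accumulate its coefficient
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual

# Toy redundant dictionary: identity atoms plus a few random unit-norm atoms.
rng = np.random.default_rng(0)
D = np.hstack([np.eye(8), rng.standard_normal((8, 8))])
D /= np.linalg.norm(D, axis=0)
x = 3.0 * D[:, 2] + 0.5 * D[:, 11]
c, r = matching_pursuit(x, D, num_atoms=5)
print(np.linalg.norm(r))   # residual energy shrinks as atoms are added
```

    The a priori models discussed in the abstract would bias the atom-selection step (the argmax above) toward atoms consistent with the model, which is what helps when the dictionary is highly coherent.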

    Recent Advances in Signal Processing

    Get PDF
    Signal processing is a critical task in the majority of new technological inventions and challenges across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand. These five categories address, in order, image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    ECG analysis and classification using CSVM, MSVM and SIMCA classifiers

    Get PDF
    Reliable ECG classification can potentially lead to better detection methods and more accurate diagnosis of arrhythmia, thus improving quality of care. This thesis investigated the use of two novel classification algorithms, CSVM and SIMCA, and assessed their performance in classifying ECG beats. The project aimed to introduce a new way to interactively support patient care in and out of the hospital and to develop new classification algorithms for arrhythmia detection and diagnosis. Wave (P-QRS-T) detection was performed using the WFDB Software Package and multiresolution wavelets. Fourier coefficients and principal components (PCs) were selected as time-frequency features of the ECG signal; these provided the input to the classifiers in the form of DFT and PCA coefficients. ECG beat classification was performed using binary SVM, MSVM, CSVM, and SIMCA; these were subsequently used to simultaneously classify either four or six types of cardiac conditions. Binary SVM classification with 100% accuracy was achieved when applied to ECG signals from well-established databases after feature reduction using PCA. The CSVM algorithm and MSVM were used to classify four ECG beat types: NORMAL, PVC, APC, and FUSION or PFUS; these were from the MIT-BIH arrhythmia database (precordial lead group and limb lead II). Different numbers of Fourier coefficients were considered in order to identify the optimal number of features to be presented to the classifier. SMO was used to compute hyper-plane parameters and threshold values for both MSVM and CSVM during the classifier training phase. The best classification accuracy was achieved using fifty Fourier coefficients. With the new CSVM classifier framework, accuracies of 99%, 100%, 98%, and 99% were obtained using datasets from one, two, three, and four precordial leads, respectively. In addition, using CSVM it was possible to successfully classify four types of ECG beat signals extracted from the limb lead simultaneously with 97% accuracy, a significant improvement on the 83% accuracy achieved using the MSVM classification model. Further analysis was also made of the following four beat types: NORMAL, PVC, SVPB, and FUSION. These signals were obtained from the European ST-T Database. Accuracies between 86% and 94% were obtained for MSVM and CSVM classification, respectively, using 100 Fourier coefficients for reconstructing individual ECG beats. Further analysis presented an effective ECG arrhythmia classification scheme consisting of PCA as a feature reduction method and a SIMCA classifier to differentiate between either four or six different types of arrhythmia. In separate studies, six and four types of beats (including NORMAL, PVC, APC, RBBB, LBBB, and FUSION beats) with time-domain features were extracted from the MIT-BIH arrhythmia database and the St Petersburg INCART 12-lead Arrhythmia Database (incartdb), respectively. Between 10 and 30 PC coefficients were selected for reconstructing individual ECG beats in the feature selection phase. The average classification accuracy of the proposed scheme was 98.61% and 97.78% using the limb lead and precordial lead datasets, respectively. In addition, the MSVM and SIMCA classifiers with four ECG beat types achieved average classification accuracies of 76.83% and 98.33%, respectively. The effectiveness of the proposed algorithms was finally confirmed by successfully classifying both the six-beat and four-beat signal types with high accuracy.
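    The overall feature-extraction and classification pipeline described above (DFT features, PCA reduction, SVM classification) can be sketched as follows. The data here is random placeholder data, the classifier is a standard scikit-learn multi-class SVM rather than the CSVM/MSVM/SIMCA algorithms developed in the thesis, and all parameter values are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: each row is one segmented ECG beat (e.g. 200 samples around
# the R peak); labels mark the beat type (0 = NORMAL, 1 = PVC, 2 = APC, 3 = FUSION).
rng = np.random.default_rng(0)
beats = rng.standard_normal((400, 200))
labels = rng.integers(0, 4, size=400)

# DFT magnitudes as time-frequency features, truncated to the first 50 coefficients
# (the abstract reports fifty Fourier coefficients performing best in its experiments).
features = np.abs(np.fft.rfft(beats, axis=1))[:, :50]

# Feature reduction with PCA followed by a multi-class SVM (one-vs-one internally).
clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
print(cross_val_score(clf, features, labels, cv=5).mean())
```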

    A general scheme for finding the static rate–distortion optimized ordering for the bits of the coefficients of all subbands of an N-level dyadic biorthogonal DWT

    No full text
    The expected distortion decrease per bit attained by conveying the significance and refinement information of transform coefficients is derived, taking into consideration their Probability Distribution Function (PDF) and their entropy. A general scheme for finding the static rate–distortion optimized ordering for the bits of the coefficients of all subbands of a generic N-level dyadic biorthogonal Discrete Wavelet Transform (DWT) is given by weighting the distortion decrease per bit with the gain of each decomposition subband. A specific formulation for the Exponential Power Distribution (EPD) family is given, and closed formulae are derived for the special cases of the Uniform and Laplace distributions. It is shown that under certain circumstances some refinement information of larger-magnitude coefficients should be conveyed before the significance information of the current ones. The results can be applied both by conventional context-modeling entropy coding algorithms and by set-partitioning algorithms that use multiple lists to keep track of significant and refinement coefficients. A very fast set-partitioning compression algorithm (Depth Embedded Block Tree, DEBT), based on variable-depth blocks and trees, has been developed; it uses the results presented here and achieves excellent compression ratios along with many other desired properties, e.g., an embedded rate–distortion optimized lossy or lossless stream, quality or resolution scalability, and region-of-interest coding, among others. Tests show that the rate–distortion curves of most images improve significantly when using the weighting derived from the appropriate Exponential Power Distribution fit, in comparison to assuming an a priori standard Uniform or Laplace Probability Distribution Function, when a non-orthogonal Discrete Wavelet Transform is used.
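    To make the idea of a per-bit distortion gain concrete, the toy sketch below numerically estimates the expected distortion decrease of a significance pass and of a refinement pass under a Laplace magnitude model (one member of the Exponential Power family) and weights it by a subband synthesis gain. The interval choices, the crude bit accounting, and the gain values are placeholder assumptions; the paper's closed-form EPD results are not reproduced here.

```python
import numpy as np
from scipy import integrate

def magnitude_pdf(x, b=1.0):
    """Density of |X| when X is zero-mean Laplace with scale b (an EPD member)."""
    return np.exp(-x / b) / b

def interval_stats(lo, hi, b):
    """Probability, centroid, and centroid-reconstruction distortion contribution
    of magnitudes falling in [lo, hi) (probability-weighted, not conditioned)."""
    p = integrate.quad(lambda x: magnitude_pdf(x, b), lo, hi)[0]
    if p == 0.0:
        return 0.0, 0.0, 0.0
    c = integrate.quad(lambda x: x * magnitude_pdf(x, b), lo, hi)[0] / p
    d = integrate.quad(lambda x: (x - c) ** 2 * magnitude_pdf(x, b), lo, hi)[0]
    return p, c, d

def significance_gain_per_bit(T, b, subband_gain):
    """Distortion drop per coded bit when magnitudes in [T, 2T) move from a zero
    reconstruction to the interval centroid (crudely charging two bits:
    significance + sign)."""
    _, _, d_after = interval_stats(T, 2 * T, b)
    d_before = integrate.quad(lambda x: x ** 2 * magnitude_pdf(x, b), T, 2 * T)[0]
    return subband_gain * (d_before - d_after) / 2.0

def refinement_gain_per_bit(T, b, subband_gain):
    """Distortion drop when one refinement bit halves the uncertainty interval of
    a magnitude already known to lie in [2T, 4T)."""
    _, _, d_before = interval_stats(2 * T, 4 * T, b)
    _, _, d_low = interval_stats(2 * T, 3 * T, b)
    _, _, d_high = interval_stats(3 * T, 4 * T, b)
    return subband_gain * (d_before - (d_low + d_high))

# Compare passes across two hypothetical subbands with different synthesis gains:
# a high-gain low-pass band and a unit-gain high-pass band at the same threshold.
for name, fn, gain in [("LL significance", significance_gain_per_bit, 4.0),
                       ("LL refinement",   refinement_gain_per_bit,   4.0),
                       ("HH significance", significance_gain_per_bit, 1.0)]:
    print(name, round(fn(8.0, 4.0, gain), 3))
```

    Sorting such per-bit gains across subbands and coding passes gives a static ordering of the flavour the paper derives in closed form for the EPD family; depending on the PDF and the subband gains, a refinement pass of a larger-magnitude band can indeed outrank a significance pass, as the abstract notes.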

    Pattern Recognition

    Get PDF
    Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion, applications, and others. The signals processed are commonly one-, two-, or three-dimensional; the processing is done in real time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small amount of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms, and encompasses several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented works. Authors of these 25 works present and advocate recent achievements of their research related to the field of pattern recognition.