6 research outputs found

    On the design of fast and efficient wavelet image coders with reduced memory usage

    Image compression is of great importance in multimedia systems and applications because it drastically reduces bandwidth requirements for transmission and memory requirements for storage. Although earlier standards for image compression were based on the Discrete Cosine Transform (DCT), a more recently developed mathematical technique, the Discrete Wavelet Transform (DWT), has been found to be more efficient for image coding. Despite their improved compression efficiency, wavelet image coders significantly increase memory usage and complexity compared with DCT-based coders. A major reason for the high memory requirements is that the usual algorithm for computing the wavelet transform requires the entire image to be in memory. Although some proposals reduce memory usage, they present problems that hinder their implementation. In addition, some wavelet image coders, like SPIHT (which has become a benchmark for wavelet coding), always need to hold the entire image in memory. Regarding coder complexity, SPIHT can be considered quite complex because it performs bit-plane coding with multiple image scans. The wavelet-based JPEG 2000 standard is more complex still, because it improves coding efficiency through time-consuming methods, such as an iterative optimization algorithm based on the Lagrange multiplier method, and high-order context modeling. In this thesis, we aim to reduce memory usage and complexity in wavelet-based image coding while preserving compression efficiency. To this end, a run-length encoder and a tree-based wavelet encoder are proposed. In addition, a new algorithm to efficiently compute the wavelet transform is presented. This algorithm achieves low memory consumption by using line-by-line processing, and it employs recursion to automatically establish the order in which the wavelet transform is computed, solving some synchronization problems that have not been tackled by previous proposals. The proposed encode…
    Oliver Gil, JS. (2006). On the design of fast and efficient wavelet image coders with reduced memory usage [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1826
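
    To make the line-by-line idea concrete, here is a minimal Python sketch (not the thesis code) of the LeGall 5/3 integer lifting transform applied to a single image row; a line-based implementation runs this horizontally on each row as it arrives and buffers only the few rows needed for the vertical pass, instead of holding the whole image. The function name and symmetric boundary handling are illustrative assumptions.

        def lifting_53(line):
            """One level of the 1-D LeGall 5/3 integer lifting transform,
            with whole-sample symmetric extension at the borders (a sketch)."""
            s = list(line)
            n = len(s)
            # Predict step: odd samples become high-pass (detail) coefficients.
            for i in range(1, n, 2):
                left = s[i - 1]
                right = s[i + 1] if i + 1 < n else s[i - 1]  # mirror at the edge
                s[i] -= (left + right) // 2
            # Update step: even samples become low-pass (approximation) coefficients.
            for i in range(0, n, 2):
                left = s[i - 1] if i > 0 else s[i + 1]       # mirror at the edge
                right = s[i + 1] if i + 1 < n else s[i - 1]
                s[i] += (left + right + 2) // 4
            return s[0::2], s[1::2]  # (approximation, detail)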

    Motion Scalability for Video Coding with Flexible Spatio-Temporal Decompositions

    PhD thesis. The research presented in this thesis aims to extend the scalability range of wavelet-based video coding systems in order to achieve fully scalable coding with a wide range of available decoding points. Since the temporal redundancy regularly comprises the main portion of the overall redundancy of a video sequence, techniques that can be broadly termed motion decorrelation techniques play a central role in the overall compression performance. For this reason, scalable motion modelling and coding are of utmost importance, and this thesis identifies and analyses possible solutions. The main contributions of the presented research are grouped into two interrelated and complementary topics. Firstly, a flexible motion model with a rate-optimised estimation technique is introduced. The proposed motion model is based on tree structures and provides the high adaptability needed for layered motion coding. The flexible structure for motion compensation allows optimisation at different stages of the adaptive spatio-temporal decomposition, which is crucial for scalable coding that targets decoding at different resolutions. By utilising an adaptive choice of wavelet filterbank, the model enables high compression based on efficient mode selection. Secondly, solutions for scalable motion modelling and coding are developed. These solutions are based on precision limiting of motion vectors and on the creation of a layered motion structure that describes hierarchically coded motion. The solution based on precision limiting relies on layered bit-plane coding of motion vector values. The second solution builds on recently established techniques that impose scalability on a motion structure. The new approach is based on two major improvements: the evaluation of distortion in temporal subbands, and a motion search in temporal subbands that finds the optimal motion vectors for the layered motion structure. Exhaustive tests of the rate-distortion performance in demanding scalable video coding scenarios show the benefits of applying both the developed flexible motion model and the proposed solutions for scalable motion coding.
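
    The bit-plane coding of motion vector values lends itself to a very small sketch. The Python fragment below is an illustration under assumed conventions, not the thesis's codec (signs are ignored and no entropy coding is applied): sending magnitudes most-significant plane first creates layers, so truncating the received layers yields motion vectors at reduced precision.

        NUM_PLANES = 8  # assumed motion-vector magnitude range [0, 255]

        def bitplane_layers(mv_magnitudes):
            """Split non-negative magnitudes into bit-plane layers, MSB first."""
            return [[(v >> p) & 1 for v in mv_magnitudes]
                    for p in range(NUM_PLANES - 1, -1, -1)]

        def reconstruct(received_layers):
            """Rebuild magnitudes from however many layers were actually decoded;
            missing (truncated) planes simply stay zero, i.e. coarser precision."""
            values = [0] * len(received_layers[0])
            for k, layer in enumerate(received_layers):
                p = NUM_PLANES - 1 - k
                for i, bit in enumerate(layer):
                    values[i] |= bit << p
            return values

    Decoding only the first few layers reproduces each magnitude rounded down to a multiple of a power of two, which is exactly a precision-limited version of the motion field.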

    Nonlinear transform coding with lossless polar coordinates

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 51-52).
    In conventional transform coding, the importance of preserving desirable quantization partition cell shapes prevents one from considering a nonlinear change of variables. If no linear transformation of a given source would yield independent components, this means having to encode the source at a rate higher than its entropy, i.e. suboptimally. This thesis proposes a new transform coding technique in which the source samples are first uniformly scalar quantized and then transformed with an integer-to-integer approximation to a nonlinear transformation that would give independent components. In particular, we design a family of integer-to-integer approximations to the Cartesian-to-polar transformation and analyze its behavior for high-rate transform coding. Among the benefits of such an approach is the ability to achieve redundancy reduction beyond decorrelation, without limitation to orthogonal linear transformations of the original variables. A high-resolution analysis is given, and for source models inspired by a sensor network application and by image compression, simulations show improvements over conventional transform coding. A comparison to state-of-the-art entropy-coded polar quantization techniques is also provided.
    by Demba Elimane Ba.
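
    As a rough illustration of the pipeline described above (uniform scalar quantization followed by a change to polar coordinates), consider the Python sketch below. It is not the thesis's integer-to-integer mapping: plain rounding of the radius and angle is not invertible, whereas the thesis designs lossless approximations; the step size and number of angle bins are arbitrary assumptions.

        import math

        def quantize_then_polar(x, y, step=0.1, angle_bins=256):
            """Uniformly quantize a sample pair, then index it in polar coordinates."""
            xq = round(x / step)                    # uniform scalar quantization
            yq = round(y / step)
            r = math.hypot(xq, yq)                  # Cartesian-to-polar change of variables
            theta = math.atan2(yq, xq) % (2 * math.pi)
            r_idx = round(r)                        # radius index (lossy here, lossless in the thesis)
            t_idx = round(theta * angle_bins / (2 * math.pi)) % angle_bins
            return r_idx, t_idx                     # indices to be entropy-coded separately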

    Investigating Polynomial Fitting Schemes for Image Compression

    Image compression is a means of transmitting or storing visual data in the most economical way. Though many algorithms have been reported, research is still needed to cope with the continuous demand for more efficient transmission and storage. This research work explores and implements polynomial fitting techniques as a means of performing block-based lossy image compression. In an attempt to investigate non-polynomial models, a region-based scheme is implemented that fits the whole image using bell-shaped functions. The idea is simply to view an image as a 3D geographical map consisting of hills and valleys. However, the scheme suffers from high computational demands and is inferior to many available image compression schemes. Hence, only polynomial models receive further consideration. A first-order polynomial (plane) model is designed to work in a multiplication- and division-free (MDF) environment. The intensity values of each image block are fitted to a plane, and the parameters are then quantized and coded. Blocking artefacts, a common drawback of block-based image compression techniques, are reduced using an MDF line-fitting scheme at block boundaries. It is shown that a compression ratio of 62:1 at 28.8 dB is attainable for the standard image PEPPER, outperforming JPEG both objectively and subjectively in this part of the rate-distortion characteristics. Inter-block prediction can substantially improve the compression performance of the plane model, reaching a compression ratio of 112:1 at 27.9 dB. This improvement, however, slightly increases computational complexity and reduces pipelining capability. Although JPEG 2000 is not a block-based scheme, it is encouraging that the proposed prediction scheme compares favourably to JPEG 2000, both computationally and qualitatively; however, more experiments are needed for a more concrete comparison. To reduce blocking artefacts, a new postprocessing scheme based on Weber's law is employed. It is reported that images postprocessed using this scheme are subjectively more pleasing, with a marginal increase in PSNR (<0.3 dB). Weber's law is also modified to perform edge detection and quality assessment tasks. These results motivate the exploration of higher-order polynomials, using three parameters to maintain comparable compression performance. To investigate the impact of higher-order polynomials through an approximate asymptotic behaviour, a novel linear mapping scheme is designed. Though computationally demanding, the performance of higher-order polynomial approximation schemes is comparable to that of the plane model. This clearly demonstrates the powerful approximation capability of the plane model. As such, the proposed linear mapping scheme constitutes a new approach to image modelling, and hence is worth future consideration.
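
    The plane model is simple enough to state in a few lines. The sketch below is a plain least-squares version in Python (the thesis's variant is multiplication- and division-free, which this sketch is not); for each block, only the three fitted parameters are quantized and coded. Centring the block coordinates makes the three estimates decouple.

        def fit_plane(block):
            """Least-squares fit of z = a*(i - i0) + b*(j - j0) + c to a block
            (a 2-D list of intensities, at least 2 x 2)."""
            rows, cols = len(block), len(block[0])
            i0, j0 = (rows - 1) / 2.0, (cols - 1) / 2.0         # block centre
            c = sum(sum(row) for row in block) / (rows * cols)  # mean intensity
            sii = cols * sum((i - i0) ** 2 for i in range(rows))
            sjj = rows * sum((j - j0) ** 2 for j in range(cols))
            a = sum((i - i0) * block[i][j]
                    for i in range(rows) for j in range(cols)) / sii
            b = sum((j - j0) * block[i][j]
                    for i in range(rows) for j in range(cols)) / sjj
            return a, b, c   # the three parameters to quantize and code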

    Solutions to non-stationary problems in wavelet space.


    Nonlinear approximation with redundant multi-component dictionaries

    The problem of efficiently representing and approximating digital data is an open challenge of paramount importance for many applications. This dissertation focuses on the approximation of natural signals as an organized combination of mutually connected elements, preserving and at the same time benefiting from their inherent structure. This is done by decomposing a signal onto a multi-component, redundant collection of functions (a dictionary), built as the union of several subdictionaries, each of which is designed to capture a specific behavior of the signal. In this way, many alternatives become available beyond representing signals as a superposition of sinusoids or wavelets. In addition, since the dictionaries we are interested in are overcomplete, the decomposition is non-unique. This gives us the possibility of adaptation: choosing, among the many possible representations, the one that best fits our purposes. On the other hand, it also requires more complex approximation techniques whose theoretical decomposition capacity and computational load have to be carefully studied. In general, we aim to represent a signal with few, meaningful components. If we are able to represent a piece of information using only a few elements, those elements capture its main characteristics, allowing the energy carried by the signal to be compacted into the smallest number of terms. In this framework, this work also proposes analysis methods that take into account the a priori information available when decomposing a structured signal. Indeed, a natural signal is not only an array of numbers, but the expression of a physical event about which we usually have deep knowledge. Therefore, we claim that it is worth exploiting its structure, since doing so can be advantageous not only in helping the analysis process, but also in making the representation of the information more accessible and meaningful. The study of adaptive image representation inspired and gave birth to this work, and we often refer to images and visual information throughout the dissertation. However, the proposed approximation setting extends to many different kinds of structured data, and examples are given involving videos and electrocardiogram signals. An important part of this work is constituted by practical applications: first, we provide promising results for image and video compression; then we address the problem of signal denoising; and finally, encouraging achievements in the field of source separation are presented.
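
    Decompositions over such redundant dictionaries are typically computed greedily. The Python sketch below shows generic matching pursuit over a dictionary formed as the union of two sub-dictionaries; the random sub-dictionaries and their sizes are stand-ins for illustration only (the dissertation builds structured sub-dictionaries, each tuned to a distinct signal behavior).

        import numpy as np

        def matching_pursuit(signal, dictionary, n_terms):
            """Greedy sparse approximation: repeatedly subtract the projection
            onto the single atom best correlated with the current residual."""
            atoms = dictionary / np.linalg.norm(dictionary, axis=0)  # unit-norm columns
            residual = signal.astype(float).copy()
            approx = np.zeros_like(residual)
            for _ in range(n_terms):
                corr = atoms.T @ residual            # correlation with every atom
                k = int(np.argmax(np.abs(corr)))     # best-matching atom
                approx += corr[k] * atoms[:, k]
                residual -= corr[k] * atoms[:, k]
            return approx, residual

        # Example: a two-component dictionary built as a union of sub-dictionaries.
        rng = np.random.default_rng(0)
        D = np.hstack([rng.standard_normal((64, 128)),   # sub-dictionary 1 (stand-in)
                       rng.standard_normal((64, 128))])  # sub-dictionary 2 (stand-in)
        approx, residual = matching_pursuit(rng.standard_normal(64), D, n_terms=10)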