
    Research on noise reduction for non-separable wavelet transforms using the lifting structure

    Nagaoka University of Technology

    Discrete Wavelet Transforms

    Discrete wavelet transform (DWT) algorithms have a firm position in signal processing across several areas of research and industry. Because the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is increasingly used to solve more and more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in discrete wavelet transform algorithms and applications. The book covers a wide range of methods (e.g. lifting, shift invariance, multi-scale analysis) for constructing DWTs. The book chapters are organized into four major parts. Part I describes progress in hardware implementations of the DWT algorithms; applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach for edge detection, low-bit-rate image compression, a low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters of the present book consist of both tutorial and highly advanced material. Therefore, the book is intended as a reference text for graduate students and researchers seeking state-of-the-art knowledge on specific applications.
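    Since lifting is one of the construction methods the book emphasises, a minimal sketch may help fix ideas: one level of the Haar DWT written as lifting steps (split, predict, update) in Python with NumPy. The function names and the unnormalised predict/update pair are illustrative assumptions, not taken from the book.

        import numpy as np

        def haar_lifting_forward(x):
            """One level of the Haar DWT via lifting: split, predict, update."""
            x = np.asarray(x, dtype=float)
            even, odd = x[0::2], x[1::2]        # split into polyphase components
            detail = odd - even                 # predict odd samples from even ones
            approx = even + 0.5 * detail        # update so approx keeps the local mean
            return approx, detail

        def haar_lifting_inverse(approx, detail):
            """Undo the lifting steps in reverse order (perfect reconstruction)."""
            even = approx - 0.5 * detail
            odd = detail + even
            x = np.empty(even.size + odd.size)
            x[0::2], x[1::2] = even, odd        # merge polyphase components
            return x

        signal = np.array([2.0, 4.0, 6.0, 8.0, 5.0, 3.0, 1.0, 7.0])
        a, d = haar_lifting_forward(signal)
        assert np.allclose(haar_lifting_inverse(a, d), signal)

    Any lifting-factored filter bank inverts the same way, by running the steps backwards with the signs flipped, which is part of what makes lifting attractive for VLSI and for integer (lossless) transforms.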

    A 2D DWT architecture suitable for the Embedded Zerotree Wavelet Algorithm

    Digital imaging has had an enormous impact on industrial applications such as the Internet and video-phone systems, and demand for such applications continues to grow; Internet users in particular are growing at a near-exponential rate. The sharp increase in applications using digital images has placed great emphasis on the fields of image coding, storage, processing and communications. Image coding is a field of particular commercial interest: a digital image requires a large amount of data, which causes many problems when storing, transmitting or processing the image. The main objective of image coding is therefore to reduce the amount of data needed to represent an image, and new techniques are continuously developed to increase efficiency. The JPEG image coding standard has enjoyed widespread acceptance, and the industry continues to explore its various implementation issues. However, recent research indicates that multiresolution-based image coding is a far superior alternative. A recent development in the field is the use of the Embedded Zerotree Wavelet (EZW) technique to achieve image compression. One of the aims of this thesis is to explain how this technique is superior to other current coding standards. An essential part of this method of image coding is the use of multiresolution analysis, a subband system whereby the subbands are logarithmically spaced in frequency and represent an octave-band decomposition. The block structure that implements this function is termed the two-dimensional Discrete Wavelet Transform (2D-DWT). The 2D DWT can be realised by several architectures, and these are analysed in order to choose the architecture best suited to the EZW coder. Finally, this architecture is implemented and verified using the Synopsys Behavioural Compiler, and recommendations are made based on experimental findings.
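    To make the octave-band decomposition concrete, here is a minimal sketch of one 2D-DWT level built from separable orthonormal Haar filters in Python/NumPy: rows are filtered first, then columns, yielding the LL, LH, HL and HH subbands, and recursing on LL produces the logarithmically spaced pyramid an EZW coder consumes. The helper names are illustrative, and a practical coder would use longer biorthogonal filters with proper boundary handling.

        import numpy as np

        def haar_1d(x, axis):
            """One orthonormal Haar analysis step along the given axis."""
            even = x.take(np.arange(0, x.shape[axis], 2), axis)
            odd = x.take(np.arange(1, x.shape[axis], 2), axis)
            return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

        def dwt2_level(img):
            """One 2-D DWT level: filter rows, then columns."""
            lo, hi = haar_1d(img, axis=1)       # horizontal pass
            ll, lh = haar_1d(lo, axis=0)        # vertical pass on the lowpass half
            hl, hh = haar_1d(hi, axis=0)        # vertical pass on the highpass half
            return ll, (lh, hl, hh)

        img = np.random.rand(64, 64)
        ll, details = dwt2_level(img)
        ll2, details2 = dwt2_level(ll)          # next octave: recurse on LL
        print(ll.shape, ll2.shape)              # (32, 32) (16, 16)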

    Efficient reconfigurable architectures for 3D medical image compression

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Recently, the more widespread use of three-dimensional (3-D) imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US), has generated a massive amount of volumetric data. These have provided an impetus to the development of other applications, in particular telemedicine and teleradiology. In these fields, medical image compression is important since both efficient storage and transmission of data through high-bandwidth digital communication lines are of crucial importance. Despite their advantages, most 3-D medical imaging algorithms are computationally intensive, with matrix transformation as the most fundamental operation involved in the transform-based methods. There is therefore a real need for high-performance systems, whilst keeping architectures flexible to allow for quick upgradeability with real-time applications. Moreover, in order to obtain efficient solutions for large volumes of medical data, an efficient implementation of these operations is of significant importance. Reconfigurable hardware, in the form of field programmable gate arrays (FPGAs), has been proposed as a viable system building block in the construction of high-performance systems at an economical price. Consequently, FPGAs seem an ideal candidate, offering inherent advantages such as massive parallelism capabilities, multimillion gate counts, and special low-power packages. The key achievements of the work presented in this thesis are summarised as follows. Two architectures for the 3-D Haar wavelet transform (HWT) have been proposed, based on transpose-based computation and partial reconfiguration, suitable for 3-D medical imaging applications. These applications require continuous hardware servicing, and as a result dynamic partial reconfiguration (DPR) has been introduced. A comparative study of both non-partial and partial reconfiguration implementations has shown that DPR offers many advantages and leads to a compelling solution for implementing computationally intensive applications such as 3-D medical image compression. Using DPR, several large systems are mapped to small hardware resources, and the area, power consumption and maximum frequency are optimised and improved. Moreover, an FPGA-based architecture of the finite Radon transform (FRAT) with three design strategies has been proposed: direct implementation of pseudo-code with a sequential or pipelined description, and a block random access memory (BRAM)-based method. An analysis with various medical imaging modalities has been carried out. Results obtained for an image de-noising implementation using FRAT exhibit promising performance in reducing Gaussian white noise in medical images. In terms of hardware implementation, promising trade-offs between maximum frequency, throughput and area are also achieved. Furthermore, a novel hardware implementation of a 3-D medical image compression system with context-based adaptive variable length coding (CAVLC) has been proposed. An evaluation of the 3-D integer transform (IT) and the discrete wavelet transform (DWT) with lifting scheme (LS) for the transform blocks reveals that the 3-D IT demonstrates better computational complexity than the 3-D DWT, whilst the 3-D DWT with LS provides lossless compression that is significantly useful for medical image compression. Additionally, an architecture of CAVLC that is capable of compressing high-definition (HD) images in real time, without any buffer between the quantiser and the entropy coder, is proposed. Through a judicious parallelisation, promising results have been obtained with limited resources. In summary, this research tackles the issues of massive volumes of 3-D medical data that require compression, as well as hardware implementation to accelerate the slowest operations in the system. Results obtained also reveal a significant achievement in terms of architecture efficiency and application performance. This work was supported by the Ministry of Higher Education Malaysia (MOHE), Universiti Tun Hussein Onn Malaysia (UTHM) and the British Council.
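    The 3-D HWT at the heart of the first contribution can be sketched in a few lines: the same 1-D Haar step is applied along each of the three axes in turn, which is exactly the ordering a transpose-based architecture serialises in hardware. This NumPy sketch is an illustrative software model only and implies nothing about the FPGA mapping.

        import numpy as np

        def haar_axis(vol, axis):
            """One orthonormal 1-D Haar step along one axis of a volume."""
            v = np.moveaxis(vol, axis, 0)
            lo = (v[0::2] + v[1::2]) / np.sqrt(2)
            hi = (v[0::2] - v[1::2]) / np.sqrt(2)
            return np.moveaxis(np.concatenate([lo, hi]), 0, axis)

        def haar3d_level(vol):
            """One 3-D HWT level: transform along x, then y, then z."""
            for axis in range(3):
                vol = haar_axis(vol, axis)
            return vol                          # eight subbands packed into octants

        volume = np.random.rand(32, 32, 32)     # stand-in for an MRI sub-block
        coeffs = haar3d_level(volume)
        print(coeffs.shape)                     # (32, 32, 32); LLL sits in one octant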

    Cyclostationary error analysis and filter properties in a 3D wavelet coding framework

    The reconstruction error due to quantization of wavelet subbands can be modeled as a cyclostationary process because of the linear periodically shift-variant property of the inverse wavelet transform. For N-dimensional data, N-dimensional reconstruction-error power cyclostationary patterns replicate on the data sample lattice. For audio and image coding applications this fact is of little practical interest, since the decoded data is perceived as a whole: the error power oscillations on single data elements cannot be seen or heard, and a global PSNR error measure is often used to represent the reconstruction quality. The situation is different for the coding of 3-D data (static volumes or video sequences), where decoded data are usually visualized by plane sections and the reconstruction error power is commonly measured by a PSNR[n] sequence, with n representing either a spatial slicing plane (for volumetric data) or the temporal reference frame (for video). In this case, the cyclostationary oscillations on single data elements lead to a global PSNR[n] oscillation, and this effect may become a relevant concern. In this paper we study and describe the above phenomena and evaluate their relevance in concrete coding applications. Our analysis is carried out entirely in the original signal domain and can easily be extended to more than three dimensions. We associate the oscillation pattern with the wavelet filter properties in a polyphase framework, and we show that a substantial reduction of the oscillation amplitudes can be achieved under a proper selection of the basis functions. Our quantitative model is initially developed under high-resolution conditions and then qualitatively extended to all coding rates for the wide family of bit-plane quantization-based coding techniques. Finally, we experimentally validate the proposed models and perform a subjective evaluation of the visual relevance of the PSNR[n] fluctuations in the cases of medical volume and video coding.
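    To make the PSNR[n] measure concrete, here is a minimal sketch of computing it slice by slice along one axis of a decoded volume (or frame by frame for video) in Python/NumPy; the added Gaussian noise is only a stand-in for real quantization error.

        import numpy as np

        def psnr_per_slice(original, decoded, peak=255.0, axis=0):
            """PSNR[n] along one slicing axis (spatial plane or temporal frame)."""
            err = (original.astype(float) - decoded.astype(float)) ** 2
            # average squared error over every axis except the slicing one
            mse = err.mean(axis=tuple(i for i in range(err.ndim) if i != axis))
            return 10.0 * np.log10(peak ** 2 / mse)

        rng = np.random.default_rng(0)
        vol = rng.integers(0, 256, size=(16, 64, 64)).astype(float)
        decoded = vol + rng.normal(0.0, 2.0, size=vol.shape)
        print(psnr_per_slice(vol, decoded))     # one PSNR value per slice n

    Plotting this sequence for a wavelet-coded volume is where the cyclostationary oscillation the paper analyses becomes visible.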

    Evaluation of transform based image coders, using different transforms and techniques in the transform domain

    This paper addresses the most relevant aspects of lossy image coding techniques and presents an evaluation study on this subject, using several transforms and different methods in the transform domain. We developed different transform-based image coders/decoders (codecs) using different transforms, such as the discrete cosine transform, the discrete wavelet transform and the S transform. Besides JPEG Baseline, we also use other techniques and methods in the transform domain, such as a DWT-based JPEG-like codec (JPEG DWT), a JPEG DWT with visual threshold (JPEG-VT), a JPEG-like coder based on the ST, and an EZW coder. The codecs were programmed in MATLAB™, using custom and built-in functions. The structures of the codecs are presented, as are some experimental results which allow us to evaluate them and support this study.
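    As a concrete example of the transform-domain step these codecs share, here is a minimal sketch of an 8x8 block DCT with uniform quantisation and reconstruction, written in Python with SciPy (the paper's own codecs were programmed in MATLAB™). The flat quantiser step q is a placeholder assumption, not the JPEG luminance table.

        import numpy as np
        from scipy.fft import dctn, idctn

        def code_block(block, q=16.0):
            """Forward 8x8 DCT-II, uniform quantisation, then reconstruction."""
            coeffs = dctn(block, norm='ortho')
            quantised = np.round(coeffs / q)    # the lossy step: integer levels
            return idctn(quantised * q, norm='ortho')

        rng = np.random.default_rng(1)
        block = rng.integers(0, 256, size=(8, 8)).astype(float)
        rec = code_block(block)
        print(np.abs(block - rec).max())        # error bounded by the quantiser step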

    Optimum Implementation of Compound Compression of a Computer Screen for Real-Time Transmission in Low Network Bandwidth Environments

    Remote working is becoming increasingly prevalent. A large part of remote working involves sharing computer screens between servers and clients. The image content presented when sharing computer screens consists of both natural camera-captured image data and computer-generated graphics and text, and the attributes of these two kinds of data differ greatly. An image containing a mixture of both is known as a compound image. The research presented in this thesis focuses on the challenge of constructing a compound compression strategy that applies the 'best fit' compression algorithm to the mixed content found in a compound image. The research also involves analysis and classification of the types of data a given compound image may contain. While researching optimal types of compression, consideration is given to the computational overhead of a given algorithm, because the research is being developed for real-time systems such as cloud computing services, where latency has a detrimental impact on end-user experience. Previous and current state-of-the-art video codecs have been researched, along with many of the most recent publications from academia, in order to design and implement a novel low-complexity compound compression algorithm suitable for real-time transmission. The compound compression algorithm utilises a mixture of lossless and lossy compression algorithms, with parameters that can be used to control the performance of the algorithm. An objective image quality assessment is needed to determine whether the proposed algorithm can produce an acceptable-quality image after processing. Both a traditional metric, Peak Signal to Noise Ratio, and a more modern approach designed with compound images in mind, the Structural Similarity Index, are used to define the quality of the decompressed image. Finally, the compression strategy is tested on a set of generated compound images. Using open-source software, the same images are compressed with the previous and current state-of-the-art video codecs to compare the three main metrics: compression ratio, computational complexity and objective image quality.
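    A minimal sketch of the two objective quality metrics named above: PSNR computed directly with NumPy, and SSIM via scikit-image's structural_similarity (assuming scikit-image is available in the environment). The noisy image is only a stand-in for an actual decompressed compound image.

        import numpy as np
        from skimage.metrics import structural_similarity

        def psnr(original, decoded, peak=255.0):
            """Peak Signal to Noise Ratio in dB."""
            mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
            return 10.0 * np.log10(peak ** 2 / mse)

        rng = np.random.default_rng(2)
        img = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
        dec = np.clip(img + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
        print(psnr(img, dec))
        print(structural_similarity(img, dec, data_range=255))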