58 research outputs found

    Image Compression by Wavelet Transform.

    Digital images are widely used in computer applications. Uncompressed digital images require considerable storage capacity and transmission bandwidth. Efficient image compression solutions are becoming more critical with the recent growth of data-intensive, multimedia-based web applications. This thesis studies image compression with wavelet transforms. As necessary background, the basic concepts of graphical image storage and currently used compression algorithms are discussed. The mathematical properties of several types of wavelets, including Haar, Daubechies, and biorthogonal spline wavelets, are covered, and the Embedded Zerotree Wavelet (EZW) coding algorithm is introduced. The last part of the thesis analyzes the compression results to compare the wavelet types.
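
    As a rough illustration of the transform this work builds on, the following is a minimal single-level 2-D Haar analysis step in NumPy; the function name and the energy-preserving normalization are illustrative assumptions, not taken from the thesis.

    import numpy as np

    def haar2d_level(img):
        """One level of the separable 2-D Haar transform.

        Splits an even-sized grayscale image into the four standard
        subbands: LL (approximation) and LH, HL, HH (details).
        """
        x = img.astype(np.float64)
        # Rows: sum and difference of adjacent pixel pairs.
        lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
        hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
        # Columns: repeat the split on both row outputs.
        ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
        lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
        hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
        hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
        return ll, lh, hl, hh

    Recursing on the LL subband yields the multiresolution pyramid that coders such as EZW then scan from coarse to fine.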

    Self-similarity and wavelet forms for the compression of still image and video data

    This thesis is concerned with methods used to reduce the data volume required to represent still images and video sequences. The number of disparate still image and video coding methods increases almost daily. Recently, two new strategies have emerged and stimulated widespread research: the fractal method and the wavelet transform. In this thesis, it will be argued that the two methods share a common principle, that of self-similarity, and the two will be related concretely via an image coding algorithm which combines the two normally disparate strategies. The wavelet transform is an orientation-selective transform. It will be shown that the selectivity of the conventional transform is not sufficient to allow exploitation of self-similarity while keeping computational cost low. To address this, a new wavelet transform is presented which allows greater orientation selectivity while maintaining the orthogonality and data volume of the conventional wavelet transform. Many designs for vector quantizers have been published recently, and another is added to the gamut by this work. The tree-structured vector quantizer presented here is on-line and self-structuring, requiring no distinct training phase. Combining these into a still image data compression system produces results which are among the best published to date. An extension of the two-dimensional wavelet transform to encompass the time dimension is straightforward, and this work attempts to extrapolate some of its properties into three dimensions. The vector quantizer is then applied to three-dimensional image data to produce a video coding system which, while not optimal, produces very encouraging results.
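
    The core vector-quantization step that such a system relies on can be sketched briefly. This is a minimal flat-codebook encoder with an online competitive update, an assumed simplification for illustration rather than the thesis's self-structuring tree design.

    import numpy as np

    def vq_encode_online(blocks, codebook, lr=0.05):
        """Quantize image blocks against a codebook, nudging the
        winning codeword toward each input (competitive update).

        blocks:   (N, d) array of flattened image blocks
        codebook: (K, d) float array of codewords, updated in place
        Returns the list of chosen codeword indices.
        """
        indices = []
        for v in blocks:
            # Nearest codeword in squared Euclidean distance.
            d2 = np.sum((codebook - v) ** 2, axis=1)
            k = int(np.argmin(d2))
            indices.append(k)
            # Online update: no separate training phase is needed.
            codebook[k] += lr * (v - codebook[k])
        return indices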

    Problem-based learning (PBL) awareness among academic staff in Universiti Tun Hussein Onn Malaysia (UTHM)

    The present study was conducted to determine whether the academic staff in Universiti Tun Hussein Onn Malaysia (UTHM) were aware of Problem-based Learning (PBL) as an instructional approach. It was significant to identify whether the academic staff had knowledge of PBL, and crucial to know whether they were aware of PBL as a method for teaching their courses, as this could give the university feedback on the use of PBL among academic staff and on measures to be taken to help improve their teaching experience. A workshop could also be designed if the academic staff in UTHM were interested in knowing more about PBL and how it could be used in the classroom. The objective of this study was to identify the awareness of PBL among academic staff in UTHM. The study was conducted via a quantitative method using a questionnaire adapted from the Awareness Questionnaire (AQ). 100 respondents were involved. The findings indicated that awareness of PBL among UTHM academic staff was moderate. It is hoped that more exposure will be provided, as PBL is seen as a promising approach in the learning process. In conclusion, the academic staff in UTHM have a moderate level of knowledge about PBL as a teaching methodology.

    Context-based compression algorithms for text and image data.

    Wong Ling. Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. Includes bibliographical references (leaves 80-85). Contents:
    Abstract
    Chapter 1. Introduction: Motivation; Original Contributions; Thesis Structure
    Chapter 2. Background: Information Theory; Early Compression (Some Source Codes: Huffman Code, Tunstall Code, Arithmetic Code); Modern Techniques for Compression (Statistical Modeling: Context Modeling, State-Based Modeling; Dictionary-Based Compression: LZ-Compression; Other Compression Techniques: Block Sorting, Context Tree Weighting)
    Chapter 3. Symbol Remapping: Reviews on Block Sorting (Forward Transformation; Inverse Transformation); Ordering Method; Discussions
    Chapter 4. Content Prediction: Prediction and Ranking Schemes (Content Predictor; Ranking Technique); Reviews on Context Sorting (Context Sorting Basis); General Framework of Content Prediction (A Baseline Version; Context Length Merge); Discussions
    Chapter 5. Bounded-Length Block Sorting: Block Sorting with Bounded Context Length (Forward Transformation; Reverse Transformation); Locally Adaptive Entropy Coding; Discussion
    Chapter 6. Context Coding for Image Data: Digital Images (Redundancy); Model of a Compression System (Representation; Quantization; Lossless Coding); The Embedded Zerotree Wavelet Coding (Simple Zerotree-like Implementation; Analysis of Zerotree Coding: Linkage between Coefficients, Design of Uniform Threshold Quantizer with Dead Zone); Extensions on Wavelet Coding (Coefficients Scanning); Discussions
    Chapter 7. Conclusions: Future Research
    Appendix: A. Lossless Compression Results; B. Image Compression Standards; C. Human Visual System Characteristics; D. Lossy Compression Results
    Compression Gallery: Context-based Wavelet Coding; RD-OPT-based JPEG Compression; SPIHT Wavelet Compression
    References
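
    The block-sorting (Burrows-Wheeler) forward transform that Chapters 3 and 5 build on can be sketched in a few lines. This naive rotation sort is for exposition only, not the thesis's bounded-length variant.

    def bwt_forward(s: str):
        """Naive Burrows-Wheeler (block sorting) forward transform.

        Sorts all rotations of the input block and returns the last
        column plus the row index of the original string, which the
        inverse transform needs to undo the permutation.
        """
        n = len(s)
        rotations = sorted(s[i:] + s[:i] for i in range(n))
        last_column = "".join(rot[-1] for rot in rotations)
        return last_column, rotations.index(s)

    # Example: bwt_forward("banana") returns ("nnbaaa", 3); the grouped
    # runs of equal symbols are what a locally adaptive entropy coder
    # (Chapter 5) then exploits.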

    Hardware Acceleration of the Embedded Zerotree Wavelet Algorithm

    The goal of this project was to gain experience in designing and implementing a microelectronic system to accelerate the execution of a time-consuming software algorithm, the Embedded Zerotree Wavelet (EZW), which is used in multimedia applications. The algorithm was first implemented in MATLAB to be certain it was fully understood and to serve as a validation reference. It was then mapped into a hardware description language, VHDL, and the resulting implementation was verified against the golden reference. The hardware description was then targeted to a field-programmable gate array (FPGA). Significant acceleration was achieved: the hardware implementation on an FPGA (Xilinx Virtex-1000E using an 8.315 MHz clock) ran 10,000 times faster than the MATLAB implementation on a SUN-220 workstation. Additional speedup exploiting the parallel capabilities of the FPGA was not achieved, since the EZW algorithm uses only sequential operations.
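
    To make the algorithm's sequential character concrete, here is a hedged sketch of how one EZW dominant pass classifies a coefficient against the current threshold; the symbol names follow Shapiro's original formulation, while the data layout is an illustrative assumption rather than the project's MATLAB or VHDL.

    def ezw_symbol(coeff, threshold, descendants):
        """Classify one wavelet coefficient in an EZW dominant pass.

        descendants: all coefficients in this coefficient's subtree
        of the wavelet pyramid. Returns one of the four EZW symbols.
        """
        if abs(coeff) >= threshold:
            return "POS" if coeff > 0 else "NEG"
        if all(abs(d) < threshold for d in descendants):
            return "ZTR"  # zerotree root: whole subtree insignificant
        return "IZ"       # isolated zero: some descendant is significant

    # Successive passes halve the threshold and revisit coefficients in
    # a fixed scan order; it is this strict sequential dependence that
    # left little room for further parallel speedup on the FPGA.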

    A review on region of interest-based hybrid medical image compression algorithms

    Digital medical images have become a vital resource that supports decision-making and treatment procedures in healthcare facilities. Medical images consume large amounts of memory, and their size keeps growing with advances in medical imaging technology. Telemedicine encourages medical practitioners to share medical images to support knowledge sharing when diagnosing and analysing an image. The healthcare system needs to distribute medical images accurately, quickly, and securely, with zero loss of information, and image compression is beneficial in achieving this goal. Region of interest-based hybrid medical compression algorithms play a part in reducing the image size and shortening the medical image compression process. Various studies have made enhancements by combining numerous techniques to get an ideal result. This paper reviews previous work on region of interest-based hybrid medical image compression algorithms.
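
    A hybrid ROI scheme of the kind surveyed typically separates the image with a binary mask and routes the two layers to different coders. The sketch below is a generic illustration, with the mask handling and coder choices assumed rather than drawn from any reviewed paper.

    import numpy as np

    def split_roi(image, mask):
        """Separate an image into ROI and background layers.

        mask: boolean array, True where diagnostically important
        tissue lies. The ROI layer would be routed to a lossless
        coder and the background to an aggressive lossy coder.
        """
        roi = np.where(mask, image, 0)         # lossless path
        background = np.where(mask, 0, image)  # lossy path
        return roi, background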

    Combined Industry, Space and Earth Science Data Compression Workshop

    The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah, in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements and the constraints imposed by the data collection, transmission, distribution, and archival systems.

    High-performance compression of visual information - A tutorial review - Part I : Still Pictures

    Digital images have become an important source of information in the modern world of communication systems. In their raw form, digital images require a tremendous amount of memory. Many research efforts have been devoted to the problem of image compression over the last two decades. Two different compression categories must be distinguished: lossless and lossy. Lossless compression is achieved if no distortion is introduced in the coded image. Applications requiring this type of compression include medical imaging and satellite photography. For applications such as video telephony or multimedia, some loss of information is usually tolerated in exchange for a high compression ratio. In this two-part paper, the major building blocks of image coding schemes are overviewed. Part I covers still image coding, and Part II covers motion picture sequences. In this first part, still image coding schemes are classified into predictive, block transform, and multiresolution approaches. Predictive methods are suited to lossless and low-compression applications. Transform-based coding schemes achieve higher compression ratios for lossy compression but suffer from blocking artifacts at high compression ratios. Multiresolution approaches are suited to lossy as well as lossless compression; at high compression ratios in lossy mode, the typical artifact visible in reconstructed images is the ringing effect. New applications in a multimedia environment have driven the need for new functionalities in image coding schemes. For that purpose, second-generation coding techniques segment the image into semantically meaningful parts, and parts of these methods have been adapted to work for arbitrarily shaped regions. To add another functionality, such as progressive transmission of the information, specific quantization algorithms must be defined. A final step in the compression scheme is the codeword assignment. Finally, coding results are presented which compare state-of-the-art techniques for lossy and lossless compression. The different artifacts of each technique are highlighted and discussed, and the possibility of progressive transmission is illustrated.
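
    As a concrete instance of the predictive category described above, a one-line DPCM predictor illustrates why such methods suit lossless coding. This left-neighbour scheme is a textbook example, not one of the paper's benchmarked coders.

    import numpy as np

    def predictive_residuals(img):
        """Left-neighbour DPCM: predict each pixel from the pixel to
        its left and keep only the prediction error.

        The residuals cluster sharply around zero, so an entropy coder
        (Huffman or arithmetic) stores them in far fewer bits than the
        raw pixels; the mapping is exactly invertible, hence lossless.
        """
        x = img.astype(np.int32)
        res = x.copy()
        res[:, 1:] = x[:, 1:] - x[:, :-1]  # first column kept verbatim
        return res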

    Custom Lossless Compression and High-Quality Lossy Compression of White Blood Cell Microscopy Images for Display and Machine Learning Applications

    This master's thesis investigates both custom lossless compression and high-quality lossy compression of microscopy images of white blood cells produced by CellaVision's blood analysis systems. A number of different compression strategies were developed and evaluated, all of which take advantage of the specific color filter array used in the sensor of the cameras in the analysis systems. Lossless compression has been the main focus of this thesis. Of the lossless methods developed, the one that gave the best results is based on a statistical autoregressive model. A model is constructed for each color channel with external information from the other color channels, and the difference between the statistical model's predictions and the original is then Huffman coded. The method achieves an average bit-rate of 3.0409 bits per pixel on the test set consisting of 604 images. The proposed lossy method is based on taking the difference between the original image and the image compressed with an ordinary lossy compression method, JPEG 2000. The JPEG 2000 image is saved, as well as the differences at the foreground (i.e. locations with cells), in order to keep the cells identical to those in the original image while allowing loss of information for the less important background. This method achieves a bit-rate of 2.4451 bits per pixel, with a peak signal-to-noise ratio (PSNR) of 48.05 dB.
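
    The two figures of merit quoted above follow from straightforward arithmetic. This small sketch (variable names are illustrative) shows how bit-rate in bits per pixel and PSNR are computed for an 8-bit image.

    import numpy as np

    def bits_per_pixel(compressed_bytes, width, height):
        """Average bit-rate of a compressed image in bits per pixel."""
        return 8.0 * compressed_bytes / (width * height)

    def psnr(original, reconstructed, peak=255.0):
        """Peak signal-to-noise ratio in dB for 8-bit images."""
        diff = original.astype(np.float64) - reconstructed.astype(np.float64)
        mse = np.mean(diff ** 2)  # assumes the images actually differ
        return 10.0 * np.log10(peak ** 2 / mse)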