
    Wavelet Based Image Coding Schemes: A Recent Survey

    A variety of new and powerful algorithms have been developed for image compression over the years. Among them, wavelet-based image compression schemes have gained much popularity due to their overlapping nature, which reduces the blocking artifacts common in JPEG compression, and their multiresolution character, which leads to superior energy compaction and high-quality reconstructed images. This paper provides a detailed survey of some of the popular wavelet coding techniques, such as Embedded Zerotree Wavelet (EZW) coding, Set Partitioning in Hierarchical Trees (SPIHT) coding, the Set Partitioned Embedded Block (SPECK) coder, and the Embedded Block Coding with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding techniques, such as the Wavelet Difference Reduction (WDR) and Adaptive Scanned Wavelet Difference Reduction (ASWDR) algorithms, the Space-Frequency Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image Coder (EPWIC), Compression with Reversible Embedded Wavelets (CREW), Stack-Run (SR) coding, and the recent Geometric Wavelet (GW) coding, are also discussed. Based on the review, recommendations and discussion are presented for algorithm development and implementation. Comment: 18 pages, 7 figures, journal
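The energy-compaction property that this survey credits to wavelet transforms can be illustrated with a minimal sketch (not taken from the paper): one level of the 2D Haar transform applied to a smooth image concentrates almost all of the signal energy in the low-pass (LL) subband, which is precisely what zerotree-style coders such as EZW and SPIHT exploit.

```python
import numpy as np

def haar2d(block):
    """One level of an orthonormal 2D Haar wavelet transform."""
    h = (block[:, 0::2] + block[:, 1::2]) / np.sqrt(2)   # row lowpass
    g = (block[:, 0::2] - block[:, 1::2]) / np.sqrt(2)   # row highpass
    ll = (h[0::2, :] + h[1::2, :]) / np.sqrt(2)          # approximation
    lh = (h[0::2, :] - h[1::2, :]) / np.sqrt(2)          # horizontal detail
    hl = (g[0::2, :] + g[1::2, :]) / np.sqrt(2)          # vertical detail
    hh = (g[0::2, :] - g[1::2, :]) / np.sqrt(2)          # diagonal detail
    return ll, lh, hl, hh

# A smooth 8x8 "image": a horizontal ramp.
img = np.tile(np.arange(8, dtype=float), (8, 1))
ll, lh, hl, hh = haar2d(img)

total = (img ** 2).sum()
ll_energy = (ll ** 2).sum()
print(f"fraction of energy in LL subband: {ll_energy / total:.3f}")
```

Because the transform is orthonormal, the subband energies sum to the image energy; for smooth content the LL share is close to 1, so the detail subbands can be coded very cheaply.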

    Problem-based learning (PBL) awareness among academic staff in Universiti Tun Hussein Onn Malaysia (UTHM)

    The present study was conducted to determine whether the academic staff in UTHM were aware of Problem-based Learning (PBL) as an instructional approach. It was significant to identify whether the academic staff in Universiti Tun Hussein Onn Malaysia (UTHM) had knowledge about PBL, and crucial to know whether they were aware of PBL as a method of teaching their courses, as this could give feedback to the university on the use of PBL among academic staff and on measures to be taken to help improve their teaching experience. A workshop could also be designed if the academic staff in UTHM were interested in knowing more about PBL and how it could be used in the classroom. The objective of this study was to identify the awareness of PBL among academic staff in UTHM. The study was conducted via a quantitative method using a questionnaire adapted from the Awareness Questionnaire (AQ); 100 respondents were involved. The findings indicated that awareness of PBL among UTHM academic staff was moderate. It is hoped that more exposure can be provided, as PBL is seen as a promising approach in the learning process. In conclusion, the academic staff in UTHM have a moderate level of knowledge about PBL as a teaching methodology.

    Wavelet-Based Embedded Rate Scalable Still Image Coders: A Review

    Embedded scalable image coding algorithms based on the wavelet transform have received considerable attention lately in academia and in industry, in terms of both coding algorithms and standards activity. In addition to providing very good coding performance, an embedded coder has the property that the bit stream can be truncated at any point and still decode to a reasonably good image. In this paper we present some state-of-the-art wavelet-based embedded rate scalable still image coders. In addition, the JPEG2000 still image compression standard is presented.
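The truncation property described above can be illustrated with a toy bit-plane coder (an illustrative sketch, not any of the reviewed coders): coefficient magnitudes are emitted most-significant-plane first, so any prefix of the stream decodes to a progressively refined approximation.

```python
def bitplane_encode(coeffs, n_planes=8):
    """Emit integer magnitudes one bit-plane at a time, most
    significant plane first -- the essence of embedded coding."""
    stream = []
    for p in range(n_planes - 1, -1, -1):
        for c in coeffs:
            stream.append((abs(int(c)) >> p) & 1)
    return stream

def bitplane_decode(stream, n_coeffs, n_planes=8):
    """Rebuild magnitudes from ANY prefix of the stream; missing
    low-order planes simply mean coarser quantization."""
    vals = [0] * n_coeffs
    for i, bit in enumerate(stream):
        plane = n_planes - 1 - i // n_coeffs
        vals[i % n_coeffs] |= bit << plane
    return vals

coeffs = [200, 100, 50, 25, 12, 6, 3, 1]
full = bitplane_encode(coeffs)
# Truncate to the first 3 bit-planes (24 of 64 bits):
approx = bitplane_decode(full[:24], len(coeffs))
print(approx)  # -> [192, 96, 32, 0, 0, 0, 0, 0]
```

Decoding the whole stream recovers the coefficients exactly; decoding a prefix yields the same values quantized to the planes received so far.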

    Survey of Hybrid Image Compression Techniques

    Compression reduces the size of data while maintaining the quality of the information contained therein. This paper presents a survey of research papers discussing improvements to various hybrid compression techniques during the last decade. A hybrid compression technique combines the best properties of each group of methods, as is done in the JPEG compression method: it combines lossy and lossless compression to obtain a high compression ratio while maintaining the quality of the reconstructed image. Lossy compression produces a relatively high compression ratio, whereas lossless compression brings about high-quality data reconstruction, as the data can later be decompressed with the same results as before compression. A discussion of the state of, and issues surrounding, ongoing hybrid compression development indicates the possibility of further research to improve the performance of image compression methods.
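The lossy-then-lossless pipeline described above can be sketched in a few lines (an illustrative toy, not JPEG itself): a uniform quantizer supplies the lossy stage and zlib's DEFLATE supplies the lossless stage, so the quantization step bounds the reconstruction error while DEFLATE removes the remaining statistical redundancy. The `step` value and the gradient test image are assumptions for the demo.

```python
import zlib
import numpy as np

def hybrid_compress(img, step=16):
    """Lossy stage: uniform quantization. Lossless stage: DEFLATE."""
    q = (img.astype(np.int32) // step).astype(np.uint8)
    return zlib.compress(q.tobytes()), img.shape

def hybrid_decompress(payload, shape, step=16):
    """Invert the lossless stage, then dequantize to bin mid-points."""
    q = np.frombuffer(zlib.decompress(payload), dtype=np.uint8).reshape(shape)
    return q.astype(np.int32) * step + step // 2

# A smooth 64x64 gradient image (values 0..126).
img = np.add.outer(np.arange(64), np.arange(64)).astype(np.uint8)
payload, shape = hybrid_compress(img)
rec = hybrid_decompress(payload, shape)
print(len(payload), "bytes for", img.size, "pixels")
```

The error of each reconstructed pixel is bounded by `step / 2`, while smooth content compresses to a small fraction of its raw size.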

    Image Compression Techniques: A Survey in Lossless and Lossy algorithms

    The bandwidth of communication networks has increased continuously as a result of technological advances. However, the introduction of new services and the expansion of existing ones have resulted in even higher demand for bandwidth. This explains the many efforts currently being invested in the area of data compression. The primary goal of these works is to develop techniques for coding information sources such as speech, image, and video to reduce the number of bits required to represent a source without significantly degrading its quality. With the large increase in the generation of digital image data, there has been a correspondingly large increase in research activity in the field of image compression. The goal is to represent an image in the fewest number of bits without losing the essential information content within it. Images carry three main types of information: redundant, irrelevant, and useful. Redundant information is the deterministic part of the information, which can be reproduced without loss from other information contained in the image. Irrelevant information is the part that contains enormous detail beyond the limit of perceptual significance (i.e., psychovisual redundancy). Useful information, on the other hand, is the part that is neither redundant nor irrelevant. Humans usually observe decompressed images; therefore, image fidelity is subject to the capabilities and limitations of the human visual system. This paper provides a survey of various image compression techniques, their limitations, and compression rates, and highlights current research in medical image compression.
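The notion of redundant information being "reproducible from other information in the image" can be made concrete with a small sketch (illustrative, not from the survey): a smooth ramp signal has high zeroth-order entropy, but after simple predictive (delta) coding the residual's entropy collapses, and that collapse is exactly the redundancy a lossless coder removes.

```python
import numpy as np

def empirical_entropy(values):
    """Shannon entropy (bits/symbol) of the observed value distribution."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# A smooth ramp: each sample is almost fully predictable from its neighbour.
signal = np.arange(256, dtype=np.int16) // 4      # 0,0,0,0,1,1,1,1,...
residual = np.diff(signal)                        # predictive (delta) coding
print(empirical_entropy(signal), empirical_entropy(residual))
```

The raw signal needs 6 bits/symbol (64 equiprobable values), while the delta residual, taking only the values 0 and 1, needs under 1 bit/symbol.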

    Map online system using internet-based image catalogue

    Digital maps carry geodata such as coordinates, which are important in topographic and thematic maps. These geodata are especially meaningful in the military field. Because maps carry this information, the image files are very large; the bigger the size, the more storage is required for the image file, and loading times grow longer. These conditions make such images unsuitable for an image-catalogue approach in an Internet environment. With compression techniques, the image size can be reduced while the quality of the image is preserved without much change. This report focuses on one image compression technique based on wavelet technology, which compares favourably with other current image compression techniques. The compressed images were applied to a system called Map Online, which uses an Internet-based image catalogue approach. The system allows users to buy maps online and to download the maps they have bought, in addition to searching for maps using several meaningful keywords. This system is expected to be used by Jabatan Ukur dan Pemetaan Malaysia (JUPEM) to help realize the organization's vision.

    A State Table SPIHT Approach for Modified Curvelet-based Medical Image Compression

    Medical imaging plays a significant role in clinical practice. Storing and transferring a large volume of images can be complex and inefficient. This paper presents the development of a new compression technique that combines the fast discrete curvelet transform (FDCvT) with a state table set partitioning in hierarchical trees (STS) encoding scheme. The curvelet transform is an extension of the wavelet transform that represents data based on scale and position. Initially, the medical image is decomposed using the FDCvT algorithm. The FDCvT algorithm creates symmetrical values for the detail coefficients, and these coefficients are modified to improve the efficiency of the algorithm. The curvelet coefficients are then encoded using the STS and differential pulse-code modulation (DPCM). The greatest amount of energy is contained in the coarse coefficients, which are encoded using the DPCM method; the finest and modified detail coefficients are encoded using the STS method. A variety of medical modalities, including computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI), are used to verify the performance of the proposed technique. Various quality metrics, including peak signal-to-noise ratio (PSNR), compression ratio (CR), and structural similarity index (SSIM), are used to evaluate the compression results. Additionally, the computation times of the encoding (ET) and decoding (DT) processes are measured. The experimental results showed that the PET image yielded higher PSNR and CR values. The CT image provides high quality for the reconstructed image, with an SSIM value of 0.96 and the fastest ET of 0.13 seconds; the MRI image has the shortest DT, at 0.23 seconds.
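The quality metrics used in this evaluation are standard; as a sketch (the formulas below are the usual textbook definitions, not code from the paper), PSNR and compression ratio can be computed as follows.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    a = original.astype(np.float64)
    b = reconstructed.astype(np.float64)
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    """Raw size divided by compressed size (higher is better)."""
    return original_bytes / compressed_bytes

a = np.full((8, 8), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                        # one pixel off by 10
print(round(psnr(a, b), 2), "dB")
```

SSIM, the third metric in the paper, needs local windowed statistics and is typically taken from a library such as scikit-image rather than reimplemented.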

    Prioritizing Content of Interest in Multimedia Data Compression

    Image and video compression techniques make data transmission and storage in digital multimedia systems more efficient and feasible given the system's limited storage and bandwidth. Many generic image and video compression techniques, such as JPEG and H.264/AVC, have been standardized and are now widely adopted. Despite their great success, we observe that these standard compression techniques are not the best solution for data compression in special types of multimedia systems such as microscopy videos and low-power wireless broadcast systems. In these application-specific systems, where the content of interest in the multimedia data is known and well-defined, we should rethink the design of the data compression pipeline. We hypothesize that by identifying and prioritizing the multimedia data's content of interest, new compression methods can be invented that are far more effective than standard techniques. In this dissertation, a set of new data compression methods based on the idea of prioritizing the content of interest is proposed for three different kinds of multimedia systems. I show that the key to designing efficient compression techniques in these three cases is to prioritize the content of interest in the data, whose definition depends on the application. First, I show that for microscopy videos, the content of interest is defined as the spatial regions of the video frame whose pixels contain more than just noise; keeping the data in those regions at high quality and discarding other information yields a novel microscopy video compression technique. Second, I show that for a Bluetooth low energy beacon based system, practical multimedia data storage and transmission is possible by prioritizing the content of interest: I designed custom image compression techniques that preserve edges in a binary image, or foreground regions of a color image of indoor or outdoor objects. Last, I present a new indoor Bluetooth low energy beacon based augmented reality system that integrates a 3D moving object compression method that prioritizes the content of interest. Doctor of Philosophy
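The microscopy idea of keeping only regions whose pixels contain more than just noise can be sketched as a block-wise mask (an illustrative toy, not the dissertation's method): blocks whose variation stays within an assumed, calibrated sensor-noise level are discarded before compression. The `noise_sigma` value and the 3-sigma threshold are assumptions for the demo.

```python
import numpy as np

def mask_content_of_interest(frame, block=8, noise_sigma=2.0):
    """Zero out blocks whose variation is indistinguishable from
    sensor noise, keeping full quality only where content lives."""
    out = np.zeros_like(frame)
    h, w = frame.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = frame[y:y+block, x:x+block]
            if tile.std() > 3 * noise_sigma:   # above the noise floor
                out[y:y+block, x:x+block] = tile
    return out

rng = np.random.default_rng(1)
frame = rng.normal(0.0, 2.0, (32, 32))          # pure sensor noise
frame[8:16, 8:16] += np.linspace(0, 80, 8)      # a textured bright structure
kept = mask_content_of_interest(frame)
```

The large zeroed background then compresses almost for free under any downstream codec, while the structured block is preserved untouched.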