
    Novel Color Image Compression Algorithm Based-On Quadtree

    This paper presents a novel algorithm comprising two image processing systems for compressing colour images. The proposed systems divide the colour image into its RGB components, and one component is selected for division. The selected component is divided into blocks using the quadtree method; the other two components are then divided using the same block coordinates as the selected component. In the first system, every block stores three minimum values and three difference values, while in the second system every block stores three minimum values and one average difference. Experiments show that division according to the G component is the best, giving good visual quality of the compressed images with appropriate compression ratios. It is also observed that the performance of the second system is better than that of the first. The compression ratios obtained by the second system range between 1.3379 and 5.0495 at a threshold value of 0.1, and between 2.3476 and 8.9713 at a threshold value of 0.2.
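    As a rough illustration of the block-division step, the Python sketch below recursively splits one colour component with a quadtree and returns block coordinates that could be reused to partition the other two components. The split criterion (block value range against the threshold), the normalised pixel range, and the square power-of-two image size are assumptions made for illustration, not details taken from the paper.

        import numpy as np

        def quadtree_blocks(channel, threshold, min_size=2):
            """Recursively split one colour component into blocks, quadtree style.

            A block becomes a leaf when its value range (pixels normalised to
            [0, 1]) is within `threshold`; otherwise it is split into four
            quadrants. Returns a list of (row, col, size) block coordinates
            that can be reused for the other two RGB components.
            """
            blocks = []

            def split(r, c, size):
                blk = channel[r:r + size, c:c + size]
                if size <= min_size or blk.max() - blk.min() <= threshold:
                    blocks.append((r, c, size))  # leaf: minima/differences stored here
                    return
                half = size // 2
                for dr, dc in ((0, 0), (0, half), (half, 0), (half, half)):
                    split(r + dr, c + dc, half)

            split(0, 0, channel.shape[0])
            return blocks

        # Drive the split from the G component, as the paper finds best.
        rgb = np.random.rand(256, 256, 3)
        coords = quadtree_blocks(rgb[:, :, 1], threshold=0.1)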

    Development of Novel Image Compression Algorithms for Portable Multimedia Applications

    Portable multimedia devices such as digital cameras, mobile devices, personal digital assistants (PDAs), etc. have limited memory, battery life and processing power. Real-time processing and transmission using these devices require image compression algorithms that can compress efficiently with reduced complexity. Owing to limited resources, it is not always possible to implement the best algorithms inside these devices. In uncompressed form, raw image data occupies an unreasonably large space; however, it contains a significant amount of statistical and visual redundancy, so the required storage space can be efficiently reduced by compression. In this thesis, novel low-complexity and embedded image compression algorithms are developed that are especially suitable for low-bit-rate image compression on these devices.

    Despite rapid progress in Internet and multimedia technology, the demand for data storage and transmission bandwidth continues to outstrip the capabilities of available technology. Browsing images over the Internet from image data sets using these devices requires fast encoding and decoding speed with good rate-distortion performance. With the progressive picture build-up of wavelet-coded images, recent multimedia applications demand good-quality images at the early stages of transmission. This is particularly important if the image is browsed over wireless links, where limited channel capacity, storage and computation are the deciding parameters.

    Unfortunately, the performance of the JPEG codec degrades at low bit rates because of the underlying block-based DCT transform. Although wavelet-based codecs provide substantial improvements in progressive picture quality at lower bit rates, these coders do not fully exploit the coding potential at lower bit rates. It is evident from the statistics of transformed images that the number of significant coefficients whose magnitude exceeds the early thresholds is very small. Wavelet-based codecs code a zero for each insignificant subband as they move from the coarsest to the finest subbands. It is also demonstrated that there can be six to seven bit-plane passes in which wavelet coders encode many zeros, as many subbands are likely to be insignificant with respect to the early thresholds. Bits indicating the insignificance of a coefficient or subband are required, but they do not code information that reduces the distortion of the reconstructed image; bit rate is therefore spent without any corresponding reduction in distortion.

    Another problem associated with wavelet-based coders such as Set Partitioning in Hierarchical Trees (SPIHT), Set Partitioning Embedded Block (SPECK) and Wavelet Block-Tree Coding (WBTC) stems from their use of auxiliary lists. The size of the list data structures increases exponentially as more and more elements are added, removed or moved in each bit-plane pass. This increases the dynamic memory requirement of the codec, which is an inefficient feature for hardware implementations. Listless variants of SPIHT and SPECK, e.g. No-List SPIHT (NLS) and Listless SPECK (LSK) respectively, were later developed; however, these algorithms have rate-distortion performance similar to that of the list-based coders. An improved LSK (ILSK) algorithm is proposed in this dissertation that improves the low-bit-rate performance of LSK by encoding a much smaller number of symbols (i.e., zeros) for the insignificant subbands.
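    The observation that few coefficients exceed the early thresholds is easy to check numerically. The sketch below counts how many coefficients are significant against each bit-plane threshold T = 2^n; a heavy-tailed random array stands in for an actual wavelet subband pyramid, an assumption made purely for illustration.

        import numpy as np

        def significance_by_bitplane(coeffs, passes=7):
            """Count significant coefficients (|c| >= 2^n) per bit-plane pass.

            Illustrates why early passes of SPIHT/SPECK-style coders emit
            mostly zeros: almost nothing is significant against the first
            few thresholds.
            """
            mags = np.abs(coeffs).ravel()
            n_max = int(np.floor(np.log2(mags.max())))
            for n in range(n_max, n_max - passes, -1):
                t = 2.0 ** n
                print(f"threshold 2^{n}: {np.count_nonzero(mags >= t)} "
                      f"significant of {mags.size}")

        # Toy stand-in for wavelet coefficients: heavy-tailed Laplacian values.
        rng = np.random.default_rng(0)
        significance_by_bitplane(rng.laplace(scale=4.0, size=(64, 64)))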
    Further, the ILSK is combined with a block-based transform known as the discrete Tchebichef transform (DTT). The proposed new coder is named Hierarchical Listless DTT (HLDTT). The DTT is chosen because its energy-compaction property is similar to that of the discrete cosine transform (DCT). It is demonstrated that images decoded using HLDTT have better visual quality (i.e., mean structural similarity) than images decoded using DCT-based embedded coders at most bit rates. The ILSK algorithm is also combined with a lifting-based wavelet transform to show its superiority over JPEG2000 at lower rates in terms of peak signal-to-noise ratio (PSNR).

    A fully scalable and random-access-decodable listless algorithm is also developed, based on the lifting-based ILSK. The proposed algorithm, named Scalable Listless Embedded Block Partitioning (S-LEBP), generates a bit stream that offers increasing signal-to-noise ratio and spatial resolution. These are very useful features for transmitting images over a heterogeneous network that optimally serves each user according to available bandwidth and computing needs. Random-access decoding is a very useful feature for extracting or manipulating a certain area of an image with minimal decoding work.

    The idea used in ILSK is also extended to encode and decode colour images. The proposed algorithm for coding colour images is named the Colour Listless Embedded Block Partitioning (CLEBP) algorithm. The coding efficiency of CLEBP is compared with Colour SPIHT (CSPIHT) and a colour variant of the WBTC algorithm. Simulation results show that CLEBP exhibits a significant PSNR improvement over the latter two algorithms on various types of images.

    Although many modifications to NLS and LSK have been made, a listless modification of the WBTC algorithm has not been reported in the literature. Therefore, a listless variant of WBTC (named LBTC) is proposed. LBTC not only reduces the memory requirement by 88-89% but also increases the encoding and decoding speed, while preserving the rate-distortion performance. Further, the combinations of the DCT with LBTC (named DCT-LBT) and of the DTT with LBTC (named hierarchical listless DTT, HLBT-DTT) are compared with some state-of-the-art DCT-based embedded coders. The proposed DCT-LBT and HLBT-DTT show significant PSNR improvements over almost all of these embedded coders at most bit rates.

    In some multimedia applications, e.g. digital cameras and camcorders, the images always need to have a fixed, predetermined high quality, so the extra effort required for quality scalability is wasted. Non-embedded algorithms are therefore best suited for these applications. The proposed algorithms can be made non-embedded by encoding a fixed set of bit planes at a time. In addition, a sparse orthogonal transform matrix is proposed that can be integrated into a JPEG baseline coder. The proposed matrix promises a substantial reduction in hardware complexity with a marginal loss of image quality over a considerable range of bit rates compared with the block-based DCT or integer DCT.
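    PSNR is the figure of merit used throughout these comparisons. For reference, a minimal PSNR computation (a generic helper, not code from the thesis) looks like this:

        import numpy as np

        def psnr(original, decoded, peak=255.0):
            """Peak signal-to-noise ratio in dB; infinite for exact reconstruction."""
            err = np.asarray(original, dtype=float) - np.asarray(decoded, dtype=float)
            mse = np.mean(err ** 2)
            return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)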

    Efficient VLSI Architectures for Image Compression Algorithms

    An image, in its original form, contains a huge amount of data, which not only demands a large amount of memory for storage but also makes transmission over a limited-bandwidth channel inconvenient. Image compression reduces the image data in either a lossless or a lossy way. While lossless image compression recovers the original image data completely, it provides only low compression; lossy techniques compress the image data by a variable amount depending on the image quality required by the particular application area. Lossy compression is performed in steps such as image transformation, quantization and entropy coding. JPEG is one of the most widely used image compression standards; it uses the discrete cosine transform (DCT) to transform the image from the spatial to the frequency domain. An image carries little visual information in its high frequencies, so these can be quantized heavily to reduce the size of the transformed representation. Entropy coding follows to further reduce the redundancy in the transformed and quantized image data.

    Real-time data processing requires high speed, which makes dedicated hardware implementation the preferred choice. A hardware design is favored by low-cost and low-power implementation; these two factors are also the most important requirements for battery-powered portable devices such as digital cameras. Image transforms require very heavy computation, and a complete image compression system is realized through various intermediate steps between the transform and the final bit stream. These intermediate stages require memory to store intermediate results. The cost and power of the design can be reduced both by implementing the transforms efficiently and by reducing or removing intermediate stages through different techniques. The proposed research work focuses on efficient hardware implementation of transform-based image compression algorithms by optimizing the architecture of the system.

    Distributed arithmetic (DA) is an efficient approach to implementing digital signal processing algorithms. DA can be realized in two different ways: one stores precomputed values in ROMs, while the other has no ROM requirement; ROM-free DA is the more efficient. For the image transform, architectures for the one-dimensional discrete Hartley transform (1-D DHT) and the one-dimensional DCT (1-D DCT) have been optimized using the ROM-free DA technique. Further, 2-D separable DHT (SDHT) and 2-D DCT architectures have been implemented in the row-column approach using two 1-D DHT and two 1-D DCT units, respectively.

    A finite state machine (FSM) based architecture from the DCT to quantization has been proposed using a modified quantization matrix in JPEG image compression, which requires no memory for storing the quantization table and DCT coefficients. In addition, quantization is realized without multipliers, which require more area and are power hungry. For entropy encoding, Huffman coding is more hardware-efficient than arithmetic coding, and the use of a Huffman code table further simplifies the implementation. Strategies have been applied to significantly reduce the memory bits needed to store the Huffman code table, and the complete Huffman coding architecture encodes the transformed coefficients at one bit per clock cycle.
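    To make the DA idea concrete, the following sketch (a software illustration, not the thesis architecture) computes an inner product bit-serially: a lookup table, playing the role of the ROM, holds the partial coefficient sum for every input bit pattern, so each bit plane costs one lookup plus one shift-accumulate. The ROM-free variant would replace this table with an adder network. Unsigned inputs are assumed to keep two's-complement sign handling out of the way.

        from itertools import product

        def da_inner_product(coeffs, xs, nbits=8):
            """Inner product sum(c_i * x_i) computed bit-serially, DA style.

            xs are unsigned nbits-bit integers. The 'ROM' maps each possible
            bit vector across the inputs to its partial coefficient sum.
            """
            rom = {bits: sum(c for c, b in zip(coeffs, bits) if b)
                   for bits in product((0, 1), repeat=len(coeffs))}
            acc = 0.0
            for plane in range(nbits):
                bits = tuple((x >> plane) & 1 for x in xs)
                acc += rom[bits] * (1 << plane)  # weight this bit plane by 2^plane
            return acc

        # Sanity check against the direct inner product.
        coeffs, xs = [0.25, -0.5, 0.75, 1.0], [17, 3, 250, 96]
        assert abs(da_inner_product(coeffs, xs)
                   - sum(c * x for c, x in zip(coeffs, xs))) < 1e-9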
    A direct implementation of the DCT has the advantage of being free of the transposition memory needed to store intermediate 1-D DCT results. Although recursive algorithms have been a preferred method for direct computation, they have low accuracy, resulting in image quality degradation. A non-recursive equation for the direct computation of DCT coefficients has therefore been proposed and implemented both in a 0.18 µm ASIC library and on an FPGA. It can compute the DCT coefficients in any order, and all intermediate computations are free of fractions; hence very high image quality has been obtained in terms of PSNR. In addition, only one multiplier and one register bit-width need to be changed to increase the accuracy, resulting in very low hardware overhead. The architecture has been implemented to produce zig-zag-ordered DCT coefficients. Comparison results show that this implementation has a smaller area in terms of gate count and lower power consumption than existing DCT implementations. Using this architecture, the complete JPEG image compression system has been implemented, with the Huffman coding module, one multiplier and one register as the only additional modules. The intermediate stages (DCT to Huffman encoding) are free of memory; hence an efficient architecture is obtained.
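    For contrast with the direct approach, the conventional row-column 2-D DCT mentioned above can be sketched in a few lines; it is the intermediate product C·X that forces a transposition buffer in hardware, which the direct computation avoids. This is the generic textbook formulation, not the thesis's architecture.

        import numpy as np

        def dct_matrix(N=8):
            """Orthonormal N-point DCT-II matrix C, so C @ x is the 1-D DCT of x."""
            k = np.arange(N)[:, None]
            n = np.arange(N)[None, :]
            C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
            C[0, :] /= np.sqrt(2.0)
            return C

        def dct_2d_row_column(block):
            """Separable 2-D DCT: 1-D DCT along columns, then along rows."""
            C = dct_matrix(block.shape[0])
            return C @ block @ C.T

        # Round trip on a random 8x8 block; C is orthonormal, so C.T inverts it.
        C = dct_matrix(8)
        blk = np.random.rand(8, 8)
        assert np.allclose(C.T @ dct_2d_row_column(blk) @ C, blk)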

    Efficient architectures for multidimensional discrete transforms in image and video processing applications

    This thesis introduces new image compression algorithms, their related architectures, and architectures for discrete data transforms. The proposed architectures address current hardware design concerns, such as power consumption, hardware usage, memory requirement, computation time and output accuracy; these concerns are crucial in multidimensional image and video processing applications. The research is divided into three image and video processing topics: low-complexity non-transform-based image compression algorithms and their architectures; architectures for the multidimensional Discrete Cosine Transform (DCT); and architectures for the multidimensional Discrete Wavelet Transform (DWT).

    The proposed architectures are parameterised in terms of wordlength, pipelining and input data size. Taking such parameterisation into account, efficient non-transform-based, low-complexity image compression algorithms with better rate-distortion performance are proposed. The proposed algorithms are based on the Adaptive Quantisation Coding (AQC) algorithm, and they achieve a controllable output bit rate and accuracy by considering the intensity variation of each image block. High-speed, low-hardware-usage and low-power architectures for them are also introduced and implemented on Xilinx devices.

    Furthermore, efficient hardware architectures for the multidimensional DCT based on the 1-D DCT Radix-2 and 3-D DCT Vector Radix (3-D DCT VR) fast algorithms are proposed. These architectures attain fast and accurate 3-D DCT computation, providing high processing speed and reduced power consumption. In addition, this research introduces two low-hardware-usage 3-D DCT VR architectures. These architectures perform the computation of the butterfly and post-addition stages without using block memory for data transposition, which in turn reduces the hardware usage and improves the performance of the proposed architectures.

    Moreover, parallel and multiplierless lifting-based architectures for the 1-D, 2-D and 3-D Cohen-Daubechies-Feauveau 9/7 (CDF 9/7) DWT computation are introduced. The presented architectures represent an efficient multiplierless, low-memory CDF 9/7 DWT computation scheme using the separable approach.

    The proposed architectures have been implemented and tested on Xilinx FPGA devices. The evaluation results reveal that a speed of up to 315 MHz can be achieved by the proposed AQC-based architectures, and a speed of up to 330 MHz with a low utilisation of 722 to 1235 by the proposed 3-D DCT VR architectures. In the proposed 3-D DWT architecture, the computation time of the 3-D DWT for a data size of 144×176×8 pixels is less than 0.33 ms, and a power consumption of 102 mW is achieved at a 50 MHz clock frequency with a 256×256-pixel frame size. The accuracy tests for all architectures reveal that an infinite PSNR (i.e., exact reconstruction) can be attained.
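    As a point of reference for the lifting scheme these DWT architectures build on, here is a minimal software sketch of one 1-D CDF 9/7 decomposition level. The lifting coefficients are the standard Daubechies-Sweldens values; the simple edge mirroring and the scaling convention are illustrative assumptions, and a multiplierless hardware version would replace the multiplications with shift-and-add terms.

        import numpy as np

        # Standard CDF 9/7 lifting coefficients (Daubechies-Sweldens factorisation).
        ALPHA, BETA = -1.586134342, -0.052980118
        GAMMA, DELTA = 0.882911076, 0.443506852
        ZETA = 1.149604398  # final scaling factor

        def _predict(d, s, coef):
            # d[i] += coef * (s[i] + s[i+1]), mirroring the last even sample.
            return d + coef * (s + np.append(s[1:], s[-1]))

        def _update(s, d, coef):
            # s[i] += coef * (d[i-1] + d[i]), mirroring the first odd sample.
            return s + coef * (np.insert(d[:-1], 0, d[0]) + d)

        def cdf97_forward(x):
            """One 1-D CDF 9/7 DWT level via lifting; x must have even length."""
            s, d = x[0::2].astype(float), x[1::2].astype(float)
            d = _predict(d, s, ALPHA)   # predict 1
            s = _update(s, d, BETA)     # update 1
            d = _predict(d, s, GAMMA)   # predict 2
            s = _update(s, d, DELTA)    # update 2
            return s * ZETA, d / ZETA   # approximation, detail

        def cdf97_inverse(s, d):
            """Undo the lifting steps in reverse order with negated coefficients."""
            s, d = s / ZETA, d * ZETA
            s = _update(s, d, -DELTA)
            d = _predict(d, s, -GAMMA)
            s = _update(s, d, -BETA)
            d = _predict(d, s, -ALPHA)
            x = np.empty(s.size + d.size)
            x[0::2], x[1::2] = s, d
            return x

        # Perfect reconstruction check, matching the infinite-PSNR result above.
        sig = np.random.rand(144)
        lo, hi = cdf97_forward(sig)
        assert np.allclose(cdf97_inverse(lo, hi), sig)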