
    The JPEG2000 still image coding system: An overview

    With the increasing use of multimedia technologies, image compression requires higher performance as well as new features. To address this need in the specific area of still image encoding, a new standard is currently being developed: JPEG2000. It is intended not only to provide rate-distortion and subjective image quality performance superior to existing standards, but also to provide features and functionalities that current standards either cannot address efficiently or, in many cases, cannot address at all. Lossless and lossy compression, embedded lossy-to-lossless coding, progressive transmission by pixel accuracy and by resolution, robustness to the presence of bit errors, and region-of-interest coding are some representative features. It is interesting to note that JPEG2000 is being designed to address the requirements of a diversity of applications, e.g., Internet, color facsimile, printing, scanning, digital photography, remote sensing, mobile applications, medical imagery, digital libraries, and e-commerce.
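    As a concrete illustration of the embedded lossy-to-lossless coding described above, the following minimal Python sketch encodes an image as JPEG2000 with Pillow, which delegates to the OpenJPEG codec; the file names and the compression-ratio values for the quality layers are illustrative assumptions, not values prescribed by the standard.

    # Minimal sketch: JPEG2000 encoding with Pillow (OpenJPEG backend).
    # File names and layer ratios are illustrative assumptions.
    from PIL import Image

    img = Image.open("input.png")

    # Lossless: irreversible=False selects the reversible integer wavelet,
    # so the image can be reconstructed exactly.
    img.save("lossless.jp2", irreversible=False)

    # Lossy: quality layers at 40:1 and 20:1 compression ratios produce an
    # embedded stream that can be truncated for progressive transmission.
    img.save("lossy.jp2", quality_mode="rates", quality_layers=[40, 20],
             irreversible=True)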

    Implementation of Image Compression Algorithm using Verilog with Area, Power and Timing Constraints

    Image compression is the application of data compression to digital images. A fundamental shift in the image compression approach came after the Discrete Wavelet Transform (DWT) became popular. To overcome the inefficiencies of the JPEG standard and to serve emerging areas of mobile and Internet communications, the new JPEG2000 standard was developed based on the principles of the DWT. An image compression algorithm was first implemented in Matlab code and then modified to perform better when implemented in a hardware description language. Using Verilog HDL, the encoder for DWT-based image compression was implemented. Detailed power, timing, and area analysis was carried out for the Booth multiplier, which forms the major building block in implementing the DWT. The encoding technique exploits the zerotree structure present in the bitplanes to compress the transform coefficients.
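    The zerotree coding mentioned above operates on the subband hierarchy produced by the DWT. The following NumPy sketch (in Python rather than the Matlab/Verilog used in the work) computes one level of a 2D Haar DWT to show the LL/LH/HL/HH subbands whose cross-scale zero structure a zerotree encoder exploits; the Haar filter and the even-sized input are simplifying assumptions.

    import numpy as np

    def haar_dwt2_level(x):
        """One level of a 2D Haar DWT; returns (LL, LH, HL, HH) subbands.
        Assumes even image dimensions."""
        # Row transform: pairwise average (low-pass) and difference (high-pass).
        lo = (x[:, ::2] + x[:, 1::2]) / np.sqrt(2)
        hi = (x[:, ::2] - x[:, 1::2]) / np.sqrt(2)
        # Column transform applied to both row outputs.
        ll = (lo[::2] + lo[1::2]) / np.sqrt(2)
        lh = (lo[::2] - lo[1::2]) / np.sqrt(2)
        hl = (hi[::2] + hi[1::2]) / np.sqrt(2)
        hh = (hi[::2] - hi[1::2]) / np.sqrt(2)
        return ll, lh, hl, hh

    img = np.random.rand(8, 8)  # placeholder for a real image block
    ll, lh, hl, hh = haar_dwt2_level(img)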

    The JPEG2000 still image compression standard

    The development of standards (emerging and established) by the International Organization for Standardization (ISO), the International Telecommunications Union (ITU), and the International Electrotechnical Commission (IEC) for audio, image, and video, for both transmission and storage, has led to worldwide activity in developing hardware and software systems and products applicable to a number of diverse disciplines [7], [22], [23], [55], [56], [73]. Although the standards implicitly address the basic encoding operations, there is freedom and flexibility in the actual design and development of devices. This is because only the syntax and semantics of the bit stream for decoding are specified by the standards, their main objective being compatibility and interoperability among the systems (hardware and software) manufactured by different companies. There is, thus, much room for innovation and ingenuity. Since the mid-1980s, members of both the ITU and the ISO have been working together to establish a joint international standard for the compression of grayscale and color still images. This effort has been known as JPEG, the Joint Photographic Experts Group.

    Visual Quality Assessment and Blur Detection Based on the Transform of Gradient Magnitudes

    Digital imaging and image processing technologies have revolutionized the way in which we capture, store, receive, view, utilize, and share images. In image-based applications, through different processing stages (e.g., acquisition, compression, and transmission), images are subjected to different types of distortions which degrade their visual quality. Image Quality Assessment (IQA) attempts to use computational models to automatically evaluate and estimate image quality in accordance with subjective evaluations. Moreover, with the fast development of computer vision techniques, it is important in practice to extract and understand the information contained in blurred images or regions. The work in this dissertation focuses on reduced-reference visual quality assessment of images and textures, as well as perceptual-based spatially-varying blur detection. A training-free, low-cost Reduced-Reference IQA (RRIQA) method is proposed. The proposed method requires a very small number of reduced-reference (RR) features. Extensive experiments performed on different benchmark databases demonstrate that the proposed RRIQA method delivers highly competitive performance compared with state-of-the-art RRIQA models for both natural and texture images. In the context of texture, the effect of texture granularity on the quality of synthesized textures is studied, and two RR objective visual quality assessment methods that quantify the perceived quality of synthesized textures are proposed. Performance evaluations on two synthesized-texture databases demonstrate that the proposed RR metrics outperform full-reference (FR), no-reference (NR), and RR state-of-the-art quality metrics in predicting the perceived visual quality of the synthesized textures. Last but not least, an effective approach is proposed to address the spatially-varying blur detection problem from a single image without requiring any knowledge about the blur type, level, or camera settings. Evaluations of the proposed approach on diverse sets of blurry images with different blur types, levels, and content demonstrate that the proposed algorithm performs favorably against state-of-the-art methods both qualitatively and quantitatively. (Doctoral dissertation, Electrical Engineering.)
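    The dissertation's approach builds on a transform of gradient magnitudes. As a hedged illustration only (the Sobel operator, the DCT, the feature count k, and the normalization below are assumptions, not the dissertation's actual features), a reduced-reference signature of that flavor might look like this:

    import numpy as np
    from scipy import ndimage
    from scipy.fft import dctn

    def gradient_magnitude_signature(img, k=16):
        """Illustrative RR features: the k largest magnitudes of the DCT of
        the gradient-magnitude map, normalized to unit length. These choices
        are assumptions for illustration, not the dissertation's method."""
        gx = ndimage.sobel(img.astype(float), axis=1)
        gy = ndimage.sobel(img.astype(float), axis=0)
        gm = np.hypot(gx, gy)                     # gradient-magnitude map
        coeffs = np.abs(dctn(gm, norm="ortho"))   # transform of the magnitudes
        top = np.sort(coeffs.ravel())[::-1][:k]   # k-element RR signature
        return top / (np.linalg.norm(top) + 1e-12)

    Quality could then be estimated by comparing the k-element signatures of the reference and distorted images, which keeps the reduced-reference overhead small.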

    Comparison of Sparse Coding and JPEG Coding Schemes for Blurred Retinal Images

    Overcomplete representations are currently one of the most highly researched areas, especially in signal processing, due to their strong potential to generate sparse representations of signals. Sparse representation implies that a given signal can be represented with components that are only rarely significantly active. It has been strongly argued that the mammalian visual system relies on sparse and overcomplete representations. The primary visual cortex has overcomplete responses in representing an input signal, which leads to the use of sparse neuronal activity for further processing. This work investigates sparse coding with an overcomplete basis set representation, which is believed to be the strategy employed by the mammalian visual system for efficient coding of natural images. This work analyzes the Sparse Code Learning algorithm, in which a given image is represented by a linear superposition of sparse, statistically independent events on a set of overcomplete basis functions. The algorithm trains and adapts the overcomplete basis functions so as to represent any given image in terms of sparse structures. The second part of the work analyzes an inhibition-based sparse coding model in which Gabor-based overcomplete representations are used to represent the image. It then applies an iterative inhibition algorithm, based on competition between neighboring transform coefficients, to select a subset of Gabor functions that represents the given image with a sparse set of coefficients. This work applies the developed models to image compression and tests the achievable compression levels. Research in these areas so far shows that sparse coding algorithms are inefficient at representing high-frequency, sharp image features, so this work analyzes the performance of these algorithms only on natural images that lack sharp features and compares the compression results with current industry-standard coding schemes such as JPEG and JPEG 2000. It also models the characteristics of an image falling on the retina after the distortion effects of the eye, applies the developed algorithms to these images, and tests the compression results.
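    For readers who want to experiment with sparse coding over an overcomplete dictionary, the following sketch uses scikit-learn's dictionary learning on image patches. It stands in for, and is not, the Sparse Code Learning or Gabor-inhibition models analyzed in the thesis; the patch size, dictionary size, and sparsity level are assumptions.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d

    # Learn a 2x-overcomplete dictionary on 8x8 patches and encode each
    # patch with a handful of active atoms -- the "rarely significantly
    # active" components described above.
    img = np.random.rand(128, 128)          # placeholder for a natural image
    patches = extract_patches_2d(img, (8, 8), max_patches=2000)
    X = patches.reshape(len(patches), -1)
    X -= X.mean(axis=1, keepdims=True)      # remove each patch's DC component

    dico = MiniBatchDictionaryLearning(
        n_components=128,                   # 128 atoms for 64-dim patches: overcomplete
        transform_algorithm="omp",
        transform_n_nonzero_coefs=5,        # at most 5 active atoms per patch
    )
    codes = dico.fit(X).transform(X)        # (n_patches, 128), mostly zeros
    recon = codes @ dico.components_        # linear superposition of basis functions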

    Hybrid methods for image compression

    Abstract: The storage and transmission of images are at the basis of digital electronic communication. In order to communicate a maximum amount of information in a given period of time, one needs to look for efficient ways to represent the information communicated. Designing optimal representations is the subject of data compression. In this work, the compression methods generally consist of two steps: encoding and decoding. During encoding, one expresses the image with less data than the original and stores that information; during decoding, one decodes the compressed data to produce the decompressed image. In Chapter 1, we review some basic compression methods that are important for understanding the concepts of encoding and information theory as tools to build compression models and measure their efficiency. Further on, we focus on transform methods for compression; in particular, we discuss in detail the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). We also analyze the hybrid method that combines the DCT and DWT to compress image data. For the sake of comparison, we discuss a totally different method, fractal image compression, which compresses image data by taking advantage of the self-similarity of images. We propose a hybrid method of fractal image compression and the DCT based on their characteristics. Several experimental results are provided to show the outcome of the comparison between the discussed methods. This allows us to conclude that the hybrid method performs more efficiently and offers relatively better compressed-image quality than some particular methods, although there are improvements that can be made in the future.
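    A minimal sketch of the DCT+DWT hybrid discussed above, assuming a single-level Haar decomposition, an orthonormal DCT on the approximation subband, and a simple keep-the-largest-coefficients rule; all three choices are illustrative, not the thesis's exact pipeline.

    import numpy as np
    import pywt
    from scipy.fft import dctn, idctn

    def hybrid_encode(img, wavelet="haar", keep=0.1):
        # DWT first, then a DCT on the approximation subband; the smallest
        # DCT coefficients are zeroed. A real codec would also quantize and
        # entropy-code the detail subbands.
        ca, details = pywt.dwt2(img.astype(float), wavelet)
        c = dctn(ca, norm="ortho")
        thresh = np.quantile(np.abs(c), 1 - keep)  # keep the largest 10%
        c[np.abs(c) < thresh] = 0.0
        return c, details

    def hybrid_decode(c, details, wavelet="haar"):
        ca = idctn(c, norm="ortho")
        return pywt.idwt2((ca, details), wavelet)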

    Wavelet-based image compression for mobile applications

    The transmission of digital colour images is rapidly becoming popular on mobile telephones, Personal Digital Assistant (PDA) technology, and other wireless image services. However, transmitting digital colour images via mobile devices is badly affected by low air bandwidth. Advances in communications channels (for example, 3G networks) go some way towards addressing this problem, but the rapid increase in traffic and the demand for ever better quality images mean that effective data compression techniques are essential for transmitting and storing digital images. The main objective of this thesis is to offer a novel image compression technique that can help to overcome the bandwidth problem. This thesis has investigated and implemented three different wavelet-based compression schemes, with a focus on a suitable compression method for mobile applications. The first algorithm is a dual-wavelet compression algorithm, a modified conventional wavelet compression method that uses different wavelet filters to decompose the luminance and chrominance components separately; different levels of decomposition can also be applied to each component. The second algorithm is segmented wavelet-based: it segments an image into its smooth and non-smooth parts, and different wavelet filters are then applied to the segmented parts. Finally, the third algorithm is the hybrid wavelet-based compression system (HWCS), in which the subject of interest is cropped and compressed using a wavelet-based method, while the background is reduced by averaging and sent separately from the compressed subject of interest. The final image is reconstructed by replacing the averaged background pixels with the compressed cropped image. For each algorithm, the experimental results presented in this thesis clearly demonstrate that encoder output can be effectively reduced while maintaining acceptable visual quality, particularly when compared to a conventional wavelet-based compression scheme.
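    The dual-wavelet scheme can be sketched with PyWavelets as follows, assuming a YCbCr input and illustrative filter/level choices ("db4" at four levels for luminance, "haar" at two levels for chrominance) together with a crude coefficient-culling step; none of these are the thesis's actual settings.

    import numpy as np
    import pywt

    def dual_wavelet_compress(ycbcr, keep=0.05):
        """Decompose luminance and chrominance with different wavelets and
        depths, zero the smallest coefficients, and reconstruct. Filter and
        level choices are illustrative assumptions."""
        out = []
        for i, ch in enumerate(np.moveaxis(ycbcr.astype(float), -1, 0)):
            wavelet, levels = ("db4", 4) if i == 0 else ("haar", 2)
            coeffs = pywt.wavedec2(ch, wavelet, level=levels)
            arr, slices = pywt.coeffs_to_array(coeffs)
            thresh = np.quantile(np.abs(arr), 1 - keep)
            arr[np.abs(arr) < thresh] = 0.0  # crude coefficient culling
            coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
            rec = pywt.waverec2(coeffs, wavelet)
            out.append(rec[:ycbcr.shape[0], :ycbcr.shape[1]])  # crop wavelet padding
        return np.stack(out, axis=-1)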