318 research outputs found

    Performance evaluation of lossless medical and natural continuous tone image compression algorithms


    Secure and Privacy-preserving Data Sharing in the Cloud based on Lossless Image Coding

    Image and video processing in the encrypted domain has recently emerged as a promising research area for tackling privacy-related data processing issues. In particular, reversible data hiding in the encrypted domain has been suggested as a solution for storing and managing digital images securely in the cloud while preserving their confidentiality. However, although reversible data hiding in encrypted images (RDHEI) has been claimed to be efficient, reported results show that the cloud service provider cannot add more than 1 bit per pixel (bpp) of additional data to manage stored images. This paper highlights this weakness of RDHEI as an approach to secure and privacy-preserving cloud computing. In particular, we propose a new, simple, and efficient approach that offers the same level of data security and confidentiality in the cloud without reversible data hiding. The idea is to compress the image with a lossless image coder in order to create space before encryption. This space is then filled with a randomly generated sequence and combined with an encrypted version of the compressed bit stream to form a full-resolution encrypted image in the pixel domain. The cloud service provider uses the room created in the encrypted image to add additional data, producing an encrypted image containing additional data in a similar fashion. Assessed with the lossless Embedded Block Coding with Optimized Truncation (EBCOT) algorithm on natural images, the proposed scheme exceeds a capacity of 3 bpp of additional data while maintaining data security and confidentiality.
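The compress-then-create-room step described in this abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `zlib` stands in for the lossless EBCOT coder, and a simple XOR key stream stands in for the encryption step.

```python
import os
import zlib

def create_room_then_encrypt(pixels: bytes, key_stream: bytes) -> bytes:
    """Sketch of the scheme: losslessly compress the image, encrypt the
    compressed bit stream, and pad with random bytes so the result keeps
    the original full resolution. The padding is the "room" the cloud
    provider can later overwrite with its own management data."""
    compressed = zlib.compress(pixels, level=9)  # stand-in for lossless EBCOT
    assert len(compressed) < len(pixels), "image must be compressible to create room"
    # Stand-in stream cipher: XOR with a key stream (illustrative only).
    encrypted = bytes(b ^ k for b, k in zip(compressed, key_stream))
    filler = os.urandom(len(pixels) - len(compressed))  # random-looking room
    return encrypted + filler  # full-resolution encrypted "image"
```

The created room is simply `len(pixels) - len(compressed)` bytes, so the achievable embedding capacity in bpp is governed by the lossless coder's compression ratio, which matches the abstract's argument.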

    Image representation and compression via sparse solutions of systems of linear equations

    We are interested in finding sparse solutions to systems of linear equations Ax = b, where A is underdetermined and full-rank. In this thesis we examine an implementation of the orthogonal matching pursuit (OMP) algorithm, an algorithm for finding sparse solutions to equations like the one above, and present a logic for its validation together with the corresponding validation protocol results. The implementation presented in this work improves on the performance reported in previously published work that used software from SparseLab. We also use and test OMP in the study of the compression properties of A in the context of image processing. We follow the common technique of image blocking used in the JPEG and JPEG 2000 standards. We make a small modification to the stopping criterion of OMP that yields a better compression-ratio-versus-image-quality trade-off as measured by the structural similarity (SSIM) and mean structural similarity (MSSIM) indices, which capture perceptual image quality. This results in slightly better compression than when using the more common peak signal-to-noise ratio (PSNR). We study various matrices whose column vectors come from the concatenation of waveforms based on the discrete cosine transform (DCT) and the Haar wavelet. We try multiple linearization algorithms and characterize their performance with respect to compression. An introduction and brief historical review of information theory, quantization and coding, and rate-distortion theory leads us to compute the distortion D of the image compression and representation approach presented in this work. The choice of a lossless encoder γ is left open for future work in order to obtain the complete characterization of the rate-distortion properties of the quantization/coding scheme proposed here.
    However, the analysis of natural image statistics is identified as a good design guideline for the eventual choice of γ. The lossless encoder γ is to be understood in the sense of a quantizer (α, γ, β) as introduced by Gray and Neuhoff.
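The OMP algorithm this thesis builds on can be sketched in a few lines of NumPy. This is a generic textbook version, not the thesis implementation: it assumes unit-norm columns in A, and it uses a plain residual-norm stopping criterion rather than the modified one the thesis proposes.

```python
import numpy as np

def omp(A: np.ndarray, b: np.ndarray, max_atoms: int, tol: float = 1e-6) -> np.ndarray:
    """Greedy sparse approximation of Ax = b: repeatedly pick the column of A
    most correlated with the current residual, then re-fit b by least squares
    on all columns selected so far."""
    x = np.zeros(A.shape[1])
    residual = b.astype(float).copy()
    support = []  # indices of selected columns (atoms)
    for _ in range(max_atoms):
        if np.linalg.norm(residual) <= tol:  # stopping criterion (the thesis modifies this)
            break
        j = int(np.argmax(np.abs(A.T @ residual)))  # assumes unit-norm columns
        if j in support:
            break  # no new atom improves the fit
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
        x[support] = coef  # keep the current sparse estimate
    return x
```

In the compression setting described above, each image block would supply one b, and the dictionary A would be the DCT/Haar concatenation; here a trivial identity dictionary suffices to exercise the logic.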

    Robust density modelling using the student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as a base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly reduce recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM that uses mixtures of t-distributions as observation probabilities and show, through experiments on two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
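The robustness argument in this abstract comes down to the tails of the two densities: the Gaussian log-density falls off quadratically in the deviation, while the Student's t log-density falls off only logarithmically, so a single outlier barely moves a t-based likelihood. A minimal sketch of both log-densities (standard formulas, not the paper's HMM code):

```python
import math

def gauss_logpdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Log-density of N(mu, sigma^2): quadratic penalty on deviations."""
    z = (x - mu) / sigma
    return -0.5 * z * z - math.log(sigma * math.sqrt(2.0 * math.pi))

def student_t_logpdf(x: float, nu: float = 3.0, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Log-density of a Student's t with nu degrees of freedom: the final
    log1p term grows only logarithmically in z^2, giving heavy tails."""
    z = (x - mu) / sigma
    return (math.lgamma((nu + 1.0) / 2.0) - math.lgamma(nu / 2.0)
            - 0.5 * math.log(nu * math.pi) - math.log(sigma)
            - (nu + 1.0) / 2.0 * math.log1p(z * z / nu))
```

Comparing the two at a 10-sigma outlier shows the effect: the Gaussian assigns it a log-likelihood around -51, the t (nu = 3) only around -8, so in a mixture-observation HMM the outlying frame does not dominate the likelihood.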