    Deep Networks for Compressed Image Sensing

    The compressed sensing (CS) theory has been successfully applied to image compression in recent years, as most image signals are sparse in some domain. Several CS reconstruction models have recently been proposed and achieve superior performance. However, two important challenges remain within the CS framework: how to design a sampling mechanism that achieves optimal sampling efficiency, and how to perform reconstruction that achieves the highest-quality signal recovery. In this paper, we address both problems with a deep network. First, we learn the sampling matrix through network training instead of using a traditional hand-designed one, which is better suited to our deep-network-based reconstruction process. Then, we propose a deep network that recovers the image by imitating traditional compressed sensing reconstruction processes. Experimental results demonstrate that our deep-network-based CS reconstruction method offers a very significant quality improvement over state-of-the-art methods.

    Comment: This paper has been accepted by the IEEE International Conference on Multimedia and Expo (ICME) 201
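    The block-based sampling step this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's method: the block size, sampling ratio, and random `Phi` are assumptions (the paper learns `Phi` during network training), and the pseudo-inverse stands in for the deep reconstruction network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Block-based compressed sensing: each B*B image block is flattened and
# measured with a sampling matrix Phi (M x N, M << N).  In the paper Phi
# is learned jointly with the reconstruction network; here it is random.
B = 8                 # block size (assumed for illustration)
N = B * B             # signal dimension per block
M = 16                # number of measurements (sampling ratio M/N = 0.25)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # stand-in for the learned matrix

block = rng.standard_normal(N)   # a flattened image block (toy data)
y = Phi @ block                  # linear sampling: y = Phi x

# A crude linear reconstruction via pseudo-inverse; the paper replaces
# this step with a deep network imitating iterative CS solvers.
x_hat = np.linalg.pinv(Phi) @ y

print(y.shape, x_hat.shape)
```

    Training `Phi` jointly with the reconstruction network lets the measurements adapt to the image statistics the network expects, which is the paper's answer to the sampling-efficiency challenge.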

    Extraction of Projection Profile, Run-Histogram and Entropy Features Straight from Run-Length Compressed Text-Documents

    Document image analysis, like any digital image analysis, requires the identification and extraction of suitable features. These are generally extracted from uncompressed images, although in practice images are made available in compressed form for reasons such as transmission and storage efficiency. This implies that a compressed image must first be decompressed, which demands additional computing resources. This limitation motivates research into extracting features directly from the compressed image. In this work, we propose to extract essential features such as the projection profile, run-histogram, and entropy for text-document analysis directly from run-length compressed text documents. The experiments show that these features are extracted directly from the compressed image without a decompression stage, which reduces computing time, and that the feature values so extracted are exactly identical to those extracted from the uncompressed images.

    Comment: Published by IEEE in Proceedings of ACPR-2013. arXiv admin note: text overlap with arXiv:1403.778
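    The idea of computing features straight from run-length data can be sketched in a few lines. This is an assumed toy encoding, not the paper's exact representation: here each row is a list of run lengths that alternates white/black and starts with a white run, so black runs sit at the odd positions and the projection profile and run-histogram fall out without any decompression.

```python
# Toy run-length representation: each row is a list of run lengths,
# alternating white/black and (by convention here) starting with white.
rle_rows = [
    [3, 5, 2],   # 3 white, 5 black, 2 white -> 5 black pixels
    [10],        # all white                 -> 0 black pixels
    [0, 4, 6],   # row starts with black     -> 4 black pixels
]

def projection_profile(rle_rows):
    """Black-pixel count per row, computed without decompression:
    black runs occupy the odd positions of each run list."""
    return [sum(row[1::2]) for row in rle_rows]

def run_histogram(rle_rows, max_run=10):
    """Histogram of black run lengths, again read straight off the runs."""
    hist = [0] * (max_run + 1)
    for row in rle_rows:
        for r in row[1::2]:
            hist[r] += 1
    return hist

print(projection_profile(rle_rows))  # -> [5, 0, 4]
```

    Because the run lengths already summarize each row, these features cost one pass over the compressed stream instead of decompression plus a pass over every pixel, which is the source of the reported speed-up.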

    Adaptive CSLBP compressed image hashing

    Hashing is a popular technique for image authentication: it identifies malicious attacks while tolerating appearance changes made to an image in a controlled way. An image hash is a quality summarization of an image, i.e., the extraction and representation of powerful low-level features in compact form. The proposed adaptive CSLBP compressed hashing method uses a modified CSLBP (Center-Symmetric Local Binary Pattern) as the basic texture extractor, together with a color weight factor derived from the L*a*b* color space. The image hash is generated from image texture. Color weight factors are applied adaptively, in average and difference forms, to enhance the discrimination capability of the hash: for smooth regions, color averaging is used, while for non-smooth regions, color differencing is used. The adaptive CSLBP histogram is a compressed form of CSLBP whose quality is improved by the adaptive color weight factor. Experimental results are demonstrated with two benchmarks, normalized Hamming distance and ROC characteristics. The proposed method successfully differentiates between content-changing and content-preserving modifications for color images.
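    The CSLBP operator underlying this hash can be sketched for a single 3x3 patch. This is a generic CSLBP sketch under assumed conventions (neighbor ordering, threshold `T`), not the paper's modified variant or its color weighting: each of the four center-symmetric neighbor pairs contributes one bit, so a patch maps to one of 16 codes, half the storage of the 256-code classic LBP.

```python
import numpy as np

def cslbp_code(patch, T=0.01):
    """CSLBP code for a 3x3 patch: compare the four center-symmetric
    neighbor pairs; each comparison contributes one bit (16 codes total)."""
    # the eight neighbors of the center, in circular order
    n = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
         patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i in range(4):
        if n[i] - n[i + 4] > T:   # center-symmetric pair (i, i+4)
            code |= 1 << i
    return code

patch = np.array([[0.9, 0.1, 0.1],
                  [0.1, 0.5, 0.1],
                  [0.1, 0.1, 0.1]])
print(cslbp_code(patch))  # -> 1 (only the first symmetric pair differs)
```

    A histogram of these codes over the image gives the compact texture descriptor that the adaptive color weight factors then modulate.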

    Digital processing of compressed image data

    Certain image processing functions can be implemented more efficiently when the input data is in compressed form. Such an experimental system has been studied and simulated. The system consists of a one-dimensional Differential Pulse Code Modulation (DPCM) compressor, a one-dimensional non-recursive linear filter, and a one-dimensional DPCM decompressor, applied in that order. The implementation is more efficient because the filter is applied to the data in their compressed form, where fewer bits per pixel are required to represent them. A second, more conventional, system that contains the same functional elements but reverses the order of the filtration and decompression operations has also been implemented for comparison to the experimental one. The differences (errors) between the signals output from the two systems have been modeled and the models validated through experiments. It has been found that the systems can be made to yield equivalent results if certain parameters are constrained. These constraints do not put undue demands on system design nor do they substantially degrade system performance. Images produced by the two systems are presented and suggestions for additional work are discussed
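    The equivalence claimed above can be demonstrated on a toy 1-D signal. This is a simplified sketch with assumed parameters (previous-sample predictor, no quantizer, a 3-tap filter), not the abstract's full DPCM system: with no quantization the compressed stream is just a difference sequence, and since both decompression (cumulative sum) and a non-recursive linear filter are linear shift-invariant operations, their order can be swapped.

```python
import numpy as np

x = np.array([10.0, 12.0, 15.0, 15.0, 11.0, 8.0])   # toy 1-D signal

# 1-D DPCM with a previous-sample predictor and no quantizer (lossless):
# the compressed stream is the first sample followed by differences.
d = np.diff(x, prepend=0.0)           # DPCM "compress"
assert np.allclose(np.cumsum(d), x)   # DPCM "decompress" inverts it

h = np.array([0.25, 0.5, 0.25])       # a non-recursive (FIR) smoothing filter

# Experimental system: filter the compressed stream, then decompress.
out_a = np.cumsum(np.convolve(d, h))[: len(x)]
# Conventional system: decompress first, then filter.
out_b = np.convolve(np.cumsum(d), h)[: len(x)]

# Both decompression (cumulative sum) and FIR filtering are linear and
# shift-invariant, so the two orders agree (boundary samples aside).
print(np.allclose(out_a, out_b))      # -> True
```

    With a quantizer in the loop the two systems are no longer exactly equal, which is why the abstract models the differences and derives constraints under which they become equivalent.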

    Reversible Data Hiding Scheme with High Embedding Capacity Using Semi-Indicator-Free Strategy

    A novel reversible data-hiding scheme is proposed to embed secret data into a side-match-vector-quantization- (SMVQ-) compressed image and achieve lossless reconstruction of the vector-quantization- (VQ-) compressed image. The rather randomly distributed histogram of a VQ-compressed image can be relocated to locations close to zero by SMVQ prediction. With this strategy, fewer bits are needed to encode SMVQ indices with very small values. Moreover, no indicator is required to encode these indices, which yields extra hiding space for secret data. Hence, high embedding capacity and a low bit rate are achieved simultaneously. More specifically, in terms of the embedding rate, the bit rate, and the embedding capacity, experimental results show that the performance of the proposed scheme is superior to that of previous data-hiding schemes for VQ-based, VQ/SMVQ-based, and search-order-coding- (SOC-) based compressed images.
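    The histogram relocation that creates the hiding space can be illustrated on synthetic data. This is a stand-in, not the paper's SMVQ codec: the "index map" is a toy random walk standing in for spatially correlated VQ indices, and the previous index serves as a one-neighbor stand-in for the side-match predictor.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy VQ index stream: neighboring blocks tend to share similar indices,
# modeled here as a slowly varying walk over codebook indices in [0, 255].
walk = np.cumsum(rng.integers(-2, 3, size=200))
indices = np.clip(128 + walk, 0, 255)

# SMVQ-style prediction from the previous index relocates the rather
# flat index histogram to residuals clustered around zero, so indices
# with very small values can take short codes without any indicator.
residuals = np.diff(indices, prepend=indices[0])

small = np.abs(residuals) <= 2
print(f"{small.mean():.0%} of residuals within +/-2 of zero")
```

    It is this concentration near zero that frees bits: short codes cover almost all residuals, and the saved indicator bits become embedding capacity for the secret data.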