
    JPEG Image Encryption Using Combined Reversed And Normal Direction-Distorted Dc Permutation With Key Scheduling Algorithm-Based Permutation

    This thesis studies digital image encryption algorithms applied to JPEG images. With image encryption, JPEG images can be securely scrambled prior to distribution; the intended recipient is given a decryption key, and only with this key can the receiver decrypt the media for viewing. The proposed approach uses a frequency-domain framework that combines coefficient scrambling with a Key Scheduling Algorithm based (KSA-based) permutation. The algorithm scrambles coefficients using Combined-Reverse-and-Normal-Direction (CRND) scanning together with a Distorted DC Permutation (DDP). It manipulates the JPEG zigzag scanning table using 10 different scanning tables derived by reversing the existing zigzag scanning directions. With the same compression properties, the algorithm was shown to produce a smaller average file size than baseline JPEG and other encryption schemes. Its average decoding speed outperforms most existing techniques while maintaining image quality (PSNR) comparable to them. In terms of security, the combination with DDP gives medium security according to the basic attack analysis that was carried out, and the technique is fully format compliant, as most other techniques are. Given the simple nature of CRND, the technique is easy to implement on existing systems and should therefore reduce the cost of deploying a new encryption system.
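
    The abstract does not spell out the KSA-based permutation, so the following Python sketch shows one plausible reading: the RC4 Key Scheduling Algorithm derives a key-dependent permutation, which is then used to scramble per-block DC coefficients. The function names and the small example are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch (not the thesis implementation) of a KSA-derived permutation
# applied to JPEG per-block DC coefficients.

def ksa_permutation(key: bytes, n: int) -> list[int]:
    """Derive a permutation of range(n) from `key` using the RC4 KSA swap pattern."""
    perm = list(range(n))
    j = 0
    for i in range(n):
        j = (j + perm[i] + key[i % len(key)]) % n
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def permute_dc_coefficients(dc_coeffs: list[int], key: bytes) -> list[int]:
    """Scramble the per-block DC coefficients according to the key-derived permutation."""
    perm = ksa_permutation(key, len(dc_coeffs))
    return [dc_coeffs[p] for p in perm]

if __name__ == "__main__":
    dc = [100, -42, 7, 65, -3, 88, 12, -19]          # hypothetical DC values of 8 blocks
    print(permute_dc_coefficients(dc, b"secret-key"))  # key-dependent scrambling
```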

    EFFICIENT IMAGE COMPRESSION AND DECOMPRESSION ALGORITHMS FOR OCR SYSTEMS

    This paper presents efficient new image compression and decompression methods for document images, intended for use in the pre-processing stage of an OCR system designed for the needs of the “Nikola Tesla Museum” in Belgrade. The proposed compression methods exploit the Run-Length Encoding (RLE) algorithm and an algorithm based on document character contour extraction, while an iterative scanline fill algorithm is used for decompression. The methods are compared with the JBIG2 and JPEG2000 image compression standards, and segmentation accuracy results on ground-truth documents are reported to evaluate them. Results show that the proposed methods outperform JBIG2 in time complexity, providing up to 25 times lower processing time at the expense of a worse compression ratio, and outperform the JPEG2000 standard with up to a 4-fold improvement in compression ratio. Finally, the time complexity results show that the presented methods are fast enough for a real-time character segmentation system.
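
    As a rough illustration of the RLE component, the sketch below run-length encodes and decodes one binary document scanline. It is a generic textbook RLE that assumes alternating runs starting with white (0); the paper's exact run representation and its contour-extraction coder are not reproduced.

```python
# Minimal sketch of run-length coding for a binary document scanline.

def rle_encode_row(row: list[int]) -> list[int]:
    """Encode a row of 0/1 pixels as alternating run lengths, starting with a white (0) run."""
    runs, current, count = [], 0, 0
    for pixel in row:
        if pixel == current:
            count += 1
        else:
            runs.append(count)
            current, count = pixel, 1
    runs.append(count)
    return runs

def rle_decode_row(runs: list[int]) -> list[int]:
    """Rebuild the pixel row from the alternating run lengths."""
    row, value = [], 0
    for length in runs:
        row.extend([value] * length)
        value ^= 1
    return row

if __name__ == "__main__":
    row = [0, 0, 0, 1, 1, 0, 1, 1, 1, 0]
    runs = rle_encode_row(row)
    assert rle_decode_row(runs) == row
    print(runs)   # [3, 2, 1, 3, 1]
```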

    Astronomical image manipulation in the transform domain

    It is well known that images are usually stored and transmitted in compressed form to save memory space and I/O bandwidth. Among many image compression schemes, transform coding is a widely used method. Traditionally, processing a compressed image requires decompression first; after manipulation, the processed image is compressed again for storage. To reduce computational complexity and processing time, manipulating images in the semi-compressed or transform domain is an efficient solution. Many astronomical images are compressed and stored by JPEG and HCOMPRESS, which are based on the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT), respectively. In this thesis, a suite of image processing algorithms in the transform domain, DCT and DWT, is developed. In particular, new methods for edge enhancement and minimum (MIN)/maximum (MAX) gray scale intensity estimation in the DCT domain are proposed, and algebraic operations and image interpolation in the DWT domain are addressed. The superiority of the new algorithms over conventional ones is demonstrated by comparing the time complexities and the quality of images processed in the transform domain with those processed in the spatial domain.
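
    One way MIN/MAX intensity estimation in the DCT domain can work is to bound each 8x8 block's extrema directly from its coefficients: the DC term fixes the block mean, and the AC coefficient magnitudes bound the deviation from it. The numpy sketch below demonstrates that idea for an orthonormal DCT-II; it is an assumption-laden illustration, not the thesis's estimator.

```python
# Minimal sketch: conservative min/max intensity bounds for an 8x8 block,
# computed from its DCT coefficients without inverse transforming.

import numpy as np

N = 8

def dct_matrix(n: int = N) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def minmax_from_dct(coeffs: np.ndarray) -> tuple[float, float]:
    """Return (min, max) bounds on block intensity from its 8x8 DCT coefficients."""
    basis = dct_matrix()
    amp = np.abs(basis).max(axis=1)           # peak amplitude of each 1-D basis vector
    weights = np.outer(amp, amp)              # peak amplitude of each 2-D basis function
    mean = coeffs[0, 0] / N                   # DC coefficient encodes the block mean
    spread = np.sum(np.abs(coeffs) * weights) - abs(coeffs[0, 0]) * weights[0, 0]
    return mean - spread, mean + spread

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, (N, N)).astype(float)
    D = dct_matrix()
    coeffs = D @ block @ D.T                  # forward 2-D DCT of the spatial block
    lo, hi = minmax_from_dct(coeffs)
    print(block.min(), block.max(), lo, hi)   # true extrema always lie inside [lo, hi]
```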

    Scanline calculation of radial influence for image processing

    Efficient methods for the calculation of radial influence are described and applied to two image processing problems, digital halftoning and mixed content image compression. The methods operate recursively on scanlines of image values, spreading intensity from scanline to scanline in proportions approximating a Cauchy distribution. For error diffusion halftoning, experiments show that this recursive scanline spreading provides an ideal pattern of error distribution; error diffusion using masks generated to provide this distribution alleviates error diffusion "worm" artifacts. The recursive scanline-by-scanline application of a spreading filter and a complementary filter can be used to reconstruct an image from its horizontal and vertical pixel difference values, and when combined with a downsampled image the reconstruction is robust to incomplete and quantized pixel difference data. Such gradient field integration methods are described in detail, proceeding from the representation of images by gradient values along contours through to a variety of efficient algorithms. Comparisons show that this form of gradient field integration by convolution provides reduced distortion compared to other high-speed gradient integration methods; the reduction can be attributed to success in approximating a radial pattern of influence. An approach to edge-based image compression is proposed using integration of gradient data along edge contours together with regularly sampled low-resolution image data. This edge-based compression model is similar to previous sketch-based image coding methods but allows a simple and efficient calculation of an edge-based approximation image. A low-complexity implementation of this approach is described: it extracts and represents gradient data along edge contours as pixel differences and calculates an approximate image by integrating the pixel difference data with scanline convolution. The implementation was developed as a prototype for compression of mixed content image data in printing systems. Compression results are reported and the strengths and weaknesses of the implementation are identified.
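
    To make the gradient-data side of this concrete, the sketch below rebuilds an image exactly from its horizontal and vertical pixel differences using plain cumulative sums. It only illustrates what gradient field integration has to recover; the thesis's recursive scanline convolution with Cauchy-like spreading (which is what gives robustness to incomplete, quantized differences) is not implemented here.

```python
# Minimal sketch: exact reconstruction of an image from its pixel differences.

import numpy as np

def pixel_differences(img: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Horizontal and vertical forward differences of a 2-D image."""
    dx = np.diff(img, axis=1)   # shape (H, W-1)
    dy = np.diff(img, axis=0)   # shape (H-1, W)
    return dx, dy

def integrate(dx: np.ndarray, dy: np.ndarray, top_left: float) -> np.ndarray:
    """Recover the image from its differences given the top-left pixel value."""
    h, w = dy.shape[0] + 1, dx.shape[1] + 1
    img = np.empty((h, w))
    img[0, 0] = top_left
    img[0, 1:] = top_left + np.cumsum(dx[0])        # integrate along the first scanline
    img[1:, :] = img[0, :] + np.cumsum(dy, axis=0)  # then integrate down each column
    return img

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, (6, 8)).astype(float)
    dx, dy = pixel_differences(img)
    rebuilt = integrate(dx, dy, img[0, 0])
    print(np.allclose(rebuilt, img))   # True: exact differences integrate exactly
```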

    Digital rights management (DRM) - watermark encoding scheme for JPEG images

    The aim of this dissertation is to develop a new algorithm to embed a watermark in JPEG compressed images using encoding methods. This encompasses embedding proprietary information, such as identity and authentication bitstrings, into the compressed material. The watermark encoding scheme combines entropy coding with homophonic coding in order to embed a watermark in a JPEG image, with arithmetic coding used as the entropy coder. It is often desirable to obtain a robust digital watermarking method that does not distort the digital image, even if this implies that the image is slightly expanded in size before final compression. In this dissertation an algorithm that combines homophonic and arithmetic coding for JPEG images was developed and implemented in software. A detailed analysis of the algorithm is given, together with the compression (in number of bits) obtained when using it. This research shows that homophonic coding can be used to embed a watermark in a JPEG image by using the watermark information to select the homophones. The proposed algorithm can thus be viewed as a ‘key-less’ encryption technique, where an external bitstring is used as a ‘key’ and is embedded intrinsically into the message stream. The algorithm creates JPEG images with minimal distortion, with Peak Signal to Noise Ratios (PSNR) above 35 dB, and the resulting increase in the entropy of the file is within the expected 2 bits per symbol. This research consequently provides a unique watermarking technique for images compressed using the JPEG standard. Dissertation (MEng), University of Pretoria, Electrical, Electronic and Computer Engineering, 2008.
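
    The core embedding idea can be sketched as follows: each source symbol is given two homophones, and each watermark bit selects which homophone is emitted, so the watermark is recoverable while the symbol stream stays decodable. The homophone table and function names below are illustrative assumptions; the dissertation additionally compresses the homophone stream with arithmetic coding, which is omitted here.

```python
# Minimal sketch of watermark embedding by homophone selection.

HOMOPHONES = {sym: (2 * sym, 2 * sym + 1) for sym in range(256)}   # illustrative table
SYMBOL_OF = {h: sym for sym, pair in HOMOPHONES.items() for h in pair}

def embed(symbols: list[int], watermark_bits: list[int]) -> list[int]:
    """Replace each symbol by the homophone selected by the next watermark bit."""
    out = []
    for i, sym in enumerate(symbols):
        bit = watermark_bits[i % len(watermark_bits)]
        out.append(HOMOPHONES[sym][bit])
    return out

def extract(homophones: list[int], n_bits: int) -> tuple[list[int], list[int]]:
    """Recover the original symbols and the embedded watermark bits."""
    symbols = [SYMBOL_OF[h] for h in homophones]
    bits = [h & 1 for h in homophones[:n_bits]]   # low bit records which homophone was used
    return symbols, bits

if __name__ == "__main__":
    data = [10, 200, 33, 7, 90, 41]
    mark = [1, 0, 1, 1]
    coded = embed(data, mark)
    symbols, bits = extract(coded, len(mark))
    print(symbols == data, bits == mark)   # True True
```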

    Critical Data Compression

    A new approach to data compression is developed and applied to multimedia content. This method separates messages into components suitable for both lossless coding and 'lossy' or statistical coding techniques, compressing complex objects by separately encoding signals and noise. This is demonstrated by compressing the most significant bits of data exactly, since they are typically redundant and compressible, and either fitting a maximally likely noise function to the residual bits or compressing them using lossy methods. Upon decompression, the significant bits are decoded and added to a noise function, whether sampled from a noise model or decompressed from a lossy code. This results in compressed data similar to the original. For many test images, a two-part image code using JPEG2000 for lossy coding and PAQ8l for lossless coding produces less mean-squared error than an equal length of JPEG2000. Computer-generated images typically compress better using this method than through direct lossy coding, as do many black and white photographs and most color photographs at sufficiently high quality levels. Examples applying the method to audio and video coding are also demonstrated. Since two-part codes are efficient for both periodic and chaotic data, concatenations of roughly similar objects may be encoded efficiently, which leads to improved inference. Applications to artificial intelligence are demonstrated, showing that signals using an economical lossless code have a critical level of redundancy which leads to better description-based inference than signals which encode either insufficient data or too much detail. Comment: 99 pages, 31 figures.
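
    A hedged sketch of the two-part idea: the significant high bits of each sample are compressed exactly (zlib standing in for the paper's PAQ8l), while the dropped low bits are regenerated on decompression from a simple uniform noise model instead of being stored. The bit split K and the noise model are assumptions chosen for illustration, not the paper's codec.

```python
# Minimal sketch of a two-part code: exact coding of significant bits,
# noise-model substitution for the residual low bits.

import zlib
import numpy as np

K = 4   # number of low bits treated as noise

def compress(samples: np.ndarray) -> bytes:
    """Losslessly compress only the significant (high) bits of 8-bit samples."""
    significant = (samples >> K).astype(np.uint8)
    return zlib.compress(significant.tobytes())

def decompress(blob: bytes, n: int, rng: np.random.Generator) -> np.ndarray:
    """Decode the significant bits and fill the low bits from the noise model."""
    significant = np.frombuffer(zlib.decompress(blob), dtype=np.uint8)[:n]
    noise = rng.integers(0, 1 << K, n, dtype=np.uint8)
    return (significant << K) | noise

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    samples = rng.integers(0, 256, 1000, dtype=np.uint8)
    blob = compress(samples)
    rebuilt = decompress(blob, len(samples), rng)
    err = np.abs(samples.astype(int) - rebuilt.astype(int))
    print(len(blob), bool(err.max() < (1 << K)))   # error is bounded by the dropped bits
```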

    The 1993 Space and Earth Science Data Compression Workshop

    The Earth Observing System Data and Information System (EOSDIS) is described in terms of its data volume, data rate, and data distribution requirements. Opportunities for data compression in EOSDIS are discussed.

    1994 Science Information Management and Data Compression Workshop

    This document is the proceedings from the 'Science Information Management and Data Compression Workshop,' which was held on September 26-27, 1994, at the NASA Goddard Space Flight Center, Greenbelt, Maryland. The Workshop explored promising computational approaches for handling the collection, ingestion, archival and retrieval of large quantities of data in future Earth and space science missions. It consisted of eleven presentations covering a range of information management and data compression approaches that are being or have been integrated into actual or prototypical Earth or space science data information systems, or that hold promise for such an application. The workshop was organized by James C. Tilton and Robert F. Cromp of the NASA Goddard Space Flight Center.