
    Histogram packing, total variation, and lossless image compression

    Publication in the conference proceedings of EUSIPCO, Toulouse, France, 200

    Image data hiding

    Image data hiding represents a class of processes used to embed data into cover images. Robustness is one of the basic requirements of image data hiding. In the first part of this dissertation, 2D and 3D interleaving techniques combined with error-correction codes (ECC) are proposed to significantly improve the robustness of hidden data against burst errors. In most cases, the cover image cannot be inverted back to the original image after the hidden data are retrieved. This dissertation therefore introduces a novel reversible (lossless) data hiding technique. The technique is based on histogram modification; it can embed a large amount of data while keeping very high visual quality for all images, and its performance hence exceeds that of most existing reversible data hiding algorithms. However, most existing lossless data hiding algorithms are fragile, in the sense that the hidden data cannot be extracted correctly after compression or other small alterations. In the last part of this dissertation, we therefore propose a novel robust lossless data hiding technique based on the patchwork idea and spatial-domain pixel modification. This technique generates no annoying salt-and-pepper noise, which is unavoidable in other existing robust lossless data hiding algorithms. It has been successfully applied to many commonly used images, demonstrating its generality.
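    A minimal sketch of the histogram-shifting idea behind such reversible embedding, in Python. This is an illustrative toy rather than the dissertation's exact method: it assumes an 8-bit grayscale image whose histogram has an empty bin to the right of its peak, and it omits the bookkeeping (payload length, overflow handling) a deployable codec needs.

```python
import numpy as np

def hs_embed(img, bits):
    # Toy histogram-shifting embedder: capacity equals the peak-bin count.
    # Reversible only if the chosen "zero" bin is truly empty.
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())                        # bin that carries the payload
    zero = peak + 1 + int(hist[peak + 1:].argmin())  # nearest minimal bin to the right
    out = img.astype(np.int32)
    out[(out > peak) & (out < zero)] += 1            # shift right to free bin peak+1
    flat = out.ravel()                               # view: writes go through to out
    for i, b in zip(np.flatnonzero(flat == peak), bits):
        flat[i] += b                                 # bit 1 -> peak+1, bit 0 -> stays at peak
    return out.astype(np.uint8), peak, zero

def hs_extract(marked, peak, zero):
    # Recover the embedded bits and restore the original image exactly.
    m = marked.astype(np.int32)
    bits = [int(v == peak + 1) for v in m.ravel() if v in (peak, peak + 1)]
    m[m == peak + 1] = peak                          # undo the embedding
    m[(m > peak + 1) & (m <= zero)] -= 1             # undo the shift
    return m.astype(np.uint8), bits                  # bits include trailing unused zeros
```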

    Optimal modeling for complex system design

    The article begins with a brief introduction to the theory describing optimal data compression systems and their performance. A brief outline is then given of a representative algorithm that employs these lessons for optimal data compression system design. The implications of rate-distortion theory for practical data compression system design are then described, followed by a description of the tensions between theoretical optimality and system practicality, and a discussion of common tools used in current algorithms to resolve these tensions. Next, the generalization of rate-distortion principles to the design of optimal collections of models is presented. The discussion focuses initially on data compression systems, but later widens to describe how rate-distortion principles generalize to model design for a wide variety of modeling applications. The article ends with a discussion of the performance benefits achievable with the multiple-model design algorithms.
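    A small operational example of the rate-distortion design principle at the article's core: from a candidate family of coders, choose the one minimizing the Lagrangian cost J = D + λR. The Python sketch below uses uniform scalar quantizers and the empirical entropy of the indices as a stand-in rate measure; the candidate set and all names are illustrative, not the article's algorithm.

```python
import numpy as np

def pick_quantizer(samples, steps, lam):
    # Operational rate-distortion choice: minimize J = D + lam * R over
    # candidate uniform quantizer step sizes. D is mean squared error;
    # R is approximated by the empirical entropy of the indices (bits/sample).
    best = None
    for step in steps:
        idx = np.round(samples / step)
        dist = float(np.mean((samples - idx * step) ** 2))
        _, counts = np.unique(idx, return_counts=True)
        p = counts / counts.sum()
        rate = float(-(p * np.log2(p)).sum())
        cost = dist + lam * rate
        if best is None or cost < best[0]:
            best = (cost, step, dist, rate)
    return best

# Larger lam favours lower rate; smaller lam favours lower distortion:
x = np.random.default_rng(0).normal(size=10_000)
print(pick_quantizer(x, steps=[0.1, 0.25, 0.5, 1.0, 2.0], lam=0.05))
```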

    Efficient Encoding of Wireless Capsule Endoscopy Images Using Direct Compression of Colour Filter Array Images

    Since its invention in 2001, wireless capsule endoscopy (WCE) has played an important role in the endoscopic examination of the gastrointestinal tract. During this period, WCE has undergone tremendous technological advances, making it the first-line modality for small-bowel diseases ranging from bleeding to cancer. Current research efforts are focused on evolving WCE to include functionality such as drug delivery, biopsy, and active locomotion. For the integration of these functionalities into WCE, two critical prerequisites are image quality enhancement and power consumption reduction. An efficient image compression solution is required to retain the highest image quality while reducing the transmission power. The issue is made more challenging by the fact that image sensors in WCE capture images in Bayer colour filter array (CFA) format, for which standard compression engines provide inferior compression performance. The focus of this thesis is to design an optimized image compression pipeline to encode capsule endoscopic (CE) images efficiently in CFA format. To this end, this thesis proposes two image compression schemes. First, a lossless image compression algorithm is proposed, consisting of an optimum reversible colour transformation, a low-complexity prediction model, a corner-clipping mechanism, and a single-context adaptive Golomb-Rice entropy encoder (sketched below). The derivation of the colour transformation that provides the best performance for a given prediction model is treated as an optimization problem. The low-complexity prediction model works in raster order and requires no buffer memory. The colour transformation lowers the inter-colour correlation and allows efficient independent encoding of the colour components. The second compression scheme is a lossy algorithm with an integer discrete cosine transform at its core. Using statistics obtained from a large dataset of CE images, an optimum colour transformation is derived using principal component analysis (PCA). The transformed coefficients are quantized with an optimized quantization table designed to discard medically irrelevant information. A fast demosaicking algorithm reconstructs the colour image from the lossy CFA image in the decoder. Extensive experiments and comparisons with state-of-the-art lossless image compression methods establish the proposed methods as simple and efficient image compression algorithms: the lossless algorithm can transmit images without loss within the available bandwidth, while the lossy algorithm delivers high-quality images at low transmission power and low computational cost.
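    Of the pipeline's components, the Golomb-Rice entropy coder is the easiest to illustrate. The Python sketch below is a minimal static Rice coder (a Golomb code with divisor M = 2^k); the thesis's single-context adaptive version would additionally adapt k to the local residual statistics, so this is an assumption-laden toy rather than the thesis code.

```python
def rice_encode(values, k):
    # Minimal Rice coder: zigzag-map signed residuals to non-negative
    # integers, then emit a unary quotient, a '0' terminator, and k
    # fixed remainder bits per value.
    bits = []
    for v in values:
        u = 2 * v if v >= 0 else -2 * v - 1   # zigzag: 0,-1,1,-2,2 -> 0,1,2,3,4
        q, r = u >> k, u & ((1 << k) - 1)
        bits.append('1' * q + '0')            # unary-coded quotient
        if k:
            bits.append(format(r, f'0{k}b'))  # k-bit remainder
    return ''.join(bits)

print(rice_encode([0, -1, 3], k=2))  # -> '0000011010'
```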

    Efficient Image Coding and Transmission in Deep Space Communication

    The usefulness of modern digital communication comes from ensuring that data from a source arrives at its destination quickly and correctly. To meet these demands, communication protocols employ data compression and error detection/correction to ensure compactness and accuracy of the data; critical scientific data in particular requires lossless compression. For example, in deep space communication, information sent from satellites to ground stations on Earth comes in huge volumes, captured with high precision and resolution by space mission instruments such as the Hubble Space Telescope (HST). On-board implementation of communication protocols poses numerous constraints and demands high performance, given the criticality and value of the data and the high cost of a space mission. The objectives of this study are to determine which data compression techniques yield a) the minimum data volume, b) the most error resilience, and c) the lowest usage of hardware resources and power. For this study, a Field Programmable Gate Array (FPGA) serves as the main component for building the circuitry for each source coding technique. Furthermore, errors are induced, based on studies of reported error rates in deep space communication channels, to test for error resilience (a toy version of this injection step is sketched below). Finally, the calculation of the resource utilization of the source encoder determines the power and computational usage. Based on the analysis of the error resilience and the characteristics of the errors, requirements for the channel coding are formulated.
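    The error-injection step can be mimicked in software. The sketch below flips bits independently at a chosen bit error rate and checks whether a compressed stream still decodes; zlib stands in for the studied source coders, and the i.i.d. error model is a simplification of the burstier behaviour reported for deep space channels.

```python
import random
import zlib

def flip_bits(payload: bytes, ber: float, seed: int = 0) -> bytes:
    # Flip each bit independently with probability `ber` (bit error rate).
    rng = random.Random(seed)
    out = bytearray(payload)
    for i in range(len(out) * 8):
        if rng.random() < ber:
            out[i // 8] ^= 1 << (i % 8)
    return bytes(out)

data = bytes(range(256)) * 64
stream = zlib.compress(data)
try:
    ok = zlib.decompress(flip_bits(stream, ber=1e-3)) == data
    print("decoded, intact:", ok)
except zlib.error as exc:        # corrupted streams usually fail outright
    print("decode failed:", exc)
```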

    Improving minimum rate predictors algorithm for compression of volumetric medical images

    Medical imaging technologies are experiencing growth in both usage and image resolution, namely in diagnostic systems that require a large set of images, such as CT or MRI. Furthermore, legal restrictions impose that these scans be archived for several years. These facts have increased the storage costs of medical image databases and institutions, and a demand for more efficient compression tools for archiving and communication is arising. Currently, the DICOM standard, which makes recommendations for medical communications and image compression, recommends lossless encoders such as JPEG, RLE, JPEG-LS and JPEG2000. However, none of these encoders includes inter-slice prediction in its algorithm. This dissertation presents research work on medical image compression using the MRP encoder, one of the most efficient lossless image compression algorithms. Several processing techniques are proposed to adapt the input medical images to the encoder's characteristics. Two of these techniques, namely changing the alignment of slices for compression and a pixel-wise difference predictor (illustrated below), increased the compression efficiency of MRP by up to 27.9%. Inter-slice prediction support was also added to MRP, using uni- and bi-directional techniques, and the pixel-wise difference predictor was integrated into the algorithm. Overall, the compression efficiency of MRP was improved by 46.1%. These techniques allow compression ratio savings of 57.1% compared to DICOM encoders and 33.2% compared to HEVC RExt Random Access, making MRP the most efficient of the encoders under study.
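    The pixel-wise difference idea is simple to state in code: keep the first slice as-is and replace each later slice by its difference from the previous slice, which is exactly invertible and leaves small, peaked residuals for the entropy coder. The NumPy sketch below illustrates the transform only, under assumed array shapes; it is not the dissertation's MRP integration.

```python
import numpy as np

def interslice_residuals(volume):
    # volume: (slices, rows, cols) integer array. Slice 0 passes through;
    # slice i becomes volume[i] - volume[i-1].
    res = volume.astype(np.int32)
    res[1:] -= volume[:-1].astype(np.int32)
    return res

def reconstruct(res):
    # Exact inverse (lossless): cumulative sum over the slice axis.
    return np.cumsum(res, axis=0, dtype=np.int64)

vol = np.random.default_rng(0).integers(0, 4096, size=(4, 8, 8))  # 12-bit CT-like
assert np.array_equal(reconstruct(interslice_residuals(vol)), vol)
```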

    Correlation and image compression for limited-bandwidth CCD.


    Compression image sharing using DCT-Wavelet transform and coding by Blackely method

    The increased use of computers and the internet has been accompanied by the wide use of multimedia information, and the requirement for protecting this information has risen dramatically. To prevent confidential information from being tampered with, one needs to apply cryptographic techniques. Most cryptographic strategies share one weak point: the information is centralized. To overcome this drawback, secret sharing was introduced. It is a technique to distribute a secret among a group of members, such that every member owns a share of the secret, but only particular combinations of shares can reveal the secret; individual shares reveal nothing about it. The major challenge facing image secret sharing is the shadow size: the total size of the minimum set of shares needed for revealing is greater than the original secret file. The core of this work is therefore to use different transform coding strategies to obtain the smallest possible share size. In this paper, a compressive sharing system for images using transform coding and the Blakley method is introduced. The proposed compressive secret sharing scheme applies an appropriate transform (discrete cosine transform or wavelet) to de-correlate the image samples, then feeds the output (i.e., the compressed image data) to a diffusion scheme that removes any statistical redundancy or bits of important attributes remaining within the compressed stream, and finally applies the (k, n) threshold secret sharing scheme, where n is the number of generated shares and k is the minimum number of shares needed for revealing. To ensure a high security level, each produced share is passed through stream ciphering that depends on an individual encryption key belonging to the shareholder.
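    The (k, n) threshold property itself can be demonstrated in a few lines. The Python sketch below uses Shamir's polynomial construction rather than the Blakley geometric scheme this paper builds on, because it exhibits the same guarantee compactly: any k shares recover the secret, fewer reveal nothing. The prime field and all names are illustrative.

```python
import random

P = 2**61 - 1  # illustrative Mersenne prime; shares live in GF(P)

def make_shares(secret, k, n):
    # Shamir-style (k, n) sharing: the secret is the constant term of a
    # random degree-(k-1) polynomial; each share is a point on it.
    rng = random.SystemRandom()
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 from exactly k distinct shares.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```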