
    More Efficient Algorithms and Analyses for Unequal Letter Cost Prefix-Free Coding

    There is a large literature devoted to the problem of finding an optimal (minimum-cost) prefix-free code when the encoding alphabet has unequal letter costs. While there is no known polynomial-time algorithm for solving the problem optimally, there are many good heuristics, all of which provide additive errors relative to optimal. The additive error in these algorithms usually depends linearly on the cost of the largest encoding letter. This paper was motivated by the problem of finding optimal codes when the encoding alphabet is infinite. Because the largest letter cost is then infinite, the previous analyses could give infinite error bounds. We provide a new algorithm that works with infinite encoding alphabets. When restricted to the finite-alphabet case, our algorithm often provides better error bounds than the best previously known.
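    As a hedged illustration of the setting the abstract describes (not the paper's algorithm), the Python sketch below represents a prefix-free code over an encoding alphabet with unequal letter costs and computes its expected cost; the letter costs, codewords, and source probabilities are illustrative assumptions.

    # Minimal sketch: prefix-freeness and expected cost under unequal letter costs.
    from itertools import combinations

    def is_prefix_free(codewords):
        # True if no codeword is a prefix of another codeword.
        return not any(a.startswith(b) or b.startswith(a)
                       for a, b in combinations(codewords, 2))

    def codeword_cost(word, letter_cost):
        # Total cost of one codeword: the sum of its letters' costs.
        return sum(letter_cost[ch] for ch in word)

    def expected_cost(code, probs, letter_cost):
        # Expected cost of the code: sum_i p_i * cost(codeword_i).
        return sum(p * codeword_cost(w, letter_cost) for w, p in zip(code, probs))

    # Hypothetical two-letter encoding alphabet: 'a' costs 1, 'b' costs 2.
    letter_cost = {"a": 1, "b": 2}
    code = ["a", "ba", "bb"]       # hypothetical prefix-free code
    probs = [0.5, 0.3, 0.2]        # hypothetical source probabilities

    assert is_prefix_free(code)
    print(expected_cost(code, probs, letter_cost))  # 0.5*1 + 0.3*3 + 0.2*4 = 2.2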

    New Free Distance Bounds and Design Techniques for Joint Source-Channel Variable-Length Codes

    This paper proposes branch-and-prune algorithms for searching prefix-free joint source-channel codebooks with maximal free distance for given codeword lengths. For that purpose, it introduces improved techniques for bounding the free distance of variable-length codes.

    Indeterminate-length quantum coding

    The quantum analogues of classical variable-length codes are indeterminate-length quantum codes, in which codewords may exist in superpositions of different lengths. This paper explores some of their properties. The length observable for such codes is governed by a quantum version of the Kraft-McMillan inequality. Indeterminate-length quantum codes also provide an alternate approach to quantum data compression.
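    For reference, the classical Kraft-McMillan inequality that the quantum version generalizes (a standard result, not a statement from this paper) says that a uniquely decodable code over a D-ary alphabet with codeword lengths l_1, ..., l_n exists if and only if

    \sum_{i=1}^{n} D^{-l_i} \le 1 .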

    Data compression for full motion video transmission

    Clearly, transmission of visual information will be a major, if not dominant, factor in determining the requirements for, and assessing the performance of, the Space Exploration Initiative (SEI) communications systems. Projected image/video requirements currently anticipated for SEI mission scenarios are presented. Based on this information and projected link performance figures, the image/video data compression requirements that would allow link closure are identified. Finally, several approaches that could satisfy some of the compression requirements are presented, and possible future approaches that show promise for more substantial compression performance improvement are discussed.

    Iterative Construction of Reversible Variable-Length Codes and Variable-Length Error-Correcting Codes


    Image Compression using Discrete Cosine Transform & Discrete Wavelet Transform

    Image compression addresses the problem of reducing the amount of data required to represent a digital image. Compression is achieved by the removal of one or more of three basic data redundancies: (1) coding redundancy, which is present when less than optimal (i.e., smallest-length) code words are used; (2) interpixel redundancy, which results from correlations between the pixels of an image; and (3) psychovisual redundancy, which is due to data that is ignored by the human visual system (i.e., visually nonessential information). Huffman codes contain the smallest possible number of code symbols (e.g., bits) per source symbol (e.g., grey level value) subject to the constraint that the source symbols are coded one at a time. Huffman coding, when combined with a technique for reducing image redundancies such as the Discrete Cosine Transform (DCT), therefore compresses the image data to a very good extent. The DCT is an example of transform coding, and the current JPEG standard uses it as its basis. The DCT relocates the highest energies to the upper left corner of the image, while the lesser energy or information is relocated into other areas. The DCT is fast: it can be computed quickly and is best for images with smooth edges, such as photos with human subjects. Unlike the Fourier transform, the DCT coefficients are all real numbers. The Inverse Discrete Cosine Transform (IDCT) can be used to retrieve the image from its transform representation. The Discrete Wavelet Transform (DWT) has gained widespread acceptance in signal processing and image compression. Because of their inherent multi-resolution nature, wavelet-coding schemes are especially suitable for applications where scalability and tolerable degradation are important. The JPEG committee has recently released its new image coding standard, JPEG 2000, which is based upon the DWT.
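    As a hedged illustration of the transform-coding step the abstract describes (not the paper's implementation), the Python sketch below applies a 2-D DCT to an 8x8 block, quantizes the coefficients, and reconstructs the block with the IDCT; the block values and quantization step are illustrative assumptions.

    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(block):
        # 2-D type-II DCT, applied along rows and then columns.
        return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

    def idct2(coeffs):
        # 2-D inverse DCT, undoing dct2.
        return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(8, 8)).astype(float)  # stand-in 8x8 image block

    coeffs = dct2(block)                  # energy concentrates in the upper-left corner
    q = 16.0                              # hypothetical uniform quantization step
    quantized = np.round(coeffs / q) * q  # lossy step: discard fine detail
    reconstructed = idct2(quantized)      # approximate block recovered via the IDCT

    print(np.abs(block - reconstructed).max())  # small reconstruction error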

    Efficient Image Coding and Transmission in Deep Space Communication

    The usefulness of modern digital communication comes from ensuring that data from a source arrives at its destination quickly and correctly. To meet these demands, communication protocols employ data compression and error detection/correction to ensure compactness and accuracy of the data; critical scientific data in particular requires lossless compression. For example, in deep space communication, information received by ground stations on Earth from satellites comes in huge volumes, captured with high precision and resolution by space mission instruments such as the Hubble Space Telescope (HST). On-board implementation of communication protocols poses numerous constraints and demands high performance, given the criticality of the data and the high cost of a space mission, including the value of the data itself. The objectives of this study are to determine which data compression technique yields (a) the minimum data volume, (b) the most error resilience, and (c) the least hardware resource and power usage. In this study, a Field Programmable Gate Array (FPGA) serves as the main component for building the circuitry for each source coding technique. Furthermore, errors are induced based on studies of reported error rates in deep space communication channels to test for error resilience. Finally, the calculation of the resource utilization of the source encoder determines the power and computational usage. Based on the analysis of the error resilience and the characteristics of errors, requirements for the channel coding are formulated.
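    As a hedged sketch of the error-injection step described (not the study's FPGA implementation), the Python code below flips bits of an encoded stream at a hypothetical bit-error rate to test a decoder's error resilience; the bitstream and error rate are illustrative assumptions.

    import random

    def inject_bit_errors(bits, bit_error_rate, seed=0):
        # Flip each bit independently with probability bit_error_rate.
        rng = random.Random(seed)
        return [b ^ 1 if rng.random() < bit_error_rate else b for b in bits]

    encoded = [1, 0, 1, 1, 0, 0, 1, 0] * 4                    # stand-in encoded bitstream
    corrupted = inject_bit_errors(encoded, bit_error_rate=0.1)
    print(sum(a != b for a, b in zip(encoded, corrupted)), "bits flipped")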

    Custom Lossless Compression and High-Quality Lossy Compression of White Blood Cell Microscopy Images for Display and Machine Learning Applications

    This master's thesis investigates both custom lossless compression and high-quality lossy compression of microscopy images of white blood cells produced by CellaVision's blood analysis systems. A number of different compression strategies have been developed and evaluated, all of which take advantage of the specific color filter array used in the sensor of the cameras in the analysis systems. Lossless compression has been the main focus of this thesis. Of the lossless compression methods developed, the one that gave the best results is based on a statistical autoregressive model. A model is constructed for each color channel, with external information from the other color channels. The difference between the predictions from the statistical model and the original is then Huffman coded. The method achieves an average bit rate of 3.0409 bits per pixel on the test set consisting of 604 images. The proposed lossy method is based on taking the difference between the image compressed with an ordinary lossy compression method, JPEG 2000, and the original image. The JPEG 2000 image is saved, as well as the differences at the foreground (i.e., locations with cells), in order to keep the cells identical to the cells in the original image while allowing loss of information for the less important background. This method achieves a bit rate of 2.4451 bits per pixel, with a peak signal-to-noise ratio (PSNR) of 48.05 dB.
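    As a hedged illustration of the general idea behind the lossless method (not the thesis's model), the Python sketch below predicts each pixel from the previous one and Huffman-codes the prediction residuals; the pixel values are illustrative assumptions, and the thesis instead builds a statistical autoregressive model per color channel with cross-channel context.

    import heapq
    from collections import Counter

    def residuals(row):
        # Previous-pixel prediction residuals for one row of a single channel.
        return [row[0]] + [cur - prev for prev, cur in zip(row, row[1:])]

    def huffman_code(symbols):
        # Return a prefix-free code (symbol -> bitstring) for the given symbols.
        heap = [[count, i, {sym: ""}]
                for i, (sym, count) in enumerate(Counter(symbols).items())]
        heapq.heapify(heap)
        if len(heap) == 1:                        # degenerate case: one distinct symbol
            return {sym: "0" for sym in heap[0][2]}
        while len(heap) > 1:
            lo = heapq.heappop(heap)
            hi = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in lo[2].items()}
            merged.update({s: "1" + c for s, c in hi[2].items()})
            heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
        return heap[0][2]

    row = [120, 122, 121, 121, 125, 130, 130, 131]  # stand-in pixel values
    res = residuals(row)                            # residuals cluster near zero
    code = huffman_code(res)
    bits = "".join(code[r] for r in res)
    print(res, len(bits), "bits")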