159 research outputs found

    Digital image compression

    Construction and evaluation of trellis-coded quantizers for memoryless sources

    Unreliable and resource-constrained decoding

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 185-213).

    Traditional information theory and communication theory assume that decoders are noiseless and operate without transient or permanent faults. Decoders are also traditionally assumed to be unconstrained in physical resources like material, memory, and energy. This thesis studies how constraining reliability and resources in the decoder limits the performance of communication systems. Five communication problems are investigated. Broadly speaking, these are communication using decoders that are wiring cost-limited, that are memory-limited, that are noisy, that fail catastrophically, and that simultaneously harvest information and energy. For each of these problems, fundamental trade-offs between communication system performance and reliability or resource consumption are established.

    For decoding repetition codes using consensus decoding circuits, the optimal trade-off between decoding speed and quadratic wiring cost is defined and established. Designing optimal circuits is shown to be NP-complete, but is carried out for small circuit sizes. The natural relaxation of the integer circuit design problem is shown to be a reverse convex program. Random circuit topologies are also investigated.

    Uncoded transmission is investigated when a population of heterogeneous sources must be categorized due to decoder memory constraints. Quantizers that are optimal for mean Bayes risk error, a novel fidelity criterion, are designed. Human decision making in segregated populations is also studied with this framework. The ratio between the costs of false alarms and missed detections is also shown to fundamentally affect the essential nature of discrimination.

    The effect of noise on iterative message-passing decoders for low-density parity-check (LDPC) codes is studied. Concentration of decoding performance around its average is shown to hold. Density evolution equations for noisy decoders are derived. Decoding thresholds degrade smoothly as decoder noise increases, and in certain cases, arbitrarily small final error probability is achievable despite decoder noisiness. Precise information storage capacity results for reliable memory systems constructed from unreliable components are also provided.

    Limits to communicating over systems that fail at random times are established. Communication with arbitrarily small probability of error is not possible, but schemes that optimize the transmission volume communicated at fixed maximum message error probabilities are determined. System state feedback is shown not to improve performance.

    For optimal communication with decoders that simultaneously harvest information and energy, a coding theorem is proven that establishes the fundamental trade-off between the rates at which energy and reliable information can be transmitted over a single line. The capacity-power function is computed for several channels; it is non-increasing and concave.

    by Lav R. Varshney. Ph.D.
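    The first problem above, consensus decoding of repetition codes, reduces in its simplest form to majority voting over the received copies of a bit. The following is our own minimal sketch of that idea, not the thesis's circuit construction or its wiring-cost analysis:

```python
# Hypothetical illustration: majority-vote ("consensus") decoding of an
# n-fold repetition code. The thesis studies circuits implementing this
# under wiring-cost constraints; here we show only the decoding rule.

def encode(bit, n):
    """Repeat the information bit n times."""
    return [bit] * n

def majority_decode(received):
    """Decode one repetition codeword (list of 0/1) by majority vote."""
    return 1 if sum(received) > len(received) / 2 else 0

# A length-5 repetition code corrects up to 2 bit flips:
word = encode(1, 5)
word[0] ^= 1          # flip two of the five copies
word[3] ^= 1
assert majority_decode(word) == 1
```

    A length-n repetition code decoded this way corrects any pattern of fewer than n/2 flips, which is why consensus circuits can trade circuit resources against decoding reliability.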

    Study and simulation of low rate video coding schemes

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color-mapped images, a robust coding scheme for packet video, recursively indexed differential pulse code modulation, an image compression technique for use on token ring networks, and joint source/channel coder design.
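    One of the listed topics, differential pulse code modulation, transmits only quantized prediction errors rather than raw samples. The sketch below is a plain DPCM loop of our own (not the recursively indexed variant the report covers), with the previous reconstruction as the predictor:

```python
# Minimal DPCM sketch (illustrative; not the report's recursively
# indexed scheme). Each sample is predicted by the last *reconstructed*
# sample so encoder and decoder stay in lockstep, and only the
# uniformly quantized prediction error index is transmitted.

def dpcm_encode(samples, step):
    """Return quantizer indices of the prediction errors."""
    indices, pred = [], 0
    for x in samples:
        e = x - pred
        q = round(e / step)          # uniform quantization of the error
        indices.append(q)
        pred += q * step             # decoder-matched reconstruction
    return indices

def dpcm_decode(indices, step):
    """Rebuild the signal by accumulating dequantized errors."""
    out, pred = [], 0
    for q in indices:
        pred += q * step
        out.append(pred)
    return out

signal = [0, 3, 7, 8, 6]
rec = dpcm_decode(dpcm_encode(signal, step=2), step=2)
# Because the predictor uses reconstructed values, the error does not
# accumulate and stays within half the quantizer step:
assert all(abs(x - y) <= 1 for x, y in zip(signal, rec))
```

    Closing the prediction loop around the reconstructed signal is the standard DPCM design choice: predicting from the original samples instead would let quantization error accumulate at the decoder.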

    Studies on the Asymptotic Behavior of Parameters in Optimal Scalar Quantization.

    The goal in digital device design is to achieve high performance at low cost, and to pursue this goal, all components of the device must be designed accordingly. A principal component common in digital devices is the quantizer, and frequently used is the minimum mean-squared error (MSE), or optimal, fixed-rate scalar quantizer. In this thesis, we focus on aids to the design of such quantizers.

    For an exponential source with variance σ², we estimate the largest finite quantization threshold by providing upper and lower bounds which are functions of the number of quantization levels N. The upper bound is 3σ log N for N ≥ 1, and the lower bound is 3σ log N + o_N(1)σ − 1.46004σ for N > 9. Using these bounds, we derive an upper bound to the convergence rate of N²D(N) to the Panter-Dite constant, where D(N) is the least MSE of any N-level scalar quantizer. Furthermore, we present two very simple, non-iterative and non-recursive suboptimal quantizer design methods for exponential sources that produce quantizers with good MSE performance.

    For an improved understanding of the half steps and quantization thresholds in optimal quantizers as functions of N, we use as inspiration the result by Nitadori [Nitadori 1965] where, exploiting a key side effect of the source's memoryless property, he derived an infinite sequence such that for any N, the kth term of the sequence is equal to the kth half step (counting from the right) of the optimal N-level quantizer designed for a unit-variance exponential source. In our work, using an asymptotic version of this key side effect which holds for general exponential (GE) sources parameterized by an exponential power p, and utilizing a method of our own devising, we show that for such a source, the kth half step of an optimal N-level quantizer multiplied by the (p−1)st power of the kth threshold approaches the kth term of the Nitadori sequence as N grows to infinity. Thus, the Nitadori sequence asymptotically characterizes the cells of MMSE quantizers for GE sources, as well as exponential sources.

    Ph.D. Electrical Engineering: Systems. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/76011/1/vbyee_1.pd
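    The MSE-optimal fixed-rate scalar quantizer discussed above is classically computed by the Lloyd iteration, alternating the nearest-neighbour and centroid conditions. The sketch below applies it to a unit-mean (hence unit-variance) exponential source; the closed-form conditional means follow from the exponential density, but the code is our illustration, not the thesis's non-iterative design methods:

```python
import math

# Lloyd iteration for an N-level MSE scalar quantizer of an Exp(1)
# source (illustrative sketch; the thesis instead studies bounds and
# non-iterative designs for such quantizers).

def centroid(a, b):
    """E[X | a < X < b] for X ~ Exp(1); b may be math.inf."""
    if math.isinf(b):
        return a + 1.0   # memorylessness: mean residual life is 1
    w = b - a
    return a + 1.0 - w * math.exp(-w) / (1.0 - math.exp(-w))

def lloyd(n_levels, iters=500):
    """Return (thresholds, levels) of an n_levels-point quantizer."""
    levels = [float(k + 1) for k in range(n_levels)]  # initial guess
    for _ in range(iters):
        # Nearest-neighbour condition: thresholds at level midpoints.
        thr = [(levels[k] + levels[k + 1]) / 2 for k in range(n_levels - 1)]
        edges = [0.0] + thr + [math.inf]
        # Centroid condition: each level at its cell's conditional mean.
        levels = [centroid(edges[k], edges[k + 1]) for k in range(n_levels)]
    return thr, levels

# With one level the optimal codepoint is the source mean:
assert abs(lloyd(1)[1][0] - 1.0) < 1e-9
```

    Note how the memoryless property surfaces in the last cell: its centroid is always the threshold plus the mean, which is exactly the side effect the Nitadori sequence exploits.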

    A comparative study of image compression schemes

    Image compression is an important and active area of signal processing. All popular image compression techniques consist of three stages: image transformation, quantization (lossy compression only), and lossless coding (of quantized transform coefficients). This thesis deals with a comparative study of several lossy image compression techniques. First, it reviews the well-known techniques of each stage. Starting with the first stage, the techniques of orthogonal block transformation and subband transform are described in detail. Then the quantization stage is described, followed by a brief review of the techniques for the third stage, lossless coding. These different image compression techniques are then simulated and their rate-distortion performances are compared with each other. The results show that the two-band multiplierless PR-QMF bank-based subband image codec outperforms the other filter banks considered in this thesis. It is also shown that uniform quantizers with a dead-zone perform best. Also, the multiplierless PR-QMF bank outperforms the DCT based on uniform quantization, but underperforms the DCT based on uniform quantization with a dead-zone.
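    The dead-zone uniform quantizer favoured by this comparison widens the cell around zero so that small transform coefficients are mapped to zero, which helps the lossless coding stage. A minimal sketch of that quantizer family (our illustration; parameter names are ours):

```python
# Illustrative dead-zone uniform scalar quantizer: uniform step size
# everywhere except an enlarged zero cell of half-width `deadzone`.

def deadzone_quantize(x, step, deadzone):
    """Map coefficient x to an integer index; |x| <= deadzone -> 0."""
    if abs(x) <= deadzone:
        return 0
    sign = 1 if x > 0 else -1
    return sign * int((abs(x) - deadzone) // step + 1)

def deadzone_reconstruct(q, step, deadzone):
    """Midpoint reconstruction of index q."""
    if q == 0:
        return 0.0
    sign = 1 if q > 0 else -1
    return sign * (deadzone + (abs(q) - 0.5) * step)

# Small coefficients collapse to zero; larger ones quantize uniformly:
assert deadzone_quantize(0.3, step=1.0, deadzone=0.5) == 0
assert deadzone_quantize(1.2, step=1.0, deadzone=0.5) == 1
assert deadzone_quantize(-1.2, step=1.0, deadzone=0.5) == -1
```

    Zeroing the many near-zero transform coefficients produces long zero runs, which is precisely what makes this quantizer pair well with the lossless coding stage described in the abstract.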