
    Study and simulation of low rate video coding schemes

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color-mapped images, a robust coding scheme for packet video, recursively indexed differential pulse code modulation, an image compression technique for use on token ring networks, and joint source/channel coder design.
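
    Among the listed topics, differential pulse code modulation (DPCM) predicts each sample from the previously reconstructed one and quantizes only the prediction error. The minimal first-order Python sketch below illustrates plain DPCM; the step size and predictor are illustrative assumptions, and the recursively indexed variant named in the report is not reproduced here.

        import numpy as np

        def dpcm_encode(x, step=4.0):
            """First-order DPCM: quantize the difference from the previous reconstruction."""
            indices = np.empty(len(x), dtype=int)
            recon_prev = 0.0
            for i, sample in enumerate(x):
                err = sample - recon_prev              # prediction error
                q = int(np.round(err / step))          # uniform quantizer index
                indices[i] = q
                recon_prev += q * step                 # track the decoder's reconstruction
            return indices

        def dpcm_decode(indices, step=4.0):
            recon = np.empty(len(indices))
            prev = 0.0
            for i, q in enumerate(indices):
                prev += q * step
                recon[i] = prev
            return recon

        signal = np.cumsum(np.random.randn(100))       # slowly varying test signal
        rec = dpcm_decode(dpcm_encode(signal))
        print("max reconstruction error:", np.max(np.abs(signal - rec)))   # bounded by step/2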

    Cumulant Generating Function of Codeword Lengths in Variable-Length Lossy Compression Allowing Positive Excess Distortion Probability

    This paper considers the problem of variable-length lossy source coding. The performance criteria are the excess distortion probability and the cumulant generating function of codeword lengths. We derive a non-asymptotic fundamental limit of the cumulant generating function of codeword lengths allowing positive excess distortion probability. It is shown that the achievability and converse bounds are characterized by a Rényi entropy-based quantity. In the proof of the achievability result, an explicit code construction is provided. Further, we investigate an asymptotic single-letter characterization of the fundamental limit for a stationary memoryless source. (arXiv admin note: text overlap with arXiv:1701.0180.)
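
    For orientation, the cumulant generating function of codeword lengths in this setting takes, up to the choice of base and normalization, the form below (a paraphrase, not quoted from the paper):

        \Lambda_n(t) \;=\; \frac{1}{t}\,\log \mathbb{E}\!\left[ 2^{\,t\,\ell\left(f_n(X^n)\right)} \right], \qquad t > 0,

    minimized over codes (f_n, g_n) satisfying the excess distortion constraint \Pr\left[ d_n\!\left(X^n, g_n(f_n(X^n))\right) > \Delta \right] \le \varepsilon. In the lossless case, Campbell's classical result shows that the optimal value of such an exponential-cost criterion is governed by the Rényi entropy of order 1/(1+t), which is why a Rényi-type quantity appears in the bounds here.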

    Anytime information theory

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 171-175). We study the reliable communication of delay-sensitive bit streams through noisy channels. To bring the issues into sharp focus, we concentrate on the specific problem of communicating the values of an unstable real-valued discrete-time Markov random process through a finite-capacity noisy channel so as to have finite average squared error from end to end. On the source side, we give a coding theorem for such unstable processes showing that we can achieve the rate-distortion bound even in the infinite-horizon case if we are willing to tolerate bounded delays in encoding and decoding. On the channel side, we define a new parametric notion of capacity called anytime capacity that corresponds to a sense of reliable transmission that is stronger than the traditional Shannon capacity sense but less demanding than the sense underlying zero-error capacity. We show that anytime capacity exists for memoryless channels without feedback and is connected to standard random-coding error exponents. The main result of the thesis is a new source/channel separation theorem that encompasses unstable processes and establishes that the stronger notion of anytime capacity is required to deal with delay-sensitive bit streams. This theorem is then applied in the control-systems context to show that anytime capacity is also required to evaluate channels if we intend to use them as part of a feedback link from sensing to actuation. Finally, the theorem is used to shed light on the concept of "quality of service requirements" by examining a toy mathematical example for which we prove the absolute necessity of differentiated service without appealing to human preferences. By Anant Sahai.
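
    As a rough guide to the new notion: anytime reliability \alpha asks that the probability of still being wrong about a bit sent d time steps ago decay exponentially in d, uniformly over time. A common formalization (paraphrased here, not quoted from the thesis) is

        \Pr\left[ \hat{B}_i(t) \neq B_i \right] \;\le\; K\, 2^{-\alpha (t - t_i)} \qquad \text{for all } t \ge t_i,

    and the \alpha-anytime capacity C_{\mathrm{any}}(\alpha) is the supremum of rates at which such ever-improving decoding is possible. For the unstable plant x_{t+1} = a x_t + u_t + w_t with |a| > 1, stabilization over the channel then requires roughly C_{\mathrm{any}}(\alpha) \ge \log_2 |a|, with \alpha large enough for the moment of interest (on the order of 2\log_2|a| for mean squared error).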

    Successive structuring of source coding algorithms for data fusion, buffering, and distribution in networks

    Supervised by Gregory W. Wornell. Also issued as Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (p. 159-165). We also explore the interactions between source coding and queue management in problems of buffering and distributing distortion-tolerant data. We formulate a general queuing model relevant to numerous communication scenarios and develop a bound on the performance of any algorithm. We design an adaptive buffer-control algorithm for use in dynamic environments and under finite memory limitations; its performance closely approximates the bound. Our design uses multiresolution source codes that exploit the data's distortion-tolerance in minimizing end-to-end distortion. Compared to traditional approaches, the performance gains of the adaptive algorithm are significant: distortion, delay, and overall system robustness all improve. By Stark Christiaan Draper.
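
    A toy Python illustration of the buffering idea: items arrive as layered (multiresolution) codes, and when memory runs out the buffer discards the finest refinement layers first, so every queued item keeps a coarse description rather than some items being lost outright. The class name, layer sizes, and drop rule below are hypothetical, not the thesis's algorithm.

        from collections import deque

        class MultiresolutionBuffer:
            """Finite-memory buffer for layered (multiresolution) source codes.

            Each item is a list of layer sizes in bits, coarsest first; dropping a
            layer raises that item's distortion but never removes the item outright.
            """
            def __init__(self, capacity_bits):
                self.capacity = capacity_bits
                self.items = deque()

            def used(self):
                return sum(size for layers in self.items for size in layers)

            def push(self, layers):
                self.items.append(list(layers))
                while self.used() > self.capacity:
                    # Strip the finest remaining layer of the most-refined item ...
                    victim = max((it for it in self.items if len(it) > 1),
                                 key=len, default=None)
                    if victim is None:
                        self.items.popleft()   # ... or, if all are coarse, drop the oldest item
                    else:
                        victim.pop()

            def pop(self):
                return self.items.popleft() if self.items else None

        buf = MultiresolutionBuffer(capacity_bits=9000)
        for _ in range(10):
            buf.push([800, 400, 200])              # coarse layer + two refinements, in bits
        print([len(item) for item in buf.items])   # all ten items survive; most keep only the coarse layer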

    A FILTER-FORCING TURBULENCE MODEL FOR LARGE EDDY SIMULATION INCORPORATING THE COMPRESSIBLE POOR MAN'S NAVIER–STOKES EQUATIONS

    A new approach to large-eddy simulation (LES) based on the use of explicit spatial filtering combined with backscatter forcing is presented. The forcing uses a discrete dynamical system (DDS) called the compressible "poor man's" Navier–Stokes (CPMNS) equations. This DDS is derived from the governing equations and is shown to exhibit good spectral and dynamical properties for use in a turbulence model. An overview and critique of existing turbulence theory and turbulence models is given. A comprehensive theoretical case is presented arguing that traditional LES equations contain unresolved scales in terms generally thought to be resolved, and that this can only be remedied with explicit filtering. The CPMNS equations are then incorporated into a simple forcing term in the OVERFLOW compressible flow code, and tests are run on homogeneous, isotropic, decaying turbulence, a Mach 3 compression ramp, and a Mach 0.8 open cavity. The numerical results validate the general filter-forcing approach, although they also reveal inadequacies in OVERFLOW and suggest that the current approach is likely too simple to be universally applicable. Two new proposals for constructing better forcing models are presented at the end of the work.
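
    The "poor man's Navier–Stokes" construction replaces unresolved subgrid-scale fluctuations with a cheap chaotic discrete dynamical system, essentially coupled logistic-type maps whose bifurcation parameters are tied to local flow quantities. The Python sketch below iterates a heavily simplified two-mode map of this type; the coefficients are illustrative only, and the compressible form used in the dissertation carries additional variables and couplings.

        import numpy as np

        def pmns_step(a, b, beta1=3.7, beta2=3.8, gamma1=0.1, gamma2=0.1):
            """One iteration of a generic two-mode coupled logistic map.

            In a PMNS-style model the bifurcation parameters would be set from
            local flow quantities (e.g. a cell Reynolds number), not fixed here.
            """
            a_next = beta1 * a * (1.0 - a) - gamma1 * a * b
            b_next = beta2 * b * (1.0 - b) - gamma2 * a * b
            return a_next, b_next

        a, b = 0.3, 0.4
        history = []
        for _ in range(2000):
            a, b = pmns_step(a, b)
            history.append((a, b))

        fluctuations = np.array(history[500:])      # discard the transient
        print("fluctuation std (a, b):", fluctuations.std(axis=0))

    In a filter-forcing LES, the chaotic output of such a map would be scaled and added back as the backscatter forcing term after the explicit spatial filter is applied.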

    Comparing hard and overlapping clusterings

    Similarity measures for comparing clusterings are an important component of, e.g., evaluating clustering algorithms, consensus clustering, and clustering stability assessment. These measures have been studied for over 40 years in the domain of exclusive hard clusterings (exhaustive and mutually exclusive object sets). In recent years, the literature has proposed measures to handle more general clusterings (e.g., fuzzy/probabilistic clusterings). This paper provides an overview of these new measures and discusses their drawbacks. We ultimately develop a corrected-for-chance measure (13AGRI) capable of comparing exclusive hard, fuzzy/probabilistic, non-exclusive hard, and possibilistic clusterings. We prove that 13AGRI and the adjusted Rand index (ARI, by Hubert and Arabie) are equivalent in the exclusive hard domain. The reported experiments show that only 13AGRI could provide both a fine-grained evaluation across clusterings with different numbers of clusters and a constant evaluation between random clusterings, exhibiting all four desirable properties considered here. We identified a high correlation between 13AGRI applied to fuzzy clusterings and ARI applied to hard exclusive clusterings over 14 real data sets from the UCI repository, which corroborates the validity of 13AGRI's fuzzy clustering evaluation. 13AGRI also showed good results as a clustering stability statistic for solutions produced by the expectation-maximization algorithm for Gaussian mixture models.
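
    Since 13AGRI reduces to the adjusted Rand index on exclusive hard clusterings, the ARI is the natural reference point: it is the Rand index corrected for chance, (RI - E[RI]) / (max RI - E[RI]), so random labelings score near zero and identical partitions score 1. A short check with scikit-learn (the labelings below are made up for illustration):

        from sklearn.metrics import adjusted_rand_score

        # Two hard, exclusive clusterings of the same six objects.
        labels_a = [0, 0, 1, 1, 2, 2]
        labels_b = [0, 0, 1, 2, 2, 2]

        print(adjusted_rand_score(labels_a, labels_a))   # 1.0: identical up to relabeling
        print(adjusted_rand_score(labels_a, labels_b))   # partial agreement, well above chance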

    Rapid Digital Architecture Design of Computationally Complex Algorithms

    Traditional digital design techniques hardly keep up with the rising abundance of programmable circuitry found on recent Field-Programmable Gate Arrays. Therefore, the novel Rapid Data Type-Agnostic Digital Design Methodology (RDAM) elevates the design perspective of digital design engineers away from the register-transfer level to the algorithmic level. It is founded on the capabilities of High-Level Synthesis tools. By consistently working with data type-agnostic source code, the RDAM brings significant simplifications to the fixed-point conversion of algorithms and to the design of complex-valued architectures. Signal processing applications from the field of Compressed Sensing illustrate the efficacy of the RDAM in the context of multi-user wireless communications. For instance, a complex-valued digital architecture of Orthogonal Matching Pursuit with rank-1 updating has been successfully implemented and tested.
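
    As context for the Compressed Sensing case study: Orthogonal Matching Pursuit greedily selects the dictionary column most correlated with the current residual and re-solves a least-squares problem on the enlarged support; the rank-1 updating mentioned above maintains that solve incrementally in hardware. A plain NumPy sketch of the textbook algorithm (not the dissertation's fixed-point, complex-valued architecture):

        import numpy as np

        def omp(A, y, sparsity):
            """Orthogonal Matching Pursuit: recover a `sparsity`-sparse x with y ≈ A @ x."""
            residual = y.copy()
            support = []
            x = np.zeros(A.shape[1], dtype=A.dtype)
            for _ in range(sparsity):
                # Greedy step: pick the column most correlated with the residual.
                k = int(np.argmax(np.abs(A.conj().T @ residual)))
                support.append(k)
                # Least-squares refit on the current support (this is the solve a
                # rank-1 update scheme would maintain incrementally).
                coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coeffs
            x[support] = coeffs
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((64, 256)) / np.sqrt(64)
        x_true = np.zeros(256)
        x_true[[10, 50, 200]] = [1.0, -2.0, 0.5]
        y = A @ x_true
        print("max recovery error:", np.max(np.abs(omp(A, y, 3) - x_true)))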

    High Performance Data Acquisition and Analysis Routines for the Nab Experiment

    Probes of the Standard Model of particle physics are pushing further and further into the so-called "precision frontier". In order to reach the precision goals of these experiments, a combination of elegant experimental design and robust data acquisition and analysis is required. Two experiments that embody this philosophy are the Nab and Calcium-45 experiments. These experiments probe our understanding of the weak interaction by examining the beta decay of the free neutron and of Calcium-45, respectively. They both aim to measure correlation parameters in the neutron beta decay alphabet, a and b. The parameter a, the electron-neutrino correlation coefficient, is sensitive to λ, the ratio of the axial-vector and vector coupling strengths in the decay of the free neutron. This parameter λ, in tandem with a precision measurement of the neutron lifetime τ, provides a measurement of the matrix element Vud from the CKM quark mixing matrix. The CKM matrix, as a rotation matrix, must be unitary. Probes of Vud and Vus in recent years have revealed tension in this unitarity at the 2.2σ level. The measurement of a via the decay of free cold neutrons serves as an additional method of extracting Vud that is sensitive to a different set of systematic effects, and as such is an excellent probe into the source of the deviation from unitarity. The parameter b, the Fierz interference term, appears as a distortion in the measured electron energy spectra from beta decay. This parameter, if non-zero, would indicate the existence of scalar and/or tensor couplings in the weak interaction, which according to the Standard Model is purely vector minus axial-vector; measuring b is therefore a search for physics beyond the Standard Model (BSM). The Nab and Calcium-45 experiments probe these parameters with a combination of elegant experimental design and brute-force collection and analysis of large amounts of digitized detector data. These datasets, particularly in the case of the Nab experiment, are anticipated to span multiple petabytes and will require high-performance online analysis and precision offline analysis routines in order to reach the experimental goals. Of particular note are the requirements for better than 3 keV energy resolution and an understanding of the uncertainty in the mean timing bias for the detected particles to within 300 ps. Presented in this dissertation are an overview of the experiments and their design, a description of the data acquisition systems and analysis routines that have been developed to support the experiments, and a discussion of the data analysis performed for the Calcium-45 experiment.
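
    For reference, the relations connecting the quantities named above are standard: the electron-neutrino correlation coefficient determines λ through

        a \;=\; \frac{1 - \lambda^2}{1 + 3\lambda^2}, \qquad \lambda = \frac{g_A}{g_V},

    and λ together with the neutron lifetime τ_n fixes the CKM element via

        |V_{ud}|^2 \;=\; \frac{K}{\tau_n \left( 1 + 3\lambda^2 \right)},

    where K is a constant of roughly 4.9 × 10^3 s that packages phase-space and electroweak radiative-correction factors (its precise value should be taken from the current literature, not from this sketch).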