
    Implementation issues in source coding

    An edge-preserving image coding scheme that can operate in both lossy and lossless modes was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars Observer spectral data and can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator; the coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.
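
    The abstract does not spell out the algorithm, so the following is only a minimal sketch of the DPCM family it modifies; the previous-sample predictor, the quantization step, and the function names are illustrative assumptions, not the reported scheme. With step = 1 on integer samples the residual coding is lossless, and a larger step trades fidelity for rate, mirroring the lossy/lossless switch described above.

```python
import numpy as np

def dpcm_encode(samples, step=1):
    """Toy DPCM encoder: predict each sample from the previous
    reconstruction and emit quantized prediction residuals.
    step=1 on integer data is lossless; step>1 is lossy."""
    residuals = np.empty(len(samples), dtype=np.int64)
    prediction = 0.0  # simple previous-sample predictor state
    for i, x in enumerate(np.asarray(samples, dtype=float)):
        q = int(np.round((x - prediction) / step))  # quantized residual
        residuals[i] = q
        prediction += q * step                      # track decoder state
    return residuals

def dpcm_decode(residuals, step=1):
    """Invert the encoder by accumulating de-quantized residuals."""
    out = np.empty(len(residuals), dtype=float)
    prediction = 0.0
    for i, q in enumerate(residuals):
        prediction += int(q) * step
        out[i] = prediction
    return out

if __name__ == "__main__":
    data = np.array([100, 102, 105, 105, 90, 91])
    codes = dpcm_encode(data, step=1)
    assert np.array_equal(dpcm_decode(codes, step=1), data)  # lossless mode
```

    An edge-preserving variant would, for instance, switch predictors near detected edges so sharp transitions are not smeared by the prediction loop; the exact mechanism used in the reported scheme is not specified in the abstract.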

    The Thermodynamics of Network Coding, and an Algorithmic Refinement of the Principle of Maximum Entropy

    The principle of maximum entropy (Maxent) is often used to obtain prior probability distributions as a method of obtaining a Gibbs measure under some restriction, giving the probability that a system will be in a certain state compared to the rest of the elements in the distribution. Because classical entropy-based Maxent collapses cases, confounding all distinct degrees of randomness and pseudo-randomness, here we take into consideration the generative mechanism of the systems in the ensemble in order to separate objects that may comply with the principle under some restriction and whose entropy is maximal, but which may be generated recursively, from those that are actually algorithmically random, offering a refinement of classical Maxent. We take advantage of a causal algorithmic calculus to derive a thermodynamic-like result based on how difficult it is to reprogram a computer code. Using the distinction between computable and algorithmic randomness, we quantify the cost in information loss associated with reprogramming. To illustrate this, we apply the algorithmic refinement of Maxent to graphs and introduce a Maximal Algorithmic Randomness Preferential Attachment (MARPA) algorithm, a generalisation of previous approaches. We discuss practical implications of evaluating network randomness. Our analysis provides insight into how the reprogrammability asymmetry appears to originate from a non-monotonic relationship to algorithmic probability, and it motivates further study of the origin and consequences of these asymmetries, of reprogrammability, and of computation.
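
    As background for the refinement described above, the classical (Shannon-entropy) Maxent solution under a mean-value constraint is the Gibbs measure p_i ∝ exp(-λ E_i). The sketch below, with made-up state "energies" and a made-up target mean, solves for λ numerically by bisection; it illustrates only the classical principle that the paper refines, not the algorithmic (MARPA) machinery itself.

```python
import numpy as np

def gibbs_maxent(energies, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Classical Maxent: find the distribution p_i proportional to exp(-lam*E_i)
    whose expected 'energy' equals target_mean (the maximum Shannon-entropy
    distribution among all distributions meeting that constraint)."""
    energies = np.asarray(energies, dtype=float)

    def mean_under(lam):
        w = np.exp(-lam * energies)
        p = w / w.sum()
        return p @ energies

    # mean_under is decreasing in lam, so bisect until the constraint is met.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_under(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(-lam * energies)
    return w / w.sum(), lam

if __name__ == "__main__":
    # Hypothetical 4-state system constrained to an average energy of 1.5.
    p, lam = gibbs_maxent(energies=[0.0, 1.0, 2.0, 3.0], target_mean=1.5)
    entropy = -(p * np.log(p)).sum()
    print(p, lam, entropy)  # this symmetric target gives the uniform distribution (lam ~ 0)
```

    The paper's point is that two ensembles can both attain this entropy maximum while one of them is produced by a short recursive program; telling them apart requires the algorithmic (program-size) refinement, not just the Shannon calculation above.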

    Layered Wyner-Ziv video coding for noisy channels

    The growing popularity of video sensor networks and video cellular phones has generated the need for low-complexity and power-efficient multimedia systems that can handle multiple video input and output streams. While standard video coding techniques fail to satisfy these requirements, distributed source coding is a promising technique for "uplink" applications. Wyner-Ziv coding refers to lossy source coding with side information at the decoder. Based on a recent theoretical result on successive Wyner-Ziv coding, we propose in this thesis a practical layered Wyner-Ziv video codec using the DCT, nested scalar quantization (NSQ), and irregular LDPC code based Slepian-Wolf coding (lossless source coding with side information) for the noiseless channel. The DCT is applied as an approximation to the conditional KLT, which makes the components of the transformed block conditionally independent given the side information. NSQ is a binning scheme that facilitates layered bit-plane coding of the bin indices while reducing the bit rate. LDPC code based Slepian-Wolf coding exploits the correlation between the quantized version of the source and the side information to achieve further compression. In contrast to previous work, an attractive feature of the proposed system is that video is encoded only once but can be decoded at many lower bit rates without quality loss. For Wyner-Ziv coding over discrete noisy channels, we present a Wyner-Ziv video codec using IRA codes for Slepian-Wolf coding, based on the idea of two equivalent channels. For video streaming applications where the channel is packet based, we apply an unequal error protection scheme to the embedded Wyner-Ziv coded video stream to find the optimal source-channel coding trade-off for a target transmission rate over a packet erasure channel.
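
    The thesis combines the DCT, NSQ, and LDPC-based Slepian-Wolf coding; the toy sketch below illustrates only the nested scalar quantization idea in isolation, with invented parameters (quantization step, number of cosets). The encoder sends just the coset (bin) index of the finely quantized source, and the decoder resolves the remaining ambiguity by picking, within that coset, the reconstruction level closest to its side information.

```python
import numpy as np

def nsq_encode(x, step=1.0, num_cosets=4):
    """Quantize x with a fine step and transmit only the coset index
    (bin index modulo num_cosets), i.e. log2(num_cosets) bits."""
    q = int(np.round(x / step))
    return q % num_cosets

def nsq_decode(coset, y, step=1.0, num_cosets=4, search=16):
    """Among fine-quantizer levels that share the transmitted coset,
    return the one closest to the side information y."""
    base = int(np.round(y / step))
    candidates = [
        (base + k) * step
        for k in range(-search, search + 1)
        if (base + k) % num_cosets == coset
    ]
    return min(candidates, key=lambda c: abs(c - y))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 4.0)        # source sample
    y = x + rng.normal(0.0, 0.3)    # correlated side information at the decoder
    idx = nsq_encode(x, step=1.0, num_cosets=4)   # 2 bits instead of a full fine index
    x_hat = nsq_decode(idx, y, step=1.0, num_cosets=4)
    print(x, x_hat)                 # close whenever the correlation noise is small
```

    In the actual codec, the bin indices are additionally bit-plane coded and compressed with LDPC-based Slepian-Wolf coding, so this direct transmission of the coset index overstates the rate.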

    Analysis for Scalable Coding of Quality-Adjustable Sensor Data

    Thesis (Ph.D.) -- Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2014. 신현식. Machine-generated data such as sensor data now comprise a major portion of available information. This thesis addresses two important problems: storing massive sensor data collections and sensing efficiently. We first propose quality-adjustable sensor data archiving, which compresses an entire collection of sensor data efficiently without compromising key features. Considering the data-aging aspect of sensor data, our archiving scheme can control data fidelity to exploit users' less frequent access to older data. This flexibility in quality adjustability leads to more efficient use of storage space. In order to store data from various sensor types in a cost-effective way, we study the optimal storage configuration strategy using analytical models that capture the characteristics of our scheme. This strategy stores sensor data blocks with the optimal configurations that maximize the data fidelity of various sensor data under a given storage space. Next, we consider efficient sensing and propose a quality-adjustable sensing scheme. We adopt compressive sensing (CS), which is well suited to resource-limited sensors because of its low computational complexity. We enhance the quality adjustability intrinsic to CS with quantization and, especially, temporal downsampling. Our sensing architecture provides more rate-distortion operating points than previous schemes, which enables sensors to adapt data quality more efficiently with respect to overall performance. Moreover, the proposed temporal downsampling improves coding efficiency, a known drawback of CS, and, together with a sparse random measurement matrix, further reduces the computational complexity of the sensing devices. As a result, our quality-adjustable sensing can deliver gains to a wide variety of resource-constrained sensing techniques.
    Contents: Chapter 1, Introduction (motivation; spatio-temporal correlation in sensor data; quality adjustability of sensor data; research contributions; thesis organization); Chapter 2, Archiving of Sensor Data (encoding the sensor data collection; compression ratio comparison; quality-adjustable archiving model; QP-rate-distortion model; optimal rate allocation); Chapter 3, Scalable Management of Storage (scalable quality management; enhancing quality adjustability; optimal rate allocation); Chapter 4, Quality-Adjustable Sensing (compressive sensing; quality adjustability in the sensing environment; low-complexity sensing); Chapter 5, Conclusions (summary; future research directions).
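
    The thesis develops full rate-distortion models and an optimal allocation strategy; the sketch below is only a minimal encoder-side illustration, with made-up dimensions and parameter names, of the three quality knobs it discusses: temporal downsampling, a sparse random measurement matrix, and uniform quantization of the measurements. Recovery (e.g. by an l1 solver) is omitted.

```python
import numpy as np

def sparse_measurement_matrix(m, n, density=0.1, rng=None):
    """Sparse random {-1, 0, +1} matrix: most entries are zero, so each
    measurement touches only a fraction of the samples (cheap on a sensor)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    mask = rng.random((m, n)) < density
    signs = rng.choice([-1.0, 1.0], size=(m, n))
    return mask * signs

def sense(x, num_measurements, downsample=1, quant_step=0.5, rng=None):
    """Quality-adjustable CS encoder: temporally downsample the frame,
    project it with a sparse random matrix, and uniformly quantize the
    measurements. Larger downsample/quant_step means lower rate and quality."""
    x_ds = np.asarray(x, dtype=float)[::downsample]
    phi = sparse_measurement_matrix(num_measurements, x_ds.size, rng=rng)
    y = phi @ x_ds
    return np.round(y / quant_step).astype(int), phi  # quantized indices + matrix

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 256)
    frame = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)  # frequency-sparse signal
    coarse, _ = sense(frame, num_measurements=32, downsample=4, quant_step=0.5)
    fine, _ = sense(frame, num_measurements=64, downsample=1, quant_step=0.1)
    print(coarse.size, fine.size)  # 32 vs 64 quantized measurements
```

    Decreasing the downsampling factor and the quantization step (and increasing the number of measurements) moves the encoder to higher-rate, higher-fidelity operating points, which is the rate-distortion flexibility the thesis models and optimizes.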

    The Three-Terminal Interactive Lossy Source Coding Problem

    The three-node multiterminal lossy source coding problem is investigated. We derive an inner bound to the general rate-distortion region of this problem which is a natural extension of the seminal work by Kaspi (1985) on the interactive two-terminal source coding problem. It is shown that this (rather involved) inner bound contains the rate-distortion regions of several relevant source coding settings. In this way, besides a non-trivial extension of the interactive two-terminal problem, our results can be seen as a generalization, and hence a unification, of several previous works in the field. Specializing to particular cases, we obtain novel rate-distortion regions for several lossy source coding problems. We finish by describing some open problems and challenges; the general three-node multiterminal lossy source coding problem appears to present formidable mathematical complexity.
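
    For orientation (this is standard background, not a result from the paper), the simplest one-way special case of such side-information problems is the Wyner-Ziv rate-distortion function, where a single terminal describes X to a decoder that already observes correlated side information Y:

```latex
% Wyner-Ziv rate-distortion function (one-way special case, stated as background)
\[
  R^{\mathrm{WZ}}_{X|Y}(D) \;=\;
  \min_{\substack{p(u\mid x),\ \hat{x}(u,y)\,:\\ \mathbb{E}\,d\!\left(X,\hat{x}(U,Y)\right)\le D}}
  I(X;U\mid Y),
\]
where $U$ is an auxiliary random variable satisfying the Markov chain $U - X - Y$.
```

    The interactive three-terminal setting studied in the paper generalizes this one-way case by allowing multiple rounds of exchange among the terminals, which is what makes its achievable region considerably more involved.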