
    Common Information and Decentralized Inference with Dependent Observations

    Wyner's common information was originally defined for a pair of dependent discrete random variables. This thesis generalizes the definition in two directions: the number of dependent variables can be arbitrary, and so can their alphabets. New properties are established for the generalized Wyner's common information of multiple dependent variables. More importantly, a lossy source coding interpretation of Wyner's common information is developed using the Gray-Wyner network. It is established that the common information equals the smallest common message rate for which the total rate is arbitrarily close to the rate-distortion function with joint decoding, provided the distortions lie within a certain distortion region.

    The application of Wyner's common information to inference problems is also explored in the thesis. A central question is under what conditions Wyner's common information captures the entire information about the inference object. Under a simple Bayesian model, it is established that for infinitely exchangeable random variables the common information is asymptotically equal to the information of the inference object. For finitely exchangeable random variables, connections between common information and inference performance metrics are also established.

    The problem of decentralized inference is generally intractable with conditionally dependent observations. A promising approach is to utilize a hierarchical conditional independence model. Using this model, we identify a more general condition under which the distributed detection problem becomes tractable, thereby broadening the classes of distributed detection problems with dependent observations that can be readily solved. We then develop the sufficiency principle for data reduction in decentralized inference. For parallel networks, the hierarchical conditional independence model is used to obtain conditions under which local sufficiency implies global sufficiency. For tandem networks, the notion of conditional sufficiency is introduced and the related theory and tools are developed. Connections between the sufficiency principle and distributed source coding problems are also explored. Furthermore, we examine the impact of quantization on decentralized data reduction and identify the conditions under which sufficiency-based data reduction with quantization constraints is optimal. These include the case where the data at decentralized nodes are conditionally independent, as well as a class of problems with conditionally dependent observations that admit a conditional independence structure through the hierarchical conditional independence model.
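
    For reference, a standard formulation of Wyner's common information for a pair of random variables, together with its natural extension to an arbitrary number of variables (the direction of generalization pursued in the thesis), can be written as follows; this is background notation rather than a result of the thesis itself:

        \[
        C(X_1;X_2) \;=\; \inf_{W:\, X_1 - W - X_2} I(X_1,X_2;W),
        \qquad
        C(X_1,\dots,X_n) \;=\; \inf_{W:\, X_1,\dots,X_n \text{ cond. indep. given } W} I(X_1,\dots,X_n;W).
        \]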

    Oblivious data hiding : a practical approach

    This dissertation presents an in-depth study of oblivious data hiding, with an emphasis on quantization-based schemes. Three main issues are specifically addressed: 1. theoretical and practical aspects of embedder-detector design; 2. performance evaluation and analysis of performance vs. complexity tradeoffs; 3. some application-specific implementations. A communications framework based on channel-adaptive encoding and channel-independent decoding is proposed and interpreted in terms of the oblivious data hiding problem. The duality between the suggested encoding-decoding scheme and practical embedding-detection schemes is examined. With this perspective, a formal treatment of the processing employed in quantization-based hiding methods is presented. In accordance with these results, the key aspects of the embedder-detector design problem for practical methods are laid out, and various embedding-detection schemes are compared in terms of probability of error, normalized correlation, and hiding rate, assuming AWGN attack scenarios and using the mean squared error distortion measure. The performance-complexity tradeoffs available for large and small embedding signal sizes (availability of high bandwidth and limitation of low bandwidth) are examined and some novel insights are offered. A new codeword generation scheme is proposed to enhance the performance of low-bandwidth applications. Embedding-detection schemes are devised for the watermarking application of data hiding, where robustness against attacks is the main concern rather than hiding rate or payload. In particular, cropping-resampling and lossy compression types of noninvertible attacks are considered in this dissertation work.
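
    As a concrete illustration of a quantization-based embedder-detector under an AWGN attack, the following is a minimal sketch of generic quantization index modulation (QIM); the function names, the step size delta, and the signal sizes are illustrative assumptions, not the specific schemes devised in the dissertation:

        import numpy as np

        def qim_embed(host, bits, delta=1.0):
            """Embed one bit per host sample using two shifted uniform quantizers."""
            host = np.asarray(host, dtype=float)
            bits = np.asarray(bits)
            dither = np.where(bits == 0, 0.0, delta / 2.0)  # offset selects the coset
            return np.round((host - dither) / delta) * delta + dither

        def qim_detect(received, delta=1.0):
            """Decode each sample by choosing the quantizer (coset) closest to it."""
            received = np.asarray(received, dtype=float)
            d0 = np.abs(received - np.round(received / delta) * delta)
            d1 = np.abs(received - (np.round((received - delta / 2) / delta) * delta + delta / 2))
            return (d1 < d0).astype(int)

        # Example: embed, apply an AWGN attack, detect
        rng = np.random.default_rng(0)
        host = rng.normal(0, 10, 1000)
        bits = rng.integers(0, 2, 1000)
        watermarked = qim_embed(host, bits, delta=4.0)
        attacked = watermarked + rng.normal(0, 0.5, 1000)   # AWGN attack
        print("bit error rate:", np.mean(qim_detect(attacked, delta=4.0) != bits))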

    Digital image compression


    Characterization of Information Channels for Asymptotic Mean Stationarity and Stochastic Stability of Non-stationary/Unstable Linear Systems

    Stabilization of non-stationary linear systems over noisy communication channels is considered. Stochastically stable sources, and unstable but noise-free or bounded-noise systems, have been extensively studied in the information theory and control theory literature since the 1970s, with a renewed interest in the past decade. There have also been studies on non-causal and causal coding of unstable/non-stationary linear Gaussian sources. In this paper, tight necessary and sufficient conditions for stochastic stabilizability of unstable (non-stationary), possibly multi-dimensional, linear systems driven by Gaussian noise over discrete channels (possibly with memory and feedback) are presented. Stochastic stability notions include recurrence, asymptotic mean stationarity and sample path ergodicity, and the existence of finite second moments. Our constructive proof uses random-time state-dependent stochastic drift criteria for stabilization of Markov chains. For asymptotic mean stationarity (and thus sample path ergodicity), it is sufficient that the capacity of the channel is (strictly) greater than the sum of the logarithms of the unstable pole magnitudes, for memoryless channels and a class of channels with memory. This condition is also necessary under a mild technical condition. Sufficient conditions for the existence of finite average second moments for such systems driven by unbounded noise are provided. Comment: To appear in IEEE Transactions on Information Theory.
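
    Writing C for the channel capacity and \lambda_i for the eigenvalues (poles) of the system matrix, the sufficiency condition quoted above can be stated compactly as below; base-2 logarithms are assumed here, matching capacity measured in bits per channel use:

        \[
        C \;>\; \sum_{i:\,|\lambda_i| \ge 1} \log_2 |\lambda_i|.
        \]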

    Sparse Signal Processing and Statistical Inference for Internet of Things

    Data originating from many devices within the Internet of Things (IoT) framework can be modeled as sparse signals. Efficient compression techniques for such data are essential to reduce memory storage, bandwidth, and transmission power. In this thesis, I develop theory and propose practical schemes for IoT applications that exploit signal sparsity for efficient data acquisition and compression under the frameworks of compressed sensing (CS) and transform coding.

    In the context of CS, the restricted isometry constant of finite Gaussian measurement matrices is investigated, based on the exact distributions of the extreme eigenvalues of Wishart matrices. The analysis determines how aggressively the signal can be sub-sampled and still be recovered from a small number of linear measurements. Signal reconstruction is guaranteed, with a predefined probability, via various recovery algorithms. Moreover, measurement matrix design for simultaneously acquiring multiple signals is considered. This problem is important for IoT networks, where a large number of nodes is involved. In this scenario, the presented analytical methods provide limits on the compression of jointly sparse sources by analyzing the weak restricted isometry constant of Gaussian measurement matrices.

    Regarding transform coding, two efficient source encoders for noisy sparse sources are proposed, based on channel coding theory. Their analytical performance is derived in terms of the operational rate-distortion and energy-distortion. Furthermore, a case study on the compression of real signals from a wireless sensor network using the proposed encoders is considered. These techniques can reduce the power consumption and increase the lifetime of IoT networks. Finally, a frame synchronization mechanism is designed to achieve reliable radio links for IoT devices, where optimal and suboptimal metrics for noncoherent frame synchronization are derived. The proposed tests outperform the commonly used correlation detector, leading to accurate data extraction and reduced power consumption.
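
    As a minimal, self-contained illustration of the acquisition model discussed above (a Gaussian measurement matrix compressing a sparse signal, followed by greedy recovery), the sketch below uses a toy orthogonal matching pursuit routine; the dimensions and the choice of recovery algorithm are illustrative assumptions, not the configurations analyzed in the thesis:

        import numpy as np

        def omp(A, y, k):
            """Minimal orthogonal matching pursuit: recover a k-sparse x from y = A x."""
            m, n = A.shape
            residual, support = y.copy(), []
            x_hat = np.zeros(n)
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))   # column most correlated with residual
                if j not in support:
                    support.append(j)
                coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coeffs
            x_hat[support] = coeffs
            return x_hat

        rng = np.random.default_rng(1)
        n, m, k = 256, 64, 5                        # ambient dimension, measurements, sparsity
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
        A = rng.normal(0, 1 / np.sqrt(m), (m, n))   # Gaussian measurement matrix
        y = A @ x                                   # compressed measurements
        print("recovery error:", np.linalg.norm(omp(A, y, k) - x))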

    Compressive sensor networks : fundamental limits and algorithms

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 85-92).

    Compressed sensing is a non-adaptive compression method that takes advantage of natural sparsity at the input and is fast gaining relevance to both researchers and engineers for its universality and applicability. First developed by Candès et al., the subject has seen a surge of high-quality results both in its theory and applications. This thesis extends compressed sensing ideas to sensor networks and other bandwidth-constrained communication systems. In particular, we explore the limits of performance of compressive sensor networks in relation to fundamental operations such as quantization and parameter estimation.

    Since compressed sensing is originally formulated as a real-valued problem, quantization of the measurements is a very natural extension. Although several researchers have proposed modified reconstruction methods that mitigate quantization noise for a fixed quantizer, the optimal design of such quantizers is still unknown. We propose to find the quantizer that minimizes quantization error by using recent results in functional scalar quantization. The best quantizer in this case is not the optimal design for the measurements themselves but rather is reweighted by a factor we call the sensitivity. Numerical results demonstrate a constant-factor improvement in the fixed-rate case.

    Parameter estimation is an important goal of many sensing systems, since users often care about some function of the data rather than the data itself. Thus, it is of interest to see how efficiently nodes using compressed sensing can estimate a parameter, and whether the measurement scalings can be less restrictive than the bounds in the literature. We explore this problem for time difference and angle of arrival, two common methods for source geolocation. We first derive Cramér-Rao lower bounds for both parameters and show that a practical block-OMP estimator can be relatively efficient for signal reconstruction. However, there is a large gap between theory and practice for time difference or angle of arrival estimation, which demonstrates that the CRB is an optimistic lower bound for nonlinear estimation. We also find scaling laws for time difference estimation in the discrete case. This is strongly related to partial support recovery, and we derive new sufficient conditions showing that a very simple reconstruction algorithm can achieve substantially better scaling than full support recovery suggests is possible.

    by John Zheng Sun. S.M.
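
    For reference, the Cramér-Rao bound invoked above lower-bounds the variance of any unbiased estimator of a scalar parameter theta from observations y through the Fisher information; the generic form below is standard estimation-theory background, not a contribution of the thesis:

        \[
        \operatorname{var}(\hat{\theta}) \;\ge\; \frac{1}{\mathcal{I}(\theta)},
        \qquad
        \mathcal{I}(\theta) \;=\; \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\,\ln p(\mathbf{y};\theta)\right)^{\!2}\right].
        \]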

    Successive structuring of source coding algorithms for data fusion, buffering, and distribution in networks

    Supervised by Gregory W. Wornell. Also issued as Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (p. 159-165).

    We also explore the interactions between source coding and queue management in problems of buffering and distributing distortion-tolerant data. We formulate a general queuing model relevant to numerous communication scenarios and develop a bound on the performance of any algorithm. We design an adaptive buffer-control algorithm for use in dynamic environments and under finite memory limitations; its performance closely approximates the bound. Our design uses multiresolution source codes that exploit the data's distortion-tolerance in minimizing end-to-end distortion. Compared to traditional approaches, the performance gains of the adaptive algorithm are significant, improving distortion, delay, and overall system robustness.

    by Stark Christiaan Draper
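
    To make the buffering idea concrete, here is a toy sketch of a buffer for layered (multiresolution) packets that, when memory is exhausted, evicts the finest refinement layers first; the class, its fields, and its eviction policy are illustrative assumptions, not the adaptive buffer-control algorithm designed in the thesis:

        import heapq

        class MultiresolutionBuffer:
            """Toy buffer for layered packets: when memory is full, drop the
            finest refinement layer currently held (lower layer = more important)."""
            def __init__(self, capacity):
                self.capacity = capacity
                self._heap = []          # entries: (-layer, arrival_seq, packet)
                self._seq = 0

            def push(self, packet, layer):
                heapq.heappush(self._heap, (-layer, self._seq, packet))
                self._seq += 1
                while len(self._heap) > self.capacity:
                    heapq.heappop(self._heap)   # evict the highest-layer packet

            def drain(self):
                # return surviving packets in arrival order
                return [p for _, _, p in sorted(self._heap, key=lambda t: t[1])]

        buf = MultiresolutionBuffer(capacity=3)
        for i, layer in enumerate([0, 1, 2, 0, 1]):   # base layers (0) and refinements
            buf.push(f"pkt{i}/L{layer}", layer)
        print(buf.drain())                            # coarse descriptions survive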