
    One-bit Distributed Sensing and Coding for Field Estimation in Sensor Networks

    This paper formulates and studies a general distributed field reconstruction problem using a dense network of noisy one-bit randomized scalar quantizers in the presence of additive observation noise of unknown distribution. A constructive quantization, coding, and field reconstruction scheme is developed, and an upper bound on the associated mean squared error (MSE) at any point and any snapshot is derived in terms of the local spatio-temporal smoothness properties of the underlying field. It is shown that when the noise, sensor placement pattern, and the sensor schedule satisfy certain weak technical requirements, it is possible to drive the MSE to zero with increasing sensor density at points of field continuity while ensuring that the per-sensor bitrate and sensing-related network overhead rate simultaneously go to zero. The proposed scheme achieves the order-optimal MSE versus sensor density scaling behavior for the class of spatially constant spatio-temporal fields. Comment: Fixed typos, otherwise same as V2. 27 pages (in one column review format), 4 figures. Submitted to IEEE Transactions on Signal Processing. Current version is updated for journal submission: revised author list, modified formulation and framework. Previous version appeared in Proceedings of Allerton Conference On Communication, Control, and Computing 200
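    To make the recipe concrete, here is a minimal Python sketch in the spirit of the abstract: sensors apply a dithered (randomized) one-bit quantizer to noisy samples of a bounded field, and the estimate at any point is a local average of the received bits. The field, noise range, sensor density, and window width are illustrative choices, not the paper's construction.

```python
# Illustrative sketch (not the paper's exact scheme): dithered one-bit sensing of a
# smooth 1D field followed by local averaging of the bits.  All numbers are made-up
# parameters for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def field(x):
    """Smooth spatial field taking values well inside (0, 1)."""
    return 0.5 + 0.4 * np.sin(2 * np.pi * x)

n = 20000                                   # sensor density
x = np.sort(rng.uniform(0.0, 1.0, n))       # sensor locations
y = field(x) + rng.uniform(-0.05, 0.05, n)  # bounded additive observation noise
u = rng.uniform(0.0, 1.0, n)                # uniform dither (one simple form of randomized quantization)
bits = (np.clip(y, 0.0, 1.0) >= u).astype(float)   # one bit per sensor; E[bit] ~ field value

# Reconstruct by averaging the bits of sensors in a small window around each query point.
grid = np.linspace(0.0, 1.0, 200)
h = 0.02                                    # window half-width
estimate = np.array([bits[np.abs(x - t) <= h].mean() for t in grid])
mse = np.mean((estimate - field(grid)) ** 2)
print(f"empirical MSE: {mse:.5f}")          # shrinks as the sensor density n grows
```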

    Privacy-Aware Guessing Efficiency

    We investigate the problem of guessing a discrete random variable $Y$ under a privacy constraint dictated by another correlated discrete random variable $X$, where both guessing efficiency and privacy are assessed in terms of the probability of correct guessing. We define $h(P_{XY}, \epsilon)$ as the maximum probability of correctly guessing $Y$ given an auxiliary random variable $Z$, where the maximization is taken over all $P_{Z|Y}$ ensuring that the probability of correctly guessing $X$ given $Z$ does not exceed $\epsilon$. We show that the map $\epsilon \mapsto h(P_{XY}, \epsilon)$ is strictly increasing, concave, and piecewise linear, which allows us to derive a closed form expression for $h(P_{XY}, \epsilon)$ when $X$ and $Y$ are connected via a binary-input binary-output channel. For $(X^n, Y^n)$ being pairs of independent and identically distributed binary random vectors, we similarly define $\underline{h}_n(P_{X^nY^n}, \epsilon)$ under the assumption that $Z^n$ is also a binary vector. Then we obtain a closed form expression for $\underline{h}_n(P_{X^nY^n}, \epsilon)$ for sufficiently large, but nontrivial values of $\epsilon$. Comment: ISIT 201
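    The quantity $h(P_{XY}, \epsilon)$ can be checked numerically for small alphabets. The sketch below uses an arbitrary example joint distribution and a grid search in place of the exact optimization over $P_{Z|Y}$; it only illustrates the definition being maximized, not the closed-form solution derived in the paper.

```python
# Brute-force illustration of h(P_XY, eps): for binary X, Y, Z and the Markov chain
# X - Y - Z, grid-search over channels P_{Z|Y} and keep the best probability of
# correctly guessing Y from Z subject to the privacy constraint on X.
import itertools
import numpy as np

P_XY = np.array([[0.4, 0.1],
                 [0.1, 0.4]])          # example joint pmf of (X, Y); rows = x, cols = y
P_Y = P_XY.sum(axis=0)

def h(eps, steps=100):
    best = 0.0
    grid = np.linspace(0.0, 1.0, steps + 1)
    # P_{Z|Y} for binary Z is parameterized by (a, b) = (P(Z=0|Y=0), P(Z=0|Y=1)).
    for a, b in itertools.product(grid, grid):
        P_ZgY = np.array([[a, 1 - a],
                          [b, 1 - b]])            # rows = y, cols = z
        P_YZ = P_Y[:, None] * P_ZgY               # joint of (Y, Z)
        P_XZ = P_XY @ P_ZgY                       # joint of (X, Z), via X - Y - Z
        pc_X = P_XZ.max(axis=0).sum()             # prob. of correctly guessing X from Z
        pc_Y = P_YZ.max(axis=0).sum()             # prob. of correctly guessing Y from Z
        if pc_X <= eps + 1e-12:
            best = max(best, pc_Y)
    return best

for eps in (0.5, 0.6, 0.7, 0.8):
    print(eps, round(h(eps), 4))
```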

    Universal lossless source coding with the Burrows Wheeler transform

    The Burrows-Wheeler transform (1994) is a reversible sequence transformation used in a variety of practical lossless source-coding algorithms. In each, the BWT is followed by a lossless source code that attempts to exploit the natural ordering of the BWT coefficients. BWT-based compression schemes are widely touted as low-complexity algorithms giving lossless coding rates better than those of the Ziv-Lempel codes (commonly known as LZ'77 and LZ'78) and almost as good as those achieved by prediction by partial matching (PPM) algorithms. To date, the coding performance claims have been made primarily on the basis of experimental results. This work gives a theoretical evaluation of BWT-based coding. The main results of this theoretical evaluation include: (1) statistical characterizations of the BWT output on both finite strings and sequences of length n → ∞, (2) a variety of very simple new techniques for BWT-based lossless source coding, and (3) proofs of the universality and bounds on the rates of convergence of both new and existing BWT-based codes for finite-memory and stationary ergodic sources. The end result is a theoretical justification and validation of the experimentally derived conclusions: BWT-based lossless source codes achieve universal lossless coding performance that converges to the optimal coding performance more quickly than the rate of convergence observed in Ziv-Lempel style codes and, for some BWT-based codes, within a constant factor of the optimal rate of convergence for finite-memory sources.
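    For readers unfamiliar with the transform itself, a naive Python implementation is shown below; it sorts full rotations rather than using a suffix array, and it omits the move-to-front and entropy-coding stages that a complete BWT-based coder would add.

```python
# A naive Burrows-Wheeler transform and its inverse, just to make the object under
# study concrete.  Clarity over efficiency: both functions are quadratic or worse.
def bwt(s, sentinel="\x00"):
    """Return the BWT of s (the sentinel marks the end of the string)."""
    s = s + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def ibwt(t, sentinel="\x00"):
    """Invert the BWT by repeatedly prepending and sorting columns."""
    table = [""] * len(t)
    for _ in range(len(t)):
        table = sorted(t[i] + table[i] for i in range(len(t)))
    row = next(r for r in table if r.endswith(sentinel))
    return row.rstrip(sentinel)

text = "abracadabra"
transformed = bwt(text)
assert ibwt(transformed) == text
print(repr(transformed))   # runs of equal symbols are what downstream coders exploit
```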

    Optimal Identical Binary Quantizer Design for Distributed Estimation

    We consider the design of identical one-bit probabilistic quantizers for distributed estimation in sensor networks. We assume the parameter range to be finite and known and use the maximum Cramér-Rao Lower Bound (CRB) over the parameter range as our performance metric. We restrict our theoretical analysis to the class of antisymmetric quantizers and determine a set of conditions under which the probabilistic quantizer function is greatly simplified. We identify a broad class of noise distributions, which includes Gaussian noise in the low-SNR regime, for which the often-used threshold quantizer is found to be minimax-optimal. Aided by these theoretical results, we formulate an optimization problem to obtain the optimum minimax-CRB quantizer. For a wide range of noise distributions, we demonstrate the superior performance of the new quantizer, particularly in the moderate to high-SNR regime. Comment: 6 pages, 3 figures. This paper has been accepted for publication in IEEE Transactions on Signal Processing.
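    The performance metric is easy to reproduce for the simplest case. The sketch below evaluates the worst-case CRB over the parameter range for a plain deterministic threshold quantizer under Gaussian noise; the range, noise level, and threshold grid are example values, and the probabilistic antisymmetric quantizers the paper actually optimizes over are not modeled here.

```python
# Worst-case Cramer-Rao bound over a known parameter range for the one-bit
# threshold quantizer b = 1{x > tau}, x = theta + Gaussian noise.
import numpy as np
from scipy.stats import norm

theta_range = np.linspace(-1.0, 1.0, 201)   # known finite parameter range
sigma = 1.0                                  # noise standard deviation

def max_crb(tau):
    p = norm.sf((tau - theta_range) / sigma)              # P(bit = 1 | theta)
    dp = norm.pdf((tau - theta_range) / sigma) / sigma    # d p / d theta
    fisher = dp**2 / (p * (1 - p))                        # per-bit Fisher information
    return np.max(1.0 / fisher)                           # worst-case CRB over the range

taus = np.linspace(-2.0, 2.0, 81)
best_tau = taus[np.argmin([max_crb(t) for t in taus])]
print(f"threshold minimizing the worst-case CRB: {best_tau:.2f}")   # ~0 by symmetry
```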

    About adaptive coding on countable alphabets

    This paper sheds light on universal coding with respect to classes of memoryless sources over a countable alphabet defined by an envelope function with finite and non-decreasing hazard rate. We prove that the auto-censuring AC code introduced by Bontemps (2011) is adaptive with respect to the collection of such classes. The analysis builds on the tight characterization of universal redundancy rate in terms of metric entropy of small source classes by Opper and Haussler (1997) and on a careful analysis of the performance of the AC-coding algorithm. The latter relies on non-asymptotic bounds for maxima of samples from discrete distributions with finite and non-decreasing hazard rate.
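    As a quick numerical illustration of the envelope condition (using one common convention for the discrete hazard rate, h(k) = f(k) / sum_{j >= k} f(j)), a geometric envelope has a constant, hence non-decreasing, hazard rate, while a heavy-tailed power-law envelope does not and falls outside the classes considered above.

```python
# Hazard rate of a discrete envelope f over the positive integers, truncated to a
# finite support for the numerical check.  Example envelopes only.
import numpy as np

def hazard(f):
    tail = np.cumsum(f[::-1])[::-1]        # tail sums  sum_{j >= k} f(j)
    return f / tail

k = np.arange(1, 201)
geometric = 0.5 ** k                        # f(k) = 2^{-k}
power_law = k ** -3.0                       # f(k) ~ k^{-3}

print(hazard(geometric)[:5])                # ~0.5 everywhere: non-decreasing
print(hazard(power_law)[:5])                # decays toward 0: condition fails
```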

    Discrete Denoising with Shifts

    We introduce S-DUDE, a new algorithm for denoising DMC-corrupted data. The algorithm, which generalizes the recently introduced DUDE (Discrete Universal DEnoiser) of Weissman et al., aims to compete with a genie that has access not only to the noisy data but also to the underlying clean data, and can choose to switch, up to m times, between sliding window denoisers in a way that minimizes the overall loss. When the underlying data form an individual sequence, we show that the S-DUDE performs essentially as well as this genie, provided that m is sub-linear in the size of the data. When the clean data is emitted by a piecewise stationary process, we show that the S-DUDE achieves the optimum distribution-dependent performance, provided that the same sub-linearity condition is imposed on the number of switches. To further substantiate the universal optimality of the S-DUDE, we show that when the number of switches is allowed to grow linearly with the size of the data, any (sequence of) scheme(s) fails to compete in the above senses. Using dynamic programming, we derive an efficient implementation of the S-DUDE, which has complexity (time and memory) growing only linearly with the data size and the number of switches m. Preliminary experimental results are presented, suggesting that S-DUDE has the capacity to significantly improve on the performance attained by the original DUDE in applications where the nature of the data abruptly changes in time (or space), as is often the case in practice. Comment: 30 pages, 3 figures, submitted to IEEE Trans. Inform. Theory
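    For orientation, the sketch below implements the original (non-switching) binary DUDE that S-DUDE generalizes: a two-pass sliding-window rule for a binary symmetric channel with known crossover probability and Hamming loss, using one context symbol on each side. The switching mechanism and its dynamic program are omitted, and all parameters are example values.

```python
# Minimal binary DUDE: count noisy symbols per two-sided context, then apply the
# context-based decision rule position by position.
import numpy as np
from collections import defaultdict

def dude_binary(z, delta, k=1):
    z = np.asarray(z)
    n = len(z)
    Pi = np.array([[1 - delta, delta],
                   [delta, 1 - delta]])       # channel matrix, rows = clean symbol
    Pi_inv = np.linalg.inv(Pi)
    Lam = np.array([[0.0, 1.0],
                    [1.0, 0.0]])              # Hamming loss, Lam[clean, estimate]

    # First pass: count noisy symbols per two-sided context.
    counts = defaultdict(lambda: np.zeros(2))
    for t in range(k, n - k):
        ctx = (tuple(z[t - k:t]), tuple(z[t + 1:t + 1 + k]))
        counts[ctx][z[t]] += 1

    # Second pass: DUDE decision rule (boundary symbols are left unchanged).
    xhat = z.copy()
    for t in range(k, n - k):
        ctx = (tuple(z[t - k:t]), tuple(z[t + 1:t + 1 + k]))
        m = counts[ctx]
        pi_z = Pi[:, z[t]]                    # channel column for the observed symbol
        scores = [m @ Pi_inv @ (Lam[:, a] * pi_z) for a in (0, 1)]
        xhat[t] = int(np.argmin(scores))
    return xhat

# Tiny usage example with a piecewise-constant clean sequence.
rng = np.random.default_rng(1)
delta = 0.1
clean = np.concatenate([np.zeros(3000, dtype=int), np.ones(3000, dtype=int)])
noisy = clean ^ (rng.random(len(clean)) < delta).astype(int)
denoised = dude_binary(noisy, delta)
print("noisy errors:   ", int(np.sum(noisy != clean)))
print("denoised errors:", int(np.sum(denoised != clean)))
```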

    Decentralized Estimation over Orthogonal Multiple-access Fading Channels in Wireless Sensor Networks - Optimal and Suboptimal Estimators

    Optimal and suboptimal decentralized estimators in wireless sensor networks (WSNs) over orthogonal multiple-access fading channels are studied in this paper. Considering multiple-bit quantization before digital transmission, we develop maximum likelihood estimators (MLEs) with both known and unknown channel state information (CSI). When training symbols are available, we derive an MLE that is a special case of the MLE with unknown CSI. It implicitly uses the training symbols to estimate the channel coefficients and exploits the estimated CSI in an optimal way. To reduce the computational complexity, we propose suboptimal estimators. These estimators exploit both signal-level and data-level redundant information to improve the estimation performance. The proposed MLEs reduce to traditional fusion-based or diversity-based estimators when communications or observations are perfect. By introducing a general message function, the proposed estimators can be applied when various analog or digital transmission schemes are used. The simulations show that the estimators using digital communications with multiple-bit quantization outperform the estimator using analog-and-forwarding transmission in fading channels. When considering the total bandwidth and energy constraints, the MLE using multiple-bit quantization is superior to that using binary quantization at medium and high observation signal-to-noise ratio levels.
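    A stripped-down version of the fusion step is sketched below: sensors quantize noisy observations of a common parameter to a few bits, and the fusion center computes the MLE from the reported bins. Fading channels, CSI estimation from training symbols, and the suboptimal estimators are all left out; the noise level, quantizer edges, and true parameter are example values only.

```python
# ML estimation of a scalar parameter from multi-bit quantized sensor observations
# reported over ideal links; the MLE maximizes the product of bin probabilities.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
theta_true, sigma, n = 0.3, 1.0, 500
edges = np.array([-np.inf, -1.0, 0.0, 1.0, np.inf])      # 2-bit quantizer (4 bins)

obs = theta_true + sigma * rng.standard_normal(n)
bins = np.digitize(obs, edges[1:-1])                     # bin index reported by each sensor

def neg_log_lik(theta):
    # P(bin = k | theta) = Phi((upper_k - theta)/sigma) - Phi((lower_k - theta)/sigma)
    p = norm.cdf((edges[1:] - theta) / sigma) - norm.cdf((edges[:-1] - theta) / sigma)
    return -np.sum(np.log(p[bins] + 1e-300))

result = minimize_scalar(neg_log_lik, bounds=(-3.0, 3.0), method="bounded")
print(f"true theta: {theta_true}, MLE from 2-bit reports: {result.x:.3f}")
```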