84 research outputs found

    On Sampling and Coding for Distributed Acoustic Sensing

    The issue of how to efficiently represent the data collected by a network of microphones recording spatio-temporal acoustic wave fields is addressed. Each sensor node in the network samples the sound field, quantizes the samples and transmits the encoded samples to some central unit, which computes an estimate of the original sound field based on the information received from all the microphones. Our analysis is based on the spectral properties of the sound field, which are induced by the physics of wave propagation and have a significant impact on the efficiency of the chosen sampling lattice and coding scheme. As field acquisition by a sensor network typically implies spatio-temporal sampling of the field, a multidimensional sampling theorem for homogeneous random fields with compactly supported spectral measures is proved. To assess the loss of information implied by source coding, rate distortion functions for various coding schemes and sampling lattices are determined. In particular, centralized coding, independent coding and some multiterminal schemes are compared. Under the assumption of spectral whiteness of the sound field, it is shown that sampling with a quincunx lattice followed by independent coding is optimal, as it achieves the lower bound given by centralized coding.
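
    For orientation, the quincunx lattice referred to above is the standard two-dimensional sublattice that keeps every other point of the rectangular grid. A minimal sketch of the textbook construction (the paper's exact spatio-temporal scaling is not reproduced here):

        V = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
        \qquad
        \Lambda_{\mathrm{quincunx}} = V\mathbb{Z}^2
          = \{ (n_1, n_2) \in \mathbb{Z}^2 : n_1 + n_2 \ \text{even} \}.

    Since |det V| = 2, the quincunx lattice has half the density of the underlying rectangular grid, which is what makes it attractive when the spectral support of the field permits the coarser sampling.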

    Universal Sampling Rate Distortion

    We examine the coordinated and universal rate-efficient sampling of a subset of correlated discrete memoryless sources followed by lossy compression of the sampled sources. The goal is to reconstruct a predesignated subset of sources within a specified level of distortion. The combined sampling mechanism and rate distortion code are universal in that they are devised to perform robustly without exact knowledge of the underlying joint probability distribution of the sources. In Bayesian as well as non-Bayesian settings, single-letter characterizations are provided for the universal sampling rate distortion function for fixed-set sampling, independent random sampling and memoryless random sampling. It is illustrated how these sampling mechanisms are successively better. Our achievability proofs bring forth new schemes for joint source distribution-learning and lossy compression.
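
    As background, the classical single-source rate distortion function that these universal variants generalize is the standard Shannon quantity (textbook material, not the paper's universal characterization):

        R(D) \;=\; \min_{P_{\hat{X} \mid X} \,:\, \mathbb{E}[d(X, \hat{X})] \le D} I(X; \hat{X}),

    i.e., the fewest bits per source symbol needed so that the reconstruction meets the average distortion constraint D.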

    Gossip Algorithms for Distributed Signal Processing

    Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and more robust gossip algorithms and deriving theoretical performance guarantees. This article presents an overview of recent work in the area. We describe convergence rate results, which are related to the number of transmitted messages and thus the amount of energy consumed in the network for gossiping. We discuss issues related to gossiping over wireless links, including the effects of quantization and noise, and we illustrate the use of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression. Comment: Submitted to Proceedings of the IEEE, 29 pages.
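
    As a concrete illustration of the class of algorithms surveyed here, the sketch below implements randomized pairwise gossip averaging on an undirected graph in Python. It is an illustrative toy (the function name, the edge-list representation, the fixed iteration count and the ring example are assumptions made for the sketch, not details taken from the article):

        import random

        def gossip_average(values, edges, num_iters=10_000, seed=0):
            """Randomized pairwise gossip: at each step one edge (i, j) is
            drawn at random and both endpoints replace their values with the
            pair average; on a connected graph every node converges to the
            global average of the initial values."""
            rng = random.Random(seed)
            x = dict(values)              # node -> current estimate
            edge_list = list(edges)
            for _ in range(num_iters):
                i, j = rng.choice(edge_list)
                x[i] = x[j] = 0.5 * (x[i] + x[j])
            return x

        # Example: a ring of 5 sensors with measurements 0..4 (true mean = 2.0).
        nodes = range(5)
        ring = [(k, (k + 1) % 5) for k in nodes]
        print(gossip_average({k: float(k) for k in nodes}, ring))

    The convergence-rate results discussed in the article quantify how the number of such pairwise exchanges, and hence the energy spent on messaging, grows with the network size and topology.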

    Sensing and communication with and without bits

    The successful design of sensor network architectures depends crucially on the structure of the sampling, observation, and communication processes. One of the most fundamental questions concerns the sufficiency of discrete approximations in time, space, and amplitude. In the case of space and time, the question can be rephrased as whether there is a spatio-temporal sampling theorem for typical data sets in sensor networks. This question has a positive answer in many cases of interest. The issue of discretization of amplitudes is more subtle and can be expressed as the question of whether there is a (source/channel) separation theorem for typical sensor networks. We show that this question has a negative answer in general and that the price of separation can be large. To illustrate these issues, we review the underlying theory and discuss specific examples.

    Quantization in acquisition and computation networks

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. By John Z. Sun. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 151-165).

    In modern systems, it is often desirable to extract relevant information from large amounts of data collected at different spatial locations. Applications include sensor networks, wearable health-monitoring devices and a variety of other systems for inference. Several existing source coding techniques, such as Slepian-Wolf and Wyner-Ziv coding, achieve asymptotic compression optimality in distributed systems. However, these techniques are rarely used in sensor networks because of decoding complexity and prohibitively long code length. Moreover, the fundamental limits that arise from existing techniques are intractable to describe for a complicated network topology or when the objective of the system is to perform some computation on the data rather than to reproduce the data. This thesis bridges the technological gap between the needs of real-world systems and the optimistic bounds derived from asymptotic analysis. Specifically, we characterize fundamental trade-offs when the desired computation is incorporated into the compression design and the code length is one. To obtain both performance guarantees and achievable schemes, we use high-resolution quantization theory, which is complementary to the Shannon-theoretic analyses previously used to study distributed systems. We account for varied network topologies, such as those where sensors are allowed to collaborate or the communication links are heterogeneous. In these settings, a small amount of intersensor communication can provide a significant improvement in compression performance. As a result, this work suggests new compression principles and network design for modern distributed systems.

    Although the ideas in the thesis are motivated by current and future sensor network implementations, the framework applies to a wide range of signal processing questions. We draw connections between the fidelity criteria studied in the thesis and distortion measures used in perceptual coding. As a consequence, we determine the optimal quantizer for expected relative error (ERE), a measure that is widely useful but is often neglected in the source coding community. We further demonstrate that applying the ERE criterion to psychophysical models can explain the Weber-Fechner law, a longstanding hypothesis of how humans perceive the external world. Our results are consistent with the hypothesis that human perception is Bayesian optimal for information acquisition conditioned on limited cognitive resources, thereby supporting the notion that the brain is efficient at acquisition and adaptation.
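
    As a small aside on the ERE/Weber-Fechner discussion, the Python sketch below shows the standard intuition that quantizing on a logarithmic grid keeps the relative error roughly constant across magnitudes. It is a generic illustration under that assumption, not the thesis's optimal-quantizer construction (the function name and step size are made up for the example):

        import math

        def log_quantize(x, step=0.1):
            """Round log(x) to a uniform grid of the given step and map back.
            The relative error |x_hat / x - 1| is then bounded by roughly
            step / 2, independent of the magnitude of x."""
            if x <= 0:
                raise ValueError("logarithmic quantizer assumes positive inputs")
            return math.exp(round(math.log(x) / step) * step)

        # Relative error stays on the same order across several decades of input.
        for x in (0.02, 1.3, 75.0, 4200.0):
            x_hat = log_quantize(x)
            print(f"x={x:>8}: x_hat={x_hat:.4g}, rel. error={(x_hat - x) / x:+.3%}")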

    Summary of the Activities of the Purdue Electric Power Center 1987
