
    EC-CENTRIC: An Energy- and Context-Centric Perspective on IoT Systems and Protocol Design

    The radio transceiver of an IoT device is often where most of the energy is consumed. For this reason, most research so far has focused on low-power circuits and energy-efficient physical layer designs, with the goal of reducing the average energy per information bit required for communication. While these efforts are valuable per se, their actual effectiveness can be partially neutralized by ill-designed network, processing and resource management solutions, which can become a primary cause of performance degradation in terms of throughput, responsiveness and energy efficiency. The objective of this paper is to describe an energy-centric and context-aware optimization framework that accounts for the energy impact of the fundamental functionalities of an IoT system and that proceeds along three main technical thrusts: 1) balancing signal-dependent processing techniques (compression and feature extraction) against communication tasks; 2) jointly designing channel access and routing protocols to maximize the network lifetime; 3) providing self-adaptability to different operating conditions through the adoption of suitable learning architectures and of flexible/reconfigurable algorithms and protocols. After discussing this framework, we present some preliminary results that validate the effectiveness of our proposed line of action and show how the use of adaptive signal processing and channel access techniques allows an IoT network to dynamically trade lifetime for signal distortion, according to the requirements dictated by the application.
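
    As a rough illustration of the first thrust (balancing processing against communication), the toy Python sketch below picks the compression ratio that minimizes a node's per-round energy under a distortion budget. The energy and distortion models (e_bit, e_cpu, the quadratic distortion curve, d_max) are invented for this sketch and are not taken from the paper.

        # Toy model of the compression-vs-transmission energy trade-off.
        # All constants below are assumptions made for illustration only.
        import numpy as np

        n_samples = 1024                    # sensor samples per reporting round (assumed)
        e_bit = 200e-9                      # radio energy per transmitted bit, J (assumed)
        e_cpu = 5e-9                        # CPU energy per sample per unit of compression effort, J (assumed)

        ratios = np.linspace(1, 16, 200)    # candidate compression ratios
        bits_out = 12 * n_samples / ratios  # 12-bit samples shrunk by the chosen ratio
        distortion = 1e-3 * ratios**2       # toy distortion model: grows with compression

        energy_tx = e_bit * bits_out              # radio energy falls with compression
        energy_cpu = e_cpu * n_samples * ratios   # processing energy grows with it
        total = energy_tx + energy_cpu

        d_max = 0.1                               # application distortion budget (assumed)
        feasible = distortion <= d_max
        best = ratios[feasible][np.argmin(total[feasible])]
        print(f"energy-minimizing compression ratio under the budget: {best:.2f}")

    In this toy the distortion budget binds before the unconstrained energy optimum is reached, which is the kind of lifetime-versus-distortion tuning the abstract refers to.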

    Improved Finite Blocklength Converses for Slepian-Wolf Coding via Linear Programming

    A new finite blocklength converse for the Slepian-Wolf coding problem is presented which significantly improves on the best known converse for this problem, due to Miyake and Kanaya [2]. To obtain this converse, an extension of the linear programming (LP) based framework for finite blocklength point-to-point coding problems from [3] is employed. However, a direct application of this framework demands a complicated analysis for the Slepian-Wolf problem. An analytically simpler approach is presented wherein LP-based finite blocklength converses for this problem are synthesized from point-to-point lossless source coding problems with perfect side-information at the decoder. New finite blocklength metaconverses for these point-to-point problems are derived by employing the LP-based framework, and the new converse for Slepian-Wolf coding is obtained by an appropriate combination of these converses. Comment: under review with the IEEE Transactions on Information Theory.
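
    For context, the asymptotic benchmark that such finite blocklength converses refine is the classical Slepian-Wolf rate region (a standard result, not restated in the abstract): for sources $(X_1, X_2)$,

        R_1 \ge H(X_1 \mid X_2), \qquad
        R_2 \ge H(X_2 \mid X_1), \qquad
        R_1 + R_2 \ge H(X_1, X_2).

    Finite blocklength converses give impossibility bounds that quantify the gap to these asymptotic limits at a given blocklength and error probability.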

    Lecture Notes on Network Information Theory

    These lecture notes have been converted into a book titled Network Information Theory, recently published by Cambridge University Press. This book provides a significantly expanded exposition of the material in the lecture notes, as well as problems and bibliographic notes at the end of each chapter. The authors are currently preparing a set of slides based on the book that will be posted in the second half of 2012. More information about the book can be found at http://www.cambridge.org/9781107008731/. The previous (and obsolete) version of the lecture notes can be found at http://arxiv.org/abs/1001.3404v4/

    Experimental demonstration of Gaussian protocols for one-sided device-independent quantum key distribution

    Nonlocal correlations, a longstanding foundational topic in quantum information, have recently found application as a resource for cryptographic tasks where not all devices are trusted, for example in settings with a highly secure central hub, such as a bank or government department, and less secure satellite stations which are inherently more vulnerable to hardware "hacking" attacks. The asymmetric phenomenon of Einstein-Podolsky-Rosen (EPR) steering plays a key role in one-sided device-independent quantum key distribution (1sDI-QKD) protocols. In the context of continuous-variable (CV) QKD schemes utilizing Gaussian states and measurements, we identify all protocols that can be 1sDI and their maximum loss tolerance. Surprisingly, this includes a protocol that uses only coherent states. We also establish a direct link between the relevant EPR steering inequality and the secret key rate, further strengthening the relationship between these asymmetric notions of nonlocality and device independence. We experimentally implement both entanglement-based and coherent-state protocols, and measure the correlations necessary for 1sDI key distribution up to an applied loss equivalent to 7.5 km and 3.5 km of optical fiber transmission, respectively. We also engage in detailed modelling to understand the limits of our current experiment and the potential for further improvements. The new protocols we uncover apply the cheap and efficient hardware of CVQKD systems in a significantly more secure setting. Comment: Addition of experimental results and (several) new authors.
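
    For reference, assuming standard single-mode fibre attenuation of roughly 0.2 dB/km at 1550 nm (an assumption made here; the paper may quote a different figure), the quoted fibre-equivalent distances correspond to channel transmittances of about

        \eta = 10^{-\alpha L / 10}, \quad \alpha \approx 0.2~\mathrm{dB/km}
        \;\Rightarrow\; \eta(7.5~\mathrm{km}) \approx 10^{-0.15} \approx 0.71,
        \qquad \eta(3.5~\mathrm{km}) \approx 10^{-0.07} \approx 0.85.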

    Integer-Forcing Source Coding

    Integer-Forcing (IF) is a new framework, based on compute-and-forward, for decoding multiple integer linear combinations from the output of a Gaussian multiple-input multiple-output channel. This work applies the IF approach to arrive at a new low-complexity scheme, IF source coding, for distributed lossy compression of correlated Gaussian sources under a minimum mean squared error distortion measure. All encoders use the same nested lattice codebook. Each encoder quantizes its observation using the fine lattice as a quantizer and reduces the result modulo the coarse lattice, which plays the role of binning. Rather than directly recovering the individual quantized signals, the decoder first recovers a full-rank set of judiciously chosen integer linear combinations of the quantized signals, and then inverts it. In general, the linear combinations have smaller average powers than the original signals. This makes it possible to increase the density of the coarse lattice, which in turn translates to smaller compression rates. We also propose and analyze a one-shot version of IF source coding that is simple enough to potentially lead to a new design principle for analog-to-digital converters that can exploit spatial correlations between the sampled signals. Comment: Submitted to the IEEE Transactions on Information Theory.
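
    The scalar toy below sketches only the mechanics described in the abstract (fine-lattice quantization, modulo-coarse-lattice binning, recovery and inversion of integer combinations). The lattice step, nesting ratio, and integer matrix A are hypothetical, and none of the dithering or MMSE machinery of the actual scheme is reproduced.

        # Scalar toy of the integer-forcing source coding mechanics (assumed parameters).
        import numpy as np

        rng = np.random.default_rng(0)
        delta = 0.1                  # fine-lattice step (assumed)
        coarse = 128 * delta         # coarse-lattice step, i.e. the binning modulus (assumed)

        # Two strongly correlated Gaussian observations
        common = rng.normal(size=1000)
        x = np.stack([common + 0.05 * rng.normal(size=1000),
                      common + 0.05 * rng.normal(size=1000)])

        quantized = delta * np.round(x / delta)   # fine-lattice quantization
        binned = np.mod(quantized, coarse)        # reduce modulo the coarse lattice (binning)

        # Decoder: choose a full-rank integer matrix whose combinations have small power;
        # here the row [1, -1] exploits the correlation (x1 - x2 is small).
        A = np.array([[1, -1], [0, 1]])
        combo_mod = np.mod(A @ binned, coarse)
        # Map to the centered representative; this recovers A @ quantized exactly
        # whenever the true combination lies inside (-coarse/2, coarse/2).
        combo = combo_mod - coarse * np.round(combo_mod / coarse)
        recovered = np.linalg.solve(A, combo)     # invert the integer combinations
        print("max reconstruction error:", np.abs(recovered - quantized).max())

    In this two-source toy only the first combination benefits from the correlation; the point is just the bin-then-invert mechanics, not the rate gains analyzed in the paper.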