85 research outputs found

    Lecture Notes on Network Information Theory

    Full text link
    These lecture notes have been converted to a book titled Network Information Theory published recently by Cambridge University Press. This book provides a significantly expanded exposition of the material in the lecture notes as well as problems and bibliographic notes at the end of each chapter. The authors are currently preparing a set of slides based on the book that will be posted in the second half of 2012. More information about the book can be found at http://www.cambridge.org/9781107008731/. The previous (and obsolete) version of the lecture notes can be found at http://arxiv.org/abs/1001.3404v4/

    Experimental study of the interplay of channel and network coding in low power sensor applications

    Get PDF
    In this paper, we evaluate the performance of random linear network coding (RLNC) in low-data-rate indoor sensor applications operating in the ISM frequency band. We also investigate the results of its synergy with forward error correction (FEC) codes at the PHY layer in a joint channel-network coding (JCNC) scheme. RLNC is an emerging coding technique, usually implemented at the network layer as a packet-level erasure code, that increases data reliability against channel fading and severe interference, while FEC codes are mainly used to correct random bit errors within a received packet. The hostile wireless environment that low-power sensors usually operate in, with significant interference from nearby networks, motivates us to consider a joint coding scheme and examine the applicability of RLNC as an erasure code in such a coding structure. Our analysis and experiments are performed in a typical office environment using a custom low-power sensor node, which integrates on chip a low-power 2.4 GHz transmitter and an accelerator implementing a multi-rate convolutional code and RLNC. According to measurement results, RLNC of code rate 4/8 can provide an effective SNR improvement of about 3.4 dB, outperforming a PHY-layer FEC code of the same code rate, at a PER of 10^-2. In addition, RLNC performs very well when used in conjunction with a PHY-layer FEC code as a JCNC scheme, offering an overall coding gain of 5.6 dB. Focus Center Research Program. Focus Center for Circuit & System Solutions. Semiconductor Research Corporation. Interconnect Focus Center
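
    To make the packet-level erasure-coding role of RLNC concrete, here is a minimal illustrative sketch over GF(2): coded packets are random XOR combinations of the source packets, and the receiver recovers the originals by Gaussian elimination once enough linearly independent combinations arrive. This is not the paper's hardware implementation; the field choice, packet sizes, and function names are assumptions made purely for illustration.

        import random


        def rlnc_encode(packets, num_coded, seed=1):
            """Produce num_coded random XOR combinations of equal-length packets (GF(2) RLNC)."""
            rng = random.Random(seed)
            k, length = len(packets), len(packets[0])
            coded = []
            for _ in range(num_coded):
                coeffs = [rng.randint(0, 1) for _ in range(k)]
                if not any(coeffs):                      # avoid the useless all-zero combination
                    coeffs[rng.randrange(k)] = 1
                payload = bytearray(length)
                for c, pkt in zip(coeffs, packets):
                    if c:
                        payload = bytearray(a ^ b for a, b in zip(payload, pkt))
                coded.append((coeffs, bytes(payload)))
            return coded


        def rlnc_decode(received, k):
            """Gauss-Jordan elimination over GF(2); returns the k source packets, or None if rank-deficient."""
            rows = [(list(c), bytearray(p)) for c, p in received]
            for col in range(k):
                pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
                if pivot is None:
                    return None                          # fewer than k innovative packets received
                rows[col], rows[pivot] = rows[pivot], rows[col]
                for r in range(len(rows)):
                    if r != col and rows[r][0][col]:
                        rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                                   bytearray(x ^ y for x, y in zip(rows[r][1], rows[col][1])))
            return [bytes(rows[i][1]) for i in range(k)]


        # Toy usage: 4 source packets coded at rate 4/8; decoding succeeds as soon as
        # 4 linearly independent coded packets have been collected.
        source = [bytes([i]) * 8 for i in range(4)]
        coded = rlnc_encode(source, num_coded=8)
        for n in range(4, len(coded) + 1):
            decoded = rlnc_decode(coded[:n], k=4)
            if decoded is not None:
                print(f"decoded correctly from {n} coded packets:", decoded == source)
                break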

    Early FM Radio

    Get PDF
    The commonly accepted history of FM radio is one of the twentieth century’s iconic sagas of invention, heroism, and tragedy. Edwin Howard Armstrong created a system of wideband frequency-modulation radio in 1933. The Radio Corporation of America (RCA), convinced that Armstrong’s system threatened its AM empire, failed to develop the new technology and refused to pay Armstrong royalties. Armstrong sued the company at great personal cost. He died despondent, exhausted, and broke. But this account, according to Gary L. Frost, ignores the contributions of scores of other individuals who were involved in the decades-long struggle to realize the potential of FM radio. The first scholar to fully examine recently uncovered evidence from the Armstrong v. RCA lawsuit, Frost offers a thorough revision of the FM story. Frost’s balanced, contextualized approach provides a much-needed corrective to previous accounts. Navigating deftly through the details of a complicated story, he examines the motivations and interactions of the three communities most intimately involved in the development of the technology—Progressive-era amateur radio operators, RCA and Westinghouse engineers, and early FM broadcasters. In the process, Frost demonstrates the tension between competition and collaboration that goes hand in hand with the emergence and refinement of new technologies. Frost’s study reconsiders both the social construction of FM radio and the process of technological evolution. Historians of technology, communication, and media will welcome this important reexamination of the canonical story of early FM radio.

    When all information is not created equal

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 191-196). Following Shannon's landmark paper, the classical theoretical framework for communication is based on the simplifying assumption that all information is equally important, and thus aims to provide uniform protection to all information. However, this homogeneous view of information is not suitable for a variety of modern-day communication scenarios such as wireless and sensor networks, video transmission, interactive systems, and control applications. For example, an emergency alarm from a sensor network needs more protection than other transmitted information. Similarly, the coarse resolution of an image needs better protection than its finer details. For such heterogeneous information, if providing a uniformly high protection level to all parts of the information is infeasible, it is desirable to provide different protection levels based on the importance of those parts. The main objective of this thesis is to extend classical information theory to address this heterogeneous nature of information. Many of the theoretical tools needed for this are fundamentally different from those of the conventional homogeneous setting. One key issue is that bits are no longer a sufficient measure of information. We develop a general framework for understanding the fundamental limits of transmitting such information, calculate these fundamental limits, and provide optimal architectures for achieving them. Our analysis shows that, even without sacrificing data-rate relative to channel capacity, some crucial parts of the information can be protected with exponential reliability. This research challenges the notion that a set of homogeneous bits should necessarily be viewed as a universal interface to the physical layer, which potentially impacts the design of network architectures. This thesis also develops two novel approaches for simplifying such difficult problems in information theory. Our formulations are based on ideas from graphical models and Euclidean geometry and provide canonical examples for network information theory. They provide fresh insights into previously intractable problems as well as generalize previous related results. By Shashibhushan Prataprao Borade. Ph.D.
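
    As a toy illustration of the unequal-protection idea (not the coding schemes developed in the thesis; the repetition code, channel model, and parameter choices here are assumptions), the sketch below sends a critical one-bit alarm flag with a strong repetition code while ordinary payload bits go unprotected over a binary symmetric channel, so the flag survives bit-flip rates that corrupt a large fraction of the payload.

        import random


        def bsc(bits, flip_prob, rng):
            """Binary symmetric channel: flip each bit independently with probability flip_prob."""
            return [b ^ (rng.random() < flip_prob) for b in bits]


        def send_unequal(flag, payload, flip_prob, repeat=15, seed=0):
            """Unequal protection: the critical one-bit flag is repeated `repeat` times
            (majority-vote decoding), while ordinary payload bits get no protection."""
            rng = random.Random(seed)
            received = bsc([flag] * repeat + payload, flip_prob, rng)
            flag_hat = int(sum(received[:repeat]) > repeat // 2)   # majority vote on the flag
            payload_hat = received[repeat:]
            return flag_hat, payload_hat


        # At a 20% bit-flip rate the repeated flag is almost always recovered,
        # while roughly one payload bit in five is corrupted.
        payload_rng = random.Random(42)
        payload = [payload_rng.randint(0, 1) for _ in range(1000)]
        flag_hat, payload_hat = send_unequal(1, payload, flip_prob=0.2)
        errors = sum(a != b for a, b in zip(payload, payload_hat))
        print("flag recovered:", flag_hat == 1, "| payload bit errors:", errors, "out of 1000")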

    Wireless transmission protocols using relays for broadcast and information exchange channels

    No full text
    Relays have been used to overcome existing network performance bottlenecks in meeting the growing demand for large bandwidth and high quality of service (QoS) in wireless networks. This thesis proposes several wireless transmission protocols using relays in practical multi-user broadcast and information exchange channels. The main theme is to demonstrate that efficient use of relays provides an additional dimension to improve reliability, throughput, power efficiency and secrecy. First, a spectrally efficient cooperative transmission protocol is proposed for the multiple-input and single-output (MISO) broadcast channel to improve the reliability of wireless transmission. The proposed protocol mitigates co-channel interference and provides another dimension to improve the diversity gain. Analytical and simulation results show that the proposed cooperative protocol outperforms the non-cooperative scheme in outage probability and in the diversity-multiplexing tradeoff. Second, a two-way relaying protocol is proposed for the multi-pair, two-way relaying channel to improve the throughput and reliability. The proposed protocol enables both the users and the relay to participate in interference cancellation. Several beamforming schemes are proposed for the multi-antenna relay. Analytical and simulation results reveal that the proposed protocol delivers significant improvements in ergodic capacity, outage probability and the diversity-multiplexing tradeoff compared to existing schemes. Third, a joint beamforming and power management scheme is proposed for the multiple-input and multiple-output (MIMO) two-way relaying channel to improve the sum-rate. Network power allocation and power control optimisation problems are formulated and solved using convex optimisation techniques. Simulation results verify that the proposed scheme delivers a better sum-rate or consumes less power than existing schemes. Fourth, two-way secrecy schemes that combine one-time pads with wiretap coding are proposed for the scalar broadcast channel to improve the secrecy rate. The proposed schemes utilise channel reciprocity and employ relays to forward secret messages. Analytical and simulation results reveal that the proposed schemes are able to achieve positive secrecy rates even when the number of users is large. All of these new wireless transmission protocols help to realise better throughput, reliability, power efficiency and secrecy for wireless broadcast and information exchange channels through the efficient use of relays.
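
    The throughput benefit of letting a relay combine traffic is easiest to see in the textbook XOR-based two-way relay exchange, sketched below. This is a simplified single-antenna illustration, not the beamforming protocols proposed in the thesis: the relay broadcasts the XOR of the two users' packets, and each user strips out its own packet, so the exchange completes in three slots instead of four.

        def xor_bytes(a: bytes, b: bytes) -> bytes:
            """Bitwise XOR of two equal-length byte strings."""
            return bytes(x ^ y for x, y in zip(a, b))


        # Slot 1: user A sends its packet to the relay.
        # Slot 2: user B sends its packet to the relay.
        # Slot 3: the relay broadcasts the XOR of both packets.
        packet_a = b"hello from A!!!!"
        packet_b = b"greetings from B"
        relay_broadcast = xor_bytes(packet_a, packet_b)

        # Each user removes its own packet from the broadcast to recover the other's.
        recovered_at_a = xor_bytes(relay_broadcast, packet_a)   # equals packet_b
        recovered_at_b = xor_bytes(relay_broadcast, packet_b)   # equals packet_a
        print(recovered_at_a == packet_b, recovered_at_b == packet_a)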

    Practical interference management strategies in Gaussian networks

    Get PDF
    Increasing demand for bandwidth-intensive activities on high-penetration wireless hand-held personal devices, combined with their processing power and advanced radio features, has necessitated a new look at the problems of resource provisioning and distributed management of coexistence in wireless networks. Information theory, as the science of studying the ultimate limits of communication efficiency, plays an important role in outlining guiding principles in the design and analysis of such communication schemes. Network information theory, the branch of information theory that investigates problems of a multiuser and distributed nature in information transmission, is ideally poised to answer questions about the design and analysis of multiuser communication systems. In the past few years, there have been major advances in network information theory, in particular in the generalized degrees of freedom framework for asymptotic analysis and in interference alignment, which have led to constant-gap-to-capacity results for Gaussian interference channels. Unfortunately, practical adoption of these results has been slowed by their reliance on unrealistic assumptions such as perfect channel state information at the transmitter and intricate constructions based on alignment over transcendental dimensions of real numbers. It is therefore necessary to devise transmission methods and coexistence schemes that fall under the umbrella of the existing interference management and cognitive radio toolbox and deliver close-to-optimal performance. In this thesis we work on the theme of designing and characterizing the performance of conceptually simple transmission schemes that are robust and achieve performance that is close to optimal. In particular, our work is broadly divided into two parts. In the first part, looking at cognitive radio networks, we seek to relax the assumption of non-causal knowledge of the primary user's message at the secondary user's transmitter. We study a cognitive channel model based on the Gaussian interference channel that does not assume anything about the users other than the primary user's priority over the secondary user in reaching its desired quality of service. We characterize this quality-of-service requirement as a minimum rate that the primary user should be able to achieve. Studying the achievable performance of simple encoding and decoding schemes in this scenario, we propose a few different simple encoding schemes and explore different decoder designs. We show that, surprisingly, all these schemes achieve the same rate region. Next, we study the problem of rate maximization faced by the secondary user subject to the primary's QoS constraint. We show that this problem is not convex or smooth in general. We then use the symmetry properties of the problem to reduce its solution to a practically implementable line search. We also provide numerical results to demonstrate the performance of the scheme. Continuing on the theme of simple yet well-performing schemes for wireless networks, in the second part of the thesis we direct our attention from two-user cognitive networks to the problem of smart interference management in large wireless networks. Here, we study the problem of interference-aware wireless link scheduling. Link scheduling is the problem of allocating a set of transmission requests into as small a set of time slots as possible such that all transmissions satisfy some condition of feasibility. The feasibility criterion has traditionally been the absence of any pair of links that interfere too strongly with each other, which makes the problem amenable to solution using graph-theoretical tools. Inspired by the recent results that the simple approach of treating interference as noise achieves the maximal generalized degrees of freedom (a measure that roughly captures how many equivalent single-user channels are contained in a given multi-user channel), and by the generalization that it can attain rates within a constant gap of capacity for a large class of Gaussian interference networks, we study the problem of scheduling links under a set signal-to-interference-plus-noise ratio (SINR) constraint. We show that for nodes distributed in a metric space and obeying a path-loss channel model, a refined framework combining geometric and graph-theoretic results can be devised to analyze the problem of finding the feasible sets of transmissions for a given level of desired SINR. We use this general framework to give a link scheduling algorithm that is provably within a logarithmic factor of the best possible schedule. Numerical simulations confirm that this approach outperforms other recently proposed SINR-based approaches. Finally, we conclude by identifying open problems and possible directions for extending these results.
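
    To make the SINR-constrained scheduling problem concrete, the sketch below checks whether a candidate set of links meets a target SINR under a distance-based path-loss model and packs links into time slots greedily. It is a naive heuristic with assumed parameters (path-loss exponent, noise power, equal transmit powers), not the logarithmic-approximation algorithm developed in the thesis.

        import math


        def gain(tx, rx, alpha=3.0):
            """Path-loss channel gain between two planar points: d^(-alpha)."""
            return math.dist(tx, rx) ** (-alpha)


        def sinr_feasible(links, target_sinr, noise=1e-9, power=1.0, alpha=3.0):
            """True if every link in `links` (list of (tx, rx) point pairs) meets the
            target SINR when all of them transmit simultaneously with equal power."""
            for i, (tx_i, rx_i) in enumerate(links):
                signal = power * gain(tx_i, rx_i, alpha)
                interference = sum(power * gain(tx_j, rx_i, alpha)
                                   for j, (tx_j, _) in enumerate(links) if j != i)
                if signal / (noise + interference) < target_sinr:
                    return False
            return True


        def greedy_schedule(links, target_sinr):
            """Assign each link to the first time slot that remains SINR-feasible with it added."""
            slots = []
            for link in links:
                for slot in slots:
                    if sinr_feasible(slot + [link], target_sinr):
                        slot.append(link)
                        break
                else:
                    slots.append([link])
            return slots


        # Toy example: four short links on a line; the fourth sits too close to the first,
        # so it is pushed into a second slot. Target SINR of 10 (linear), i.e. 10 dB.
        links = [((0, 0), (1, 0)), ((5, 0), (6, 0)), ((10, 0), (11, 0)), ((0.5, 0), (1.5, 0))]
        schedule = greedy_schedule(links, target_sinr=10.0)
        print(f"{len(schedule)} slot(s):", schedule)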

    ATS F and G /phases B and C/, volume 1 Final report

    Get PDF
    Design parameters and program objectives of Applications Technology Satellites 7 and