676 research outputs found

    A Very Brief Introduction to Machine Learning With Applications to Communication Systems

    Given the unprecedented availability of data and computing resources, there is widespread renewed interest in applying data-driven machine learning methods to problems for which the development of conventional engineering solutions is challenged by modelling or algorithmic deficiencies. This tutorial-style paper starts by addressing the questions of why and when such techniques can be useful. It then provides a high-level introduction to the basics of supervised and unsupervised learning. For both supervised and unsupervised learning, illustrative applications to communication networks are discussed, distinguishing between tasks carried out at the edge and at the cloud segments of the network and at different layers of the protocol stack.
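    As a concrete illustration of the supervised-learning setting introduced by the tutorial (this sketch is not taken from the paper; the toy data, variable names and least-squares model are assumptions for illustration), the following minimal Python example learns a linear mapping from labelled training examples:

```python
import numpy as np

# Toy supervised regression: learn a linear map from input features X to
# labels y using labelled examples, then report the estimated model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # training inputs (features)
w_true = np.array([1.5, -2.0, 0.5])           # unknown "ground-truth" model (assumed)
y = X @ w_true + 0.1 * rng.normal(size=200)   # noisy labels

# Least-squares fit: the simplest instance of supervised learning.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated weights:", w_hat)
```

    Unsupervised learning would instead operate on the inputs X alone, for example by clustering them, without access to the labels y.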

    Multi-user lattice coding for the multiple-access relay channel

    This paper considers the multi-antenna multiple access relay channel (MARC), in which multiple users transmit messages to a common destination with the assistance of a relay. In a variety of MARC settings, the dynamic decode and forward (DDF) protocol is very useful due to its outstanding rate performance. However, the lack of good structured codebooks has so far hindered practical applications of DDF for the MARC. In this work, two classes of structured MARC codes are proposed: 1) one-to-one relay-mapper aided multiuser lattice coding (O-MLC), and 2) modulo-sum relay-mapper aided multiuser lattice coding (MS-MLC). The former enjoys better rate performance, while the latter provides more flexibility to trade off the complexity of the relay mapper against rate performance. It is shown that, in order to approach the rate performance achievable by an unstructured codebook with maximum-likelihood decoding, it is crucial to use a new K-stage coset decoder for structured O-MLC, instead of the one-stage decoder proposed in previous works. However, even if O-MLC is decoded with the one-stage decoder only, it can still achieve the optimal DDF diversity-multiplexing gain tradeoff in the high signal-to-noise ratio regime. As for MS-MLC, its rate performance can approach that of O-MLC by increasing the complexity of the modulo-sum relay-mapper. Finally, for practical implementations of both O-MLC and MS-MLC, practical short-length lattice codes with linear mappers are designed, which facilitate efficient lattice decoding. Simulation results show that the proposed coding schemes outperform existing schemes in terms of outage probabilities in a variety of channel settings. Comment: 32 pages, 5 figures.
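    To make the lattice-coding vocabulary above more tangible, the following toy sketch shows the basic modulo-lattice reduction that modulo-sum style relaying builds on. It is a 1-D integer-lattice example, not the O-MLC/MS-MLC construction from the paper, and the scaling, messages and noise level are illustrative assumptions:

```python
import numpy as np

def mod_lattice(x, q):
    """Reduce x modulo the coarse lattice q*Z into the centered cell [-q/2, q/2)."""
    return x - q * np.round(x / q)

q = 8.0                  # coarse-lattice scaling (shaping region size), assumed
msg1, msg2 = 3.0, 3.0    # two users' fine-lattice points inside the region
noise = 0.3

# The relay observes a noisy superposition; in modulo-sum style relaying it
# only needs the sum of the lattice points modulo the coarse lattice.
received = msg1 + msg2 + noise
estimate = mod_lattice(np.round(received), q)   # 6 wraps to -2, i.e. 6 mod 8 in [-4, 4)
print("recovered modulo-sum:", estimate)
```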

    MIMO Systems

    In recent years, it has been realized that MIMO communication systems are inevitable in the accelerated evolution of high-data-rate applications, due to their potential to dramatically increase spectral efficiency while simultaneously sending individual information to the corresponding users in wireless systems. This book intends to provide highlights of the current research topics in the field of MIMO systems and to offer a snapshot of the recent advances and major issues faced today by researchers in MIMO-related areas. The book is written by specialists working in universities and research centers all over the world to cover the fundamental principles and main advanced topics of high-data-rate wireless communication systems over MIMO channels. Moreover, the book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.
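    The spectral-efficiency gain mentioned above is commonly quantified through the MIMO capacity expression C = log2 det(I + (SNR/Nt) H H^H). The short sketch below (illustrative only; the antenna counts, SNR and i.i.d. Rayleigh channel model are assumptions, not taken from the book) estimates this quantity by averaging over random channel draws:

```python
import numpy as np

# Average spectral efficiency of an i.i.d. Rayleigh-fading MIMO channel,
# C = log2 det(I + (SNR/Nt) * H * H^H), estimated by Monte Carlo.
rng = np.random.default_rng(1)
nt, nr, snr = 4, 4, 10.0        # transmit/receive antennas, linear SNR (assumed)

def mimo_capacity(H, snr, nt):
    G = np.eye(H.shape[0]) + (snr / nt) * H @ H.conj().T
    return float(np.log2(np.linalg.det(G).real))

caps = []
for _ in range(1000):
    H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
    caps.append(mimo_capacity(H, snr, nt))
print("average spectral efficiency (bits/s/Hz):", np.mean(caps))
```

    With equal numbers of transmit and receive antennas, this capacity grows roughly linearly with the number of antennas at high SNR, which is the dramatic increase in spectral efficiency referred to above.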

    The price of certainty: "waterslide curves" and the gap to capacity

    The classical problem of reliable point-to-point digital communication is to achieve a low probability of error while keeping the rate high and the total power consumption small. Traditional information-theoretic analysis uses 'waterfall' curves to convey the revolutionary idea that unboundedly low probabilities of bit-error are attainable using only finite transmit power. However, practitioners have long observed that the decoder complexity, and hence the total power consumption, goes up when attempting to use sophisticated codes that operate close to the waterfall curve. This paper gives an explicit model for power consumption at an idealized decoder that allows for extreme parallelism in implementation. The decoder architecture is in the spirit of message passing and iterative decoding for sparse-graph codes. Generalized sphere-packing arguments are used to derive lower bounds on the decoding power needed for any possible code, given only the gap from the Shannon limit and the desired probability of error. As the gap goes to zero, the energy per bit spent in decoding is shown to go to infinity. This suggests that to optimize total power, the transmitter should operate at a power that is strictly above the minimum demanded by the Shannon capacity. The lower bound is plotted to show an unavoidable tradeoff between the average bit-error probability and the total power used in transmission and decoding. In the spirit of conventional waterfall curves, we call these 'waterslide' curves. Comment: 37 pages, 13 figures. Submitted to IEEE Transactions on Information Theory. This version corrects a subtle bug in the proofs of the original submission and improves the bounds significantly.
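    The qualitative tradeoff described above can be visualized with a toy total-power model. This is purely illustrative: the Shannon-minimum transmit power and the 1/gap decoder-power term below are assumptions standing in for the paper's sphere-packing bound, chosen only to show why the optimal operating point sits strictly above the Shannon minimum:

```python
import numpy as np

# Toy model: total power = transmit power + decoding power.
# Transmit power is the Shannon minimum plus the chosen operating gap;
# decoding power is assumed to blow up as the gap shrinks (illustrative c/gap model).
p_shannon = 1.0          # minimum transmit power demanded by capacity (assumed units)
c_dec = 0.25             # decoder-power constant (assumed)

gaps = np.linspace(0.01, 2.0, 200)          # operating gap above the Shannon minimum
p_total = (p_shannon + gaps) + c_dec / gaps
best_gap = gaps[np.argmin(p_total)]
print(f"total power is minimized at a strictly positive gap: {best_gap:.2f}")
```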

    Proceedings of the 35th WIC Symposium on Information Theory in the Benelux and the 4th joint WIC/IEEE Symposium on Information Theory and Signal Processing in the Benelux, Eindhoven, the Netherlands May 12-13, 2014

    Compressive sensing (CS) as an approach to data acquisition has recently received much attention. In CS, recovering the signal from the observed data requires solving for a sparse vector from an underdetermined system of equations. The underlying sparse signal recovery problem is quite general, has many applications, and is the focus of this talk. The main emphasis will be on Bayesian approaches for sparse signal recovery. We will examine sparse priors such as the super-Gaussian and Student-t priors and appropriate MAP estimation methods. In particular, re-weighted l2 and re-weighted l1 methods developed to solve the resulting optimization problem will be discussed. The talk will also examine a hierarchical Bayesian framework and then study in detail an empirical Bayesian method, the Sparse Bayesian Learning (SBL) method. If time permits, we will also discuss Bayesian methods for sparse recovery problems with structure: intra-vector correlation in the context of the block-sparse model and inter-vector correlation in the context of the multiple measurement vector problem.
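    As a small, concrete instance of the re-weighted l2 idea mentioned above (a FOCUSS-style iteration; the problem sizes, regularization and iteration count are illustrative assumptions, and this is not code from the talk), the sketch below recovers a sparse vector from an underdetermined linear system:

```python
import numpy as np

# Sparse recovery from y = A x with n < m via a re-weighted l2 iteration:
# each step solves a weighted least-squares problem whose weights come from
# the current estimate, progressively concentrating energy on a few entries.
rng = np.random.default_rng(2)
n, m, k = 40, 100, 5                    # measurements, unknowns, true sparsity
A = rng.normal(size=(n, m))
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=n)

x = np.ones(m)                          # initial estimate
lam = 1e-3                              # regularization (assumed)
for _ in range(30):
    W = np.diag(np.abs(x))              # re-weighting from the current estimate
    x = W @ A.T @ np.linalg.solve(A @ W @ A.T + lam * np.eye(n), y)

support = np.sort(np.argsort(-np.abs(x))[:k])
print("estimated support:", support, "true support:", np.sort(np.nonzero(x_true)[0]))
```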


    Spatial Statistical Data Fusion on Java-enabled Machines in Ubiquitous Sensor Networks

    Wireless Sensor Networks (WSNs) consist of small, cheap devices that combine sensing, computing and communication capabilities. They must be able to communicate and process data efficiently using a minimum amount of energy, and to cover an area of interest with the minimum number of sensors. This thesis proposes the use of techniques that were designed for geostatistics and applies them to the WSN field. Kriging and cokriging interpolation, which can be considered information fusion algorithms, were tested to prove the feasibility of these methods for increasing coverage. To reduce energy consumption, a compression method that models correlations based on variograms was developed. A second challenge is to establish communication with external networks and to react to unexpected events. A demonstrator that uses commercial Java-enabled devices was implemented; it is able to perform remote monitoring, send SMS alarms and deploy remote updates.
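    As an illustration of the kriging interpolation used in the thesis (a minimal ordinary-kriging sketch; the sensor locations, readings and exponential covariance parameters are assumptions, not values from the work), a reading at an unsensed location can be estimated from nearby sensors:

```python
import numpy as np

# Ordinary kriging: estimate the field at an unsensed location as a weighted
# sum of nearby sensor readings, with weights from a spatial covariance model.
def cov(h, sill=1.0, corr_range=5.0):
    return sill * np.exp(-h / corr_range)   # exponential covariance model (assumed)

pts = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0], [6.0, 6.0]])  # sensor positions
vals = np.array([20.0, 22.0, 21.0, 25.0])                          # sensor readings
target = np.array([3.0, 3.0])                                      # point to interpolate

n = len(pts)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)      # pairwise distances
# Ordinary-kriging system: covariances plus a Lagrange row/column that forces
# the weights to sum to one (unbiasedness of the estimator).
K = np.zeros((n + 1, n + 1))
K[:n, :n] = cov(d)
K[:n, n] = K[n, :n] = 1.0
rhs = np.append(cov(np.linalg.norm(pts - target, axis=1)), 1.0)
weights = np.linalg.solve(K, rhs)[:n]
print("interpolated value at", target, "=", weights @ vals)
```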