Achieving Marton's Region for Broadcast Channels Using Polar Codes
This paper presents polar coding schemes for the 2-user discrete memoryless
broadcast channel (DM-BC) which achieve Marton's region with both common and
private messages. This is the best achievable rate region known to date, and it
is tight for all classes of 2-user DM-BCs whose capacity regions are known. To
accomplish this task, we first construct polar codes for both the superposition
as well as the binning strategy. By combining these two schemes, we obtain
Marton's region with private messages only. Finally, we show how to handle the
case of common information. The proposed coding schemes possess the usual
advantages of polar codes, i.e., they have low encoding and decoding complexity
and a super-polynomial decay rate of the error probability.
We follow the lead of Goela, Abbe, and Gastpar, who recently introduced polar
codes emulating the superposition and binning schemes. In order to align the
polar indices, for both schemes, their solution involves some degradedness
constraints that are assumed to hold between the auxiliary random variables and
the channel outputs. To remove these constraints, we consider the transmission
of blocks and employ a chaining construction that guarantees the proper
alignment of the polarized indices. The techniques described in this work are
quite general, and they can be adapted to many other multi-terminal scenarios
whenever the polar indices need to be aligned.
Comment: 26 pages, 11 figures, accepted to IEEE Trans. Inform. Theory and
presented in part at ISIT'1
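As a concrete illustration of the polarization machinery these schemes build on, here is a minimal sketch (my own, not the paper's code) of Arikan's polar transform over GF(2); the index-alignment and chaining constructions the paper contributes are not shown:

```python
def polar_transform(u):
    """Apply Arikan's 2x2 kernel F = [[1, 0], [1, 1]] recursively
    (butterfly network), i.e. x = u * F^{tensor log2(N)} over GF(2).
    The bit-reversal reordering is omitted for brevity."""
    x = list(u)
    n = len(x)
    assert n & (n - 1) == 0, "block length must be a power of two"
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]  # upper branch carries u1 XOR u2
        step *= 2
    return x

# F is its own inverse over GF(2), so the transform is an involution:
u = [1, 0, 1, 1, 0, 0, 1, 0]
assert polar_transform(polar_transform(u)) == u
```

Polar coding then amounts to placing information bits on the polarized "good" indices of this transform and freezing the rest, which is exactly where the alignment problem the paper solves arises.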
Convex Optimization Approaches for Blind Sensor Calibration using Sparsity
We investigate a compressive sensing framework in which the sensors introduce
a distortion to the measurements in the form of unknown gains. We focus on
blind calibration, using measurements performed on multiple unknown (but sparse)
signals, and formulate the joint recovery of the gains and the sparse signals as
a convex optimization problem. We divide this problem into three subproblems with
different conditions on the gains, specifically (i) gains with different
amplitude and the same phase, (ii) gains with the same amplitude and different
phase and (iii) gains with different amplitude and phase. In order to solve the
first case, we propose an extension to the basis pursuit optimization which can
estimate the unknown gains along with the unknown sparse signals. For the
second case, we formulate a quadratic approach that eliminates the unknown
phase shifts and retrieves the unknown sparse signals. An alternative form of
this approach is also formulated to reduce complexity and memory requirements
and provide scalability with respect to the number of input signals. Finally,
for the third case, we propose a formulation that combines the earlier two
approaches to solve the problem. The performance of the proposed algorithms is
investigated extensively through numerical simulations, which demonstrates that
simultaneous signal recovery and calibration is possible with convex methods
when sufficiently many (unknown, but sparse) calibrating signals are provided.
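The convexification trick for case (i) can be illustrated by eliminating the gains: under the toy model y_ik = d_i (a_i . x_k) (my own construction, not the paper's code), substituting e_i = 1/d_i turns the bilinear constraints into linear ones, e_i y_ik - a_i . x_k = 0. With enough calibrating signals the lifted system below is square, so plain Gaussian elimination suffices; the paper's basis-pursuit extension instead minimizes an l1 norm when the system is underdetermined and the signals are sparse. The reference-sensor normalization e_0 = 1 is an assumed way to fix the scale ambiguity:

```python
def solve(M, b):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

# Toy setup: 3 sensors with unknown gains d, 2 calibrating signals in R^2.
A_sense = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
d_true = [1.0, 2.0, 0.5]                       # unknown gains
x_true = [[1.0, 2.0], [3.0, 1.0]]              # unknown signals
y = [[d_true[i] * sum(A_sense[i][j] * xk[j] for j in range(2))
      for i in range(3)] for xk in x_true]

# Unknowns: [x1 (2 entries), x2 (2 entries), e (3 entries)], e_i = 1/d_i.
# Linear constraints: e_i * y_ik - a_i . x_k = 0, plus e_0 = 1 (reference).
M, b = [], []
for k in range(2):
    for i in range(3):
        row = [0.0] * 7
        row[2 * k] = -A_sense[i][0]
        row[2 * k + 1] = -A_sense[i][1]
        row[4 + i] = y[k][i]
        M.append(row)
        b.append(0.0)
row = [0.0] * 7
row[4] = 1.0
M.append(row)
b.append(1.0)

sol = solve(M, b)
gains = [1.0 / e for e in sol[4:]]             # recover d_i = 1/e_i
```

The same linearization is what makes the joint recovery a convex program rather than a bilinear one.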
Generalized Approximate Message-Passing Decoder for Universal Sparse Superposition Codes
Sparse superposition (SS) codes were originally proposed as a
capacity-achieving communication scheme over the additive white Gaussian noise
channel (AWGNC) [1]. Very recently, it was discovered that these codes are
universal, in the sense that they achieve capacity over any memoryless channel
under generalized approximate message-passing (GAMP) decoding [2], although
this decoder has never been stated for SS codes. In this contribution we
introduce the GAMP decoder for SS codes, we confirm empirically the
universality of this communication scheme through its study on various channels
and we provide the main analysis tools: state evolution and potential. We also
compare the performance of GAMP with the Bayes-optimal MMSE decoder. We
empirically illustrate that, despite the presence of a phase transition
preventing GAMP from reaching the optimal performance, spatial coupling boosts
the performance, which eventually tends to capacity in a proper limit. We
also prove that, in contrast with the AWGNC case, SS codes for binary input
channels have a vanishing error floor in the limit of large codewords.
Moreover, the performance of Hadamard-based encoders is assessed for practical
implementations.
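The sparse superposition construction itself is simple to state: the message is split into L sections, each section selects one of B columns of a coding matrix, and the codeword is the superposition of the selected columns. A minimal sketch with toy parameters of my own choosing (the GAMP decoder is not shown):

```python
import random

def ss_encode(message_sections, F):
    """Sparse superposition encoding: the message picks one column per
    section, so the codeword is F @ s with s section-wise 1-sparse."""
    L = len(message_sections)
    B = len(F[0]) // L
    n = len(F)
    s = [0.0] * (L * B)
    for l, idx in enumerate(message_sections):
        s[l * B + idx] = 1.0           # exactly one nonzero per section
    codeword = [sum(F[i][j] * s[j] for j in range(L * B)) for i in range(n)]
    return codeword, s

random.seed(0)
L, B, n = 4, 8, 16                     # rate R = L*log2(B)/n = 0.75 bits/use
F = [[random.gauss(0.0, 1.0 / n ** 0.5) for _ in range(L * B)]
     for _ in range(n)]
msg = [3, 0, 7, 5]                     # one symbol in {0..B-1} per section
codeword, s = ss_encode(msg, F)
```

Decoding is then a structured sparse-recovery problem, which is why message-passing decoders such as GAMP apply; a Hadamard-based F replaces the random matrix in the practical implementations the paper assesses.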
Maximum Likelihood Associative Memories
Associative memories are structures that store data in such a way that it can
later be retrieved given only a part of its content -- a sort-of
error/erasure-resilience property. They are used in applications ranging from
caches and memory management in CPUs to database engines. In this work we study
associative memories built on the maximum likelihood principle. We derive
minimum residual error rates when the data stored comes from a uniform binary
source. Second, we determine the minimum amount of memory required to store the
same data. Finally, we bound the computational complexity for message
retrieval. We then compare these bounds with two existing associative memory
architectures: the celebrated Hopfield neural networks and a neural network
architecture introduced more recently by Gripon and Berrou.
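For a uniform binary source probed with erasures or symmetric noise, the maximum likelihood retrieval rule reduces to minimum Hamming distance over the observed positions. A minimal sketch of that principle (not the Hopfield or Gripon-Berrou architectures the paper compares against):

```python
def ml_retrieve(probe, memory):
    """Return the stored word maximizing the likelihood of the probe.
    Erased positions (None) carry no information; under a symmetric noise
    model, ML reduces to minimum Hamming distance on observed positions."""
    def dist(word):
        return sum(1 for p, w in zip(probe, word)
                   if p is not None and p != w)
    return min(memory, key=dist)

memory = [[0, 1, 1, 0, 1], [1, 1, 0, 0, 0], [0, 0, 1, 1, 1]]
probe = [None, 1, 1, None, 1]          # partial content: two erasures
assert ml_retrieve(probe, memory) == [0, 1, 1, 0, 1]
```

The brute-force minimum here scans every stored word, which is exactly the retrieval-complexity cost the paper bounds and trades against error rate and memory usage.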
Deep Learning: Our Miraculous Year 1990-1991
In 2020, we will celebrate that many of the basic ideas behind the deep
learning revolution were published three decades ago within fewer than 12
months in our "Annus Mirabilis" or "Miraculous Year" 1990-1991 at TU Munich.
Back then, few people were interested, but a quarter century later, neural
networks based on these ideas were on over 3 billion devices such as
smartphones, and used many billions of times per day, consuming a significant
fraction of the world's compute.
Comment: 37 pages, 188 references, based on work of 4 Oct 201
Instantly Decodable Network Coding: From Centralized to Device-to-Device Communications
From its introduction to its quindecennial, network coding has built a strong reputation for enhancing packet recovery and achieving maximum information flow in both wired and wireless networks. Traditional studies focused on optimizing the throughput of the system by proposing elaborate schemes able to reach the network capacity. With the shift toward distributed computing on mobile devices, performance and complexity both become critical factors that affect the efficiency of a coding strategy. Instantly decodable network coding presents itself as a new paradigm in network coding that trades off these two aspects. This paper reviews instantly decodable network coding schemes by identifying, categorizing, and evaluating various algorithms proposed in the literature. The first part of the manuscript investigates conventional centralized systems, in which all decisions are carried out by a central unit, e.g., a base station. In particular, two successful approaches, known as strict and generalized instantly decodable network coding, are compared in terms of reliability, performance, complexity, and packet selection methodology. The second part considers the use of instantly decodable codes in a device-to-device communication network, in which devices speed up the recovery of the missing packets by exchanging network-coded packets. Although the performance improvements are directly proportional to the increase in computational complexity, numerous schemes that are successful from both the performance and complexity viewpoints are identified.
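The defining property is easy to state: an XOR combination is instantly decodable for a receiver if and only if it contains exactly one packet that receiver is missing, so a single XOR against its cache recovers it. A minimal check, with hypothetical packet IDs of my own choosing:

```python
def instantly_decodable(combination, has):
    """A coded packet (the XOR of the packets in `combination`) is
    instantly decodable for a receiver holding `has` iff exactly one
    combined packet is missing: XORing out the known packets then
    yields the missing one directly, with no buffering."""
    missing = [p for p in combination if p not in has]
    return len(missing) == 1

# Receiver 1 holds packets {1, 2}; receiver 2 holds {3}.
# Sending p2 XOR p3 serves both receivers instantly:
assert instantly_decodable([2, 3], has={1, 2})   # r1 recovers p3
assert instantly_decodable([2, 3], has={3})      # r2 recovers p2
# p1 XOR p2 XOR p3 is not instantly decodable for receiver 2:
assert not instantly_decodable([1, 2, 3], has={3})
```

Choosing which packets to combine so that this predicate holds for as many receivers as possible is the packet-selection problem the surveyed strict and generalized schemes solve.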
Approximate Sparsity Pattern Recovery: Information-Theoretic Lower Bounds
Recovery of the sparsity pattern (or support) of an unknown sparse vector
from a small number of noisy linear measurements is an important problem in
compressed sensing. In this paper, the high-dimensional setting is considered.
It is shown that if the measurement rate and per-sample signal-to-noise ratio
(SNR) are finite constants independent of the length of the vector, then the
optimal sparsity pattern estimate will have a constant fraction of errors.
Lower bounds on the measurement rate needed to attain a desired fraction of
errors are given in terms of the SNR and various key parameters of the unknown
vector. The tightness of the bounds in a scaling sense, as a function of the
SNR and the fraction of errors, is established by comparison with existing
achievable bounds. Near optimality is shown for a wide variety of practically
motivated signal models.