Applications of Coding Theory to Massive Multiple Access and Big Data Problems
The broad theme of this dissertation is the design of schemes that admit low-complexity iterative algorithms for new problems arising in massive multiple access and big data. Although bipartite Tanner graphs and low-complexity iterative algorithms such as peeling and message-passing decoders are very popular in the channel coding literature, they are not as widely used in these areas of study, and this dissertation serves as an important step toward bridging that gap. The contributions of this dissertation can be categorized into the following three parts.
In the first part of this dissertation, a timely and interesting multiple access problem for a massive number of uncoordinated devices is considered, wherein the base station is interested only in recovering the list of messages, without regard to the identities of the respective sources. A coding scheme with polynomial encoding and decoding complexities is proposed for this problem; its two main features are (i) a close-to-optimal coding scheme for the T-user Gaussian multiple access channel and (ii) a successive interference cancellation decoder. The proposed coding scheme not only improves on the performance of the previously best known coding scheme by ≈ 13 dB but is also only ≈ 6 dB away from the random Gaussian coding information rate.
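The successive interference cancellation idea in feature (ii) can be sketched on a toy two-user Gaussian multiple access channel with uncoded BPSK. This is a deliberately simplified stand-in for the dissertation's T-user coded scheme; all parameters (powers, block length) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-user real Gaussian MAC, uncoded BPSK, unit-variance noise.
# SIC: decode the stronger user treating the weaker as noise, subtract
# its estimated contribution, then decode the weaker user on the residual.
n = 10000
p1, p2 = 4.0, 1.0                  # user powers (user 1 is stronger)
b1 = rng.integers(0, 2, n) * 2 - 1
b2 = rng.integers(0, 2, n) * 2 - 1
y = np.sqrt(p1) * b1 + np.sqrt(p2) * b2 + rng.standard_normal(n)

b1_hat = np.sign(y)                # stage 1: stronger user first
y_res = y - np.sqrt(p1) * b1_hat   # cancel its estimated contribution
b2_hat = np.sign(y_res)            # stage 2: weaker user on the residual

ber1 = np.mean(b1_hat != b1)
ber2 = np.mean(b2_hat != b2)
print(ber1, ber2)
```

The stronger user enjoys a large power advantage and is decoded more reliably; the weaker user pays for any cancellation errors, which is why practical SIC schemes decode in decreasing order of received power.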
In the second part, Construction-D lattices are constructed in which the underlying linear codes are nested binary spatially-coupled low-density parity-check (SC-LDPC) codes with uniform left and right degrees. It is shown that the proposed lattices achieve the Poltyrev limit under multistage belief propagation decoding. Leveraging this result, lattice codes constructed from these lattices are applied to the three-user symmetric interference channel. For channel gains within 0.39 dB of the very strong interference regime, the proposed lattice coding scheme with the iterative belief propagation decoder, at target error rates of ≈ 10^-5, is only 2.6 dB away from the Shannon limit.
The third part focuses on support recovery in compressed sensing and the nonadaptive group testing (GT) problems. Prior to this work, sensing schemes based on left-regular sparse bipartite graphs, with iterative recovery algorithms based on the peeling decoder, were proposed for the above problems. These schemes require O(K log N) and Ω(K log K log N) measurements, respectively, to recover the sparse signal with high probability (w.h.p.), where N and K denote the dimension and sparsity of the signal, respectively (K ≪ N). Also, the number of measurements required to recover at least a (1 − ε) fraction of the defective items w.h.p. (approximate GT) was shown to be c_√ε K log(N/K). In this dissertation, instead of left-regular bipartite graphs, sensing schemes based on left-and-right-regular bipartite graphs are analyzed. It is shown that this design strategy yields superior and sharper results. For the support recovery problem, the number of measurements is reduced to the optimal lower bound of Ω(K log(N/K)). Similarly, for approximate GT, the proposed scheme requires only c_ε K log(N/K) measurements. For probabilistic GT, the proposed scheme requires O(K log K log(N/K)) measurements, which is only a log K factor away from the best known lower bound of Ω(K log(N/K)). Beyond the asymptotic regime, the proposed schemes also demonstrate significant improvement in the required number of measurements for finite values of K and N.
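The peeling-style recovery underlying these group testing schemes can be sketched for the noiseless nonadaptive setting. The sketch below is an illustration only, with toy graph parameters that are not those analyzed in the dissertation: items join tests via a left-regular bipartite graph, negative tests eliminate items, and a positive test that is left with a single unexplained candidate resolves it as defective:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-adaptive group testing instance: N items, K defectives,
# left-regular pooling graph (each item joins L of the T tests).
N, K, T, L = 200, 5, 60, 3
defective = rng.choice(N, size=K, replace=False)

tests = [set() for _ in range(T)]          # tests[t] = items pooled in test t
for item in range(N):
    for t in rng.choice(T, size=L, replace=False):
        tests[t].add(item)

# Noiseless OR-channel outcomes: a test is positive iff it has a defective.
positive = [any(i in tests[t] for i in defective) for t in range(T)]

# Step 1: every item appearing in a negative test is certainly non-defective.
candidates = set(range(N))
for t in range(T):
    if not positive[t]:
        candidates -= tests[t]

# Step 2 (peeling): a positive test with no already-resolved defective and
# exactly one surviving candidate resolves that candidate as defective.
resolved = set()
changed = True
while changed:
    changed = False
    for t in range(T):
        if not positive[t]:
            continue
        live = (tests[t] & candidates) - resolved
        if len(live) == 1 and not (tests[t] & resolved):
            resolved |= live
            changed = True

print(sorted(resolved))
```

In the noiseless model both steps are sound: no defective is ever eliminated, and every peeled item is truly defective; whether all K defectives are peeled depends on the graph, which is exactly what the density-evolution-style analyses in this line of work characterize.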
Computing a k-sparse n-length Discrete Fourier Transform using at most 4k samples and O(k log k) complexity
Given an n-length input signal x, it is well known that its Discrete Fourier Transform (DFT), X, can be computed in O(n log n) complexity using a Fast Fourier Transform (FFT). If the spectrum X is exactly k-sparse (where k ≪ n), can we do better? We show that asymptotically in k and n, when k is sub-linear in n (precisely, k ∝ n^δ where 0 < δ < 1), and the support of the non-zero DFT coefficients is uniformly random, we can exploit this sparsity in two fundamental ways: (i) sample complexity: we need only M = 4k deterministically chosen samples of the input signal x; and (ii) computational complexity: we can reliably compute the DFT X using O(k log k) operations, where the constants in the big Oh are small and are related to the constants involved in computing a small number of DFTs of length approximately equal to the sparsity parameter k. Our algorithm succeeds with high probability, with the probability of failure vanishing to zero asymptotically in the number of samples acquired, M.
Comment: 36 pages, 15 figures. To be presented at ISIT 2013, Istanbul, Turkey.
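The structural fact this line of work exploits is that uniformly subsampling the time-domain signal folds (aliases) the spectrum: every d-th sample of x has an (n/d)-point DFT equal to the n-point spectrum summed over d equally spaced "shifts". A sparse spectrum therefore collides in only a few aliased bins, which a peeling-style decoder can untangle across a few subsampling stages. A small numpy check of the aliasing identity, with toy sizes rather than the paper's parameter choices:

```python
import numpy as np

n, d = 20, 5                       # toy sizes; n must be divisible by d
rng = np.random.default_rng(1)

# A k-sparse n-point spectrum X with random support, and its time signal x.
k = 3
X = np.zeros(n, dtype=complex)
support = rng.choice(n, size=k, replace=False)
X[support] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
x = np.fft.ifft(X)

# DFT of the subsampled signal equals the folded spectrum, scaled by 1/d:
#   FFT(x[::d])[l] = (1/d) * sum_r X[l + r*(n/d)]
X_sub = np.fft.fft(x[::d])
X_folded = X.reshape(d, n // d).sum(axis=0) / d
assert np.allclose(X_sub, X_folded)
```

With only k nonzeros spread over n/d bins, most folded bins contain zero or one coefficient, which is the starting point for the iterative recovery.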
Analysis of Spatially-Coupled Counter Braids
A counter braid (CB) is a novel counter architecture introduced by Lu et al.
in 2007 for per-flow measurements on high-speed links. CBs achieve an
asymptotic compression rate (under optimal decoding) that matches the entropy
lower bound of the flow size distribution. Spatially-coupled CBs (SC-CBs) have
recently been proposed. In this work, we further analyze single-layer CBs and
SC-CBs using an equivalent bipartite graph representation of CBs. On this
equivalent representation, we show that the potential and area thresholds are
equal. We also show that the area under the extended belief propagation (BP)
extrinsic information transfer curve (defined for the equivalent graph),
computed for the expected residual CB graph when a peeling decoder equivalent
to the BP decoder stops, is equal to zero precisely at the area threshold.
This, combined with simulations and an asymptotic analysis of the Maxwell
decoder, leads to the conjecture that the area threshold is in fact equal to
the Maxwell decoding threshold and hence a lower bound on the maximum a
posteriori (MAP) decoding threshold. Finally, we present some numerical results
and give some insight into the apparent gap of the BP decoding threshold of
SC-CBs to the conjectured lower bound on the MAP decoding threshold.
Comment: To appear in the IEEE Information Theory Workshop, Jeju Island, Korea, October 2015.
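The "equivalent bipartite graph" view of a counter braid can be illustrated with a single-layer, full-width toy sketch (an illustrative simplification, not Lu et al.'s exact layered construction with saturating counters): flows hash into shared counters, each counter accumulates the sizes of its flows, and a peeling decoder resolves any counter attached to exactly one unresolved flow:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy single-layer counter-braid-style sketch: F flows, C shared counters,
# each flow hashes into L counters; a counter stores the sum of the sizes
# of all flows mapped to it.
F, C, L = 30, 50, 3
sizes = rng.integers(1, 100, size=F)                  # true per-flow counts
edges = [rng.choice(C, size=L, replace=False) for _ in range(F)]

counters = np.zeros(C, dtype=int)
for f in range(F):
    counters[edges[f]] += sizes[f]

# Peeling on the equivalent bipartite graph: a counter with exactly one
# unresolved attached flow reveals that flow's size; subtract and repeat.
attached = [set() for _ in range(C)]
for f in range(F):
    for c in edges[f]:
        attached[c].add(f)

est = {}
work = counters.copy()
progress = True
while progress:
    progress = False
    for c in range(C):
        if len(attached[c]) == 1:
            f = attached[c].pop()
            est[f] = work[c]                          # exact size of flow f
            for c2 in edges[f]:
                work[c2] -= est[f]
                attached[c2].discard(f)
            progress = True

print(len(est), "of", F, "flows resolved")
```

Every peeled estimate is exact by construction; whether peeling resolves all flows depends on the degree distribution of the graph, which is precisely what the BP/area-threshold analysis above characterizes.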