List Decoding of Direct Sum Codes
We consider families of codes obtained by "lifting" a base code $\mathcal{C}$
through operations such as $k$-XOR applied to "local views" of codewords of
$\mathcal{C}$, according to a suitable $k$-uniform hypergraph. The $k$-XOR
operation yields the direct sum encoding used in works of [Ta-Shma, STOC 2017]
and [Dinur and Kaufman, FOCS 2017].
We give a general framework for list decoding such lifted codes, as long as
the base code admits a unique decoding algorithm, and the hypergraph used for
lifting satisfies certain expansion properties. We show that these properties
are satisfied by the collection of length-$t$ walks on an expander graph, and
by hypergraphs corresponding to high-dimensional expanders. Instantiating our
framework, we obtain list decoding algorithms for direct sum liftings on the
above hypergraph families. Using known connections between direct sum and
direct product, we also recover the recent results of Dinur et al. [SODA 2019]
on list decoding for direct product liftings.
Our framework relies on relaxations given by the Sum-of-Squares (SOS) SDP
hierarchy for solving various constraint satisfaction problems (CSPs). We view
the problem of recovering the closest codeword to a given word, as finding the
optimal solution of a CSP. Constraints in the instance correspond to edges of
the lifting hypergraph, and the solutions are restricted to lie in the base
code $\mathcal{C}$. We show that recent algorithms for (approximately) solving
CSPs on certain expanding hypergraphs also yield a decoding algorithm for such
lifted codes.
We extend the framework to list decoding, by requiring the SOS solution to
minimize a convex proxy for negative entropy. We show that this ensures a
covering property for the SOS solution, and the "condition and round" approach
used in several SOS algorithms can then be used to recover the required list of
codewords.

Comment: Full version of paper from SODA 2020
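To make the lifting operation concrete, here is a minimal sketch of the $k$-XOR direct sum encoding, assuming the base codeword is given as a bit list and the $k$-uniform hypergraph as a list of $k$-element index tuples; the toy hypergraph and names are illustrative, not taken from the paper. The decoding problem studied above is the inverse task: given a noisy lifted word, recover a short list of consistent base codewords.

```python
def direct_sum_lift(codeword, hyperedges):
    """k-XOR lifting: each output bit is the parity (XOR) of the
    codeword bits indexed by one k-element hyperedge (a "local view")."""
    return [sum(codeword[i] for i in edge) % 2 for edge in hyperedges]

# Toy example: a length-5 base codeword lifted by a 3-uniform hypergraph.
c = [1, 0, 1, 1, 0]
edges = [(0, 1, 2), (1, 3, 4), (0, 2, 4)]
print(direct_sum_lift(c, edges))  # [0, 1, 0]
```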
Fixed Error Asymptotics For Erasure and List Decoding
We derive the optimum second-order coding rates, known as second-order
capacities, for erasure and list decoding. For erasure decoding for discrete
memoryless channels, we show that the second-order capacity is
$\sqrt{V}\,\Phi^{-1}(\epsilon_t)$, where $V$ is the channel dispersion and
$\epsilon_t$ is the total error probability, i.e., the sum of the erasure and
undetected error probabilities. We show numerically that the expected rate at finite
blocklength for erasure decoding can exceed the finite blocklength channel
coding rate. We show that an analogous result holds for lossless
source coding with decoder side information, i.e., Slepian-Wolf coding. For
list decoding, we consider list codes of deterministic size that scales as
$\exp(\sqrt{n}\,l)$ and show that the second-order capacity is
$l + \sqrt{V}\,\Phi^{-1}(\epsilon)$, where $\epsilon$ is the permissible error
probability. We also consider lists of polynomial size $n^{\alpha}$ and derive
bounds on the third-order coding rate in terms of the order $\alpha$ of the
polynomial. These bounds are tight for symmetric and singular channels. The
direct parts of the coding theorems leverage the simple threshold decoder,
and the converses are proved using variants of the hypothesis testing converse.

Comment: 18 pages, 1 figure; Submitted to IEEE Transactions on Information Theory; shorter version to be presented at ISIT 2014
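For orientation, the sketch below evaluates the normal approximation $R \approx C + \sqrt{V/n}\,\Phi^{-1}(\epsilon)$ that underlies such second-order results, here for a binary symmetric channel; the erasure and list results above change the argument of $\Phi^{-1}$ and add the list exponent $l$. The channel and parameters below are purely illustrative.

```python
import math
from statistics import NormalDist

def h2(p):
    """Binary entropy in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity_dispersion(p):
    """Capacity C and dispersion V of a BSC with crossover probability p."""
    C = 1 - h2(p)
    V = p * (1 - p) * math.log2((1 - p) / p) ** 2  # variance of info density
    return C, V

# Normal approximation to the best rate at blocklength n and error eps.
p, n, eps = 0.11, 1000, 1e-3
C, V = bsc_capacity_dispersion(p)
R = C + math.sqrt(V / n) * NormalDist().inv_cdf(eps)
print(f"C = {C:.4f} bits/use, approx. rate at n={n}: {R:.4f}")
```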
Homomorphism Extension
We define the Homomorphism Extension (HomExt) problem: given a group $G$, a
subgroup $M \le G$, and a homomorphism $\varphi \colon M \to H$, decide whether or
not there exists a homomorphism $\widetilde{\varphi} \colon G \to H$ extending
$\varphi$, i.e., $\widetilde{\varphi}|_M = \varphi$. This problem arose in the
context of list-decoding homomorphism codes but is also of independent
interest, both as a problem in computational group theory and as a new and
natural problem in NP of unsettled complexity status.
We consider the case $H = S_n$ (the symmetric group of degree $n$), i.e.,
$\varphi$ is an $M$-action on a set of $n$ elements. We assume $G$ is given as
a permutation group by a list of generators. We characterize
the equivalence classes of extensions in terms of a multidimensional oracle
subset-sum problem. From this we infer that for bounded $n$ the HomExt problem
can be solved in polynomial time.
Our main result concerns the case $M = A_k$ (the alternating group of degree
$k$) for variable $k$, under the assumption that the index of $M$ in $G$ is
bounded by $\mathrm{poly}(k)$. We solve this case in polynomial time. This is
the case with direct relevance to homomorphism codes
(Babai, Black, and Wuu, arXiv 2018); it is used as a component of one of the
main algorithms in that paper.

Comment: 29 pages
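As a toy illustration of the decision problem (not of the paper's algorithm), the brute-force check below handles the much simpler case where $G$ is cyclic: an extension of $\varphi$ is then determined by the image of a generator of $G$, so one can simply search $S_n$ for it. All names and the example are illustrative.

```python
from itertools import permutations

def compose(a, b):
    """Permutation composition (a after b); permutations as index tuples."""
    return tuple(a[b[i]] for i in range(len(a)))

def perm_pow(p, k):
    """k-th power of permutation p."""
    result = tuple(range(len(p)))
    for _ in range(k):
        result = compose(p, result)
    return result

def homext_cyclic(N, d, phi_of_d, n):
    """Does phi: <d> <= Z_N -> S_n with phi(d) = phi_of_d extend to Z_N?
    An extension psi is fixed by sigma = psi(1), which must satisfy
    sigma^d = phi_of_d and sigma^N = identity; brute force over S_n."""
    identity = tuple(range(n))
    return any(
        perm_pow(sigma, d) == phi_of_d and perm_pow(sigma, N) == identity
        for sigma in permutations(range(n))
    )

# phi sends the subgroup 2Z_6 = {0, 2, 4} <= Z_6 to the 3-cycle (0 1 2) in S_3.
# It extends: psi(1) = (0 2 1) works, since (0 2 1)^2 = (0 1 2).
print(homext_cyclic(6, 2, (1, 2, 0), 3))  # True
```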
SISO APP Searches in Lattices with Tanner Graphs
An efficient, low-complexity, soft-output detector for general lattices is
presented, based on their Tanner graph (TG) representations. Closest-point
searches in lattices can be performed as non-binary belief propagation on
associated TGs; soft-information output is naturally generated in the process;
the algorithm requires no backtracking (cf. classic sphere decoding), and
extracts extrinsic information. A lattice's coding gain enables equivalence
relations between lattice points, which can thereby be partitioned into cosets. Total and
extrinsic a posteriori probabilities at the detector's output further enable
the use of soft detection information in iterative schemes. The algorithm is
illustrated via two scenarios that transmit a 32-point, uncoded
super-orthogonal (SO) constellation for multiple-input multiple-output (MIMO)
channels, carved from an 8-dimensional non-orthogonal lattice (a direct sum of
two 4-dimensional checkerboard lattices): it achieves maximum-likelihood
performance in quasistatic fading; and, performs close to interference-free
transmission, and identically to list sphere decoding, in independent fading
with coordinate interleaving and iterative equalization and detection. The
latter scenario outperforms the former despite the absence of forward error
correction coding, because the inherent lattice coding gain allows for the
refinement of extrinsic information. The lattice constellation is the same as the one
employed in the SO space-time trellis codes first introduced for 2-by-2 MIMO by
Ionescu et al., then independently by Jafarkhani and Seshadri. Complexity is
log-linear in lattice dimensionality, vs. cubic in sphere decoders.

Comment: 15 pages, 6 figures, 2 tables, uses IEEEtran.cls
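For reference, closest-point search in the 4-dimensional checkerboard lattice $D_4$ (the constituent lattice mentioned above) has a classical rounding solution due to Conway and Sloane; the TG-based detector in the paper replaces this kind of hard search with non-binary belief propagation that also emits soft information. A minimal sketch of the hard-decision baseline:

```python
def closest_point_Dn(y):
    """Nearest point of the checkerboard lattice
    D_n = {x in Z^n : sum(x) is even}  (Conway & Sloane).
    Round each coordinate; if the rounded sum is odd, re-round the
    coordinate with the largest rounding error the other way."""
    f = [round(t) for t in y]
    if sum(f) % 2 == 0:
        return f
    worst = max(range(len(y)), key=lambda j: abs(y[j] - f[j]))
    f[worst] += 1 if y[worst] > f[worst] else -1
    return f

print(closest_point_Dn([0.9, 0.1, -0.2, 0.4]))  # [1, 0, 0, 1]
```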
A hybrid partial sum computation unit architecture for list decoders of polar codes
Although the successive cancellation (SC) algorithm works well for very long
polar codes, its error performance for shorter polar codes is much worse.
Several SC-based list decoding algorithms have been proposed to improve the
error performance of both long and short polar codes. A significant step of
SC-based list decoding algorithms is the updating of partial sums for all decoding
paths. In this paper, we first propose a lazy copy partial sum computation
algorithm for SC-based list decoding algorithms. Instead of copying partial
sums directly, our lazy copy algorithm copies indices of partial sums. Based on
our lazy copy algorithm, we propose a hybrid partial sum computation unit
architecture, which employs both registers and memories so that the overall
area efficiency is improved. Compared with a recent partial sum computation
unit for list decoders, our partial sum computation unit achieves area savings
of 23% and 63% for two representative block lengths, respectively.

Comment: 5 pages, presented at the 2015 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
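The lazy copy idea can be illustrated in a few lines: cloning a decoding path copies only a per-stage index table into a shared pool of partial-sum arrays, and a pool entry is duplicated only when a path that still shares it performs a write (copy-on-write). This is a software sketch with illustrative names, not the paper's hardware architecture.

```python
class LazyPathStore:
    """Each decoding path keeps, per stage, an index into a shared pool
    of partial-sum arrays instead of a private copy."""

    def __init__(self, num_stages, stage_size):
        # one initial pool entry per stage, shared by the initial path
        self.pool = [[[0] * stage_size] for _ in range(num_stages)]
        self.refs = [[1] for _ in range(num_stages)]  # sharer counts
        self.paths = [[0] * num_stages]               # path 0's index table

    def clone(self, path_id):
        """Duplicate a path: O(num_stages) index copies, no data moved."""
        table = self.paths[path_id][:]
        for stage, idx in enumerate(table):
            self.refs[stage][idx] += 1
        self.paths.append(table)
        return len(self.paths) - 1

    def write(self, path_id, stage, pos, bit):
        """Update one partial sum; duplicate the pool entry lazily if it
        is still shared with another path."""
        idx = self.paths[path_id][stage]
        if self.refs[stage][idx] > 1:                 # shared: copy now
            self.refs[stage][idx] -= 1
            self.pool[stage].append(self.pool[stage][idx][:])
            self.refs[stage].append(1)
            idx = len(self.pool[stage]) - 1
            self.paths[path_id][stage] = idx
        self.pool[stage][idx][pos] ^= bit             # XOR accumulation

store = LazyPathStore(num_stages=3, stage_size=4)
p1 = store.clone(0)                       # cheap: copies indices, not data
store.write(p1, stage=0, pos=2, bit=1)    # triggers one lazy copy
print(store.paths[0][0], store.paths[p1][0])  # 0 1: now distinct pool entries
```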
Symbol-Decision Successive Cancellation List Decoder for Polar Codes
Polar codes are of great interest because they provably achieve the capacity
of both discrete and continuous memoryless channels while having an explicit
construction. Most existing decoding algorithms of polar codes are based on
bit-wise hard or soft decisions. In this paper, we propose symbol-decision
successive cancellation (SC) and successive cancellation list (SCL) decoders
for polar codes, which use symbol-wise hard or soft decisions for higher
throughput or better error performance. First, we propose to use a recursive
channel combination to calculate symbol-wise channel transition probabilities,
which lead to symbol decisions. Our proposed recursive channel combination also
has a lower complexity than simply combining bit-wise channel transition
probabilities. The similarity between our proposed method and Arikan's channel
transformations also helps to share hardware resources between calculating bit-
and symbol-wise channel transition probabilities. Second, a two-stage list
pruning network is proposed to provide a trade-off between the error
performance and the complexity of the symbol-decision SCL decoder. Third, since
memory is a significant part of SCL decoders, we propose a pre-computation
memory-saving technique to reduce memory requirement of an SCL decoder.
Finally, to evaluate the throughput advantage of our symbol-decision decoders,
we design an architecture based on a semi-parallel successive cancellation list
decoder. In this architecture, different symbol sizes, sorting implementations,
and message scheduling schemes are considered. Our synthesis results show that
in terms of area efficiency, our symbol-decision SCL decoders outperform
existing bit- and symbol-decision SCL decoders.

Comment: 13 pages, 17 figures
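The following sketch shows the baseline that symbol-decision decoding improves on: forming symbol-wise transition probabilities by multiplying bit-wise ones, then pruning the candidate list in two stages. The paper's recursive channel combination computes such symbol metrics more cheaply; everything below is illustrative.

```python
from itertools import product

def symbol_metrics(bit_probs):
    """Naive symbol-wise metrics: the probability of each bit pattern as
    the product of per-bit probabilities P(bit = 1)."""
    metrics = {}
    for sym in product((0, 1), repeat=len(bit_probs)):
        p = 1.0
        for bit, pb in zip(sym, bit_probs):
            p *= pb if bit else (1.0 - pb)
        metrics[sym] = p
    return metrics

def two_stage_prune(metrics, stage1_size, list_size):
    """Two-stage list pruning: keep stage1_size candidates first, then
    the final list_size (in hardware the first stage is a cheaper,
    coarser sort; here both stages are plain sorts)."""
    coarse = sorted(metrics, key=metrics.get, reverse=True)[:stage1_size]
    return sorted(coarse, key=metrics.get, reverse=True)[:list_size]

m = symbol_metrics([0.9, 0.2])      # P(bit = 1) for a 2-bit symbol
print(two_stage_prune(m, 3, 2))     # [(1, 0), (1, 1)]
```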
Recursive Decoding and Its Performance for Low-Rate Reed-Muller Codes
Recursive decoding techniques are considered for Reed-Muller (RM) codes of
growing length $n$ and fixed order $r$. An algorithm is designed that has
complexity of order $n \log n$ and corrects most error patterns of weight up to
$n(1/2 - \varepsilon)$, given that $\varepsilon$ exceeds $n^{-1/2^r}$. This
improves the asymptotic bounds known for decoding RM codes with nonexponential
complexity.
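The recursion at the heart of such decoders is the Plotkin decomposition: an RM(r, m) codeword has the form (u | u+v) with u in RM(r, m-1) and v in RM(r-1, m-1), so decoding splits into two half-length problems. Below is a hard-decision sketch of that recursion; the algorithm above propagates soft reliabilities and combines both halves when estimating u, while this simplified version uses only the left half.

```python
def decode_rm(y, r, m):
    """Hard-decision recursive (Plotkin) decoder sketch for RM(r, m);
    y is a list of 2^m bits.  Illustrates the recursion only."""
    n = len(y)
    if r == 0:                      # repetition code: majority vote
        bit = 1 if 2 * sum(y) > n else 0
        return [bit] * n
    if r == m:                      # RM(m, m) is all binary words
        return list(y)
    y1, y2 = y[:n // 2], y[n // 2:]
    # (u | u+v): the XOR of the halves is a noisy RM(r-1, m-1) word.
    v = decode_rm([a ^ b for a, b in zip(y1, y2)], r - 1, m - 1)
    # Decode u from the left half (a full decoder also uses y2 XOR v).
    u = decode_rm(y1, r, m - 1)
    return u + [a ^ b for a, b in zip(u, v)]

print(decode_rm([0, 1, 1, 0], 1, 2))   # recovers the RM(1,2) word [0,1,1,0]
```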
Fast Maximum-Likelihood Decoding of the Golden Code
The golden code is a full-rate full-diversity space-time code for two
transmit antennas that has a maximal coding gain. Because each codeword conveys
four information symbols from an M-ary quadrature-amplitude modulation
alphabet, the complexity of an exhaustive-search decoder is proportional to
M^4. In this paper we present a new fast algorithm for maximum-likelihood
decoding of the golden code that has a worst-case complexity of only O(2M^2.5).
We also present an efficient implementation of the fast decoder that exhibits a
low average complexity. Finally, in contrast to the overlaid Alamouti codes,
which lose their fast decodability property on time-varying channels, we show
that the golden code is fast decodable on both quasistatic and rapid
time-varying channels.

Comment: Submitted to IEEE Trans. on Wireless, November 2008
Joint error correction enhancement of the fountain codes concept
Fountain codes like LT or Raptor codes, also known as rateless erasure codes,
make it possible to encode a message as some number of packets, such that any
large enough subset of these packets is sufficient to fully reconstruct the
message. Reconstruction, however, requires undamaged packets, whereas in real
scenarios the packets that were not lost are usually damaged. Hence, an
additional error correction layer is often required: adding some level of
redundancy to each packet so that possible damage can be repaired. This
approach requires a priori knowledge of the final damage level of every
packet: insufficient redundancy leads to packet loss, while overprotection
means a suboptimal channel rate. However, the sender may have
inaccurate or even no a priori information about the final damage levels, for
example in applications like broadcasting, degradation of a storage medium or
damage of picture watermarking.
The Joint Reconstruction Codes (JRC) setting is introduced and discussed in
this paper, with the purpose of removing both the need for a priori knowledge
of damage levels and the suboptimality caused by overprotection and by
discarding underprotected packets. It is obtained by combining the two
processes: reconstruction from multiple packets and forward error correction.
The decoder combines the informational content of all received packets
according to their actual noise levels, which can be estimated a posteriori
individually for each
packet. Assuming a binary symmetric channel (BSC) with bit-flip probability
$\epsilon$, every potentially damaged bit carries $1 - h(\epsilon)$ bits of
information, where $h$ is the Shannon entropy. The minimal requirement to
fully reconstruct the message is that the sum of these rates over all bits is
at least the size of the message. We will discuss sequential decoding for the
reconstruction purpose, whose statistical behavior can be estimated using
Rényi entropy.

Comment: 14 pages, 9 figures
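A back-of-the-envelope sketch of the information budget described above: a bit received through a BSC with flip probability eps carries 1 - h(eps) bits, so reconstruction is possible in principle once the per-packet contributions, weighted by their a posteriori damage estimates, reach the message size. The message size, packet size, and damage levels below are illustrative.

```python
import math

def h2(p):
    """Binary Shannon entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def packets_needed(message_bits, packet_bits, flip_probs):
    """Greedy check of the JRC-style budget: each received packet whose
    bits were flipped with probability eps contributes about
    packet_bits * (1 - h2(eps)) bits of information."""
    total, used = 0.0, 0
    for eps in flip_probs:
        total += packet_bits * (1.0 - h2(eps))
        used += 1
        if total >= message_bits:
            return used
    return None  # not enough information received

# 1000-bit message, 100-bit packets, per-packet damage known a posteriori.
eps_list = [0.0, 0.05, 0.11, 0.2, 0.0, 0.3] * 4
print(packets_needed(1000, 100, eps_list))  # 17
```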
Reduce the Complexity of List Decoding of Polar Codes by Tree-Pruning
Polar codes under cyclic redundancy check aided successive cancellation list
(CA-SCL) decoding can outperform turbo and LDPC codes when code
lengths are configured to be several kilobits. In order to reduce the decoding
complexity, a novel tree-pruning scheme for the SCL/CA-SCL decoding
algorithms is proposed in this paper. In each step of the decoding procedure,
the candidate paths with metrics less than a threshold are dropped directly to
avoid unnecessary computations for the path search on their descendant
branches. Given a candidate path, an upper bound on the path metrics of its
descendants is proposed to determine whether pruning this candidate path would
affect the frame error rate (FER) performance. By utilizing this upper
bounding technique and introducing a dynamic threshold, the proposed scheme
deletes as many redundant candidate paths as possible while keeping the
performance deterioration within a tolerable region, and is thus much more
efficient than the existing pruning scheme. With only a negligible loss of FER
performance, the computational complexity of the proposed pruned decoding
scheme is only a fraction of that of the standard algorithm in the low
signal-to-noise ratio (SNR) region, and it can be very close to that of the
successive cancellation (SC) decoder in the moderate and high SNR regions.
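A minimal sketch of the threshold test at the core of such pruning, with a fixed threshold standing in for the paper's dynamically adapted one (paths and metric values are illustrative):

```python
def prune_paths(paths, list_size, delta):
    """Drop candidate paths whose metric falls more than delta below the
    current best before the costly path-extension step, then apply the
    standard SCL truncation to at most list_size survivors."""
    best = max(metric for _, metric in paths)
    survivors = [(p, m) for p, m in paths if m >= best - delta]
    survivors.sort(key=lambda pm: pm[1], reverse=True)
    return survivors[:list_size]

# (path prefix, log-likelihood metric) after one bit decision
cands = [([0, 0], -0.1), ([0, 1], -0.3), ([1, 0], -4.2), ([1, 1], -9.0)]
print(prune_paths(cands, list_size=4, delta=5.0))
# keeps three paths; ([1, 1], -9.0) is pruned without being extended
```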