Golden codes: quantum LDPC codes built from regular tessellations of hyperbolic 4-manifolds
We adapt a construction of Guth and Lubotzky [arXiv:1310.5555] to obtain a
family of quantum LDPC codes with non-vanishing rate and minimum distance
growing polynomially in n, where n is the number of physical qubits. As
in [arXiv:1310.5555], our homological code family stems from hyperbolic
4-manifolds equipped with tessellations. The main novelty of this work is that
we consider a regular tessellation consisting of hypercubes. We exploit this
strong local structure to design and analyze an efficient decoding algorithm.
Comment: 30 pages, 4 figures
Parallel data compression
Data compression schemes remove data redundancy in communicated and stored data and increase the effective capacities of communication and storage devices. Parallel algorithms and implementations for textual data compression are surveyed. Related concepts from parallel computation and information theory are briefly discussed. Static and dynamic methods for codeword construction and transmission on various models of parallel computation are described. Included are parallel methods which boost system speed by coding data concurrently, and approaches which employ multiple compression techniques to improve compression ratios. Theoretical and empirical comparisons are reported, and areas for future research are suggested.
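The idea of boosting speed by coding data concurrently can be sketched with a minimal block-parallel scheme: split the input into fixed-size blocks and compress each independently in a worker pool. This is an illustrative sketch in Python (the block size, the zlib codec, and the function names are this example's assumptions, not the survey's); independent blocks also allow parallel decompression, at the cost of a slightly worse ratio, since redundancy across block boundaries goes unexploited.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor


def compress_blocks(data: bytes, block_size: int = 1 << 16) -> list:
    """Split `data` into fixed-size blocks and compress them concurrently.

    Each block is compressed independently, so blocks can also be
    decompressed in parallel.
    """
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor() as pool:  # zlib releases the GIL on large buffers
        return list(pool.map(zlib.compress, blocks))


def decompress_blocks(compressed) -> bytes:
    """Decompress the blocks concurrently and reassemble the original data."""
    with ThreadPoolExecutor() as pool:
        return b"".join(pool.map(zlib.decompress, compressed))
```

Because the blocks are self-contained zlib streams, a faulty or lost block corrupts only its own span of the data, a side benefit over one monolithic stream.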
Implementation of a Combined OFDM-Demodulation and WCDMA-Equalization Module
For a dual-mode baseband receiver for the OFDM Wireless LAN and WCDMA standards, integration of the demodulation and equalization tasks on a dedicated hardware module has been investigated. For OFDM demodulation, an FFT algorithm based on cascaded twiddle factor decomposition has been selected. This type of algorithm combines high spatial and temporal regularity in the FFT data-flow graphs with a minimal number of computations. A frequency-domain algorithm based on a circulant channel approximation has been selected for WCDMA equalization. It has good performance, low hardware complexity and a low number of computations. Its main advantage is the reuse of the FFT kernel, which contributes to the integration of both tasks. The demodulation and equalization module has been described at the register transfer level with the in-house developed Arx language. The core of the module is a pipelined radix-2^3 butterfly combined with a complex multiplier and complex divider. The module has an area of 0.447 mm² in 0.18 µm technology and a power consumption of 10.6 mW. The proposed module compares favorably with solutions reported in the literature.
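The circulant channel approximation works because the DFT diagonalizes circular convolution: if y is the circular convolution of the channel h with the transmitted block x, then Y[k] = H[k]·X[k], so equalization reduces to a pointwise division in the frequency domain, reusing the same FFT kernel as OFDM demodulation. A minimal zero-forcing sketch in Python (a naive O(n²) DFT is used for self-containment; a real receiver would use the shared FFT kernel, and the function names are this example's, not the paper's):

```python
import cmath


def dft(x):
    """Naive discrete Fourier transform (O(n^2), for illustration only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]


def idft(X):
    """Inverse DFT with the 1/n normalization."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]


def circulant_equalize(y, h):
    """Zero-forcing equalization of a circular convolution.

    y = h (circularly convolved with) x  implies  Y[k] = H[k] * X[k],
    so the transmitted block is recovered as IDFT(Y[k] / H[k]).
    """
    H = dft(list(h) + [0.0] * (len(y) - len(h)))  # zero-pad h to block length
    Y = dft(y)
    return idft([Yk / Hk for Yk, Hk in zip(Y, H)])
```

Zero-forcing amplifies noise where |H[k]| is small; a practical receiver would use an MMSE variant, but the FFT-reuse structure is the same.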
Decoding of Non-Binary LDPC Codes Using the Information Bottleneck Method
Recently, a novel lookup table based decoding method for binary low-density
parity-check codes has attracted considerable attention. In this approach,
mutual-information maximizing lookup tables replace the conventional operations
of the variable nodes and the check nodes in message passing decoding.
Moreover, the exchanged messages are represented by integers with very small
bit width. A machine learning framework termed the information bottleneck
method is used to design the corresponding lookup tables. In this paper, we
extend this decoding principle from binary to non-binary codes. This is not a
straightforward extension, but requires a more sophisticated lookup table
design to cope with the arithmetic in higher order Galois fields. Provided bit
error rate simulations show that our proposed scheme outperforms the log-max
decoding algorithm and operates close to sum-product decoding.
Comment: This paper has been presented at the IEEE International Conference on
Communications (ICC'19) in Shanghai.
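For context, the conventional node operations that the lookup tables replace are the sum-product updates in the log-likelihood-ratio (LLR) domain. A hedged sketch of the standard binary versions (textbook form; the paper's contribution is the non-binary, lookup-table replacement of these, which is not reproduced here):

```python
import math


def check_node_update(llrs):
    """Sum-product check node (tanh rule).

    The outgoing LLR on edge i combines all *other* incoming LLRs:
        L_out_i = 2 * atanh( prod_{j != i} tanh(L_j / 2) ).
    """
    out = []
    for i in range(len(llrs)):
        prod = 1.0
        for j, llr in enumerate(llrs):
            if j != i:
                prod *= math.tanh(llr / 2.0)
        out.append(2.0 * math.atanh(prod))
    return out


def variable_node_update(channel_llr, incoming):
    """Sum-product variable node: outgoing message on edge i is the
    channel LLR plus all other incoming check-node messages."""
    total = channel_llr + sum(incoming)
    return [total - m for m in incoming]
```

The information bottleneck approach quantizes these exact updates into small-bit-width integer lookup tables chosen to preserve mutual information, rather than evaluating the transcendental functions above.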
Computation Over Gaussian Networks With Orthogonal Components
Function computation of arbitrarily correlated discrete sources over Gaussian
networks with orthogonal components is studied. Two classes of functions are
considered: the arithmetic sum function and the type function. The arithmetic
sum function in this paper is defined as a set of multiple weighted arithmetic
sums, which includes averaging of the sources and estimating each of the
sources as special cases. The type or frequency histogram function counts the
number of occurrences of each argument, which yields many important statistics
such as mean, variance, maximum, minimum, median, and so on. The proposed
computation coding first abstracts Gaussian networks into the corresponding
modulo sum multiple-access channels via nested lattice codes and linear network
coding and then computes the desired function by using linear Slepian-Wolf
source coding. For orthogonal Gaussian networks (with no broadcast and
multiple-access components), the computation capacity is characterized for a
class of networks. For Gaussian networks with multiple-access components (but
no broadcast), an approximate computation capacity is characterized for a class
of networks.
Comment: 30 pages, 12 figures, submitted to IEEE Transactions on Information Theory.
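The type function is useful because the frequency histogram is a sufficient statistic for all of the listed quantities: once each value's occurrence count is known, the original ordering of the samples is irrelevant. A small illustration (the function names and the lower-median convention are choices of this sketch, not the paper's):

```python
from collections import Counter


def type_function(samples):
    """The 'type' (frequency histogram): occurrence count of each value."""
    return Counter(samples)


def stats_from_type(counts):
    """Recover common statistics from the type alone."""
    n = sum(counts.values())
    mean = sum(v * c for v, c in counts.items()) / n
    var = sum(c * (v - mean) ** 2 for v, c in counts.items()) / n
    lo, hi = min(counts), max(counts)
    # (lower) median: walk the sorted values until half the mass is covered
    cum, median = 0, None
    for v in sorted(counts):
        cum += counts[v]
        if cum * 2 >= n:
            median = v
            break
    return {"mean": mean, "var": var, "min": lo, "max": hi, "median": median}
```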
Wildcard dimensions, coding theory and fault-tolerant meshes and hypercubes
Hypercubes, meshes and tori are well-known interconnection networks for parallel computers. The sets of edges in those graphs can be partitioned into dimensions. It is well known that the hypercube can be extended by adding a wildcard dimension, resulting in a folded hypercube that has better fault-tolerance and communication capabilities. First we prove that the folded hypercube is optimal in the sense that only a single wildcard dimension can be added to the hypercube. We then investigate the idea of adding wildcard dimensions to d-dimensional meshes and tori. Using techniques from error-correcting codes, we construct d-dimensional meshes and tori with wildcard dimensions. Finally, we show how these constructions can be used to tolerate edge and node faults in mesh and torus networks.
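To make the wildcard dimension concrete: the folded d-cube adds, on top of the d ordinary dimensions, one extra edge joining each node to its bitwise complement, which cuts the diameter from d to ⌈d/2⌉. A small sketch that builds this graph and verifies the diameter by breadth-first search (illustrative only; it is not the paper's mesh/torus construction):

```python
from collections import deque


def folded_hypercube_edges(d):
    """Undirected edges of the folded d-cube: the d hypercube dimensions
    plus one 'wildcard' dimension joining each node to its complement."""
    n = 1 << d
    mask = n - 1
    edges = set()
    for u in range(n):
        for k in range(d):  # ordinary dimensions: flip one bit
            v = u ^ (1 << k)
            edges.add((min(u, v), max(u, v)))
        v = u ^ mask        # wildcard dimension: flip all bits
        edges.add((min(u, v), max(u, v)))
    return edges


def diameter(d):
    """Graph diameter of the folded d-cube, by BFS from every node."""
    n = 1 << d
    adj = [[] for _ in range(n)]
    for u, v in folded_hypercube_edges(d):
        adj[u].append(v)
        adj[v].append(u)
    worst = 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        worst = max(worst, max(dist))
    return worst
```

A node at Hamming distance w from the source can be reached in min(w, 1 + (d - w)) hops, taking the complement edge when w exceeds d/2; maximizing over w gives the ⌈d/2⌉ diameter.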