195 research outputs found

    A Novel Stochastic Decoding of LDPC Codes with Quantitative Guarantees

    Low-density parity-check codes, a class of capacity-approaching linear codes, are particularly recognized for their efficient decoding scheme. The decoding scheme, known as the sum-product algorithm, is an iterative algorithm that passes messages between the variable and check nodes of the factor graph. The sum-product algorithm is fully parallelizable, owing to the fact that all messages can be updated concurrently. However, since it requires an extensive number of highly interconnected wires, a fully parallel implementation of the sum-product algorithm on chips is exceedingly challenging. Stochastic decoding algorithms, which exchange binary messages, are of great interest for mitigating this challenge and have been the focus of extensive research over the past decade. They significantly reduce the required wiring and the computational complexity of the message-passing algorithm. Even though stochastic decoders have been shown to be extremely effective in practice, the theoretical understanding of such algorithms remains largely limited. Our main objective in this paper is to address this issue. We first propose a novel algorithm, referred to as Markov-based stochastic decoding. Then, we provide concrete quantitative guarantees on its performance for tree-structured as well as general factor graphs. More specifically, we provide upper bounds on the first and second moments of the error, illustrating that the proposed algorithm is an asymptotically consistent estimate of the sum-product algorithm. We also validate our theoretical predictions with experimental results, showing that we achieve performance comparable to other practical stochastic decoders.
    Comment: This paper has been submitted to IEEE Transactions on Information Theory on May 24th 201
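
    The wiring savings come from the stochastic number representation: a probability is encoded as a Bernoulli bit stream, so multiplying two probabilities (a core operation in sum-product updates) reduces to a bitwise AND on single-wire messages. A minimal sketch of this idea in Python (illustrative only; this is not the paper's Markov-based decoder, and the stream length is an arbitrary choice):

    ```python
    import random

    def stochastic_stream(p, n, rng):
        """Encode probability p as a length-n Bernoulli bit stream."""
        return [1 if rng.random() < p else 0 for _ in range(n)]

    def estimate(stream):
        """Decode a bit stream back into a probability estimate."""
        return sum(stream) / len(stream)

    rng = random.Random(0)
    n = 100_000
    a = stochastic_stream(0.8, n, rng)
    b = stochastic_stream(0.5, n, rng)

    # Multiplying probabilities reduces to a bitwise AND of the streams,
    # which is why stochastic decoders need only one wire per message.
    prod = [x & y for x, y in zip(a, b)]
    print(estimate(prod))  # close to 0.8 * 0.5 = 0.4
    ```

    Longer streams shrink the variance of the estimate, which mirrors the paper's focus on moment bounds: the quality of the stochastic approximation is fundamentally a statistical question.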

    Sparse neural networks with large learning diversity

    Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of messages, which is much smaller than the number of available neurons. The second is provided by a particular coding rule that acts as a local constraint on the neural activity. The third is a characteristic of the low final connection density of the network after the learning phase. Though the proposed network is very simple, being based on binary neurons and binary connections, it is able to learn a large number of messages and recall them, even in the presence of strong erasures. The performance of the network is assessed both as a classifier and as an associative memory.
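
    A clique-based binary associative memory of this flavor can be sketched in a few lines: each message activates one neuron per cluster, learning stores the clique of binary connections among them, and recall fills erased positions by winner-take-all on clique support. The cluster and neuron counts below are arbitrary illustrative choices, not the paper's parameters:

    ```python
    from itertools import combinations

    CLUSTERS, FANALS = 4, 8  # 4 clusters of 8 binary neurons each (assumed sizes)

    def store(memory, message):
        """Learn a message as a clique over one active neuron per cluster."""
        nodes = [(c, f) for c, f in enumerate(message)]
        for u, v in combinations(nodes, 2):
            memory.add((u, v))  # binary connection, stored in both directions
            memory.add((v, u))

    def recall(memory, partial):
        """Recover erased positions (None) by winner-take-all on clique support."""
        known = [(c, f) for c, f in enumerate(partial) if f is not None]
        out = list(partial)
        for c, f in enumerate(partial):
            if f is None:
                scores = [sum(((c, cand), k) in memory for k in known)
                          for cand in range(FANALS)]
                out[c] = max(range(FANALS), key=lambda cand: scores[cand])
        return out

    memory = set()
    store(memory, (1, 5, 2, 7))
    store(memory, (0, 3, 3, 1))
    print(recall(memory, (1, None, 2, 7)))  # [1, 5, 2, 7]
    ```

    Because connections are binary and shared across messages, the memory cost grows with the square of the cluster count rather than with the number of stored messages, which is the source of the network's large learning diversity.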

    Changing edges in graphical model algorithms

    Graphical models are used to describe the interactions in structures, such as the nodes in decoding circuits, agents in small-world networks, and neurons in our brains. These structures are often not static and can change over time, resulting in removal of edges, extra nodes, or changes in the weights of the links in the graphs. For example, wires in message-passing decoding circuits can be misconnected due to process variation in nanoscale manufacturing or circuit aging, the style of passes among soccer players can change based on the team's strategy, and the connections among neurons can be broken due to Alzheimer's disease. The effects of these changes in graphs can reveal useful information and inspire approaches to understanding some challenging problems. In this work, we investigate the dynamic changes of edges in graphs and develop mathematical tools to analyze the effects of these changes by embedding the graphical models in two applications. The first half of the work is about the performance of message-passing LDPC decoders in the presence of permanently and transiently missing connections, which is equivalent to the removal of edges in the codes' graphical representations, their Tanner graphs. We prove concentration and convergence theorems that validate the use of density evolution performance analysis and conclude that arbitrarily small error probability is not possible for decoders with missing connections. However, we find suitably defined decoding thresholds for communication systems with binary erasure channels under peeling decoding, as well as binary symmetric channels under Gallager A and B decoding. We see that decoding is robust to missing wires, as decoding thresholds degrade smoothly. Surprisingly, we discover the stochastic facilitation (SF) phenomenon in Gallager B decoders, where having more missing connections improves the decoding thresholds under some conditions.
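
    The density evolution analysis mentioned above tracks the erasure probability of messages across decoding iterations. For a regular (dv, dc) LDPC ensemble on the binary erasure channel without missing wires, the standard recursion and its decoding threshold can be sketched as follows (this is the textbook recursion, not the thesis's missing-connection model):

    ```python
    def de_converges(eps, dv=3, dc=6, iters=5000, tol=1e-10):
        """Density evolution for a regular (dv, dc) LDPC ensemble on the BEC:
        x_{l+1} = eps * (1 - (1 - x_l)**(dc - 1))**(dv - 1)."""
        x = eps
        for _ in range(iters):
            x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
            if x < tol:
                return True  # message erasure probability driven to zero
        return False

    # Bisect for the decoding threshold of the (3, 6) ensemble: the largest
    # channel erasure probability for which decoding still succeeds.
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if de_converges(mid) else (lo, mid)
    print(round(lo, 3))  # close to the known (3, 6) BEC threshold of about 0.429
    ```

    Modeling missing connections would modify the update at the affected edges; the thesis's concentration theorems are what justify applying a recursion of this single-parameter form to an ensemble of randomly damaged graphs.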
The second half of the work is about the advantages of the semi-metric property of complex weighted networks. Nodes in graphs represent elements in systems, and edges describe the level of interaction among the nodes. A semi-metric edge in a graph, which violates the triangle inequality, indicates that there is another latent relation between the pair of nodes connected by the edge. We show the equivalence between modelling a sporting event using a stochastic Markov chain and using an algebraic diffusion process, and we also show that using the algebraic representation to calculate the stationary distribution of a network can preserve the graph's semi-metric property, which is lost in stochastic models. These semi-metric edges can be treated as redundancy and pruned in all-pairs shortest-path problems to accelerate computations, which can be applied to more complicated problems such as PageRank. We then further demonstrate the advantages of semi-metricity in graphs by showing that the percentage of semi-metric edges in the interaction graphs of two soccer teams changes linearly with the final score. Interestingly, these redundant edges can be interpreted as a measure of a team's tactics.
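
Detecting semi-metric edges only requires comparing each direct edge distance against the all-pairs shortest paths; a small sketch (the graph and weights are invented for illustration):

```python
from itertools import product

def semi_metric_edges(n, dist_edges):
    """Return edges whose direct distance exceeds some indirect path,
    i.e. edges violating the triangle inequality (semi-metric edges)."""
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for (u, v), w in dist_edges.items():
        d[u][v] = d[v][u] = w
    # Floyd-Warshall all-pairs shortest paths (k varies slowest).
    for k, i, j in product(range(n), repeat=3):
        if d[i][k] + d[k][j] < d[i][j]:
            d[i][j] = d[i][k] + d[k][j]
    return [e for e, w in dist_edges.items() if d[e[0]][e[1]] < w]

# Edge (0, 2) costs 5, but the path 0 -> 1 -> 2 costs 2: (0, 2) is semi-metric
# and can be pruned without changing any shortest-path distance.
edges = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 5.0}
print(semi_metric_edges(3, edges))  # [(0, 2)]
```

Pruning such edges shrinks the graph fed to shortest-path or PageRank-style computations while leaving all shortest-path distances intact, which is the acceleration the abstract refers to.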

    Towards Quantum Belief Propagation for LDPC Decoding in Wireless Networks

    We present Quantum Belief Propagation (QBP), a Quantum Annealing (QA) based decoder design for Low-Density Parity-Check (LDPC) error control codes, which have found many useful applications in Wi-Fi, satellite communications, mobile cellular systems, and data storage systems. QBP reduces LDPC decoding to a discrete optimization problem, then embeds that reduced design onto quantum annealing hardware. QBP's embedding design can support LDPC codes of block length up to 420 bits on real state-of-the-art QA hardware with 2,048 qubits. We evaluate performance on real quantum annealer hardware, performing sensitivity analyses on a variety of parameter settings. Our design achieves a bit error rate of 10^-8 in 20 μs and a 1,500-byte frame error rate of 10^-6 in 50 μs at SNR 9 dB over a Gaussian noise wireless channel. Further experiments measure performance over real-world wireless channels, requiring 30 μs to achieve a 1,500-byte 99.99% frame delivery rate at SNR 15-20 dB. QBP achieves a performance improvement over an FPGA-based soft belief propagation LDPC decoder, reaching a bit error rate of 10^-8 and a frame error rate of 10^-6 at an SNR 2.5-3.5 dB lower. In terms of limitations, QBP currently cannot realize practical protocol-sized (e.g., Wi-Fi, WiMax) LDPC codes on current QA processors. Our further studies in this work present future cost, throughput, and QA hardware trend considerations.
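
    The reduction of decoding to discrete optimization can be illustrated with a toy brute-force version: minimize the Hamming distance to the received word plus a penalty for each unsatisfied parity check. The parity-check matrix, received word, and penalty weight below are made up for illustration; a real QA embedding must additionally rewrite the parity constraints as quadratic interactions with ancilla qubits, which brute force sidesteps here:

    ```python
    from itertools import product

    # Toy parity-check matrix (assumed, not from the paper): 3 checks, 6 bits.
    H = [[1, 1, 0, 1, 0, 0],
         [0, 1, 1, 0, 1, 0],
         [1, 0, 1, 0, 0, 1]]
    received = [1, 0, 1, 1, 1, 1]  # hard-decision channel output with one flip
    LAMBDA = 10  # penalty weight so that codeword violations dominate

    def energy(x):
        """Distance to the received word plus penalties for unsatisfied checks."""
        mismatch = sum(xi != ri for xi, ri in zip(x, received))
        violations = sum(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)
        return mismatch + LAMBDA * violations

    # A quantum annealer samples low-energy states; brute force finds the minimum.
    best = min(product((0, 1), repeat=6), key=energy)
    print(best)  # (1, 0, 1, 1, 1, 0): the single flipped bit is corrected
    ```

    The exhaustive search is exponential in block length, which is exactly why the paper offloads the minimization to annealing hardware and why embedding size limits the supported block length.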

    Overcoming CubeSat downlink limits with VITAMIN: a new variable coded modulation protocol

    Thesis (M.S.) University of Alaska Fairbanks, 2013
    Many space missions, including low Earth orbit CubeSats, communicate in a highly dynamic environment because of variations in geometry, weather, and interference. At the same time, most missions communicate using fixed channel codes, modulations, and symbol rates, resulting in a constant data rate that does not adapt to the dynamic conditions. When conditions are good, the fixed data rate can be far below the theoretical maximum, called the Shannon limit; when conditions are bad, the fixed data rate may not work at all. To move beyond these fixed communications and achieve higher total data volume from emerging high-tech instruments, this thesis investigates the use of error-correcting codes and different modulations. Variable coded modulation (VCM) takes advantage of the dynamic link by transmitting more information when the signal-to-noise ratio (SNR) is high. Likewise, VCM can throttle down the information rate when SNR is low without having to stop all communications. VCM outperforms fixed communications, which can only operate at a fixed information rate as long as a certain signal threshold is met. This thesis presents a new VCM protocol and tests its performance in both software and hardware simulations. The protocol is geared towards CubeSat downlinks, as complexity is concentrated in the receiver while the transmission operations are kept simple. This thesis explores bin-packing as a way to optimize the selection of VCM modes based on expected SNR levels over time. Working end-to-end simulations were created using MATLAB and LabVIEW, while the hardware simulations were done with software-defined radios. Results show that a CubeSat using VCM communications will deliver twice the data throughput of a fixed communications system.
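
    The core VCM idea, picking the highest-rate mode the current SNR supports and falling back to more robust modes as the link degrades, can be sketched as follows (the mode table and SNR profile are hypothetical, not the thesis's modes):

    ```python
    # Hypothetical VCM mode table: (required SNR in dB, information rate in
    # bits/symbol). Thresholds and rates are illustrative only.
    MODES = [(1.0, 0.5), (4.0, 1.0), (7.0, 1.5), (10.0, 2.0)]

    def pick_mode(snr_db):
        """Return the highest-rate mode whose SNR threshold the link meets,
        or None when even the most robust mode cannot close the link."""
        feasible = [m for m in MODES if m[0] <= snr_db]
        return max(feasible, key=lambda m: m[1]) if feasible else None

    def pass_throughput(snr_profile, symbols_per_step):
        """Total bits delivered over a pass, adapting the mode at each step."""
        total = 0.0
        for snr in snr_profile:
            mode = pick_mode(snr)
            if mode is not None:  # a fixed-rate link would drop or waste these
                total += mode[1] * symbols_per_step
        return total

    # SNR rises and falls over a pass as the satellite climbs and sets.
    profile = [0.5, 3.0, 6.0, 11.0, 6.0, 3.0, 0.5]
    print(pass_throughput(profile, 1000))
    ```

    A fixed-rate system must pick one row of the table for the whole pass, wasting margin at high elevation or losing the link at low elevation; the bin-packing optimization in the thesis addresses how to schedule mode switches against the predicted SNR timeline.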

    Cellular, Wide-Area, and Non-Terrestrial IoT: A Survey on 5G Advances and the Road Towards 6G

    The next wave of wireless technologies is proliferating, connecting things to one another as well as to humans. In the era of the Internet of Things (IoT), billions of sensors, machines, vehicles, drones, and robots will be connected, making the world around us smarter. The IoT will encompass devices that must wirelessly communicate a diverse set of data gathered from the environment for myriad new applications. The ultimate goal is to extract insights from this data and develop solutions that improve quality of life and generate new revenue. Providing large-scale, long-lasting, reliable, and near real-time connectivity is the major challenge in enabling a smart connected world. This paper provides a comprehensive survey of existing and emerging communication solutions for serving IoT applications in the context of cellular, wide-area, and non-terrestrial networks. Specifically, wireless technology enhancements for providing IoT access in fifth-generation (5G) and beyond cellular networks, and communication networks over the unlicensed spectrum, are presented. Aligned with the main key performance indicators of 5G and beyond-5G networks, we investigate solutions and standards that enable the energy efficiency, reliability, low latency, and scalability (connection density) of current and future IoT networks. The solutions include grant-free access and channel coding for short-packet communications, non-orthogonal multiple access, and on-device intelligence. Further, a vision of new paradigm shifts in communication networks in the 2030s is provided, and the integration of the associated new technologies, such as artificial intelligence, non-terrestrial networks, and new spectra, is elaborated. Finally, future research directions toward beyond-5G IoT networks are pointed out.
    Comment: Submitted for review to IEEE CS&