227 research outputs found
Mathematical Programming Decoding of Binary Linear Codes: Theory and Algorithms
Mathematical programming is a branch of applied mathematics and has recently
been used to derive new decoding approaches, challenging established but often
heuristic algorithms based on iterative message passing. Concepts from
mathematical programming used in the context of decoding include linear,
integer, and nonlinear programming, network flows, notions of duality as well
as matroid and polyhedral theory. This survey article reviews and categorizes
decoding methods based on mathematical programming approaches for binary linear
codes over binary-input memoryless symmetric channels.
Comment: 17 pages, submitted to the IEEE Transactions on Information Theory. Published July 201
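To make the linear-programming approach surveyed above concrete, the following sketch decodes over Feldman's fundamental polytope with a generic LP solver. The [7,4] Hamming parity-check matrix and LLR values are illustrative assumptions, and the explicit odd-subset ("forbidden set") inequalities grow exponentially in the check degree, so this is a minimal sketch for tiny codes, not a practical decoder.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, llr):
    """LP decoding over the fundamental polytope (Feldman-style relaxation).

    H:   binary parity-check matrix (m x n)
    llr: channel log-likelihood ratios; positive values favour bit = 0.
    Minimises sum_i llr[i] * x[i] subject to, for each check j and each
    odd-sized subset S of its neighbourhood N(j):
        sum_{i in S} x_i - sum_{i in N(j)\\S} x_i <= |S| - 1.
    """
    m, n = H.shape
    A_ub, b_ub = [], []
    for j in range(m):
        N = np.flatnonzero(H[j])
        for r in range(1, len(N) + 1, 2):          # odd subset sizes only
            for S in itertools.combinations(N, r):
                row = np.zeros(n)
                row[list(S)] = 1.0
                row[[i for i in N if i not in S]] = -1.0
                A_ub.append(row)
                b_ub.append(len(S) - 1)
    res = linprog(llr, A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=[(0.0, 1.0)] * n)
    return res.x

# Assumed example: [7,4] Hamming code, all columns of H distinct and nonzero.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
# LLRs favouring the all-zero codeword except one unreliable bit (bit 4).
llr = np.array([1.0, 1.0, 1.0, 1.0, -0.5, 1.0, 1.0])
x_hat = lp_decode(H, llr)
```

Here the single unreliable bit cannot be flipped on its own without violating a check inequality, so the LP optimum is the (integral) all-zero codeword; when the optimum is fractional instead, it is a pseudocodeword and the decoder declares a failure.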
Efficient Maximum-Likelihood Decoding of Linear Block Codes on Binary Memoryless Channels
In this work, we consider efficient maximum-likelihood decoding of linear
block codes for small-to-moderate block lengths. The presented approach is a
branch-and-bound algorithm using the cutting-plane approach of Zhang and Siegel
(IEEE Trans. Inf. Theory, 2012) for obtaining lower bounds. We have compared
our proposed algorithm to the state-of-the-art commercial integer program
solver CPLEX, and for all considered codes our approach is faster for both low
and high signal-to-noise ratios. For instance, for the benchmark (155,64)
Tanner code our algorithm is more than 11 times as fast as CPLEX for an SNR of
1.0 dB on the additive white Gaussian noise channel. By a small modification,
our algorithm can be used to calculate the minimum distance, which we have
again verified to be much faster than using the CPLEX solver.
Comment: Submitted to the 2014 International Symposium on Information Theory. 5 pages. Accepted
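For context, the maximum-likelihood objective that the branch-and-bound algorithm above optimises can be stated by brute force: minimise the correlation cost over all 2^k codewords. The sketch below is a reference implementation feasible only for very small dimension k; the systematic generator matrix and LLR values are illustrative assumptions, not from the paper.

```python
import itertools
import numpy as np

def ml_decode(G, llr):
    """Brute-force ML decoding of a binary linear code.

    Enumerates all 2^k codewords u @ G (mod 2) and returns the one that
    minimises the correlation cost sum_i llr[i] * c[i]. Exponential in k;
    only a reference point for smarter search (e.g. branch-and-bound).
    """
    k, n = G.shape
    best, best_cost = None, np.inf
    for u in itertools.product([0, 1], repeat=k):
        c = np.mod(np.array(u) @ G, 2)
        cost = float(llr @ c)
        if cost < best_cost:
            best, best_cost = c, cost
    return best

# Assumed example: systematic [7,4] Hamming generator matrix G = [I | P].
G = np.array([[1, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 0, 1],
              [0, 0, 0, 1, 0, 1, 1]])
# Mostly reliable received LLRs; one weakly negative position (bit 2).
llr = np.array([2.0, 1.5, -0.2, 1.0, 0.8, 1.2, 0.9])
c_hat = ml_decode(G, llr)
```

Branch-and-bound replaces this exhaustive enumeration with a search tree whose subtrees are pruned using LP-based lower bounds, such as the Zhang-Siegel cutting-plane bound mentioned above.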
Introduction to Mathematical Programming-Based Error-Correction Decoding
Decoding error-correcting codes by methods of mathematical optimization,
most importantly linear programming, has become an important alternative
approach to both algebraic and iterative decoding methods since its
introduction by Feldman et al. Initially celebrated mainly for its analytical
power, LP decoding has now come within reach of real-world application thanks
to recent research. This document gives a thorough introduction to both
mathematical optimization and coding theory, as well as a review of the
contributions by which these two areas have found common ground.
Comment: LaTeX sources maintained here: https://github.com/supermihi/lpdintr
Spectrally efficient multicarrier communication systems: signal detection, mathematical modelling and optimisation
This thesis considers theoretical, analytical and engineering design issues relating
to non-orthogonal Spectrally Efficient Frequency Division Multiplexing (SEFDM)
communication systems, which offer significant spectral-efficiency advantages over Orthogonal
FDM (OFDM) schemes. However, the practical implementation of such systems
raises significant challenges, with the receivers being the bottleneck.
This research explores detection of SEFDM signals. The mathematical foundations
of such signals lead to proposals of different orthonormalisation techniques as required
at the receivers of non-orthogonal FDM systems. To address SEFDM detection, two
approaches are considered: solving the problem optimally by exploiting
special-case properties, or applying sub-optimal techniques that reduce
complexity at the expense of degraded error rates. Initially, the application
of sub-optimal linear detection techniques, such as Zero Forcing (ZF) and Minimum
Mean Squared Error (MMSE), is examined analytically and by detailed modelling. To
improve error performance a heuristic algorithm, based on a local search around an
MMSE estimate, is designed by combining MMSE with Maximum Likelihood (ML)
detection. Yet, this new method appears to be efficient for BPSK signals only. Hence,
various variants of the sphere decoder (SD) are investigated. A Tikhonov regularised
SD variant achieves an optimal solution for the detection of medium size signals in
low noise regimes. Detailed modelling shows the SD detector to be well suited to
SEFDM detection, although its complexity increases with system interference and
noise. A new design of a detector that offers a good compromise between computational
complexity and error rate performance is proposed and tested through modelling
and simulation. Standard reformulation techniques are used to relax the original optimal
detection problem to a convex Semi-Definite Program (SDP) that can be solved
in polynomial time. Although SDP performs better than other linear relaxations, such
as ZF and MMSE, its deviation from optimality also increases as the system's
inherent interference worsens. To improve its performance, a heuristic algorithm
based on a local search around the SDP estimate is further proposed. Finally, a
modified SD is designed as a faster implementation of the local-search SDP
concept. The new method, termed the pruned or constrained SD, achieves the
detection of realistic SEFDM signals in noisy environments.
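The ZF and MMSE linear detectors examined in the thesis can be sketched generically. In the sketch below, a real-valued, mildly ill-conditioned mixing matrix stands in for the SEFDM sub-carrier correlation matrix (an assumption, not the thesis's actual system model), and BPSK symbols are recovered by a hard sign decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the SEFDM model y = C s + n: C would be the
# non-orthogonal sub-carrier correlation matrix; here a generic perturbed
# identity plays that role for illustration.
N = 16
C = np.eye(N) + 0.3 * rng.standard_normal((N, N))   # assumed system matrix
s = rng.choice([-1.0, 1.0], size=N)                  # transmitted BPSK symbols
sigma2 = 0.1                                         # assumed noise variance
y = C @ s + np.sqrt(sigma2) * rng.standard_normal(N)

# Zero Forcing: invert the system matrix outright; cheap, but amplifies
# noise when C is ill-conditioned (the interference is fully cancelled).
s_zf = np.sign(np.linalg.solve(C, y))

# MMSE: regularised inverse W = (C^T C + sigma^2 I)^{-1} C^T, trading
# residual interference against noise enhancement.
W = np.linalg.solve(C.T @ C + sigma2 * np.eye(N), C.T)
s_mmse = np.sign(W @ y)
```

The MMSE detector's regularisation term is what the sphere-decoder variant above mimics via Tikhonov regularisation; both detectors cost one linear solve per block, versus the exponential worst case of exact ML detection.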
NengoFPGA: an FPGA Backend for the Nengo Neural Simulator
Low-power, high-speed neural networks are critical for providing deployable embedded AI
applications at the edge. We describe a Xilinx FPGA implementation of Neural Engineering
Framework (NEF) networks with online learning that outperforms mobile Nvidia GPU
implementations by an order of magnitude or more. Specifically, we provide an embedded
Python-capable PYNQ FPGA implementation supported with a Xilinx Vivado High-Level
Synthesis (HLS) workflow that allows sub-millisecond implementation of adaptive neural
networks with low-latency, direct I/O access to the physical world. The outcome of this
work is NengoFPGA, a seamless and user-friendly extension to the neural compiler Python
package Nengo. To reduce memory requirements and improve performance we tune the
precision of the different intermediate variables in the code to achieve competitive absolute
accuracy against slower and larger floating-point reference designs. The online learning
component of the neural network exploits immediate feedback to adjust the network weights
to best support a given arithmetic precision. As the space of possible design configurations
of such quantized networks is vast and is subject to a target accuracy constraint, we use
the Hyperopt hyper-parameter tuning tool instead of manual search to find Pareto optimal
designs. Specifically, we are able to generate the optimized designs in under 500 short
iterations of Vivado HLS C synthesis before running the complete Vivado place-and-route
phase on that subset, a much longer process not conducive to rapid exploration. For neural
network populations of 64–4096 neurons and 1–8 representational dimensions our optimized
FPGA implementation generated by Hyperopt has a speedup of 10–484× over a competing
cuBLAS implementation on the Jetson TX1 GPU while using 2.4–9.5× less power. Our
speedups are a result of HLS-specific reformulation (15× improvement), precision adaptation
(3× improvement), and low-latency direct I/O access (1000× improvement).
Fault tolerance in space-based digital signal processing and switching systems: Protecting up-link processing resources, demultiplexer, demodulator, and decoder
Fault tolerance features in the first three major subsystems appearing in the next generation of communications satellites are described. These satellites will contain extensive but efficient high-speed processing and switching capabilities to support the low signal strengths associated with very small aperture terminals. The terminals' numerous data channels are combined through frequency division multiplexing (FDM) on the up-links and are protected individually by forward error-correcting (FEC) binary convolutional codes. The front-end processing resources, demultiplexer, demodulators, and FEC decoders extract all data channels, which are then switched individually, multiplexed, and remodulated before retransmission to earth terminals through narrow-beam spot antennas. Algorithm-based fault tolerance (ABFT) techniques, which relate real-number parity values with data flows and operations, are used to protect the data processing operations. The additional checking features utilize resources that can be substituted for normal processing elements when resource reconfiguration is required to replace a failed unit.