Decoding Schemes for Foliated Sparse Quantum Error Correcting Codes
Foliated quantum codes are a resource for fault-tolerant measurement-based
quantum error correction for quantum repeaters and for quantum computation.
They represent a general approach to integrating a range of possible quantum
error correcting codes into larger fault-tolerant networks. Here we present an
efficient heuristic decoding scheme for foliated quantum codes, based on
message passing between primal and dual code 'sheets'. We test this decoder on
two different families of sparse quantum error correcting codes: turbo codes and
bicycle codes, and show reasonably high numerical performance thresholds. We
also present a construction schedule for building such code states.
Comment: 23 pages, 15 figures, accepted for publication in Phys. Rev.
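The decoder above passes messages between primal and dual code sheets; that specific construction is beyond a short sketch, but the general shape of syndrome-driven iterative decoding for a sparse code can be illustrated with a generic bit-flipping decoder. This is a stand-in, not the paper's foliated decoder, and the (7,4) Hamming parity-check matrix is just a small example code.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=20):
    """Iterative bit-flipping decoding for a binary code with parity-check
    matrix H: repeatedly flip the bit involved in the most unsatisfied
    checks until the syndrome vanishes (a generic illustration of
    syndrome-driven iterative decoding, not the foliated-code decoder)."""
    y = y.copy()
    for _ in range(max_iters):
        syndrome = H @ y % 2
        if not syndrome.any():
            return y                      # all parity checks satisfied
        unsat = H.T @ syndrome            # per-bit count of failed checks
        y[np.argmax(unsat)] ^= 1          # flip the most suspicious bit
    return y

# (7,4) Hamming code as a small example
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)
received[2] = 1                           # a single bit error
print(bit_flip_decode(H, received))       # recovers the all-zero codeword
```

Real sparse-code decoders replace the hard flips with belief-propagation messages, which is the flavor of message passing the abstract refers to.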
An Iteratively Decodable Tensor Product Code with Application to Data Storage
The error pattern correcting code (EPCC) can be constructed to provide a
syndrome decoding table targeting the dominant error events of an inter-symbol
interference channel at the output of the Viterbi detector. For the size of the
syndrome table to be manageable and the list of possible error events to be
reasonable in size, the codeword length of the EPCC needs to be sufficiently
short. However, the rate of such a short code will be too low for hard-drive
applications. To accommodate the required large redundancy, it is possible to
record only a highly compressed function of the parity bits of EPCC's tensor
product with a symbol correcting code. In this paper, we show that the proposed
tensor error-pattern correcting code (T-EPCC) is linear time encodable and also
devise a low-complexity soft iterative decoding algorithm for EPCC's tensor
product with q-ary LDPC (T-EPCC-qLDPC). Simulation results show that
T-EPCC-qLDPC achieves performance nearly identical to single-level qLDPC with a
1/2 KB sector at 50% reduction in decoding complexity. Moreover, 1 KB
T-EPCC-qLDPC surpasses the performance of 1/2 KB single-level qLDPC at the same
decoder complexity.
Comment: Hakim Alhussien, Jaekyun Moon, "An Iteratively Decodable Tensor
Product Code with Application to Data Storage"
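The syndrome decoding table at the heart of the EPCC can be sketched generically: precompute the syndrome of each targeted error pattern and invert by lookup. In this sketch the "dominant" events are simply all single-bit errors of a (7,4) Hamming stand-in code; the actual EPCC table targets dominant inter-symbol-interference error events at the Viterbi detector output.

```python
import numpy as np

def build_syndrome_table(H, error_patterns):
    """Map the syndrome of each targeted error pattern to the pattern
    itself, mimicking the EPCC's syndrome decoding table (illustrative:
    here the patterns are just single-bit flips)."""
    return {tuple(H @ e % 2): e for e in error_patterns}

# (7,4) Hamming parity-check matrix as a stand-in short code
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
singles = [np.eye(7, dtype=int)[i] for i in range(7)]
table = build_syndrome_table(H, singles)

received = np.array([0, 0, 0, 0, 1, 0, 0])   # one dominant (single-bit) error
syndrome = tuple(H @ received % 2)
corrected = (received + table[syndrome]) % 2  # lookup-and-cancel decoding
```

The table size grows with the codeword length and the error-event list, which is exactly why the abstract insists the EPCC codeword must stay short.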
Joint morphological-lexical language modeling for processing morphologically rich languages with application to dialectal Arabic
Language modeling for an inflected language
such as Arabic poses new challenges for speech recognition and
machine translation due to its rich morphology. Rich morphology
results in large increases in out-of-vocabulary (OOV) rate and
poor language model parameter estimation in the absence of large
quantities of data. In this study, we present a joint
morphological-lexical language model (JMLLM) that takes
advantage of Arabic morphology. JMLLM combines
morphological segments with the underlying lexical items and
other available information sources regarding morphological
segments and lexical items in a single joint model.
Joint representation and modeling of morphological and lexical
items reduces the OOV rate and provides smooth probability
estimates while keeping the predictive power of whole words.
Speech recognition and machine translation experiments on
dialectal Arabic show improvements over word- and morpheme-based
trigram language models. We also show that as the tightness of
integration between different information sources increases, the
performance of both speech recognition and machine translation
improves.
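The OOV argument above can be made concrete with a toy example. The segmentation and vocabulary below are hypothetical, and the counting trigram model is only a stand-in for the word/morpheme baselines the abstract compares against, not the JMLLM itself.

```python
from collections import Counter

def trigram_counts(tokens):
    """Maximum-likelihood trigram counts over a token stream (a toy
    stand-in for the word- and morpheme-based trigram baselines)."""
    padded = ["<s>", "<s>"] + list(tokens) + ["</s>"]
    return Counter(zip(padded, padded[1:], padded[2:]))

# Hypothetical Arabic-like segmentation: clitics split off with '+'
words     = ["walmaktab", "almaktab", "walkitab"]
morphemes = ["wa+", "al+", "maktab", "al+", "maktab", "wa+", "al+", "kitab"]

word_vocab, morph_vocab = set(words), set(morphemes)

# An unseen word built from seen segments is OOV for the word model
# but fully covered by the morpheme vocabulary:
print("alkitab" in word_vocab)            # False: OOV as a whole word
print({"al+", "kitab"} <= morph_vocab)    # True: covered by morphemes
```

Modeling segments shrinks the vocabulary and the OOV rate, at the cost of shorter predictive context per token; the joint model in the abstract aims to keep the whole-word predictive power as well.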
Adaptive and Iterative Multi-Branch MMSE Decision Feedback Detection Algorithms for MIMO Systems
In this work, decision feedback (DF) detection algorithms based on multiple
processing branches for multi-input multi-output (MIMO) spatial multiplexing
systems are proposed. The proposed detector employs multiple cancellation
branches with receive filters that are obtained from a common matrix inverse
and achieves a performance close to the maximum likelihood detector (MLD).
Constrained minimum mean-squared error (MMSE) receive filters designed with
constraints on the shape and magnitude of the feedback filters for the
multi-branch MMSE DF (MB-MMSE-DF) receivers are presented. An adaptive
implementation of the proposed MB-MMSE-DF detector is developed along with a
recursive least squares-type algorithm for estimating the parameters of the
receive filters when the channel is time-varying. A soft-output version of the
MB-MMSE-DF detector is also proposed as a component of an iterative detection
and decoding receiver structure. A computational complexity analysis shows that
the MB-MMSE-DF detector does not require a significant additional complexity
over the conventional MMSE-DF detector, while a diversity analysis
establishes the diversity order it achieves. Simulation results
show that the MB-MMSE-DF detector achieves a performance superior to existing
suboptimal detectors and close to the MLD, while requiring significantly lower
complexity.
Comment: 10 figures, 3 tables; IEEE Transactions on Wireless Communications,
201
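A single-branch sketch of MMSE detection with decision feedback conveys the core loop: filter, decide the most reliable stream, cancel it, repeat. Everything here is a simplification (real-valued channel, BPSK symbols, noiseless observation for reproducibility, one detection ordering); the MB-MMSE-DF detector of the abstract additionally runs multiple cancellation branches with constrained feedback filters and selects the best.

```python
import numpy as np

def mmse_df_detect(H, y, sigma2):
    """Successive MMSE detection with decision feedback for BPSK:
    detect the most reliable remaining stream, subtract (feed back) its
    contribution, and repeat. A one-branch sketch of the MMSE-DF idea."""
    y = y.astype(float).copy()
    nt = H.shape[1]
    s_hat = np.zeros(nt)
    remaining = list(range(nt))
    while remaining:
        Hr = H[:, remaining]
        # MMSE filter for the remaining streams: (Hr'Hr + s2 I)^-1 Hr'
        W = np.linalg.solve(Hr.T @ Hr + sigma2 * np.eye(len(remaining)), Hr.T)
        z = W @ y
        i = int(np.argmax(np.abs(z)))     # most reliable stream first
        k = remaining[i]
        s_hat[k] = np.sign(z[i])          # hard BPSK decision
        y -= H[:, k] * s_hat[k]           # feedback: cancel the decision
        remaining.pop(i)
    return s_hat

# Well-conditioned toy 4x4 channel, noiseless for reproducibility
H = np.array([[2.0, 0.3, 0.1, 0.0],
              [0.2, 1.8, 0.2, 0.1],
              [0.1, 0.2, 2.1, 0.3],
              [0.0, 0.1, 0.2, 1.9]])
s = np.array([1.0, -1.0, 1.0, -1.0])
y = H @ s
print(mmse_df_detect(H, y, sigma2=0.01))
```

Recomputing the filter from a common matrix inverse, rather than from scratch per branch, is what keeps the multi-branch extension close to conventional MMSE-DF complexity.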
Towards robust real-world historical handwriting recognition
In this thesis, we build a bridge from the past to the future by using
artificial-intelligence methods for text recognition in a historical Dutch
collection of the Natuurkundige Commissie that explored Indonesia (1820-1850).
In spite of the successes of systems like 'ChatGPT', reading historical
handwriting is still quite challenging for AI. Whereas GPT-like methods work on
digital texts, historical manuscripts are only available as extremely diverse
collections of (pixel) images. Despite their strong results, current
deep-learning methods are data hungry, time consuming, heavily dependent on
humanities experts for labeling, and require machine-learning experts to design
the models. Ideally, the use of deep-learning methods should require minimal
human effort, have an algorithm observe the evolution of the training process,
and avoid inefficient use of the already scarce labeled data. We present
several approaches to dealing with these problems, aiming to improve the
robustness of current methods and the autonomy of training. We applied our
novel word and line text recognition approaches to nine data sets differing in
time period, language, and difficulty: three locally collected historical
Latin-based data sets from Naturalis, Leiden; four public Latin-based benchmark
data sets for comparability with other approaches; and two Arabic data sets.
Using ensemble voting of just five neural networks, we achieved a level of
accuracy that required hundreds of neural networks in earlier studies.
Moreover, we increased the speed of evaluation of each training epoch without
the need for labeled data.
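The ensemble-voting step above is simple to sketch: each network emits a hypothesis per word position, and the plurality wins. The network outputs below are invented for illustration; the thesis's actual recognizers and voting details may differ.

```python
from collections import Counter

def ensemble_vote(predictions):
    """Plurality voting over the per-position word hypotheses of several
    recognizers (a sketch of the five-network ensemble idea)."""
    return [Counter(word_hyps).most_common(1)[0][0]
            for word_hyps in zip(*predictions)]

# Hypothetical outputs of five trained networks for a three-word line
nets = [
    ["Natuurkundige", "Commissie", "1820"],
    ["Natuurkundige", "Commissie", "1826"],
    ["Natuurkundigc", "Commissie", "1820"],
    ["Natuurkundige", "Commissic", "1820"],
    ["Natuurkundige", "Commissie", "1820"],
]
print(ensemble_vote(nets))   # the majority hypothesis wins at each position
```

Voting suppresses the uncorrelated errors of individual networks, which is why a handful of diverse models can match much larger ensembles.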
An efficient minimum-distance decoding algorithm for convolutional error-correcting codes
Minimum-distance decoding of convolutional codes has generally been considered impractical for other than relatively short constraint-length codes, because of the exponential growth in complexity with increasing constraint length. The minimum-distance decoding algorithm proposed in the paper, however, uses a sequential decoding approach to avoid an exponential growth in complexity with increasing constraint length, and also utilises the distance and structural properties of convolutional codes to considerably reduce the amount of tree searching needed to find the minimum-distance path. In this way the algorithm achieves a complexity that does not grow exponentially with increasing constraint length, and is efficient for both long and short constraint-length codes. The algorithm consists of two main processes. Firstly, a direct-mapping scheme, which automatically finds the minimum-distance path in a single mapping operation, is used to eliminate the need for all short back-up tree searches. Secondly, when a longer back-up search is required, an efficient tree-searching scheme is used to minimise the required search effort. The paper describes the complete algorithm and its theoretical basis, and examples of its operation are given.
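For contrast with the sequential approach above, the standard exhaustive minimum-distance decoder is the Viterbi algorithm, whose trellis has 2^(K-1) states and therefore exactly the exponential constraint-length growth the paper avoids. The sketch below is that baseline for the common rate-1/2, K=3 code with generators (7,5) octal, not the paper's algorithm.

```python
G = (0b111, 0b101)        # rate-1/2, constraint length K=3, generators (7,5)
K = 3

def conv_encode(bits):
    """Encode a bit list with the (7,5) convolutional code."""
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state               # [b_t, b_t-1, b_t-2]
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received):
    """Minimum-distance (Viterbi) decoding over a 2^(K-1)-state trellis:
    complexity is exponential in the constraint length, which is the
    growth the paper's sequential tree-search method is designed to avoid."""
    n_states = 1 << (K - 1)
    dist = [0] + [float("inf")] * (n_states - 1)   # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(received), 2):
        new_dist = [float("inf")] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if dist[s] == float("inf"):
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                out = [bin(reg & g).count("1") & 1 for g in G]
                d = dist[s] + sum(o != r for o, r in zip(out, received[t:t + 2]))
                ns = reg >> 1
                if d < new_dist[ns]:               # keep the survivor path
                    new_dist[ns], new_paths[ns] = d, paths[s] + [b]
        dist, paths = new_dist, new_paths
    return paths[min(range(n_states), key=lambda s: dist[s])]

bits = [1, 0, 1, 1, 0, 0]
rx = conv_encode(bits)
rx[3] ^= 1                                         # one channel bit error
print(viterbi_decode(rx) == bits)
```

Doubling K doubles nothing here but squares the state count, which makes the motivation for a non-exhaustive, distance-guided tree search concrete.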