Pearson Codes
The Pearson distance has been advocated for improving the error performance
of noisy channels with unknown gain and offset. The Pearson distance can only
fruitfully be used for sets of q-ary codewords, called Pearson codes, that
satisfy specific properties. We analyze constructions and properties of
optimal Pearson codes and compare their redundancy with that of prior-art
T-constrained codes, which consist of q-ary sequences in which T pre-determined
reference symbols each appear at least once. In particular, it is shown that
for certain parameter choices the T-constrained codes are optimal Pearson
codes, while for others they are not.

Comment: 17 pages. Minor revisions and corrections since previous version.
Author biographies added. To appear in IEEE Trans. Inform. Theory.
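As a minimal sketch of the metric behind these codes (not the paper's construction itself): the Pearson distance between two words is one minus their Pearson correlation coefficient, which makes it invariant to any positive gain and any offset applied to the received word. It is undefined for constant words, which is one reason Pearson codes must satisfy special properties.

```python
import math

def pearson_distance(x, y):
    """Pearson distance d_p(x, y) = 1 - r(x, y), where r is the
    Pearson correlation coefficient. Invariant to positive gain and
    to offset applied to y. Undefined for constant words (zero
    standard deviation), which Pearson codes must exclude."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return 1 - cov / (sx * sy)

# Gain/offset invariance: the received word r = 2*x + 3 is a scaled,
# shifted copy of x, so its Pearson distance to x is zero.
x = [0, 1, 2, 1]
r = [3.0, 5.0, 7.0, 5.0]
print(pearson_distance(x, r))  # ~0.0
```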
Prefixless q-ary balanced codes with fast syndrome-based error correction
Abstract: We investigate a Knuth-like scheme for balancing q-ary codewords, which has the virtue that look-up tables for coding and decoding the prefix are avoided by using precoding and error correction techniques. We show how the scheme can be extended to allow for correction of single channel errors using a fast decoding algorithm that depends on syndromes only, making it considerably faster than the prior-art exhaustive decoding strategy. A comparison between the new and prior-art schemes, both in terms of redundancy and error performance, completes the study.
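The core of any Knuth-like scheme can be illustrated in the binary case: for an even-length word there always exists an index k such that inverting the first k bits yields a balanced word, and k is then conveyed in a prefix. The sketch below shows only this binary balancing step, not the paper's q-ary, prefixless, or error-correcting extensions.

```python
def knuth_balance(bits):
    """Find an index k such that inverting the first k bits of the
    even-length word `bits` makes the weight exactly half the length.
    Knuth showed such a k always exists: the weight changes by +/-1
    per flip and reaches the complementary weight at k = m."""
    m = len(bits)
    assert m % 2 == 0, "balanced words must have even length"
    w = list(bits)
    for k in range(m + 1):
        if sum(w) == m // 2:   # balanced: as many 1s as 0s
            return k, w
        if k < m:
            w[k] ^= 1          # invert one more leading bit
    raise AssertionError("unreachable for even-length input")

def knuth_unbalance(k, w):
    """Decoder: re-invert the first k bits to recover the original."""
    return [b ^ 1 if i < k else b for i, b in enumerate(w)]

data = [1, 1, 1, 0, 1, 1]      # weight 5 of 6: unbalanced
k, coded = knuth_balance(data)
print(k, coded)                # coded has weight 3
assert knuth_unbalance(k, coded) == data
```

In the full schemes, k itself is encoded with some redundancy (classically as a balanced prefix); avoiding look-up tables for that prefix is exactly what the abstract's precoding approach addresses.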
Iterative DNA Coding Scheme With GC Balance and Run-Length Constraints Using a Greedy Algorithm
In this paper, we propose a novel iterative encoding algorithm for DNA
storage that satisfies both the GC-balance and run-length constraints using a
greedy algorithm. DNA strands with run-length greater than three, or with a GC
ratio far from 50%, are known to be prone to errors. The proposed encoding
algorithm stores data at high information density while allowing both the
maximum run-length and the GC-balance range to be chosen freely. More
importantly, we propose a novel mapping method, based on a greedy algorithm,
that reduces the average bit error compared to a randomly generated mapping.
The proposed algorithm is implemented through iterative encoding, consisting
of three main steps: randomization, M-ary mapping, and verification. It
achieves an information density of 1.8616 bits/nt for one choice of the
constraint parameters, approaching the theoretical upper bound of 1.98
bits/nt while satisfying both constraints. Also, the average bit error caused
by a one-nt error is 2.3455 bits, a reduction compared to the randomized
mapping.

Comment: 19 pages.
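The two constraints the verification step must enforce are easy to state as a predicate on a strand. A minimal sketch, with illustrative placeholder bounds (the paper's exact parameters are configurable and not reproduced here):

```python
def satisfies_constraints(strand, max_run=3, gc_low=0.45, gc_high=0.55):
    """Check the two constraints from the abstract on a DNA strand:
    no run of identical nucleotides longer than max_run, and a GC
    ratio within [gc_low, gc_high]. The numeric bounds here are
    illustrative, not the paper's exact parameters."""
    run = 1
    for prev, cur in zip(strand, strand[1:]):
        run = run + 1 if cur == prev else 1
        if run > max_run:
            return False
    gc = sum(1 for nt in strand if nt in "GC") / len(strand)
    return gc_low <= gc <= gc_high

print(satisfies_constraints("ACGTACGGTA"))   # True: max run 2, GC = 0.5
print(satisfies_constraints("AAAATCGCGC"))   # False: run of four A's
```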
The Pacman Algorithm for the Efficient Construction of Balanced Codes
A block of m bits is balanced if it contains an equal number of 0 bits and 1 bits; note that m must therefore be even. Balanced codes (BC) have applications in several domains. For example, they are used to reduce noise in VLSI systems (Tabor, 1990). In telecommunications, they are used for synchronization and data transmission over optical fibers (Bergmann et al., 1986). In radio-frequency identification (RFID), they help boost data transfer rates through RFID channels (Durgin, 2015). Given their importance, several research efforts have sought to optimize their construction. Knuth was the first to find a simple and fast method for creating balanced codes (Knuth, 1986): a very simple algorithm that generates them without using lookup tables. However, Knuth's algorithm incurs almost twice the redundancy attested by the lower bound, part of which is due to the encoding multiplicity (ME) of the algorithm. Improvements to Knuth's algorithm are discussed in several research works (Immink and Weber, 2009a, 2010; Immink et al., 2011; Al-Rababa'a et al., 2013). In the last of these (Al-Rababa'a et al., 2013), the redundancy created by ME was eliminated, yet a gap from the minimal threshold remains. This work presents an alternative to Knuth's algorithm for creating balanced codes without unwanted redundancy overhead. We propose an algorithm called "Pacman"¹, based on permutations and limited-precision integers. Indeed, the coding process of this algorithm can be likened to a Pacman that consumes and produces pills of information in a cyclical manner. We show analytically and experimentally that our algorithm closes the mentioned redundancy gap, that its results are markedly better than those of prior works, and that its time and space complexities are linear.
¹ Inspired by the PAC-MAN trademark of BANDAI NAMCO.
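The redundancy gap discussed above can be made concrete. Only C(m, m/2) of the 2^m length-m words are balanced, so any balanced code needs at least m - log2 C(m, m/2), roughly (1/2) log2 m, redundant bits, whereas Knuth's prefix costs about log2 m bits. A quick numeric check:

```python
from math import comb, log2

def min_redundancy(m):
    """Information-theoretic minimum redundancy (in bits) of a
    balanced code on m-bit blocks: m - log2(C(m, m/2)), since only
    C(m, m/2) of the 2^m words are balanced."""
    return m - log2(comb(m, m // 2))

for m in (8, 32, 128, 512):
    # Knuth's scheme spends about log2(m) bits on the prefix,
    # roughly twice the minimum computed here.
    print(m, round(min_redundancy(m), 3), round(log2(m), 3))
```

The ratio between the two columns approaches 2 as m grows, which is the "almost twice the lower bound" behavior the abstract attributes to Knuth's algorithm.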
Efficient Circuit-Level Implementation of Knuth-Based Balanced and Nearly-Balanced Codes
Coding schemes are often used in high-speed processor-processor or
processor-memory buses in digital systems. In particular, we have introduced
(in a 2012 DesignCon paper) a zero sum (ZS) signaling method which uses
balanced or nearly-balanced coding to reduce simultaneous switching noise (SSN)
in a single-ended bus to a level comparable to that of differential signaling.
While several balanced coding schemes are known, few papers exist that describe
the necessary digital hardware implementations of (known) balanced coding
schemes, and no algorithms had previously been developed for nearly-balanced
coding. In this work, we extend a known balanced coding scheme to accommodate
nearly-balanced coding and demonstrate a range of coding and decoding circuits
through synthesis in 65 nm CMOS. These hardware implementations have minimal
impact on the energy efficiency and area when compared to current
serializer/deserializers (SerDes) at clock rates which would support SerDes
integration.

Comment: 23 pages, 12 figures, DesignCon 201
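"Nearly balanced" can be stated in terms of a word's disparity, the difference between its counts of 1s and 0s. A minimal sketch of this notion; the disparity bound the ZS scheme actually uses is not given in the abstract, so the threshold below is an assumption:

```python
def disparity(word):
    """Difference between the number of 1s and 0s in a bus word."""
    return 2 * sum(word) - len(word)

def nearly_balanced(word, d=1):
    """A word is nearly balanced when |disparity| <= d; d = 0 recovers
    exact balance. Bounding disparity bounds the net current drawn by
    a single-ended bus, which limits simultaneous switching noise.
    The default d here is illustrative, not the paper's value."""
    return abs(disparity(word)) <= d

print(nearly_balanced([1, 0, 1, 1, 0]))        # disparity +1 -> True
print(nearly_balanced([1, 1, 1, 1, 0], d=1))   # disparity +3 -> False
```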