19 research outputs found

    Absorbing Set Analysis and Design of LDPC Codes from Transversal Designs over the AWGN Channel

    Full text link
    In this paper we construct low-density parity-check (LDPC) codes from transversal designs with low error floors over the additive white Gaussian noise (AWGN) channel. The constructed codes are based on transversal designs that arise from sets of mutually orthogonal Latin squares (MOLS) with cyclic structure. To lower the error floors, our approach is twofold: First, we give an exhaustive classification of the so-called absorbing sets that may occur in the factor graphs of the given codes. These purely combinatorial substructures are known to be the main cause of decoding errors in the error-floor region over the AWGN channel under decoding with the standard sum-product algorithm (SPA). Second, based on this classification, we exploit the specific structure of the presented codes to eliminate the most harmful absorbing sets and derive powerful constraints on the proper choice of code parameters, in order to obtain codes with optimized error-floor performance.
    Comment: 15 pages. arXiv admin note: text overlap with arXiv:1306.511
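The classification above rests on the combinatorial definition of an (a, b) absorbing set: a set D of a variable nodes whose induced subgraph has b odd-degree check neighbours, where every node of D has strictly more even-degree than odd-degree check neighbours. A minimal Python sketch of this membership test (the parity-check matrix and candidate set below are hypothetical illustrations, not structures from the paper):

```python
def absorbing_set_profile(H, var_set):
    """Return (a, b) if var_set is an absorbing set of H, else None.

    H is a binary parity-check matrix given as a list of rows; var_set is a
    set of column (variable-node) indices. The set is an (a, b) absorbing
    set if its induced subgraph has b odd-degree check neighbours and every
    variable node has strictly more even- than odd-degree check neighbours.
    """
    m = len(H)
    # Degree of each check node within the subgraph induced by var_set.
    deg = [sum(H[c][v] for v in var_set) for c in range(m)]
    odd_checks = {c for c in range(m) if deg[c] % 2 == 1}
    for v in var_set:
        nbrs = [c for c in range(m) if H[c][v]]
        odd = sum(1 for c in nbrs if c in odd_checks)
        if not (odd < len(nbrs) - odd):  # need a strict majority of even checks
            return None
    return (len(var_set), len(odd_checks))
```

For instance, two degree-2 variable nodes that share both of their checks form a (2, 0) (fully) absorbing set, while either node alone has only odd-degree check neighbours and fails the test.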

    Advanced Design of Binary LDPC Codes for Practical Applications

    Get PDF
    The design of binary LDPC codes with low error floors is still a significant problem that is not fully resolved in the literature. This thesis aims to design optimal/optimized binary LDPC codes. We make two main contributions to building LDPC codes with low error floors. Our first contribution is an algorithm that enables the design of optimal QC-LDPC codes with maximum girth and minimum sizes. We show by simulation that our algorithm reaches the minimum bounds for regular (3, d_c) QC-LDPC codes with low d_c. Our second contribution is an algorithm that allows the optimized design of regular LDPC codes by minimizing dominant trapping sets/expansion sets. This minimization is performed by a predictive detection of the dominant trapping sets/expansion sets defined for a regular code C(d_v, d_c) of girth g_t. Through simulations on codes of different rates, we show that codes designed by minimizing dominant trapping sets/expansion sets perform better than codes designed without taking trapping sets/expansion sets into account. The algorithms we propose are based on the generalized RandPEG. These algorithms take into account the cycles that are not seen in the case of quasi-cyclic codes, in order to guarantee the predictions.
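Since the first contribution targets maximum-girth QC-LDPC codes, it helps to recall how girth is measured: it is the length of the shortest cycle in the Tanner graph of the parity-check matrix. A minimal breadth-first-search sketch (the matrices used below are toy illustrations, not codes from the thesis):

```python
from collections import deque

def tanner_girth(H):
    """Girth (shortest cycle length) of the Tanner graph of a binary
    parity-check matrix H, via BFS from every node. Tanner graphs are
    bipartite, so every cycle is even and this search is exact."""
    m, n = len(H), len(H[0])
    adj = [[] for _ in range(m + n)]  # checks 0..m-1, variables m..m+n-1
    for c in range(m):
        for v in range(n):
            if H[c][v]:
                adj[c].append(m + v)
                adj[m + v].append(c)
    best = float("inf")
    for s in range(m + n):
        dist, parent = {s: 0}, {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif parent[u] != w:  # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
    return best
```

A 2 x 2 all-ones matrix has girth 4 (the shortest cycle any Tanner graph can have), while a 3 x 3 matrix whose Tanner graph is a single 6-cycle has girth 6.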

    Design and Analysis of GFDM-Based Wireless Communication Systems

    Get PDF
    Generalized frequency division multiplexing (GFDM) is a block-processing-based non-orthogonal multi-carrier modulation scheme and a promising candidate waveform technology for beyond-fifth-generation (5G) wireless systems. The ability of GFDM to flexibly adjust the block size and the type of pulse-shaping filters makes it a suitable scheme to meet several important requirements, such as low latency, low out-of-band (OOB) radiation, and high data rates. Applying the multiple-input multiple-output (MIMO) technique, the massive MIMO technique, or low-density parity-check (LDPC) codes to GFDM systems can further improve their performance. Therefore, the investigation of such combined systems is of great theoretical and practical importance. This thesis investigates GFDM-based wireless communication systems from the following three aspects. First, we derive a union bound on the bit error rate (BER) for MIMO-GFDM systems based on exact pairwise error probabilities (PEPs). The exact PEP is calculated using the moment-generating function (MGF) for maximum-likelihood (ML) detectors. Both the spatial correlation between antennas and the channel estimation errors are considered in the investigated channel environment. Second, polynomial-expansion-based low-complexity channel estimators and precoders are proposed for massive MIMO-GFDM systems. Interference-free pilots are used in the minimum mean square error (MMSE) channel estimation to combat the influence of non-orthogonality between subcarriers in GFDM. The cubic computational complexity can be reduced to square order by using the polynomial expansion technique to approximate the matrix inverses in conventional MMSE estimation and precoding.
In addition, we derive performance limits in terms of the mean square error (MSE) for the proposed estimators, which can be a useful tool to predict the estimators' performance in the high E_s/N₀ region. A Cramér-Rao lower bound (CRLB) is derived for our system model and acts as a benchmark for the estimators. The computational complexity of the proposed channel estimators and precoders, and the impact of the polynomial degree, are also investigated. Finally, we analyze the error probability performance of LDPC-coded GFDM systems. We first derive the initial log-likelihood ratio (LLR) expressions that are used in the sum-product algorithm (SPA) decoder. Then, based on the decoding threshold, we estimate the frame error rate (FER) in the low E_b/N₀ region by using the observed BER to model the channel variations. In addition, a lower bound on the FER of the system is proposed based on absorbing sets. This lower bound can act as an estimate of the FER in the high E_b/N₀ region if the absorbing set used is dominant and its multiplicity is known. The quantization scheme also has an important impact on the FER and BER performance. Randomly constructed and array-based LDPC codes are used to support the performance analyses. For all three aspects, software-based simulations and calculations are carried out to obtain the related numerical results, which verify the proposed methods.
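The complexity reduction in the second contribution comes from replacing an exact matrix inverse with a low-order matrix polynomial. A generic sketch of this idea using a truncated Neumann series (the particular matrix, right-hand side, and scaling below are illustrative assumptions, not the thesis's MMSE matrices):

```python
import numpy as np

def neumann_inverse_apply(A, b, order, alpha=None):
    """Approximate x = A^{-1} b with a truncated Neumann (polynomial) series.

    Uses A^{-1} = (1/alpha) * sum_k (I - A/alpha)^k, valid when the spectral
    radius of (I - A/alpha) is below one (e.g. symmetric positive-definite A
    with alpha >= lambda_max). Each term costs one matrix-vector product, so
    an order-K expansion is applied in O(K n^2) instead of the O(n^3) of a
    direct inverse.
    """
    if alpha is None:
        alpha = np.linalg.norm(A, ord=2)  # spectral norm bounds lambda_max
    x = np.zeros_like(b, dtype=float)
    term = b / alpha
    for _ in range(order + 1):
        x = x + term
        term = term - (A @ term) / alpha  # next power of (I - A/alpha) applied to b
    return x

# Illustrative use on a small well-conditioned system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = neumann_inverse_apply(A, b, order=60)
```

Truncating at a small order trades accuracy for complexity, which is the essence of the cubic-to-square reduction described above.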

    Selected Papers from the First International Symposium on Future ICT (Future-ICT 2019) in Conjunction with 4th International Symposium on Mobile Internet Security (MobiSec 2019)

    Get PDF
    The International Symposium on Future ICT (Future-ICT 2019), in conjunction with the 4th International Symposium on Mobile Internet Security (MobiSec 2019), was held on 17–19 October 2019 in Taichung, Taiwan. The symposium provided academic and industry professionals with an opportunity to discuss the latest issues and progress in advancing smart applications based on future ICT and the related security concerns. The symposium aimed to publish high-quality papers strictly related to the various theories and practical applications concerning advanced smart applications, future ICT, and the related communications and networks. It was expected that the symposium and its publications would spur further research and technology improvements in this field.

    Coding approaches to fault tolerance in dynamic systems

    Get PDF
    Also issued as Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (p. 189-196). Sponsored through a contract with Sanders, A Lockheed Martin Company. By Christoforos N. Hadjicostis.

    Bayesian Cognitive Ranking Methods with Applications to Consumer Data Analysis and Molecular Bioinformatics

    Get PDF
    University of Tsukuba (筑波大学), 201

    Controlling LDPC Absorbing Sets via the Null Space of the Cycle Consistency Matrix

    No full text
    Abstract — This paper focuses on controlling absorbing sets for a class of regular LDPC codes, known as separable, circulant-based (SCB) codes. For a specified circulant matrix, SCB codes all share a common mother matrix and include array-based LDPC codes and many common quasi-cyclic codes. SCB codes retain standard properties of quasi-cyclic LDPC codes such as girth, code structure, and compatibility with existing high-throughput hardware implementations. This paper uses a cycle consistency matrix (CCM) for each absorbing set of interest in an SCB LDPC code. For an absorbing set to be present in an SCB LDPC code, the associated CCM must not be full column rank. Our approach selects rows and columns from the SCB mother matrix to systematically eliminate dominant absorbing sets by forcing the associated CCMs to be full column rank. Simulation results demonstrate that the new codes have steeper error-floor slopes and provide at least one order of magnitude of improvement in the low-FER region. Identifying absorbing-set-spectrum equivalence classes within the family of SCB codes with a specified circulant matrix significantly reduces the search space of possible code matrices.
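The CCM criterion is, at its core, a linear-algebra test: the candidate absorbing set can occur only when its CCM has a nontrivial null space, i.e. is not full column rank. A minimal sketch of that screening step (the matrices are hypothetical placeholders; the paper carries out the computation in arithmetic tied to the circulant size, which this real-valued sketch only approximates):

```python
import numpy as np

def ccm_admits_absorbing_set(ccm):
    """Screen one candidate absorbing set via its cycle consistency matrix.

    `ccm` is the cycle consistency matrix of a candidate absorbing set
    (rows = cycle consistency constraints, columns = unknowns). The absorbing
    set can be present only if the CCM is NOT full column rank, i.e. the
    constraint system has a nontrivial null space; forcing full column rank
    when choosing rows/columns of the mother matrix rules the set out.
    """
    ccm = np.asarray(ccm, dtype=float)
    return bool(np.linalg.matrix_rank(ccm) < ccm.shape[1])
```

A rank-deficient CCM such as [[1, 1], [1, 1]] admits the absorbing set, while a full-column-rank CCM such as the identity excludes it, which is the elimination mechanism the abstract describes.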

    Convex relaxation methods for graphical models : Lagrangian and maximum entropy approaches

    Get PDF
    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 241-257).
    Graphical models provide compact representations of complex probability distributions of many random variables through a collection of potential functions defined on small subsets of these variables. This representation is defined with respect to a graph in which nodes represent random variables and edges represent the interactions among those random variables. Graphical models provide a powerful and flexible approach to many problems in science and engineering, but they also present serious challenges owing to the intractability of optimal inference and estimation over general graphs. In this thesis, we consider convex optimization methods to address two central problems that commonly arise for graphical models. First, we consider the problem of determining the most probable configuration, also known as the maximum a posteriori (MAP) estimate, of all variables in a graphical model, conditioned on (possibly noisy) measurements of some variables. This general problem is intractable, so we consider a Lagrangian relaxation (LR) approach to obtain a tractable dual problem. This involves using the Lagrangian decomposition technique to break up an intractable graph into tractable subgraphs, such as small "blocks" of nodes, embedded trees, or thin subgraphs. We develop a distributed, iterative algorithm that minimizes the Lagrangian dual function by block coordinate descent. This results in an iterative marginal-matching procedure that enforces consistency among the subgraphs using an adaptation of the well-known iterative scaling algorithm. This approach is developed for both discrete-variable and Gaussian graphical models.
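The marginal-matching procedure mentioned above is rooted in classic iterative scaling: repeatedly rescale a joint table until its marginals agree with target marginals. A toy iterative proportional fitting sketch on a 2 x 2 table (the numbers are illustrative, not from the thesis):

```python
def ipf(joint, row_marg, col_marg, iters=50):
    """Iterative proportional fitting: rescale a 2-D joint table so that its
    row and column marginals match the targets (classic iterative scaling)."""
    p = [row[:] for row in joint]
    for _ in range(iters):
        for i, row in enumerate(p):          # rescale rows to row marginals
            s = sum(row)
            p[i] = [x * row_marg[i] / s for x in row]
        for j in range(len(p[0])):           # rescale columns to col marginals
            s = sum(row[j] for row in p)
            for row in p:
                row[j] *= col_marg[j] / s
    return p
```

Starting from a uniform table with target row marginals (0.7, 0.3) and column marginals (0.6, 0.4), the scheme converges to the product distribution with exactly those marginals.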
In discrete models, we also introduce a deterministic annealing procedure, which introduces a temperature parameter to define a smoothed dual function and then gradually reduces the temperature to recover the (non-differentiable) Lagrangian dual. When strong duality holds, we recover the optimal MAP estimate. We show that this occurs for a broad class of "convex decomposable" Gaussian graphical models, which generalizes the "pairwise normalizable" condition known to be important for iterative estimation in Gaussian models. In certain "frustrated" discrete models a duality gap can occur using simple versions of our approach. We consider methods that adaptively enhance the dual formulation, by including more complex subgraphs, so as to reduce the duality gap. In many cases we are able to eliminate the duality gap and obtain the optimal MAP estimate in a tractable manner. We also propose a heuristic method to obtain approximate solutions in cases where there is a duality gap. Second, we consider the problem of learning a graphical model (both the graph and its potential functions) from sample data. We propose the maximum entropy relaxation (MER) method, which is the convex optimization problem of selecting the least informative (maximum entropy) model over an exponential family of graphical models, subject to constraints that small subsets of variables should have marginal distributions that are close to the distribution of the sample data. We use relative entropy to measure the divergence between marginal probability distributions. We find that MER leads naturally to the selection of sparse graphical models. To identify this sparse graph efficiently, we use a "bootstrap" method that constructs the MER solution by solving a sequence of tractable subproblems defined over thin graphs, including new edges at each step to correct for large marginal divergences that violate the MER constraint.
The MER problem on each of these subgraphs is efficiently solved using the primal-dual interior point method (implemented so as to take advantage of efficient inference methods for thin graphical models). We also consider a dual formulation of MER that minimizes a convex function of the potentials of the graphical model. This MER dual problem can be interpreted as a robust version of maximum-likelihood parameter estimation, where the MER constraints specify the uncertainty in the sufficient statistics of the model. This also corresponds to a regularized maximum-likelihood approach, in which an information-geometric regularization term favors the selection of sparse potential representations. We develop a relaxed version of the iterative scaling method to solve this MER dual problem.
By Jason K. Johnson. Ph.D.
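MER's principle of choosing the least informative model subject to moment-style constraints has a familiar small-scale analogue: the maximum-entropy distribution under a single mean constraint, which takes exponential-family form with one Lagrange multiplier. A toy sketch solving for that multiplier by bisection (the support set and target mean are illustrative assumptions):

```python
import math

def maxent_with_mean(values, target_mean, lo=-50.0, hi=50.0, iters=100):
    """Maximum-entropy distribution over `values` subject to a mean
    constraint. The solution has exponential-family form p(x) ∝ exp(lam*x);
    the multiplier lam is found by bisection, since the mean of p is an
    increasing function of lam."""
    def mean(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]
```

For support {0, 1, 2} and target mean 0.8, the result is the flattest distribution meeting the constraint; the full MER problem generalizes this to many marginal constraints over an exponential family of graphical models.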