
    Linear-time encoding and decoding of low-density parity-check codes

    Low-density parity-check (LDPC) codes had a renaissance when they were rediscovered in the 1990s. Since then, LDPC codes have been an important part of the field of error-correcting codes and have been shown to approach the Shannon capacity, the limit at which we can reliably transmit information over noisy channels. Many modern communications standards have consequently adopted LDPC codes. Error correction is equally important in protecting data from corruption on a hard drive as it is in deep-space communications; it is most commonly encountered, for example, in the reliable wireless transmission of data to mobile devices. For practical purposes, both encoding and decoding need to be of low complexity to achieve high throughput and low power consumption. This thesis provides a literature review of the current state of the art in encoding and decoding of LDPC codes. Message-passing decoders still achieve the best error-correcting performance, while more recently considered bit-flipping decoders provide a low-complexity alternative, albeit with some loss in error-correcting performance. An implementation of a low-complexity stochastic bit-flipping decoder is also presented. It is implemented for Graphics Processing Units (GPUs) in a parallel fashion, providing a peak throughput of 1.2 Gb/s, significantly higher than previous decoder implementations on GPUs. The error-correcting performance of a range of decoders has also been tested, showing that the stochastic bit-flipping decoder provides relatively good error-correcting performance with low complexity. Finally, a brief comparison of encoding complexities for two code ensembles is presented.
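    The thesis's stochastic GPU decoder is not reproduced here; as a hedged sketch of the bit-flipping principle on which such low-complexity decoders build, the following Python fragment repeatedly flips the bits implicated in the largest number of unsatisfied parity checks. The small (7,4) Hamming matrix is an illustrative stand-in for a real LDPC parity-check matrix.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Toy Gallager-style bit-flipping decoder.

    H: (m, n) binary parity-check matrix; y: length-n hard-decision word.
    Each iteration flips every bit involved in the maximum number of
    failing checks. Returns (codeword_estimate, success_flag).
    """
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2            # which parity checks fail
        if not syndrome.any():
            return x, True              # all checks satisfied
        fail_counts = syndrome @ H      # failing checks per bit
        x[fail_counts == fail_counts.max()] ^= 1
    return x, False

# Stand-in parity-check matrix: the (7,4) Hamming code.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)
received[2] ^= 1                        # single bit error
decoded, ok = bit_flip_decode(H, received)
print(ok, decoded)                      # True [0 0 0 0 0 0 0]
```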

    Some new results on majority-logic codes for correction of random errors

    The main advantages of random error-correcting majority-logic codes, and of majority-logic decoding in general, are well known and twofold. Firstly, they offer a partial solution to a classical coding-theory problem, that of decoder complexity. Secondly, a majority-logic decoder inherently corrects many more random error patterns than the minimum distance of the code implies is possible. The solution to the decoder complexity problem is only a partial one because there are circumstances under which a majority-logic decoder is too complex and expensive to implement. [Continues.]
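    As a hedged sketch of the majority-logic idea (a generic one-step decision, not a specific code from the thesis): each bit is voted on by parity-check sums orthogonal on it, i.e. checks that all include that bit but share no other position, and the bit is flipped when a majority of its checks fail.

```python
import numpy as np

def majority_logic_bit(received, check_sets, bit):
    """One-step majority-logic decision for a single bit.

    check_sets: parity checks (index tuples) that each contain `bit`
    but share no other position ("orthogonal" on the bit). A check
    votes "error" when its sum is odd; a majority of such votes flips
    the bit, so up to len(check_sets) // 2 errors are corrected,
    sometimes more patterns than the minimum distance alone suggests.
    """
    votes = sum(received[list(chk)].sum() % 2 for chk in check_sets)
    estimate = int(received[bit])
    if votes > len(check_sets) / 2:
        estimate ^= 1
    return estimate

# Length-5 repetition code: the sums x0 + xi are all orthogonal on bit 0.
word = np.array([1, 1, 0, 1, 1])        # all-ones codeword, one bit flipped
checks = [(0, 1), (0, 2), (0, 3), (0, 4)]
print(majority_logic_bit(word, checks, 0))   # 1 (the transmitted value)
```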

    Applications of microprocessors in digital high frequency radio communications

    This thesis describes the application of VLSI devices to channel evaluation and communication techniques over ionospheric radio paths. Digital signal processing techniques using microprocessors and charge-coupled devices are described in detail. A novel method for observing interference and fading patterns on HF channels is described. Error-control coding schemes and digital modulation techniques are combined in a design for an adaptive modem for use over HF radio links. Results of narrow-band interference measurements, error patterns and coding performance are presented.

    The 1991 3rd NASA Symposium on VLSI Design

    Papers from the symposium are presented from the following sessions: (1) featured presentations 1; (2) very large scale integration (VLSI) circuit design; (3) VLSI architecture 1; (4) featured presentations 2; (5) neural networks; (6) VLSI architectures 2; (7) featured presentations 3; (8) verification 1; (9) analog design; (10) verification 2; (11) design innovations 1; (12) asynchronous design; and (13) design innovations 2.

    On Coding and Detection Techniques for Two-Dimensional Magnetic Recording

    The areal density growth of magnetic recording systems is fast approaching the superparamagnetic limit for conventional magnetic disks, owing to the increasing demand for high data storage capacity. Two-Dimensional Magnetic Recording (TDMR) is a new technology aimed at increasing the areal density of magnetic recording systems beyond the limit of current disk technology using conventional disk media. However, it relies on advanced coding and signal processing techniques to achieve areal density gains. Current state-of-the-art signal processing for the TDMR channel employs iterative decoding with Low-Density Parity-Check (LDPC) codes, coupled with 2D equalisers and full 2D Maximum Likelihood (ML) detectors. The shortcoming of these algorithms is their computational complexity, especially that of the ML detectors, which is exponential in the number of bits involved. Robust low-complexity coding, equalisation and detection algorithms are therefore crucial for successful future deployment of the TDMR scheme. The present work aims at finding efficient, low-complexity coding, equalisation, detection and decoding techniques for improving the performance of the TDMR channel and the magnetic recording channel in general. A forward error correction (FEC) scheme of two concatenated single-parity-bit systems along track, separated by an interleaver, is presented for a channel with perpendicular magnetic recording (PMR) media. A joint detection-decoding algorithm using a constrained MAP detector for simultaneous detection and decoding of data with a single-parity-bit system is proposed. It is shown that the proposed FEC scheme with the constrained MAP detector/decoder can achieve a gain of up to 3 dB over uncoded MAP decoding for the 1D interference channel. A further gain of 1.5 dB is achieved by concatenating two interleavers with an extra parity bit when the data density along track is high. The use of a single-parity-bit code as both a run-length-limited code and an error-correction code is demonstrated to simplify detection complexity and improve system performance. A low-complexity 2D detection technique for a TDMR system with shingled magnetic recording (SMR) media is also proposed. The technique concatenates a 2D MAP detector along track with a regular MAP detector across tracks, reducing the complexity order of full 2D detection from exponential to linear. It is shown that this technique can improve track density with limited complexity. Two FEC methods for the TDMR channel using two single-parity-bit systems are discussed: one uses two concatenated single parity bits along track only, separated by a Dithered Relative Prime (DRP) interleaver, while the other uses single parity bits in both directions without the DRP interleaver. Building on this FEC coding, a 2D multi-track MAP joint detector-decoder is proposed for simultaneous detection and decoding of the coded single-parity-bit data. A gain of up to 5 dB is achieved using the FEC scheme with the 2D multi-track MAP joint detector-decoder over an uncoded 2D multi-track MAP detector on the TDMR channel. In a situation with high density in both directions, it is shown that FEC coding using two concatenated single parity bits along track separated by the DRP interleaver performs better than using single parity bits in both directions without the DRP interleaver.
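    As a hedged sketch of the encoder side of such a scheme (a random permutation stands in for the DRP interleaver, and the block lengths are illustrative, not the thesis's parameters):

```python
import numpy as np

def spb_encode(bits, block=4):
    """Append one even-parity bit to every `block` data bits."""
    assert len(bits) % block == 0
    out = []
    for i in range(0, len(bits), block):
        chunk = bits[i:i + block]
        out.extend(chunk)
        out.append(chunk.sum() % 2)     # single parity bit per block
    return np.array(out)

rng = np.random.default_rng(0)
data = rng.integers(0, 2, 16)

inner = spb_encode(data)                # first single-parity-bit code
perm = rng.permutation(len(inner))      # stand-in for the DRP interleaver
interleaved = inner[perm]
outer = spb_encode(interleaved, block=5)  # second single-parity-bit code
print(len(data), "->", len(outer))      # 16 -> 24: overall rate 2/3
```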

    Information reconciliation methods in secret key distribution

    We consider in this thesis the problem of information reconciliation in the context of secret-key distillation between two legitimate parties. In some scenarios of interest this problem can be advantageously solved with low-density parity-check (LDPC) codes optimized for the binary symmetric channel. In particular, we demonstrate that our method leads to a significant efficiency improvement with respect to earlier interactive reconciliation methods. We propose a protocol based on LDPC codes that can be adapted to changes in the communication channel by extending the original source. The efficiency of our protocol is limited only by the quality of the code and, while transmitting more information than needed to reconcile Alice's and Bob's sequences, it does not reveal any more information about the original source than an ad hoc code of matched rate would have revealed.
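    As a hedged sketch of the one-way, syndrome-based reconciliation idea (a toy Hamming matrix replaces an optimized LDPC code, and a single-error lookup replaces iterative decoding):

```python
import numpy as np

# Toy (7,4) Hamming parity-check matrix standing in for a real LDPC code.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

rng = np.random.default_rng(1)
alice = rng.integers(0, 2, 7)           # Alice's sequence
bob = alice.copy()
bob[4] ^= 1                             # the BSC flipped one bit for Bob

# Alice reveals only the syndrome of her sequence; the discrepancy
# e = alice ^ bob then satisfies H e = s_a ^ s_b, which Bob decodes.
s_a = H @ alice % 2
s_b = H @ bob % 2
diff_syndrome = s_a ^ s_b

# For this toy code, a single discrepancy is located by matching the
# syndrome difference against the columns of H.
for j in range(H.shape[1]):
    if np.array_equal(H[:, j], diff_syndrome):
        bob[j] ^= 1                     # Bob corrects the discrepancy
print(np.array_equal(alice, bob))       # True
```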

    Multi-ASIP architectures for a flexible turbo receiver

    Rapidly evolving wireless standards use modern techniques such as turbo codes, Bit-Interleaved Coded Modulation (BICM), high-order QAM constellations, Signal Space Diversity (SSD), Multi-Input Multi-Output (MIMO) Spatial Multiplexing (SM) and Space-Time Codes (STC) with different parameters for reliable high-rate data transmission. Adoption of such techniques in the transmitter can impact the receiver architecture in three ways: (1) the complex processing related to advanced techniques such as turbo codes encourages iterative processing in the receiver to improve error-rate performance; (2) to satisfy the high-throughput requirement of an iterative receiver, parallel processing is mandatory; and (3) to support the different techniques and parameters imposed, programmable yet high-throughput hardware processing elements are required. In this thesis, to address the high-throughput requirement with turbo processing, a study of parallelism in turbo decoding is first extended to turbo demodulation and turbo equalization. Based on the results of this parallelism study, a flexible, high-throughput, heterogeneous multi-ASIP, NoC-based unified turbo receiver is proposed. The proposed architecture fulfils the target requirements in that: (a) the Application-Specific Instruction-set Processor (ASIP) exploits metric-generation-level parallelism and implements the required flexibility; (b) throughputs beyond the capacity of a single ASIP in a turbo process are achieved through multiple ASIP elements implementing sub-block parallelism and shuffled processing; and (c) a Network-on-Chip (NoC) is used to handle communication conflicts during the parallel processing of multiple ASIPs. To obtain a hardware model of the proposed architecture, two ASIPs are designed. The first, EquASIP, is dedicated to MMSE-IC equalization and provides a flexible solution for the multiple MIMO techniques adopted in various wireless standards, with the ability to operate in a turbo-equalization context. The second, DemASIP, is a flexible demapper that can be used in MIMO or single-antenna environments for any modulation up to 256-QAM, with or without iterative demodulation. Using the TurbASIP and NoC components available at Télécom Bretagne, the thesis concludes with an FPGA prototype of the heterogeneous multi-ASIP, NoC-based unified turbo receiver, integrating 9 instances of 3 different ASIPs with 2 NoCs.
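    As a hedged illustration of the demapping task such a flexible demapper performs (not the thesis's hardware algorithm), the following Python sketch computes max-log LLRs by brute force over a Gray-mapped QPSK constellation; larger constellations such as 256-QAM work the same way, only with bigger symbol sets.

```python
def maxlog_llr(y, constellation, bits_per_sym, noise_var):
    """Max-log LLRs for one received complex sample.

    constellation: dict mapping bit tuples -> complex symbols.
    Returns one LLR per bit position (positive favours bit = 0).
    """
    llrs = []
    for b in range(bits_per_sym):
        d0 = min(abs(y - s) ** 2
                 for bits, s in constellation.items() if bits[b] == 0)
        d1 = min(abs(y - s) ** 2
                 for bits, s in constellation.items() if bits[b] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs

# Gray-mapped QPSK (4-QAM): one bit per I/Q axis.
qpsk = {(0, 0): 1 + 1j, (0, 1): 1 - 1j,
        (1, 0): -1 + 1j, (1, 1): -1 - 1j}
print(maxlog_llr(0.9 + 1.1j, qpsk, 2, noise_var=0.5))  # both LLRs favour 0
```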

    Compression Methods for Structured Floating-Point Data and their Application in Climate Research

    The use of new technologies, such as GPU boosters, has led to a dramatic increase in the computing power of High-Performance Computing (HPC) centres. This development, coupled with new climate models that can better utilise this computing power thanks to software development and internal design, has moved the bottleneck from solving the differential equations describing Earth's atmospheric interactions to actually storing the variables. The current approach to solving the storage problem is inadequate: either the number of variables to be stored is limited or the temporal resolution of the output is reduced. If it is subsequently determined that another variable is required which has not been saved, the simulation must be run again. This thesis deals with the development of novel compression algorithms for structured floating-point data such as climate data, so that they can be stored in full resolution. Compression is performed by decorrelation and subsequent coding of the data. The decorrelation step eliminates redundant information in the data. During coding, the actual compression takes place and the data is written to disk. A lossy compression algorithm additionally has an approximation step to unify the data for better coding. The approximation step reduces the complexity of the data for the subsequent coding, e.g. by quantization. This work makes a new scientific contribution to each of the three steps described above. This thesis presents a novel lossy compression method for time-series data using an Auto-Regressive Integrated Moving Average (ARIMA) model to decorrelate the data. In addition, the concept of information spaces and contexts is presented to use information across dimensions for decorrelation. Furthermore, a new coding scheme is described which reduces the weaknesses of the eXclusive-OR (XOR) difference calculation and achieves a better compression factor than current lossless compression methods for floating-point numbers. Finally, a modular framework is introduced that allows the creation of user-defined compression algorithms. The experiments presented in this thesis show that it is possible to increase the information content of lossily compressed time-series data by applying an adaptive compression technique which preserves selected data with higher precision. An analysis of lossless compression of these time series showed no success. However, the lossy ARIMA compression model proposed here is able to capture all relevant information: the reconstructed data reproduces the time series to such an extent that statistically relevant information for the description of climate dynamics is preserved. Experiments indicate that there is a significant dependence of the compression factor on the selected traversal sequence and the underlying data model. The influence of these structural dependencies on prediction-based compression methods is investigated in this thesis. For this purpose, the concept of Information Spaces (IS) is introduced. IS improves the predictions of the individual predictors by nearly 10% on average; perhaps more importantly, the standard deviation of compression results is on average 20% lower. Using IS thus provides better predictions and more consistent compression results. Furthermore, it is shown that shifting the prediction and true value leads to a better compression factor with minimal additional computational cost.
    This allows the use of more resource-efficient prediction algorithms to achieve the same or a better compression factor, or higher throughput during compression or decompression. The coding scheme proposed here achieves a better compression factor than current state-of-the-art methods. Finally, this thesis presents a modular framework for the development of compression algorithms. The framework supports the creation of user-defined predictors and offers functionalities such as the execution of benchmarks, the random subdivision of n-dimensional data, the quality evaluation of predictors, the creation of ensemble predictors and the execution of validity tests for sequential and parallel compression algorithms. This research was initiated because of the needs of climate science, but the application of its contributions is not limited to it. The results of this thesis are of major benefit for developing and improving any compression algorithm for structured floating-point data.
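    As a hedged sketch of the prediction-plus-XOR idea underlying such floating-point coders (a last-value predictor stands in for the ARIMA model, and a real coder would entropy-code the leading-zero counts instead of printing them):

```python
import struct

def f64_bits(x: float) -> int:
    """Reinterpret a double as its 64-bit IEEE-754 pattern."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def xor_residuals(series):
    """Predict each value with the previous one (stand-in for ARIMA)
    and XOR the bit patterns: good predictions leave many leading
    zeros for the subsequent coding step to exploit."""
    prev = 0
    for x in series:
        bits = f64_bits(x)
        yield bits ^ prev               # small when prediction is close
        prev = bits

series = [21.5, 21.6, 21.7, 21.7, 21.8]    # smooth, climate-like values
for r in xor_residuals(series):
    # 64 - bit_length = leading zeros a coder could elide
    print(f"{64 - r.bit_length():2d} leading zero bits, residual {r:016x}")
```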

    Matlab

    This book is a collection of 19 excellent works presenting different applications of several MATLAB tools that can be used for educational, scientific and engineering purposes. Chapters include tips and tricks for programming and developing Graphical User Interfaces (GUIs), power system analysis, control systems design, system modelling and simulation, parallel processing, optimization, signal and image processing, finite difference solutions, geosciences and portfolio insurance. Thus, readers from a range of professional fields will benefit from its content.