
    Asymmetric Error Correction and Flash-Memory Rewriting using Polar Codes

    We propose efficient coding schemes for two communication settings: (1) asymmetric channels, and (2) channels with an informed encoder. These settings are important in non-volatile memories, as well as in optical and broadcast communication. The schemes are based on non-linear polar codes, and they build on and improve recent work on these settings. For asymmetric channels, we tackle the exponential storage requirement of previously known schemes, which resulted from the use of large Boolean functions. We propose an improved scheme that achieves the capacity of asymmetric channels with polynomial computational complexity and storage requirement. The proposed non-linear scheme is then generalized to the setting of channel coding with an informed encoder using a multicoding technique. We consider specific instances of the scheme for flash memories that incorporate error-correction capabilities together with rewriting. Since the considered codes are non-linear, they eliminate the requirement of previously known schemes (called polar write-once-memory codes) for shared randomness between the encoder and the decoder. Finally, we note that the multicoding scheme is also useful for broadcast communication in Marton's region, improving upon previous schemes for this setting.
    Comment: Submitted to IEEE Transactions on Information Theory. Partially presented at ISIT 201
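    For readers unfamiliar with polar codes, below is a minimal Python sketch of the standard Arıkan polar transform x = u F^(⊗n), with F = [[1,0],[1,1]], which schemes like this one build on; the paper's non-linear constructions and multicoding layer are not modeled here.

```python
import numpy as np

def polar_transform(u):
    """Arikan polar transform x = u F^(kron n) over GF(2),
    with F = [[1, 0], [1, 1]] (no bit-reversal permutation)."""
    x = np.array(u, dtype=np.uint8) % 2
    n = len(x)
    assert n & (n - 1) == 0, "block length must be a power of two"
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            # butterfly: each upper half is XORed with its lower half
            x[i:i + step] ^= x[i + step:i + 2 * step]
        step *= 2
    return x

# In an actual code, frozen positions of u would be fixed to zero first.
print(polar_transform([1, 0, 1, 1, 0, 0, 1, 0]))
```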

    Physical layer network coding based on compute-and-forward

    In this thesis, Compute-and-Forward is considered, where the system model consists of multiple users and a single base station. Compute-and-Forward is a form of lattice network coding that reduces backhaul load and is therefore an important ingredient of modern wireless communications networks. We first propose an implementation of Construction D lattices in Compute-and-Forward and investigate multilayer lattice encoding and decoding strategies, showing that adopting a Construction D lattice allows a practical lattice decoder to be implemented in Compute-and-Forward. During this implementation of multilayer encoding and decoding we discover an error floor caused by an interaction between code layers in the multilayer decoder; we analyse this interaction and characterize it with mathematical expressions, lemmas, and proofs. Secondly, we demonstrate the BER performance of the system model for unit-valued, integer-valued and complex-integer-valued channels, and show, using the derived interaction expressions, that the decoders on each code layer can indeed decode. BER results are presented for two scenarios: one using zeroth-order and second-order Reed-Muller codes, and one using first-order and third-order Reed-Muller codes. Finally, we extend the system model, with Construction D and existing conventional decoders, to include coefficient-selection algorithms: we employ an exhaustive search algorithm and analyse the throughput performance of the codes for both models. The throughput results show that each layer can be successfully decoded when the interaction expressions are taken into account. The purpose of the performance results is to demonstrate decodability when differing codes are used across layers.
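    As background for the coefficient-selection step mentioned above, the sketch below (in Python; the function names are ours) evaluates the standard Nazer-Gastpar computation rate for a real-valued channel and performs the kind of exhaustive search over small integer coefficient vectors that such algorithms use; the Construction D encoding and multilayer decoding themselves are not modeled.

```python
import itertools
import math
import numpy as np

def computation_rate(h, a, P):
    """Nazer-Gastpar computation rate (real-valued model) for channel
    vector h, integer coefficient vector a, and transmit power P."""
    h, a = np.asarray(h, float), np.asarray(a, float)
    # variance of the effective noise after MMSE scaling at the relay
    denom = a @ a - P * (h @ a) ** 2 / (1.0 + P * (h @ h))
    return max(0.0, 0.5 * math.log2(1.0 / denom))

def best_coefficients(h, P, amax=3):
    """Exhaustive search over non-zero integer vectors with entries in
    [-amax, amax], as in a simple coefficient-selection algorithm."""
    best_a, best_r = None, -1.0
    for a in itertools.product(range(-amax, amax + 1), repeat=len(h)):
        if any(a):
            r = computation_rate(h, a, P)
            if r > best_r:
                best_a, best_r = a, r
    return best_a, best_r

print(best_coefficients([1.0, 0.97], P=10.0))  # best equation here is a = +/-(1, 1)
```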

    Improved Diagnostics & Performance for Quantum Error Correction

    Building large scale quantum computers is one of the most exciting ventures being pursued by researchers in the 21st century. However, the presence of noise in quantum systems poses a major hindrance towards this ambitious goal. Unlike the developmental history of classical computers, where noise levels were brought under reasonable threshold levels early on, the field of quantum computing is struggling to do the same. Nonetheless, there have been many significant theoretical and experimental advancements in the past decade. Quantum error correction, and fault tolerance in general, is believed to be a reliable long-term strategy to mitigate noise and perform arbitrarily long quantum computations. Optimizing and assessing the quality of components in a fault-tolerance scheme are crucial tasks, which we address in this thesis.

    In the first part of the thesis, we provide a method to efficiently estimate the performance of a large class of codes called concatenated stabilizer codes. We show how noise-tailoring techniques developed for computations at the physical level can be applied to circuits protected by quantum error correction to enable this estimation. We also develop a metric called the logical estimator, an approximation of the logical infidelity of the code, and show that it can guide the selection of the optimal (concatenated stabilizer) code and the optimal (lookup-style) decoder for a given device. Moreover, the metric aids in estimating the resource requirements for a target logical error rate efficiently and reliably.

    In the second part, we show how a combination of noise-tailoring tools with quantum error correction can improve the performance of concatenated stabilizer codes by several orders of magnitude. These gains in turn bring down the resource overheads for quantum error correction. We explore the gains using the concatenated Steane code under a wide variety of physically motivated error models, including arbitrary rotations and combinations of coherent and stochastic noise, and study how the gains vary with the number of levels of concatenation. For the simple case of rotations about a Pauli axis, we show that the gain scales doubly exponentially with the number of levels in the code, and we demonstrate the existence of threshold rotation angles below which the gains can be magnified arbitrarily by increasing the number of levels.

    The last part of the thesis explores the testing of an important property of error-correcting codes: the minimum distance, often referred to simply as the distance. We operate in the regime of large classical binary linear codes described by their parity-check matrices, to which we are given oracle access: supplied with an index, the oracle returns the corresponding column of the parity-check matrix. We derive lower and upper bounds on the query complexity of finding the minimum distance of a given code. We also ask, and partially answer, the same question in the property-testing framework: we provide a tester that queries a sub-linear number of columns of the parity-check matrix and certifies whether a code has high distance or is far from all codes of high distance, along with non-trivial lower bounds for this task. Although this study is done for classical linear codes, it has implications for designing quantum codes built from classical codes. This part of the thesis opens a significant area of interest: efficiently testing important properties of classical and quantum codes.
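    To make the lookup-style decoding mentioned above concrete, here is a toy Python sketch for the 3-qubit bit-flip code (our own minimal example, far simpler than the concatenated codes studied in the thesis): the decoder precomputes, for every syndrome, the lowest-weight error consistent with it.

```python
import numpy as np

# Z-type parity checks of the 3-qubit bit-flip code: Z1Z2 and Z2Z3
H = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=np.uint8)

def build_lookup(H):
    """Map each syndrome to the lowest-weight X-error producing it."""
    m, n = H.shape
    table = {}
    # enumerate errors in order of increasing Hamming weight, so the
    # first error stored for each syndrome is the most likely one
    for e in sorted(range(2 ** n), key=lambda v: bin(v).count("1")):
        err = np.array([(e >> i) & 1 for i in range(n)], dtype=np.uint8)
        table.setdefault(tuple(H @ err % 2), err)
    return table

table = build_lookup(H)
error = np.array([0, 1, 0], dtype=np.uint8)   # X error on the middle qubit
syndrome = tuple(H @ error % 2)
print(table[syndrome])                        # -> [0 1 0], correction succeeds
```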

    General quantum algorithms for Hamiltonian simulation with applications to a non-Abelian lattice gauge theory

    With a focus on universal quantum computing for quantum simulation, and through the example of lattice gauge theories, we introduce rather general quantum algorithms that can efficiently simulate certain classes of interactions consisting of correlated changes in multiple (bosonic and fermionic) quantum numbers with non-trivial functional coefficients. In particular, we analyze diagonalization of Hamiltonian terms using a singular-value decomposition technique, and discuss how the resulting diagonal unitaries in the digitized time-evolution operator can be implemented. The lattice gauge theory studied is the SU(2) gauge theory in 1+1 dimensions coupled to one flavor of staggered fermions, for which a complete quantum-resource analysis within different computational models is presented. The algorithms are shown to be applicable to higher-dimensional theories as well as to other Abelian and non-Abelian gauge theories. The example chosen further demonstrates the importance of adopting efficient theoretical formulations: it is shown that an explicitly gauge-invariant formulation using loop, string, and hadron (LSH) degrees of freedom simplifies the algorithms and lowers the cost compared with the standard formulations based on angular-momentum as well as Schwinger-boson degrees of freedom. The LSH formulation further retains the non-Abelian gauge symmetry despite the inexactness of the digitized simulation, without the need for costly controlled operations. Such theoretical and algorithmic considerations are likely to be essential in quantum simulating other complex theories of relevance to nature.
    Comment: 59+17+7 pages, 16 figures
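    The digitized time-evolution operator referred to above is, at its simplest, a product formula. The toy Python sketch below (a two-qubit Hamiltonian of our own choosing, not the SU(2) LSH Hamiltonian of the paper) shows the first-order Trotterization error shrinking as the number of steps grows.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

A = np.kron(Z, Z)            # a diagonal term (mass/electric-like)
B = np.kron(X, np.eye(2))    # an off-diagonal term (hopping-like)

def trotter(t, steps):
    """First-order product formula: (e^{-iA dt} e^{-iB dt})^steps."""
    dt = t / steps
    step = expm(-1j * A * dt) @ expm(-1j * B * dt)
    return np.linalg.matrix_power(step, steps)

exact = expm(-1j * (A + B) * 1.0)
for steps in (1, 10, 100):
    err = np.linalg.norm(exact - trotter(1.0, steps))
    print(f"{steps:4d} steps: ||U_exact - U_trotter|| = {err:.2e}")
```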

    Trellis Decoding And Applications For Quantum Error Correction

    Compact graphical representations of error-correcting codes called trellises are a crucial tool in classical coding theory, establishing both theoretical properties and performance metrics for practical use. The idea was extended to quantum error-correcting codes by Ollivier and Tillich in 2005. Here, we use their foundation to establish a practical decoder able to compute the most likely error for any stabilizer code over a finite field of prime dimension. We define a canonical form for the stabilizer group and use it to classify the internal structure of the graph. Similarities and differences between the classical and quantum theories are discussed throughout. Numerical results are presented which match or outperform current state-of-the-art decoding techniques. New construction techniques for large trellises are developed and practical implementations discussed. We then define a dual trellis and use algebraic graph theory to solve the most-likely-coset problem for any stabilizer code over a finite field of prime dimension at minimum added cost. Classical trellis theory makes occasional theoretical use of a graph product called the trellis product. We establish the relationship between the trellis product and the standard graph products and use it to provide a closed-form expression for the resulting graph, allowing it to be used in practice. We explore its properties and classify all idempotents. The special structure of the trellis allows us to present a factorization procedure for the product, which is much simpler than that of the standard products. Finally, we turn to an algorithmic study of the trellis and explore what coding-theoretic information can be extracted assuming no other information about the code is available. In the process, we present a state-of-the-art algorithm for computing the minimum distance of any stabilizer code over a finite field of prime dimension. We also define a new weight enumerator for stabilizer codes over F_2 that incorporates the phases of each stabilizer, and provide a trellis-based algorithm to compute it.
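    For readers unfamiliar with trellis decoding, the Python sketch below illustrates the classical starting point that the quantum construction generalizes: Viterbi decoding on the syndrome (Wolf) trellis of a small binary block code, where states at depth i are prefix syndromes and paths ending in the zero state are codewords. The stabilizer-code decoder of the thesis is not reproduced here.

```python
import numpy as np

def trellis_ml_decode(H, r):
    """Minimum-distance decoding of a binary linear code on its syndrome
    (Wolf) trellis: survivor paths are extended column by column."""
    m, n = H.shape
    cost = {(0,) * m: 0}          # best Hamming distance to each state
    path = {(0,) * m: []}         # surviving bit sequence for each state
    for i in range(n):
        col = [int(v) for v in H[:, i]]
        new_cost, new_path = {}, {}
        for s, c in cost.items():
            for b in (0, 1):
                t = tuple((s[j] + b * col[j]) % 2 for j in range(m))
                c2 = c + (b != r[i])
                if t not in new_cost or c2 < new_cost[t]:
                    new_cost[t], new_path[t] = c2, path[s] + [b]
        cost, path = new_cost, new_path
    return path[(0,) * m]         # best path ending in the zero syndrome

# [7,4] Hamming code; received word is the codeword 1110000 with bit 3 flipped
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
print(trellis_ml_decode(H, [1, 1, 0, 0, 0, 0, 0]))   # -> [1, 1, 1, 0, 0, 0, 0]
```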

    Anticodes and error-correcting codes for digital data transmission

    The work reported in this thesis is an investigation in the field of error-control coding. This subject is concerned with increasing the reliability of digital data transmission through a noisy medium by coding the transmitted data. In this respect, an extension and development of a method for finding optimum and near-optimum codes, using N × m digital arrays known as anticodes, is established and described. The anticodes, which have properties opposite to those of their complementary error-control codes, are removed from the original maximal-length code, known as the parent anticode, to leave good linear block codes. The mathematical analysis of the parent anticode, and consequently of its related anticodes, has given useful insight into the construction of a large number of optimum and near-optimum anticodes, resulting in a correspondingly large number of optimum and near-optimum codes. This work has been devoted to the construction of anticodes from unit basic (small-dimension) anticodes by means of various systematic construction and refinement techniques that simplify the construction of the associated linear block codes over a wide range of parameters. An extensive list of these anticodes and codes is given in the thesis. The work has also been extended to the construction of anticodes whose symbols are chosen from the elements of the finite field GF(q); in particular, a large number of optimum and near-optimum codes over GF(3) have been found. This generalizes the concept of anticodes to multilevel codes.
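    The deletion construction described above is easy to state computationally. The Python sketch below (a small GF(2) example with helper names of our own) builds the parent maximal-length (simplex) code, removes the columns of a chosen anticode, and checks the minimum distance of the residual linear block code by brute force.

```python
import itertools
import numpy as np

def simplex_generator(k):
    """Parent [2^k - 1, k] simplex code: columns are all nonzero k-tuples."""
    cols = [c for c in itertools.product((0, 1), repeat=k) if any(c)]
    return np.array(cols, dtype=np.uint8).T

def delete_anticode(G, anticode_cols):
    """Remove the anticode's columns, leaving the residual code."""
    keep = [i for i in range(G.shape[1]) if i not in set(anticode_cols)]
    return G[:, keep]

def min_distance(G):
    """Minimum weight over nonzero codewords (brute force; small k only)."""
    k, n = G.shape
    weights = [int(((np.array(m, np.uint8) @ G) % 2).sum())
               for m in itertools.product((0, 1), repeat=k) if any(m)]
    return min(weights)

G = simplex_generator(4)              # [15, 4, 8] parent code
G2 = delete_anticode(G, [0, 1, 2])    # delete a 3-column anticode
print(G2.shape[1], min_distance(G2))  # residual [12, 4] code and its distance
```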

    Design of TCH-type sequences for communications

    This thesis deals with the design of a class of cyclic codes inspired by TCH codewords. Since TCH codes are linked to finite fields, the fundamental concepts and facts of abstract algebra, namely group theory and number theory, constitute the first part of the thesis. By exploring group geometric properties and identifying an equivalence between some operations on codes and the symmetries of the dihedral group, we were able to simplify the generation of codewords, thus saving on the necessary number of computations. Moreover, we also present an algebraic method to obtain binary generalized TCH codewords of length N = 2^k, k = 1, 2, ..., 16. By exploring properties of Zech logarithms as well as a group-theoretic isomorphism, we developed a method that is both faster and less complex than what was proposed before. In addition, it is valid for all relevant cases relating to the codeword length N, and not only those resulting from N = p
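    Since the method above turns on Zech logarithms, a short self-contained Python sketch of how they are tabulated may help: over GF(2^m) with primitive element α, Z(n) is defined by 1 + α^n = α^Z(n), so adding 1 in the field becomes pure index arithmetic. The helper names and the choice of GF(16) are ours, not the thesis's.

```python
def gf2m_exp_log(m, prim_poly):
    """exp/log tables for GF(2^m); prim_poly is a bit mask,
    e.g. 0b10011 encodes x^4 + x + 1."""
    size = (1 << m) - 1
    exp, x = [0] * size, 1
    for i in range(size):
        exp[i] = x
        x <<= 1
        if x & (1 << m):          # reduce modulo the primitive polynomial
            x ^= prim_poly
    return exp, {v: i for i, v in enumerate(exp)}

def zech_table(m, prim_poly):
    """Z(n) with alpha^Z(n) = 1 + alpha^n; undefined at n = 0 over GF(2^m)."""
    exp, log = gf2m_exp_log(m, prim_poly)
    return {n: log[1 ^ exp[n]] for n in range(len(exp)) if 1 ^ exp[n]}

Z = zech_table(4, 0b10011)   # GF(16) with x^4 + x + 1
print(Z[1])                  # -> 4, since 1 + alpha = alpha^4 here
```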

    Time diversity solutions to cope with lost packets

    A dissertation submitted to the Departamento de Engenharia Electrotécnica of the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engenharia Electrotécnica e de Computadores.

    Modern broadband wireless systems require high throughputs and can also have very high Quality-of-Service (QoS) requirements, namely small error rates and short delays. A high spectral efficiency is needed to meet these requirements. Lost packets, whether due to errors or collisions, are usually discarded and need to be retransmitted, leading to performance degradation. An alternative to simple retransmission that can improve both power and spectral efficiency is to combine the signals associated with different transmission attempts. This thesis analyses two time-diversity approaches to cope with lost packets that are relatively similar at the physical layer but handle different packet-loss causes. The first is a low-complexity Diversity-Combining (DC) Automatic Repeat reQuest (ARQ) scheme employed in a Time Division Multiple Access (TDMA) architecture, adapted for channels dedicated to a single user. The second is a Network-assisted Diversity Multiple Access (NDMA) scheme, a multi-packet detection approach able to separate multiple mobile terminals transmitting simultaneously in one slot using temporal diversity. This thesis combines these techniques with Single-Carrier with Frequency-Domain Equalization (SC-FDE) systems, which are widely recognized as the best candidates for the uplink of future broadband wireless systems. It proposes a new NDMA scheme capable of handling more Mobile Terminals (MTs) than the user-separation capacity of the receiver. This thesis also proposes a set of analytical tools that can be used to analyse and optimize the use of these two systems. These tools are then employed to compare both approaches in terms of error rate, throughput and delay performance, taking implementation complexity into consideration. Finally, it is shown that both approaches represent viable solutions for future broadband wireless communications, complementing each other.

    Funding: Fundação para a Ciência e Tecnologia - PhD grant (SFRH/BD/41515/2007); CTS multi-annual funding project PEst-OE/EEI/UI0066/2011; IT pluri-annual funding project PEst-OE/EEI/LA0008/2011; U-BOAT project PTDC/EEA-TEL/67066/2006; MPSat project PTDC/EEA-TEL/099074/2008; and OPPORTUNISTIC-CR project PTDC/EEA-TEL/115981/200
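    The diversity-combining idea at the heart of the first approach can be shown numerically in a few lines. The Python sketch below (uncoded BPSK in AWGN, with illustrative parameters of our choosing; the SC-FDE receiver and NDMA user separation are not modeled) accumulates the soft values (LLRs) of successive transmission attempts instead of discarding failed packets, and the bit error rate falls with each combined attempt.

```python
import numpy as np

rng = np.random.default_rng(1)

def bpsk_llr(y, noise_var):
    """LLR of BPSK in AWGN (+1 encodes bit 0, -1 encodes bit 1)."""
    return 2.0 * y / noise_var

bits = rng.integers(0, 2, 100000)
tx = 1.0 - 2.0 * bits
noise_var = 2.0                       # deliberately poor SNR

llr_sum = np.zeros_like(tx)
for attempt in range(1, 4):
    rx = tx + rng.normal(0.0, np.sqrt(noise_var), tx.shape)
    llr_sum += bpsk_llr(rx, noise_var)        # combine with earlier attempts
    ber = np.mean((llr_sum < 0) != bits)
    print(f"attempt {attempt}: combined BER = {ber:.4f}")
```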

    A New Algorithm for Compression of Seismic Data with High Amplitude Resolution

    Renewable sources cannot meet the energy demand of a growing global market; it is therefore expected that oil & gas will remain a substantial source of energy in the coming years. To find new oil & gas deposits that would satisfy growing global energy demands, significant effort is constantly invested in increasing the efficiency of seismic surveys. It is commonly considered that, in the initial phase of exploration and production of a new field, high-resolution and high-quality images of the subsurface are of great importance. As one part of the seismic data processing chain, efficient management and delivery of the large data sets produced by the industry during seismic surveys becomes extremely important in order to facilitate further seismic data processing and interpretation. In this respect, efficiency to a large extent relies on the efficiency of the compression scheme, which is often required to enable faster transfer of and access to data, as well as efficient data storage. Motivated by the superior performance of High Efficiency Video Coding (HEVC), and driven by the rapid growth in the data volume produced by seismic surveys, this work explores a 32 bits-per-pixel (b/p) extension of the HEVC codec for compression of seismic data. It is proposed to reassemble seismic slices in a format that corresponds to a video signal and benefit from the coding gain achieved by the HEVC inter mode, besides the possible advantages of the (still-image) HEVC intra mode. To this end, this work modifies almost all components of the original HEVC codec to cater for high bit-depth coding of seismic data: the Lagrange multiplier used in optimization of the coding parameters has been adapted to the new data statistics, the core transform and quantization have been reimplemented to handle the increased bit-depth range, and a modified adaptive binary arithmetic coder has been employed for efficient entropy coding. In addition, optimized block selection, reduced intra prediction modes, and flexible motion estimation are tested to adapt to the structure of seismic data. Even though the new codec, after implementation of the proposed modifications, goes beyond the standardized HEVC, it still maintains a generic HEVC structure and is developed under the general HEVC framework. No prior work in seismic data compression uses HEVC as the base codec; the codec design tailored here, when compared to JPEG-XR and a commercial wavelet-based codec, significantly improves the peak signal-to-noise ratio (PSNR) vs. compression ratio performance for 32 b/p seismic data. Depending on the proposed configuration, the PSNR gain ranges from 3.39 dB up to 9.48 dB. Also, relying on the specific characteristics of seismic data, an optimized encoder is proposed in this work; it reduces encoding time by 67.17% for the All-I configuration on the trace-image dataset, and by 67.39% for All-I, 97.96% for the P2 configuration and 98.64% for the B configuration on the 3D wavefield dataset, with negligible coding performance losses. As a side contribution of this work, HEVC is analyzed across all of its functional units, so that the presented work itself can serve as a focused overview of the methods incorporated into the standard.
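    For reference, the fidelity metric used in the comparisons above is conventional PSNR evaluated with a 32-bit peak. A minimal Python sketch follows, assuming unsigned 32-bit integer samples (an assumption of this illustration; the codec itself is not reproduced here).

```python
import numpy as np

def psnr_32bpp(original, reconstructed):
    """PSNR in dB for 32 bits-per-pixel data: peak value 2^32 - 1."""
    err = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(err ** 2)
    if mse == 0.0:
        return float("inf")
    peak = 2.0 ** 32 - 1.0
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
a = rng.integers(0, 2 ** 32, (64, 64), dtype=np.uint64)          # "original"
b = np.minimum(a + rng.integers(0, 1000, a.shape, dtype=np.uint64),
               np.uint64(2 ** 32 - 1))                           # "decoded"
print(f"PSNR = {psnr_32bpp(a, b):.2f} dB")
```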