
    Spatial Statistical Data Fusion on Java-enabled Machines in Ubiquitous Sensor Networks

    Wireless Sensor Networks (WSN) consist of small, cheap devices that combine sensing, computing and communication capabilities. They must be able to communicate and process data efficiently using a minimum amount of energy, and to cover an area of interest with the minimum number of sensors. This thesis proposes the use of techniques originally designed for geostatistics and applies them to the WSN field. Kriging and cokriging interpolation, which can be considered information fusion algorithms, were tested to demonstrate the feasibility of these methods for increasing coverage. To reduce energy consumption, a compression method that models correlations based on variograms was developed. A second challenge is to establish communication with external networks and to react to unexpected events. A demonstrator that uses commercial Java-enabled devices was implemented. It is able to perform remote monitoring, send SMS alarms and deploy remote updates.
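
    To make the interpolation step concrete, here is a minimal ordinary-kriging sketch in Python (NumPy only). The exponential variogram model, its parameters and the sensor readings are illustrative assumptions, not values taken from the thesis.

    import numpy as np

    def exp_variogram(h, sill=1.0, rng=2.0):
        # Assumed exponential variogram model: gamma(h) = sill * (1 - exp(-h / rng)).
        return sill * (1.0 - np.exp(-h / rng))

    def ordinary_kriging(coords, values, target):
        # Solve the ordinary kriging system [[Gamma, 1], [1, 0]] [w; mu] = [g; 1],
        # then estimate the field at `target` as the weighted sum of readings.
        n = len(coords)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = exp_variogram(d)
        A[n, n] = 0.0
        b = np.ones(n + 1)
        b[:n] = exp_variogram(np.linalg.norm(coords - target, axis=-1))
        w = np.linalg.solve(A, b)      # kriging weights plus Lagrange multiplier
        return w[:n] @ values

    # Hypothetical sensor grid: four readings, interpolated at the centre point.
    coords = np.array([[0.0, 0.0], [0.0, 2.0], [2.0, 0.0], [2.0, 2.0]])
    values = np.array([1.0, 2.0, 1.5, 2.5])
    print(ordinary_kriging(coords, values, np.array([1.0, 1.0])))

    Kriging in this form lets a sink estimate the field at unsampled positions, which is how interpolation can stand in for physically denser sensor coverage.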

    Efficient Information Reconciliation for Quantum Key Distribution = Reconciliación eficiente de información para la distribución cuántica de claves

    Advances in modern cryptography for secret-key agreement are driving the development of new methods and techniques in key distillation. Most of these developments, focusing on information reconciliation and privacy amplification, are of direct benefit to quantum key distribution (QKD). In this context, information reconciliation has historically been done using heavily interactive protocols, i.e. with a high number of channel communications, such as the well-known Cascade. In this work we show how modern coding techniques can improve the performance of these methods for information reconciliation in QKD. We propose the use of low-density parity-check (LDPC) codes, since they perform well in both efficiency and throughput. An a priori price to pay for using LDPC codes is that good efficiency is only attained for very long codes and in a very narrow range of error rates. This forces the use of several codes in cases where the error rate varies significantly across different uses of the channel, a common situation in QKD, for instance. To overcome these problems, this study examines various techniques for adapting LDPC codes, thus reducing the number of codes needed to cover the target range of error rates. These techniques are also used to improve the average efficiency of short-length LDPC codes based on a feedback coding scheme. The importance of short codes lies in the fact that they can be used in high-throughput hardware implementations. In a further advancement, a protocol is proposed that avoids the a priori error rate estimation required in other approaches. This blind protocol also has interesting implications for the finite-key analysis.
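
    The syndrome-based reconciliation step that LDPC codes enable can be sketched as follows. This is a toy Python illustration under stated assumptions: a tiny hypothetical parity-check matrix and brute-force syndrome decoding standing in for a real belief-propagation decoder over a long code.

    import itertools
    import numpy as np

    # Toy parity-check matrix (hypothetical; real QKD systems use long LDPC codes).
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])

    def syndrome(H, x):
        return (H @ x) % 2

    def reconcile(H, bob_key, alice_syndrome, max_errors=2):
        # Brute-force syndrome decoding: find the lowest-weight error pattern
        # that maps Bob's syndrome onto Alice's. (Stand-in for BP decoding.)
        n = H.shape[1]
        delta = (syndrome(H, bob_key) + alice_syndrome) % 2
        for w in range(max_errors + 1):
            for pos in itertools.combinations(range(n), w):
                e = np.zeros(n, dtype=int)
                e[list(pos)] = 1
                if np.array_equal(syndrome(H, e), delta):
                    return (bob_key + e) % 2
        return None

    alice = np.array([1, 0, 1, 1, 0, 1])
    bob = alice.copy(); bob[2] ^= 1           # one channel error
    fixed = reconcile(H, bob, syndrome(H, alice))
    print(np.array_equal(fixed, alice))       # True

    Alice reveals only the syndrome of her key; the information leaked this way is later removed during privacy amplification.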

    Capacity-Achieving Coding Mechanisms: Spatial Coupling and Group Symmetries

    The broad theme of this work is constructing optimal transmission mechanisms for a wide variety of communication systems. In particular, this dissertation provides a proof of threshold saturation for spatially-coupled codes, low-complexity capacity-achieving coding schemes for side-information problems, a proof that Reed-Muller and primitive narrow-sense BCH codes achieve capacity on erasure channels, and a mathematical framework for designing delay-sensitive communication systems. Spatially-coupled codes are a class of codes on graphs that have been shown to achieve capacity universally over binary memoryless symmetric (BMS) channels under belief-propagation decoding. The underlying phenomenon behind spatial coupling, known as “threshold saturation via spatial coupling”, turns out to be general, and this technique has been applied to a wide variety of systems. In this work, a proof of the threshold saturation phenomenon is provided for irregular low-density parity-check (LDPC) and low-density generator-matrix (LDGM) ensembles on BMS channels. This proof is far simpler than published alternative proofs and remains the only technique that handles irregular and LDGM codes. Also, low-complexity capacity-achieving codes are constructed for three coding problems via spatial coupling: 1) rate distortion with side information, 2) channel coding with side information, and 3) write-once memory systems. All these schemes are based on spatially coupling compound LDGM/LDPC ensembles. Reed-Muller and Bose-Chaudhuri-Hocquenghem (BCH) codes are well-known algebraic codes introduced more than 50 years ago. While these codes have been studied extensively in the literature, it was not known whether they achieve capacity. This work introduces a technique to show that Reed-Muller and primitive narrow-sense BCH codes achieve capacity on erasure channels under maximum a posteriori (MAP) decoding. Instead of relying on the weight enumerators or other precise details of these codes, this technique requires only that the codes have highly symmetric permutation groups. In fact, any sequence of linear codes with increasing blocklengths, whose rates converge to a number between 0 and 1 and whose permutation groups are doubly transitive, achieves capacity on erasure channels under bit-MAP decoding. This provides a rare example in information theory where symmetry alone is sufficient to achieve capacity. While the channel capacity provides a useful benchmark for practical design, today's communication systems also demand small latency and other link-layer metrics. Such delay-sensitive communication systems are studied in this work, and a mathematical framework is developed to provide insights into their optimal design.
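
    The belief-propagation threshold that spatial coupling "saturates" can be illustrated with the classical density-evolution recursion for a (dv, dc)-regular LDPC ensemble on the binary erasure channel. This Python sketch is the standard textbook special case, not the dissertation's BMS-channel proof.

    def de_threshold(dv, dc, iters=2000, tol=1e-12):
        # Binary search for the largest erasure probability eps such that the
        # density-evolution recursion x -> eps * (1 - (1 - x)**(dc-1))**(dv-1)
        # converges to zero, i.e. belief-propagation decoding succeeds.
        lo, hi = 0.0, 1.0
        for _ in range(50):
            eps = (lo + hi) / 2
            x = eps
            for _ in range(iters):
                x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
                if x < tol:
                    break
            lo, hi = (eps, hi) if x < tol else (lo, eps)
        return lo

    print(de_threshold(3, 6))   # approx. 0.4294 for the rate-1/2 (3,6) ensemble

    Threshold saturation says that spatially coupling this ensemble moves the BP threshold up to the MAP threshold (about 0.4881 for the (3,6) ensemble), approaching the capacity limit of 0.5 for rate 1/2.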

    Noise in Quantum Information Processing

    Quantum phenomena such as superposition and entanglement imbue quantum systems with information processing power exceeding that of their classical counterparts. These properties of quantum states are, however, highly fragile. As we enter the era of noisy intermediate-scale quantum (NISQ) devices, this vulnerability to noise is a major hurdle to the experimental realisation of quantum technologies. In this thesis we explore the role of noise in quantum information processing from two different perspectives. In Part I we consider noise from the perspective of quantum error correcting codes. Error correcting codes are often analysed with respect to simplified toy models of noise, such as iid depolarising noise. We consider generalising these techniques to analyse codes under more realistic noise models, including features such as biased or correlated errors. We also consider designing customised codes which take into account and exploit features of the underlying physical noise. Such tailored codes will be of particular importance for NISQ applications, in which finite-size effects can be significant. In Part II we apply tools from information theory to study the finite-resource effects which arise in the trade-offs between resource costs and error rates for certain quantum information processing tasks. We start by considering classical communication over quantum channels, providing a refined analysis of the trade-off between communication rate and error in the regime of a finite number of channel uses. We then extend these techniques to the problem of resource interconversion in theories such as quantum entanglement and quantum thermodynamics, studying finite-size effects which arise in resource-error trade-offs. By studying this effect in detail, we also show how detrimental finite-size effects in devices such as thermal engines may be greatly suppressed by carefully engineering the underlying resource interconversion processes.
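
    As a toy illustration of why tailoring codes to the noise matters, the following Python sketch estimates, by Monte Carlo, the logical failure rate of the 3-qubit bit-flip repetition code under iid biased Pauli noise. The bias parameterisation is an assumption for illustration, not the thesis's model.

    import random

    def logical_failure_rate(p, eta, shots=100_000):
        # iid biased Pauli noise: each qubit suffers X with prob. p*eta/(eta+1)
        # and Z with prob. p/(eta+1); eta is the bias towards bit flips.
        px, pz = p * eta / (eta + 1), p / (eta + 1)
        fails = 0
        for _ in range(shots):
            xs = sum(random.random() < px for _ in range(3))
            zs = sum(random.random() < pz for _ in range(3))
            # Majority vote fails for >= 2 bit flips; an odd number of Z errors
            # is an undetectable logical phase flip on this code.
            if xs >= 2 or zs % 2 == 1:
                fails += 1
        return fails / shots

    for eta in (1, 10, 100):
        print(eta, logical_failure_rate(0.01, eta))

    As the bias eta grows, the uncorrected Z errors become rarer and the code's protection against the dominant X errors pays off, which is the intuition behind noise-tailored codes.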

    Codes on Graphs and More

    Modern communication systems strive to achieve reliable and efficient information transmission and storage with affordable complexity. Hence, efficient low-complexity channel codes providing low probabilities of erroneous reception are needed. Interpreting codes as graphs and graphs as codes opens new perspectives for constructing such channel codes. Low-density parity-check (LDPC) codes are one of the most recent examples of codes defined on graphs, providing a better bit error probability than other block codes of the same decoding complexity. After an introduction to coding theory, different graphical representations for channel codes are reviewed. Based on ideas from graph theory, new algorithms are introduced to iteratively search for LDPC block codes with large girth and to determine their minimum distance. In particular, new LDPC block codes of different rates and with girth up to 24 are presented. Woven convolutional codes are introduced as a generalization of graph-based codes, and an asymptotic bound on their free distance, namely the Costello lower bound, is proven. Moreover, promising examples of woven convolutional codes are given, including a rate 5/20 code with overall constraint length 67 and free distance 120. The remaining part of this dissertation focuses on basic properties of convolutional codes. First, a recurrent equation to determine a closed form expression of the exact decoding bit error probability for convolutional codes is presented. The obtained closed form expression is evaluated for various realizations of encoders, including rate 1/2 and 2/3 encoders, with as many as 16 states. Moreover, MacWilliams-type identities are revisited and a recursion for sequences of spectra of truncated as well as tailbitten convolutional codes and their duals is derived. Finally, the dissertation is concluded with exhaustive searches for convolutional codes of various rates with either optimum free distance or optimum distance profile, extending previously published results.
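
    As a sketch of the graph-theoretic side, the following Python function computes the girth of a Tanner graph by breadth-first search from every variable node. The small parity-check matrix at the end is a hypothetical example, not one of the codes found in the dissertation.

    from collections import deque

    def tanner_girth(H):
        # Girth of the Tanner graph of a parity-check matrix H (list of rows).
        # Variable nodes are 0..n-1, check nodes are n..n+m-1; every cycle
        # alternates between the two sides, so BFS from variable nodes suffices.
        m, n = len(H), len(H[0])
        adj = [[] for _ in range(n + m)]
        for i in range(m):
            for j in range(n):
                if H[i][j]:
                    adj[j].append(n + i)
                    adj[n + i].append(j)
        girth = float('inf')
        for src in range(n):
            dist, parent = {src: 0}, {src: -1}
            q = deque([src])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v], parent[v] = dist[u] + 1, u
                        q.append(v)
                    elif parent[u] != v:
                        # A non-tree edge closes a cycle through (or near) src;
                        # the minimum over all sources is the exact girth.
                        girth = min(girth, dist[u] + dist[v] + 1)
        return girth

    # Hypothetical 3x6 parity-check matrix: no two checks share two variables,
    # so there is no 4-cycle and the girth is 6.
    H = [[1, 1, 0, 1, 0, 0],
         [0, 1, 1, 0, 1, 0],
         [1, 0, 1, 0, 0, 1]]
    print(tanner_girth(H))   # 6

    Because Tanner graphs are bipartite, all reported girths are even; search-based constructions such as those in the dissertation aim to push this value as high as possible.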

    The Telecommunications and Data Acquisition Report

    Tracking and ground-based navigation techniques are discussed in relation to Deep Space Network (DSN) advanced systems. Network data processing and productivity are studied to improve management planning methods. Project activities for upgrading DSN facilities are presented.

    Fault-tolerant quantum computer architectures using hierarchies of quantum error-correcting codes

    Thesis (Ph.D.) by Andrew W. Cross, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 221-238). Quantum computers have been shown to efficiently solve a class of problems for which no efficient solution is otherwise known. Physical systems can implement quantum computation, but devising realistic schemes is an extremely challenging problem, largely due to the effect of noise. A quantum computer that is capable of correctly solving problems more rapidly than modern digital computers requires some use of so-called fault-tolerant components. Code-based fault-tolerance using quantum error-correcting codes is one of the most promising and versatile of the known routes to fault-tolerant quantum computation. This dissertation presents three main new results about code-based fault-tolerant quantum computer architectures. The first result is a large new family of quantum codes that go beyond stabilizer codes, the best-studied family of quantum codes. Our new family of codeword stabilized codes contains all known codes with optimal parameters. Furthermore, we show how to systematically find, construct, and understand such codes as a pair of codes: an additive quantum code and a classical (nonlinear) code. Second, we resolve an open question about the universality of so-called transversal gates acting on stabilizer codes. Such gates are universal for classical fault-tolerant computation, but they were conjectured to be insufficient for universal fault-tolerant quantum computation. We show that transversal gates have a restricted form and prove that some important families of them cannot be quantum universal. This is strong evidence that so-called quantum software is necessary to achieve universality, and, therefore, that fault-tolerant quantum computer architecture is fundamentally different from classical computer architecture. Finally, we partition the fault-tolerant design problem into levels of a hierarchy of concatenated codes and present methods, compatible with rigorous threshold theorems, for numerically evaluating these codes. The methods are applied to measure inner error-correcting code performance, as a first step toward elucidating an effective fault-tolerant quantum computer architecture that uses no more than a physical, inner, and outer level of coding. Of the inner codes, the Golay code gives the highest pseudothreshold of 2 × 10⁻³. A comparison of logical error rate and overhead shows that the Bacon-Shor codes are competitive with Knill's C₄/C₆ scheme at a base error rate of 10⁻⁴.
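
    The level-by-level logic of concatenated coding can be sketched with the standard error-suppression recursion. The constant A below, the effective number of malignant fault pairs, is chosen purely for illustration so that the pseudothreshold 1/A matches the 2 × 10⁻³ figure quoted above.

    def logical_error(p, A, levels):
        # Distance-3 inner code: a level fails when two of its fault locations
        # fail, so the error rate squares (up to the constant A) at each level.
        for _ in range(levels):
            p = A * p * p
        return p

    A = 500.0                    # illustrative; pseudothreshold = 1/A = 2e-3
    for level in range(4):
        print(level, logical_error(1e-3, A, level))

    Below the pseudothreshold the logical error rate falls doubly exponentially in the number of levels, which is what makes a hierarchy of physical, inner and outer codes effective.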

    Understanding Quantum Technologies 2022

    Understanding Quantum Technologies 2022 is a Creative Commons ebook that provides a unique 360-degree overview of quantum technologies, from science and technology to geopolitical and societal issues. It covers quantum physics history, quantum physics 101, gate-based quantum computing, quantum computing engineering (including quantum error correction and quantum computing energetics), quantum computing hardware (all qubit types, including quantum annealing and quantum simulation paradigms: history, science, research, implementation and vendors), quantum enabling technologies (cryogenics, control electronics, photonics, component fabs, raw materials), quantum computing algorithms, software development tools and use cases, unconventional computing (potential alternatives to quantum and classical computing), quantum telecommunications and cryptography, quantum sensing, quantum technologies around the world, the societal impact of quantum technologies, and even quantum fake sciences. The main audience is computer science engineers, developers and IT specialists, as well as quantum scientists and students who want to acquire a global view of how quantum technologies work, particularly quantum computing. This version is an extensive update to the 2021 edition published in October 2021. (1132 pages, 920 figures, Letter format.)

    On Lowering the Error Floor of Short-to-Medium Block Length Irregular Low Density Parity Check Codes

    Gallager proposed and developed low-density parity-check (LDPC) codes in the early 1960s. LDPC codes were rediscovered in the early 1990s and shown to be capacity-approaching over the additive white Gaussian noise (AWGN) channel. Subsequently, density evolution (DE) optimized symbol node degree distributions were used to significantly improve the decoding performance of short-to-medium length irregular LDPC codes. Currently, the short-to-medium length LDPC codes with the lowest error floors are DE-optimized irregular LDPC codes constructed using modifications of the progressive edge growth (PEG) algorithm designed to increase the approximate cycle extrinsic message degree (ACE) in the constructed LDPC code graphs. The aim of the present work is to find efficient means of improving on the error floor performance published for short-to-medium length irregular LDPC codes over AWGN channels. An efficient algorithm for determining the girth and ACE distributions in short-to-medium length LDPC code Tanner graphs is proposed. A cyclic PEG (CPEG) algorithm, which uses an edge connection sequence that yields LDPC codes with improved girth and ACE distributions, is presented. LDPC codes with DE-optimized/'good' degree distributions which have larger minimum distances and stopping distances than previously published for LDPC codes of similar length and rate have been found. It is shown that increasing the minimum distance of LDPC codes lowers their error floor over AWGN channels; however, there are threshold minimum distance values above which the error floor is not lowered further. A minimum local girth, edge-skipping (MLG (ES)) PEG algorithm is presented; the algorithm controls the minimum local girth (global girth) in the Tanner graphs of the constructed LDPC codes by forfeiting some edge connections. A technique for constructing optimal low correlated edge density (OED) LDPC codes based on modified DE-optimized symbol node degree distributions and the MLG (ES) PEG algorithm modification is presented. OED rate-½ (n, k) = (512, 256) LDPC codes have been shown to have a lower error floor over the AWGN channel than previously published for LDPC codes of similar length and rate. Similarly, consequent to an improved symbol node degree distribution, rate-½ (n, k) = (1024, 512) LDPC codes have been shown to have a lower error floor over the AWGN channel than previously published for LDPC codes of similar length and rate. An improved BP/SPA (IBP/SPA) decoder, obtained by making two simple modifications to the standard BP/SPA decoder, has been shown to result in an unprecedented, generalized improvement in the performance of short-to-medium length irregular LDPC codes under iterative message passing decoding. The superiority of the Slepian-Wolf distributed source coding model over other distributed source coding models based on LDPC codes has been shown.
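
    The core step of the PEG construction that the CPEG and MLG (ES) variants modify can be sketched in a few lines of Python. This simplified version (hypothetical parameters, no ACE bookkeeping) connects each new edge to a check node as far as possible from the symbol node in the current graph, which keeps the local girth large.

    from collections import deque

    def peg(n_sym, n_chk, sym_degree):
        # Simplified progressive edge growth: place each edge of a symbol node
        # on an unreached (or deepest) check node, breaking ties by low degree.
        sym_adj = [set() for _ in range(n_sym)]
        chk_adj = [set() for _ in range(n_chk)]
        for s in range(n_sym):
            for _ in range(sym_degree[s]):
                # BFS from s, alternating symbol -> check -> symbol levels.
                depth, seen = {}, {s}
                q = deque([(s, 0)])
                while q:
                    u, d = q.popleft()
                    for c in sym_adj[u]:
                        if c not in depth:
                            depth[c] = d + 1
                            for v in chk_adj[c]:
                                if v not in seen:
                                    seen.add(v)
                                    q.append((v, d + 2))
                cand = [c for c in range(n_chk) if c not in sym_adj[s]]
                unreached = [c for c in cand if c not in depth]
                pool = unreached or [c for c in cand
                                     if depth[c] == max(depth[c2] for c2 in cand)]
                best = min(pool, key=lambda c: len(chk_adj[c]))  # lowest degree
                sym_adj[s].add(best)
                chk_adj[best].add(s)
        return sym_adj

    # Hypothetical regular ensemble: 8 degree-2 symbol nodes, 4 check nodes.
    print([sorted(a) for a in peg(8, 4, [2] * 8)])

    Variants like those in the thesis change the edge connection sequence or skip connections that would close short, low-ACE cycles.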

    Resource optimization for fault-tolerant quantum computing

    In this thesis we examine a variety of techniques for reducing the resources required for fault-tolerant quantum computation. First, we show how to simplify universal encoded computation by using only transversal gates and standard error correction procedures, circumventing existing no-go theorems. We then show how to simplify ancilla preparation, reducing the cost of error correction by more than a factor of four. Using this optimized ancilla preparation, we develop improved techniques for proving rigorous lower bounds on the noise threshold. Additional overhead can be incurred because quantum algorithms must be translated into sequences of gates that are actually available in the quantum computer. In particular, arbitrary single-qubit rotations must be decomposed into a discrete set of fault-tolerant gates. We find that by using a special class of non-deterministic circuits, the cost of decomposition can be reduced by as much as a factor of four over state-of-the-art techniques, which typically use deterministic circuits. Finally, we examine global optimization of fault-tolerant quantum circuits under physical connectivity constraints. We adapt techniques from VLSI in order to minimize time and space usage for computations in the surface code, and we develop a software prototype to demonstrate the potential savings. (231 pages, Ph.D. thesis, University of Waterloo.)
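
    A rough sense of the space overhead targeted by such optimizations can be had from the common heuristic scaling for surface-code logical error rates. The constants below (the prefactor A, the threshold p_th, and the 2d² qubit count per logical qubit) are illustrative assumptions, not figures from the thesis.

    def required_distance(p, target, p_th=1e-2, A=0.1):
        # Smallest odd code distance d with A * (p / p_th)**((d + 1) / 2) <= target,
        # the usual heuristic for surface-code logical error suppression.
        d = 3
        while A * (p / p_th) ** ((d + 1) / 2) > target:
            d += 2
        return d

    d = required_distance(p=1e-3, target=1e-12)
    print(d, 2 * d * d)   # code distance and rough physical qubits per logical qubit

    Constant-factor reductions in error correction cost, like those above, lower the required distance and hence the roughly quadratic physical qubit count.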