
    Serially concatenated unity-rate codes improve quantum codes without coding-rate reduction

    Inspired by the astounding performance of unity-rate code (URC) aided classical coding and detection schemes, we conceive a quantum URC (QURC) for assisting the design of concatenated quantum codes. Unfortunately, a QURC cannot be simultaneously recursive and non-catastrophic. However, we demonstrate that, despite being non-recursive, our proposed QURC yields efficient concatenated codes, which exhibit a low error rate and a beneficial interleaver gain, provided that the coding scheme is carefully designed with the aid of EXtrinsic Information Transfer (EXIT) charts.

    TurboRVB: A many-body toolkit for ab initio electronic simulations by quantum Monte Carlo

    TurboRVB is a computational package for ab initio Quantum Monte Carlo (QMC) simulations of both molecular and bulk electronic systems. The code implements two types of well established QMC algorithms: Variational Monte Carlo (VMC) and diffusion Monte Carlo in its robust and efficient lattice regularized variant. A key feature of the code is the possibility of using strongly correlated many-body wave functions (WFs), capable of describing several materials with very high accuracy, even when standard mean-field approaches [e.g., density functional theory (DFT)] fail. The electronic WF is obtained by applying a Jastrow factor, which takes into account dynamical correlations, to the most general mean-field ground state, written either as an antisymmetrized geminal power with spin-singlet pairing or as a Pfaffian, including both singlet and triplet correlations. This WF can be viewed as an efficient implementation of the so-called resonating valence bond (RVB) Ansatz, first proposed by Pauling and Anderson in quantum chemistry [L. Pauling, The Nature of the Chemical Bond (Cornell University Press, 1960)] and condensed matter physics [P.W. Anderson, Mat. Res. Bull 8, 153 (1973)], respectively. The RVB Ansatz implemented in TurboRVB has a large variational freedom, including the Jastrow correlated Slater determinant as its simplest, but nontrivial case. Moreover, it has the remarkable advantage of remaining with an affordable computational cost, proportional to the one spent for the evaluation of a single Slater determinant. Therefore, its application to large systems is computationally feasible. The WF is expanded in a localized basis set. Several basis set functions are implemented, such as Gaussian, Slater, and mixed types, with no restriction on the choice of their contraction. The code implements the adjoint algorithmic differentiation that enables a very efficient evaluation of energy derivatives, comprising the ionic forces. 
Thus, one can perform structural optimizations and molecular dynamics in the canonical NVT ensemble at the VMC level. For the electronic part, a full WF optimization (Jastrow and antisymmetric parts together) is made possible, thanks to state-of-the-art stochastic algorithms for energy minimization. In the optimization procedure, the first guess can be obtained at the mean-field level by a built-in DFT driver. The code has been efficiently parallelized by using a hybrid MPI-OpenMP protocol, which is also an ideal environment for exploiting the computational power of modern Graphics Processing Unit accelerators.
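
The VMC algorithm mentioned above can be illustrated at toy scale. The sketch below is purely illustrative and unrelated to TurboRVB's actual code and wave functions: it runs Metropolis variational Monte Carlo for a 1D harmonic oscillator with trial wave function psi(x) = exp(-a x^2); the function names and the variational parameter a are assumptions of this example.

```python
import math
import random

def local_energy(x, a):
    # E_L(x) = -(1/2) psi''/psi + x^2/2 for psi(x) = exp(-a x^2)
    return a + x * x * (0.5 - 2.0 * a * a)

def vmc_energy(a, steps=200_000, step_size=1.0, seed=42):
    """Metropolis sampling of |psi|^2, averaging the local energy."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(steps):
        x_new = x + rng.uniform(-step_size, step_size)
        # accept with probability min(1, |psi(x_new)/psi(x)|^2)
        if rng.random() < math.exp(-2.0 * a * (x_new * x_new - x * x)):
            x = x_new
        total += local_energy(x, a)
    return total / steps

print(vmc_energy(0.5))  # exactly 0.5: the trial WF is the true ground state
print(vmc_energy(0.3))  # above 0.5, by the variational principle
```

At a = 1/2 the trial state is exact, so the local energy is constant and the estimator has zero variance: a toy analogue of the zero-variance property that accurate trial wave functions exploit in real QMC codes.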

    Applications of finite geometries to designs and codes

    This dissertation concerns the intersection of three areas of discrete mathematics: finite geometries, design theory, and coding theory. The central theme is the power of finite geometry designs, which are constructed from the points and t-dimensional subspaces of a projective or affine geometry. We use these designs to construct and analyze combinatorial objects which inherit their best properties from these geometric structures. A central question in the study of finite geometry designs is Hamada’s conjecture, which proposes that finite geometry designs are the unique designs with minimum p-rank among all designs with the same parameters. In this dissertation, we will examine several questions related to Hamada’s conjecture, including the existence of counterexamples. We will also study the applicability of certain decoding methods to known counterexamples. We begin by constructing an infinite family of counterexamples to Hamada’s conjecture. These designs are the first infinite class of counterexamples for the affine case of Hamada’s conjecture. We further demonstrate how these designs, along with the projective polarity designs of Jungnickel and Tonchev, admit majority-logic decoding schemes. The codes obtained from these polarity designs attain error-correcting performance which is, in certain cases, equal to that of the finite geometry designs from which they are derived. This further demonstrates the highly geometric structure maintained by these designs. Finite geometries also help us construct several types of quantum error-correcting codes. We use relatives of finite geometry designs to construct infinite families of q-ary quantum stabilizer codes. We also construct entanglement-assisted quantum error-correcting codes (EAQECCs) which admit a particularly efficient and effective error-correcting scheme, while also providing the first general method for constructing these quantum codes with known parameters and desirable properties. 
Finite geometry designs are used to give exceptional examples of these codes.
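
For intuition about the p-rank quantity in Hamada's conjecture, here is a minimal sketch (not from the dissertation; the helper names are invented) that computes the 2-rank of the smallest projective geometry design, the Fano plane PG(2,2):

```python
def gf2_rank(rows):
    """Rank over GF(2); each row is an int bitmask of a 0/1 row vector."""
    rank = 0
    rows = [r for r in rows if r]
    while rows:
        pivot = rows.pop()
        rank += 1
        lead = pivot.bit_length() - 1
        # eliminate the pivot's leading bit from the remaining rows
        rows = [r ^ pivot if (r >> lead) & 1 else r for r in rows]
        rows = [r for r in rows if r]
    return rank

# The 7 lines of the Fano plane PG(2,2) on points 1..7
fano_lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
              {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
incidence = [sum(1 << (p - 1) for p in line) for line in fano_lines]

print(gf2_rank(incidence))  # 4, the 2-rank of this design
```

The lines of the Fano plane span the [7,4] binary Hamming code, so the 2-rank is 4; Hamada's conjecture concerns whether such geometric designs uniquely minimize this rank among designs with the same parameters.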

    Information-theoretic analysis of a family of additive energy channels

    This dissertation studies a new family of channel models for non-coherent communications, the additive energy channels. By construction, the additive energy channels occupy an intermediate region between two widely used channel models: the discrete-time Gaussian channel, used to represent coherent communication systems operating at radio and microwave frequencies, and the discrete-time Poisson channel, which often appears in the analysis of intensity-modulated systems working at optical frequencies. The additive energy channels share with the Gaussian channel the additivity between a useful signal and a noise component. However, the signal and noise components are not complex-valued quadrature amplitudes but, as in the Poisson channel, non-negative real numbers, the energy or squared modulus of the complex amplitude. The additive energy channels come in two variants, depending on whether the channel output is discrete or continuous. In the former case, the energy is a multiple of a fundamental unit, the quantum of energy, whereas in the second the value of the energy can take on any non-negative real number. For continuous output the additive noise has an exponential density, as for the energy of a sample of complex Gaussian noise. For discrete, or quantized, energy the signal component is randomly distributed according to a Poisson distribution whose mean is the signal energy of the corresponding Gaussian channel; part of the total noise at the channel output is thus a signal-dependent, Poisson noise component. Moreover, the additive noise has a geometric distribution, the discrete counterpart of the exponential density. Contrary to the common engineering wisdom that not using the quadrature amplitude incurs a significant performance penalty, it is shown in this dissertation that the capacity of the additive energy channels essentially coincides with that of a coherent Gaussian model under a broad set of circumstances.
Moreover, common modulation and coding techniques for the Gaussian channel often admit a natural extension to the additive energy channels, and their performance frequently parallels that of the Gaussian channel methods. Four information-theoretic quantities, covering both theoretical and practical aspects of the reliable transmission of information, are studied: the channel capacity, the minimum energy per bit, the constrained capacity when a given digital modulation format is used, and the pairwise error probability. Of these quantities, the channel capacity sets a fundamental limit on the transmission capabilities of the channel but is sometimes difficult to determine. The minimum energy per bit (or its inverse, the capacity per unit cost), on the other hand, turns out to be easier to determine, and may be used to analyze the performance of systems operating at low levels of signal energy. Closer to a practical figure of merit is the constrained capacity, which estimates the largest amount of information which can be transmitted by using a specific digital modulation format. Its study is complemented by the computation of the pairwise error probability, an effective tool to estimate the performance of practical coded communication systems. Regarding the channel capacity, the capacity of the continuous additive energy channel is found to coincide with that of a Gaussian channel with identical signal-to-noise ratio. Also, an upper bound (the tightest known) to the capacity of the discrete-time Poisson channel is derived. The capacity of the quantized additive energy channel is shown to have two distinct functional forms: if additive noise is dominant, the capacity is close to that of the continuous channel with the same energy and noise levels; when Poisson noise prevails, the capacity is similar to that of a discrete-time Poisson channel, with no additive noise.
An analogy with radiation channels of an arbitrary frequency, for which the quanta of energy are photons, is presented. Additive noise is found to be dominant when frequency is low and, simultaneously, the signal-to-noise ratio lies below a threshold; the value of this threshold is well approximated by the expected number of quanta of additive noise. As for the minimum energy per nat (1 nat is log2 e bits, or about 1.4427 bits), it equals the average energy of the additive noise component for all the studied channel models. A similar result was previously known to hold for two particular cases, namely the discrete-time Gaussian and Poisson channels. An extension of digital modulation methods from the Gaussian channels to the additive energy channel is presented, and their constrained capacity determined. Special attention is paid to the asymptotic form of the capacity at low and high levels of signal energy. In contrast to the behaviour in the Gaussian channel, arbitrary modulation formats do not achieve the minimum energy per bit at low signal energy. Analytic expressions for the constrained capacity at low signal energy levels are provided. In the high-energy limit simple pulse-energy modulations, which achieve a larger constrained capacity than their counterparts for the Gaussian channel, are presented. As a final element, the error probability of binary channel codes in the additive energy channels is studied by analyzing the pairwise error probability, the probability of wrong decision between two alternative binary codewords. Saddlepoint approximations to the pairwise error probability are given, both for binary modulation and for bit-interleaved coded modulation, a simple and efficient method to use binary codes with non-binary modulations. The methods yield new simple approximations to the error probability in the fading Gaussian channel.
The error rates in the continuous additive energy channel are close to those of coherent transmission at identical signal-to-noise ratio. Constellations minimizing the pairwise error probability in the additive energy channels are presented, and their form is compared to that of the constellations which maximize the constrained capacity at high signal energy levels.
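
The statement that the continuous-output additive noise is exponential, "as for the energy of a sample of complex Gaussian noise", is easy to check numerically. A quick sketch (illustrative only; the sample size and seed are arbitrary choices):

```python
import random

rng = random.Random(0)

def complex_gaussian_energy():
    # x, y ~ N(0, 1/2), so E = x^2 + y^2 is Exponential with mean 1
    x = rng.gauss(0.0, 0.5 ** 0.5)
    y = rng.gauss(0.0, 0.5 ** 0.5)
    return x * x + y * y

n = 200_000
samples = [complex_gaussian_energy() for _ in range(n)]
mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
tail = sum(s > 1.0 for s in samples) / n  # exponential tail: P(E > 1) = e^-1

print(round(mean, 2), round(var, 2), round(tail, 2))
```

Mean, variance, and tail probability all match an exponential density of unit mean; the quantized channel's geometric noise is the discrete analogue of this density.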

    Performance-Metric Driven Atmospheric Compensation for Robust Free-Space Laser Communication

    The effect of turbulence on laser propagation is a significant challenge to current electro-optical systems. While atmospheric compensation techniques in space object imaging and high-energy laser weapons have been thoroughly investigated, optimizing these techniques for Laser Communication (LaserCom) has not been examined to the same degree. Average Strehl ratio is the typical design metric for current atmospheric compensation systems. However, fade probability is the relevant metric for LaserCom. This difference motivated the investigation into metric-driven atmospheric compensation. Metric-based tracking techniques for fade mitigation are the first major focus of this research. In a moderate range air-to-air scenario, focal plane spot breakup is the dominant failure mechanism. Although the impact of spot breakup on average Strehl is small, spot breakup considerably increases fade probability. This result demonstrates that optimization of an atmospheric compensation system requires consideration of the metric of interest. Metric-driven design led to exploration of peak intensity tracking, which reduces fade probability by greater than 50% over conventional centroid trackers and Adaptive Optics (AO) systems for scenarios studied. An investigation of atmospheric compensation requirements based on deep fade phenomenology is the second major focus of this research. Fades are classified based on complexity of the required compensation technique. For compensation techniques studied, regions of superior performance, in terms of fade probability, are identified. Peak tracking is shown to outperform AO for thresholds below approximately 4% of the unaberrated intensity. Furthermore, the boundary between superior performance regions is nearly invariant to turbulence strength. This boundary invariance simplifies operation of a composite system which is able to adaptively select compensation methodology in near real-time.
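
The gap between average-intensity metrics and fade probability is easy to reproduce in a toy Monte Carlo. In the sketch below (illustrative assumptions: log-normal intensity with unit mean and a fade threshold of 0.2 of the mean; none of this is taken from the dissertation), two turbulence strengths give identical average intensity but very different fade probabilities:

```python
import random

rng = random.Random(1)

def fade_probability(sigma, threshold=0.2, n=200_000):
    """P(I < threshold) for log-normal intensity with unit mean.

    mu = -sigma^2 / 2 gives E[I] = 1, so any average-based metric
    is identical for every sigma; only the low-intensity tail differs.
    """
    mu = -0.5 * sigma * sigma
    fades = sum(rng.lognormvariate(mu, sigma) < threshold for _ in range(n))
    return fades / n

weak = fade_probability(sigma=0.2)    # mild scintillation
strong = fade_probability(sigma=1.0)  # strong scintillation, same mean
print(weak, strong)  # the second is orders of magnitude larger
```

This is the core of the metric-driven argument above: a design optimized for average intensity alone cannot distinguish these two channels, while a fade-probability metric separates them immediately.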

    The Telecommunications and Data Acquisition Report

    This quarterly publication provides archival reports on developments in programs managed by JPL's Telecommunications and Mission Operations Directorate (TMOD), which now includes the former Telecommunications and Data Acquisition (TDA) Office. In space communications, radio navigation, radio science, and ground-based radio and radar astronomy, it reports on activities of the Deep Space Network (DSN) in planning, supporting research and technology, implementation, and operations. Also included are standards activity at JPL for space data and information systems and reimbursable DSN work performed for other space agencies through NASA. The preceding work is all performed for NASA's Office of Space Communications (OSC).

    A Survey on Quantum Channel Capacities

    Quantum information processing exploits the quantum nature of information. It offers fundamentally new solutions in the field of computer science and extends the possibilities to a level that cannot be imagined in classical communication systems. For quantum communication channels, many new capacity definitions were developed in comparison to classical counterparts. A quantum channel can be used to realize classical information transmission or to deliver quantum information, such as quantum entanglement. Here we review the properties of the quantum communication channel, the various capacity measures and the fundamental differences between the classical and quantum channels. (58 pages. Journal reference: IEEE Communications Surveys and Tutorials, 2018; updated and improved version of arXiv:1208.1270.)
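
As a point of reference for the classical quantities such surveys compare against, here is a minimal sketch (not from the survey itself) of the Shannon capacity of the binary symmetric channel, C = 1 - H(p), the classical baseline that the various quantum capacity measures generalize or depart from:

```python
import math

def binary_entropy(p):
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Shannon capacity (bits per use) of a binary symmetric channel
    with crossover probability p: C = 1 - H(p)."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))   # 1.0  (noiseless channel)
print(bsc_capacity(0.5))   # 0.0  (useless channel)
print(round(bsc_capacity(0.11), 3))
```

For a classical channel this single number tells the whole story; the point of the survey is that a quantum channel instead carries several inequivalent capacities (classical, private, quantum, entanglement-assisted) with no classical counterpart to this uniqueness.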

    On feedback-based rateless codes for data collection in vehicular networks

    The ability to transfer data reliably and with low delay over an unreliable service is intrinsic to a number of emerging technologies, including digital video broadcasting, over-the-air software updates, public/private cloud storage, and, recently, wireless vehicular networks. In particular, modern vehicles incorporate tens of sensors to provide vital sensor information to electronic control units (ECUs). In the current architecture, vehicle sensors are connected to ECUs via physical wires, which increase the cost, weight and maintenance effort of the car, especially as the number of electronic components keeps increasing. To mitigate the issues with physical wires, wireless sensor networks (WSN) have been contemplated for replacing the current wires with wireless links, making modern cars cheaper, lighter, and more efficient. However, the ability to reliably communicate with the ECUs is complicated by the dynamic channel properties that the car experiences as it travels through areas with different radio interference patterns, such as urban versus highway driving, or even different road quality, which may physically perturb the wireless sensors. This thesis develops a suite of reliable and efficient communication schemes built upon feedback-based rateless codes, and with a target application of vehicular networks. In particular, we first investigate the feasibility of multi-hop networking for intra-car WSN, and illustrate the potential gains of using the Collection Tree Protocol (CTP), the current state of the art in multi-hop data aggregation. Our results demonstrate, for example, that the packet delivery rate of a node using a single-hop topology protocol can be below 80% in practical scenarios, whereas CTP improves reliability performance beyond 95% across all nodes while simultaneously reducing radio energy consumption. 
Next, in order to migrate from a wired intra-car network to a wireless system, we consider an intermediate step to deploy a hybrid communication structure, wherein wired and wireless networks coexist. Towards this goal, we design a hybrid link scheduling algorithm that guarantees reliability and robustness under harsh vehicular environments. We further enhance the hybrid link scheduler with rateless codes such that information leakage to an eavesdropper is almost zero for finite block lengths. In addition to reliability, one key requirement for coded communication schemes is to achieve a fast decoding rate. This feature is vital in a wide spectrum of communication systems, including multimedia and streaming applications (possibly inside vehicles) with real-time playback requirements, and delay-sensitive services, where the receiver needs to recover some data symbols before the recovery of the entire frame. To address this issue, we develop feedback-based rateless codes with dynamically-adjusted nonuniform symbol selection distributions. Our simulation results, backed by analysis, show that feedback information paired with a nonuniform distribution significantly improves the decoding rate compared with state-of-the-art algorithms. We further demonstrate that the amount of feedback sent can be tuned to the specific transmission properties of a given feedback channel.
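
The feedback-plus-nonuniform-selection idea can be caricatured in a few lines. The sketch below is a toy model, not the thesis's codes (the names and the two-round protocol are illustrative assumptions): it sends random XOR-coded symbols, runs a peeling decoder, then uses one feedback message to concentrate the symbol-selection distribution entirely on the unrecovered source symbols.

```python
import random

def encode(data, selections):
    """XOR together the source symbols at each selected index set."""
    out = []
    for idxs in selections:
        v = 0
        for i in idxs:
            v ^= data[i]
        out.append((idxs, v))
    return out

def peel(coded, known):
    """Peeling decoder: repeatedly solve equations with one unknown left."""
    changed = True
    while changed:
        changed = False
        for idxs, v in coded:
            unknown = [i for i in idxs if i not in known]
            if len(unknown) == 1:
                val = v
                for i in idxs:
                    if i in known:
                        val ^= known[i]
                known[unknown[0]] = val
                changed = True
    return known

rng = random.Random(7)
k = 20
data = [rng.randrange(256) for _ in range(k)]

# Round 1: k coded symbols with random degrees 1..3, uniform selection
sels = [tuple(rng.sample(range(k), rng.randint(1, 3))) for _ in range(k)]
known = peel(encode(data, sels), {})

# Feedback: decoder reports missing indices; the encoder's selection
# distribution now puts all its mass on exactly those indices
missing = [i for i in range(k) if i not in known]
known = peel(encode(data, [(i,) for i in missing]), known)

recovered = [known[i] for i in range(k)]
print(recovered == data, "missing after round 1:", len(missing))
```

The second round always completes decoding, which is the point of feedback: the extra symbols are spent only where the decoder still has erasures, instead of being scattered uniformly.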

    Some Notes on Code-Based Cryptography

    This thesis presents new cryptanalytic results in several areas of coding-based cryptography. In addition, we also investigate the possibility of using convolutional codes in code-based public-key cryptography. The first algorithm that we present is an information-set decoding algorithm, aiming towards the problem of decoding random linear codes. We apply the generalized birthday technique to information-set decoding, improving the computational complexity over previous approaches. Next, we present a new version of the McEliece public-key cryptosystem based on convolutional codes. The original construction uses Goppa codes, which is an algebraic code family admitting a well-defined code structure. In the two constructions proposed, large parts of randomly generated parity checks are used. By increasing the entropy of the generator matrix, this presumably makes structured attacks more difficult. Following this, we analyze a McEliece variant based on quasi-cyclic MDPC codes. We show that when the underlying code construction has an even dimension, the system is susceptible to what we call a squaring attack. Our results show that the new squaring attack allows for great complexity improvements over previous attacks on this particular McEliece construction. Then, we introduce two new techniques for finding low-weight polynomial multiples. Firstly, we propose a general technique based on a reduction to the minimum-distance problem in coding, which increases the multiplicity of the low-weight codeword by extending the code. We use this algorithm to break some of the instances used by the TCHo cryptosystem. Secondly, we propose an algorithm for finding weight-4 polynomials. By using the generalized birthday technique in conjunction with increasing the multiplicity of the low-weight polynomial multiple, we obtain a much better complexity than previously known algorithms. Lastly, two new algorithms for the learning parity with noise (LPN) problem are proposed.
The first one is a general algorithm, applicable to any instance of LPN. The algorithm performs favorably compared to previously known algorithms, breaking the 80-bit security of the widely used (512,1/8) instance. The second one focuses on LPN instances over a polynomial ring, when the generator polynomial is reducible. Using the algorithm, we break an 80-bit security instance of the Lapin cryptosystem.
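
The birthday-style search for weight-4 polynomial multiples can be sketched at toy scale (an illustrative reimplementation of the general idea only, not the thesis algorithm; the modulus polynomial is an arbitrary choice): store the residues x^i mod p(x), and when two exponent pairs have colliding residue sums, the symmetric difference of the pairs yields a multiple of p of weight at most 4.

```python
def poly_mod(a, p):
    """Remainder of binary polynomial a modulo p (ints as GF(2) masks)."""
    dp = p.bit_length() - 1
    while a.bit_length() - 1 >= dp:
        a ^= p << (a.bit_length() - 1 - dp)
    return a

def weight4_multiple(p, max_exp=60):
    """Birthday search: two exponent pairs with colliding residue sums
    give a multiple of p of Hamming weight at most 4."""
    res, r = [], 1
    for _ in range(max_exp):
        res.append(r)            # res[i] = x^i mod p
        r = poly_mod(r << 1, p)
    seen = {}
    for i in range(max_exp):
        for j in range(i + 1, max_exp):
            s = res[i] ^ res[j]
            if s in seen:
                exps = {i, j} ^ seen[s]  # symmetric difference of the pairs
                m = 0
                for e in exps:
                    m ^= 1 << e
                return m
            seen[s] = {i, j}
    return None

p = 0b10100011111  # x^10 + x^8 + x^4 + x^3 + x^2 + x + 1, a toy modulus
m = weight4_multiple(p)
print(bin(m).count("1"), poly_mod(m, p))  # weight <= 4, remainder 0
```

With 60 exponents there are 1770 pairs but only 1024 possible residue sums, so a collision is guaranteed by pigeonhole; the quadratic pair enumeration here stands in for the time-memory trade-offs of the real generalized birthday technique, which the thesis applies at cryptographic sizes.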