
    Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels

    A method is proposed, called channel polarization, to construct code sequences that achieve the symmetric capacity I(W) of any given binary-input discrete memoryless channel (B-DMC) W. The symmetric capacity is the highest rate achievable subject to using the input letters of the channel with equal probability. Channel polarization refers to the fact that it is possible to synthesize, out of N independent copies of a given B-DMC W, a second set of N binary-input channels {W_N^(i) : 1 ≤ i ≤ N} such that, as N becomes large, the fraction of indices i for which I(W_N^(i)) is near 1 approaches I(W) and the fraction for which I(W_N^(i)) is near 0 approaches 1 − I(W). The polarized channels {W_N^(i)} are well-conditioned for channel coding: one need only send data at rate 1 through those with capacity near 1 and at rate 0 through the remaining. Codes constructed on the basis of this idea are called polar codes. The paper proves that, given any B-DMC W with I(W) > 0 and any target rate R < I(W), there exists a sequence of polar codes {C_n : n ≥ 1} such that C_n has block length N = 2^n, rate ≥ R, and probability of block error under successive cancellation decoding bounded as P_e(N, R) ≤ O(N^{-1/4}) independently of the code rate. This performance is achievable by encoders and decoders with complexity O(N log N) each. Comment: the version which appears in the IEEE Transactions on Information Theory, July 2009.
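    The polarization effect is easiest to see for the binary erasure channel BEC(ε), where the erasure probabilities of the two synthesized channels are known in closed form: ε⁻ = 2ε − ε² and ε⁺ = ε². The following minimal sketch (an illustration of this BEC special case, not code from the paper; the function name polarize_bec is invented) applies the transform recursively and counts how many of the N = 2^n synthesized channels end up near-perfect or near-useless.

```python
# Channel polarization for a binary erasure channel BEC(eps):
# the two synthesized channels have erasure probabilities
#   eps_minus = 2*eps - eps**2   (the "worse" channel)
#   eps_plus  = eps**2           (the "better" channel)
# Recursing n times yields N = 2**n synthesized channels.

def polarize_bec(eps: float, n: int) -> list[float]:
    """Return erasure probabilities of the N = 2**n polarized channels."""
    channels = [eps]
    for _ in range(n):
        nxt = []
        for e in channels:
            nxt.append(2 * e - e * e)  # W^-: combined (degraded) channel
            nxt.append(e * e)          # W^+: split (upgraded) channel
        channels = nxt
    return channels

if __name__ == "__main__":
    eps, n = 0.5, 14                   # BEC(0.5), N = 2**14 channels
    chans = polarize_bec(eps, n)
    good = sum(e < 1e-3 for e in chans) / len(chans)      # capacity near 1
    bad = sum(e > 1 - 1e-3 for e in chans) / len(chans)   # capacity near 0
    print(f"fraction near-perfect: {good:.3f}  (limit I(W) = {1 - eps})")
    print(f"fraction near-useless: {bad:.3f}  (limit 1 - I(W) = {eps})")
```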

    A study of major coding techniques for digital communication Final report

    Coding techniques for digital communication channel

    The 1st Conference of PhD Students in Computer Science


    Secure and Efficient Comparisons between Untrusted Parties

    A vast number of online services are based on users contributing their personal information. Examples are manifold, including social networks, electronic commerce, sharing websites, lodging platforms, and genealogy. In all cases user privacy depends on collective trust in all involved intermediaries, such as service providers, operators, administrators or even help desk staff. A single adversarial party in the whole chain of trust voids user privacy, and the number of intermediaries is ever growing. Thus, user privacy must be preserved at every time and stage, independent of the intrinsic goals of any involved party. Furthermore, next to these new services, traditional offline analytic systems are being replaced by online services run in large data centers. Centralized processing of electronic medical records, genomic data or other health-related information is anticipated due to advances in medical research, better analytic results based on large amounts of medical information, and lowered costs. In these scenarios privacy is of utmost concern due to the large amount of personal information contained within the centralized data.

    We focus on the challenge of privacy-preserving processing of genomic data, specifically comparing genomic sequences. The problem that arises is how to efficiently compare private sequences of two parties while preserving the confidentiality of the compared data. It follows that the privacy of the data owner must be preserved, which means that as little information as possible must be leaked to any party participating in the comparison. Leakage can happen at several points during a comparison: the secured inputs of the comparing party might leak some information about the original input, or the output might leak information about the inputs. In the latter case, results of several comparisons can be combined to infer information about the confidential input of the party under observation. Genomic sequences serve as a use-case, but the proposed solutions are more general and can be applied to the generic field of privacy-preserving comparison of sequences. The solution should be efficient in the sense that performing a comparison yields runtimes linear in the length of the input sequences and thus produces acceptable costs for a typical use-case.

    To tackle the problem of efficient, privacy-preserving sequence comparisons, we propose a framework consisting of three main parts. a) The basic protocol presents an efficient sequence comparison algorithm, which transforms a sequence into a set representation, allowing distance measures over input sequences to be approximated by distance measures over sets. The sets are then represented by an efficient data structure, the Bloom filter, which allows evaluation of certain set operations without storing the actual elements of the possibly large set. This representation yields low distortion for comparing similar sequences. Operations upon the set representation are carried out using efficient, partially homomorphic cryptographic systems to keep the inputs confidential. The output can be adjusted to either return the actual approximated distance or the result of an in-range check of the approximated distance.

    b) Building upon this efficient basic protocol, we introduce the first mechanism to reduce the success of inference attacks by detecting and rejecting similar queries in a privacy-preserving way. This is achieved by generating generalized commitments for inputs. The generalization is done by treating inputs as messages received from a noisy channel, to which error correction from coding theory is applied. In this way, similar inputs are defined as inputs whose generalizations have a Hamming distance below a certain predefined threshold. We present a protocol to perform a zero-knowledge proof that assesses whether the generalized input is indeed a generalization of the actual input. Furthermore, we generalize a very efficient inference attack on privacy-preserving sequence comparison protocols and use it to evaluate our inference-control mechanism.

    c) The third part of the framework lightens the computational load of the client taking part in the comparison protocol by presenting a compression mechanism for partially homomorphic cryptographic schemes. It reduces the transmission and storage overhead induced by semantically secure homomorphic encryption schemes, as well as encryption latency. The compression is achieved by constructing an asymmetric stream cipher such that the generated ciphertext can be converted into a ciphertext of an associated homomorphic encryption scheme without revealing any information about the plaintext. This is the first compression scheme available for partially homomorphic encryption schemes; compression of ciphertexts of fully homomorphic encryption schemes is several orders of magnitude slower at the conversion from the transmission ciphertext to the homomorphically encrypted ciphertext. Indeed, our compression scheme achieves optimal conversion performance. It further allows keystreams to be generated offline and thus supports offloading to trusted devices, improving transmission, storage, and power efficiency.

    We give security proofs for all relevant parts of the proposed protocols and algorithms. A performance evaluation of the core components, comprising a theoretical analysis and practical experiments, demonstrates the practicability of our proposed solutions and shows the accuracy as well as the efficiency of the approximations and probabilistic algorithms. Several variations and configurations for detecting similar inputs are studied during an in-depth discussion of the inference-control mechanism. A human mitochondrial genome database is used for the practical evaluation to compare genomic sequences and detect similar inputs as described by the use-case. In summary, we show that it is indeed possible to construct an efficient and privacy-preserving comparison of (genomic) sequences while controlling the amount of information that leaves the comparison. To the best of our knowledge, we also contribute to the field by proposing the first efficient privacy-preserving inference detection and control mechanism, as well as the first ciphertext compression system for partially homomorphic cryptographic systems.
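    The core idea of the basic protocol, approximating a sequence distance by a set distance over q-grams stored in Bloom filters, can be illustrated without any cryptography. The sketch below (function names such as qgrams and dice_similarity are illustrative, not taken from the thesis) encodes each sequence as its set of q-grams, inserts the q-grams into Bloom filters, and estimates the Dice similarity from the bit counts; in the actual protocol the filters would additionally be protected by a partially homomorphic encryption scheme.

```python
import hashlib

def qgrams(seq: str, q: int = 3) -> set[str]:
    """Split a sequence into its set of overlapping q-grams."""
    return {seq[i:i + q] for i in range(len(seq) - q + 1)}

def bloom_encode(items: set[str], m: int = 1024, k: int = 4) -> list[bool]:
    """Insert each item into an m-bit Bloom filter using k hash functions."""
    bits = [False] * m
    for item in items:
        for i in range(k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            bits[int.from_bytes(h, "big") % m] = True
    return bits

def dice_similarity(a: list[bool], b: list[bool]) -> float:
    """Approximate the Dice coefficient of the underlying sets from the filters."""
    common = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * common / total if total else 1.0

if __name__ == "__main__":
    s1 = "ACGTACGTTGCA"
    s2 = "ACGTACGATGCA"
    f1, f2 = bloom_encode(qgrams(s1)), bloom_encode(qgrams(s2))
    print(f"approximate Dice similarity: {dice_similarity(f1, f2):.2f}")
```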

    Formal Methods in Quantum Circuit Design

    The design and compilation of correct, efficient quantum circuits is integral to the future operation of quantum computers. This thesis makes contributions to the problems of optimizing and verifying quantum circuits, with an emphasis on the development of formal models for such purposes. We also present software implementations of these methods, which together form a full stack of tools for the design of optimized, formally verified quantum oracles. On the optimization side, we study methods for the optimization of Rz and CNOT gates in Clifford+Rz circuits. We develop a general, efficient optimization algorithm called phase folding, which computes the circuit's phase polynomial and uses it to reduce the number of Rz gates without increasing any other metric. This algorithm can further be combined with synthesis techniques for CNOT-dihedral operators to optimize circuits with respect to particular costs. We then study the optimal synthesis problem for CNOT-dihedral operators from the perspectives of Rz and CNOT gate optimization. In the case of Rz gate optimization, we show that the optimal synthesis problem is polynomial-time equivalent to minimum-distance decoding in certain Reed-Muller codes. For the CNOT optimization problem, we show that the optimal synthesis problem is at least as hard as a combinatorial problem related to Gray codes. In both cases, we develop heuristics for the optimal synthesis problem, which together with phase folding reduce T counts by 42% and CNOT counts by 22% across a suite of real-world benchmarks. From the perspective of formal verification, we make two contributions. The first is a formal model of quantum circuits with ancillary bits based on the Feynman path integral, along with a concrete verification algorithm. The path integral model, with some syntactic sugar, further doubles as a natural specification language for quantum computations. Our experiments show that some practical circuits with up to hundreds of qubits can be efficiently verified. Our second contribution is a formally verified, optimizing compiler for reversible circuits. The compiler compiles a classical, irreversible language to reversible circuits, with a formal, machine-checked proof of correctness written in the proof assistant F*. The compiler is structured as a partial evaluator, allowing verification to be carried out significantly faster than in previous work.
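    Phase folding exploits the fact that in a CNOT+Rz circuit every Rz gate acts on some linear (over GF(2)) combination of the input variables, so rotations applied to the same combination can be merged into one. The following rough sketch of that bookkeeping assumes a pure CNOT+Rz circuit for simplicity (the general Clifford+Rz case needs more machinery); the function and gate encodings are illustrative and not the thesis's implementation.

```python
from collections import defaultdict

def phase_fold(num_qubits: int, gates: list[tuple]) -> dict[int, float]:
    """Merge Rz rotations that act on the same GF(2) parity of the inputs.

    gates: ("cnot", control, target) or ("rz", angle, qubit).
    Returns a map from parity bitmask to total rotation angle; its size is
    the number of Rz gates remaining after folding.
    """
    # Each wire initially carries its own input variable x_i (bit i set).
    parity = [1 << i for i in range(num_qubits)]
    phases: dict[int, float] = defaultdict(float)
    for gate in gates:
        if gate[0] == "cnot":
            _, ctrl, tgt = gate
            parity[tgt] ^= parity[ctrl]     # target now carries the XOR of parities
        elif gate[0] == "rz":
            _, angle, q = gate
            phases[parity[q]] += angle      # rotations on equal parities merge
    return dict(phases)

if __name__ == "__main__":
    # Two Rz gates that both act on the parity x0 XOR x1 are folded into one.
    circuit = [("cnot", 0, 1), ("rz", 0.3, 1), ("cnot", 1, 2),
               ("cnot", 1, 2), ("rz", 0.2, 1)]
    folded = phase_fold(3, circuit)
    print(f"Rz gates after folding: {len(folded)} (was 2)")
```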

    Easily decoded error-correcting codes and techniques for their generation


    A local scattering theory for the effects of isolated roughness on boundary-layer instability and transition: transmission coefficient as an eigenvalue

    This paper is concerned with the rather broad issue of the impact of abrupt changes (such as isolated roughness, gaps and local suctions) on boundary-layer transition. To fix ideas, we consider the influence of a two-dimensional localized hump (or indentation) on an oncoming Tollmien-Schlichting (T-S) wave. We show that when the length scale of the former is comparable with the characteristic wavelength of the latter, the key physical mechanism affecting transition is the scattering of T-S waves by the roughness-induced mean-flow distortion. An appropriate mathematical theory, consisting of the boundary-value problem governing the local scattering, is formulated based on the triple-deck formalism. The transmission coefficient, defined as the ratio of the amplitude of the T-S wave downstream of the roughness to that upstream, serves to characterize the impact on transition. The transmission coefficient appears as the eigenvalue of the discretized boundary-value problem. The latter is solved numerically, and the dependence of the eigenvalue on the height and width of the roughness and on the frequency of the T-S wave is investigated. For a roughness element that does not cause separation, the transmission coefficient is found to be about 1.5 for typical frequencies, indicating a moderate but appreciable destabilizing effect. For a roughness causing incipient separation, the transmission coefficient can be as large as O(10), suggesting that immediate transition may take place at the roughness site. A roughness element with a fixed height produces the strongest impact when its width is comparable with the T-S wavelength, in which case traditional linear stability theory is invalid; the latter, however, holds approximately when the roughness width is sufficiently large. By studying the two-hump case, a criterion for when two roughness elements can be regarded as isolated is suggested. The transmission coefficient can be converted to an equivalent N-factor increment, by making use of which the e^N method can be extended to predict transition in the presence of multiple roughness elements.
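    Since the transmission coefficient multiplies the T-S wave amplitude, converting it into an N-factor increment amounts to adding ln|T| to the amplification exponent of the e^N method. The snippet below is only a hedged illustration of that bookkeeping for several roughness elements; the additive treatment of isolated elements is an assumption consistent with the abstract, not a result quoted from the paper, and the function name is invented.

```python
import math

def n_factor_with_roughness(n_smooth: float, transmissions: list[float]) -> float:
    """Total amplification exponent: smooth-wall N plus ln|T| per isolated element.

    Assumes the roughness elements are far enough apart to be regarded as
    isolated, in the sense discussed in the abstract.
    """
    return n_smooth + sum(math.log(abs(t)) for t in transmissions)

if __name__ == "__main__":
    # A T-S wave amplified to N = 7.5 on a smooth wall passes two humps with
    # transmission coefficients 1.5 (no separation) and 10 (incipient separation).
    n_total = n_factor_with_roughness(7.5, [1.5, 10.0])
    print(f"equivalent N-factor: {n_total:.2f}")   # 7.5 + 0.41 + 2.30 = 10.21
```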