
    Enhancing quantum entropy in vacuum-based quantum random number generator

    Information-theoretically provable unique true random numbers, which cannot be correlated with or controlled by an attacker, can be generated from quantum measurements of the vacuum state followed by universal-hashing randomness extraction. The quantum entropy in the measurements determines the quality and security of the random number generator; at the same time, it directly determines the extraction ratio of true randomness from the raw data and therefore the rate at which quantum random numbers can be generated. In this work, taking the effects of classical noise into account, we explore the best way to enhance quantum entropy in a vacuum-based quantum random number generator under an optimal dynamical analog-to-digital converter (ADC) range. We derive the influence on the quantum entropy of classical noise excursions, which may be intrinsic to the system or deliberately induced by an eavesdropper. We propose enhancing the local oscillator intensity rather than the electrical gain, which amplifies the quadrature fluctuation of the vacuum state independently of the classical noise. Abundant quantum entropy can then be extracted from the raw data even when the classical noise excursion is large. Experimentally, an extraction ratio of true randomness of 85.3% is achieved by a finite enhancement of the local oscillator power even when the classical noise excursions of the raw data are significant. Comment: 12 pages, 8 figures.
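    The extraction ratio can be illustrated with a simple Gaussian model of the digitised homodyne samples. The sketch below is our own approximation, not the paper's derivation: the 8-bit ADC, its range, and the excursion grid are assumed values. It computes the worst-case conditional min-entropy of the ADC output when an adversary may shift the classical noise anywhere within a known excursion, and shows that scaling up the vacuum fluctuation (e.g. by raising the local-oscillator power) restores the extraction ratio.

        import numpy as np
        from scipy.stats import norm

        def worst_case_min_entropy(sigma_q, excursion, adc_bits=8, adc_range=5.0):
            """Illustrative model: one homodyne sample = vacuum (quantum) Gaussian noise
            of std sigma_q, shifted by a classical offset the adversary may place
            anywhere in [-excursion, +excursion], then digitised by an ADC."""
            edges = np.linspace(-adc_range, adc_range, 2**adc_bits + 1)
            worst = 0.0
            for offset in np.linspace(-excursion, excursion, 201):
                probs = norm.cdf(edges[1:], loc=offset, scale=sigma_q) \
                      - norm.cdf(edges[:-1], loc=offset, scale=sigma_q)
                probs[0] += norm.cdf(edges[0], loc=offset, scale=sigma_q)        # ADC saturation
                probs[-1] += 1.0 - norm.cdf(edges[-1], loc=offset, scale=sigma_q)
                worst = max(worst, probs.max())          # most predictable outcome
            return -np.log2(worst)

        # Raising the local-oscillator power scales sigma_q up while the classical
        # excursion stays fixed, so the ratio H_min / adc_bits improves.
        for sigma_q in (0.5, 1.0, 2.0):
            h = worst_case_min_entropy(sigma_q, excursion=0.5)
            print(f"sigma_q={sigma_q}: H_min ~ {h:.2f} bits, ratio {h/8:.1%}")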

    On the Commitment Capacity of Unfair Noisy Channels

    Noisy channels are a valuable resource from a cryptographic point of view. They can be used for exchanging secret keys as well as for realizing other cryptographic primitives such as commitment and oblivious transfer. To be really useful, noisy channels have to be considered in the scenario where a cheating party has some degree of control over the channel characteristics. Damgård et al. (EUROCRYPT 1999) proposed a more realistic model in which such a level of control is permitted to an adversary, the so-called unfair noisy channels, and proved that they can be used to obtain commitment and oblivious transfer protocols. Given that noisy channels are a precious resource for cryptographic purposes, one important question is determining the optimal rate at which they can be used. The commitment capacity has already been determined for discrete memoryless channels and Gaussian channels. In this work we address the problem of determining the commitment capacity of unfair noisy channels and compute a single-letter characterization of it. In the case where the adversary has no control over the channel (the fair case), our capacity reduces to the well-known commitment capacity of a discrete memoryless binary symmetric channel.
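    As a point of reference for the fair case, the known single-letter answer for a binary symmetric channel with crossover probability p is the binary entropy h(p); the few lines below (a minimal illustration, not taken from the paper) simply evaluate that formula.

        import math

        def h2(p):
            """Binary entropy in bits."""
            if p in (0.0, 1.0):
                return 0.0
            return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

        # Commitment capacity of a fair BSC(p): the uncertainty about the committer's
        # input that remains with the receiver after one channel use.
        for p in (0.05, 0.1, 0.25):
            print(f"BSC({p}): commitment capacity ~ {h2(p):.3f} bits per channel use")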

    Parameterized Streaming Algorithms for Vertex Cover

    As graphs continue to grow in size, we seek ways to process such data effectively at scale. The model of streaming graph processing, in which a compact summary is maintained as each edge insertion/deletion is observed, is an attractive one. However, few results are known for optimization problems over such dynamic graph streams. In this paper, we introduce a new approach to handling graph streams by instead seeking solutions for the parameterized versions of these problems, where we are given a parameter k and the objective is to decide whether there is a solution bounded by k. By combining kernelization techniques with randomized sketch structures, we obtain the first streaming algorithms for the parameterized version of the Vertex Cover problem. We consider the following three models for a graph stream on n nodes: 1. the insertion-only model, where edges can only be added; 2. the dynamic model, where edges can be both inserted and deleted; 3. the promised dynamic model, where we are guaranteed that at each timestamp there is a solution of size at most k. In each of these three models we design parameterized streaming algorithms for the Vertex Cover problem, and we show matching lower bounds for the space complexity of our algorithms. Comment: Fixed some typos.
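    To make the kernelization idea concrete, the sketch below is our own illustration of a Buss-style kernel maintained over an insertion-only edge stream; it is not the paper's algorithm and ignores the sketch-based dynamic models.

        from collections import defaultdict

        def vertex_cover_kernel_stream(edge_stream, k):
            """Insertion-only sketch: any vertex whose stored degree exceeds k must be in
            every vertex cover of size <= k; the remaining stored edges form a kernel.
            Returns (False, None) if a size-k cover is impossible, else (None, kernel)."""
            forced = set()                    # vertices that must be in the cover
            adj = defaultdict(set)            # stored edges among low-degree vertices
            for u, v in edge_stream:
                if u in forced or v in forced:
                    continue                  # edge already covered
                adj[u].add(v)
                adj[v].add(u)
                for x in (u, v):
                    if len(adj[x]) > k:       # degree > k: x is forced into the cover
                        forced.add(x)
                        for y in adj.pop(x):
                            adj[y].discard(x)
                if len(forced) > k:
                    return False, None        # more than k forced vertices
            kernel = {(u, v) for u in adj for v in adj[u] if u < v}
            if len(kernel) > k * k:           # k vertices of degree <= k cover <= k^2 edges
                return False, None
            return None, kernel               # solve the small kernel offline

        answer, kernel = vertex_cover_kernel_stream([(1, 2), (2, 3), (3, 4)], k=2)
        print(answer, kernel)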

    Pb-Hash: Partitioned b-bit Hashing

    Many hashing algorithms, including minwise hashing (MinHash), one permutation hashing (OPH), and consistent weighted sampling (CWS), generate integers of B bits. With k hashes for each data vector, the storage would be B × k bits; and when used for large-scale learning, the model size would be 2^B × k, which can be expensive. A standard strategy is to use only the lowest b bits out of the B bits and somewhat increase k, the number of hashes. In this study, we propose to re-use the hashes by partitioning the B bits into m chunks, e.g., b × m = B. Correspondingly, the model size becomes m × 2^b × k, which can be substantially smaller than the original 2^B × k. Our theoretical analysis reveals that by partitioning the hash values into m chunks, the accuracy drops: using m chunks of B/m bits is not as accurate as directly using B bits, because of the correlation introduced by re-using the same hash. On the other hand, our analysis also shows that the accuracy does not drop much for, e.g., m = 2 to 4, and in some regimes Pb-Hash still works well even for m much larger than 4. We expect Pb-Hash to be a good addition to the family of hashing methods/applications and to benefit industrial practitioners. We verify the effectiveness of Pb-Hash in machine learning tasks, for linear SVM models as well as deep learning models. Since the hashed data are essentially categorical (ID) features, we follow the standard practice of using an embedding table for each hash. With Pb-Hash, we need an effective strategy to combine the m embeddings, and our study provides an empirical evaluation of four pooling schemes: concatenation, max pooling, mean pooling, and product pooling. There is no definite answer as to which pooling is always better; we leave that for future study.
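    A minimal sketch of the partitioning step (illustrative only; table sizes, embedding dimension, and the toy hash values are placeholders): each B-bit hash is cut into m chunks of b = B/m bits, each chunk indexes its own 2^b-row embedding table, and the m embeddings are pooled.

        import numpy as np

        def pb_hash_embed(hash_values, B=32, m=4, dim=16, pooling="mean", seed=0):
            """Split each B-bit hash into m chunks of b = B/m bits, look each chunk up
            in its own embedding table, and pool the m embeddings."""
            b = B // m
            k = len(hash_values)
            rng = np.random.default_rng(seed)
            tables = rng.normal(size=(k, m, 2**b, dim)).astype(np.float32)  # m * 2^b rows per hash
            chunks = np.array([[(h >> (j * b)) & ((1 << b) - 1) for j in range(m)]
                               for h in hash_values])                       # shape (k, m)
            emb = np.stack([tables[i, j, chunks[i, j]]
                            for i in range(k) for j in range(m)]).reshape(k, m, dim)
            if pooling == "concat":
                return emb.reshape(k, m * dim)
            if pooling == "max":
                return emb.max(axis=1)
            if pooling == "product":
                return emb.prod(axis=1)
            return emb.mean(axis=1)                                         # mean pooling

        features = pb_hash_embed([0xDEADBEEF, 0x12345678], B=32, m=4)
        print(features.shape)   # (2, 16) with mean pooling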

    Quantum Cryptography Based Solely on Bell's Theorem

    Information-theoretic key agreement is impossible to achieve from scratch and must be based on some - ultimately physical - premise. In 2005, Barrett, Hardy, and Kent showed that unconditional security can be obtained in principle based on the impossibility of faster-than-light signaling; however, their protocol is inefficient and cannot tolerate any noise. While their key-distribution scheme uses quantum entanglement, its security only relies on the impossibility of superluminal signaling, rather than on the correctness and completeness of quantum theory. In particular, the resulting security is device-independent. Here we introduce a new protocol which is efficient in terms of both classical and quantum communication, and which can tolerate noise in the quantum channel. We prove that it offers device-independent security under the sole assumption that certain non-signaling conditions are satisfied. Our main insight is that the XOR of a number of bits that are partially secret according to the non-signaling conditions turns out to be highly secret; similar statements have long been known in classical contexts. Earlier results had indicated that amplification of such non-signaling-based privacy is impossible to achieve if the non-signaling condition only holds between events on Alice's and Bob's sides. Here we show that the situation changes completely if such a separation is also given within each of the laboratories. Comment: 32 pages, v2: changed introduction, added references.
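    The classical intuition behind the XOR step is the piling-up lemma, shown below purely as an illustration: the paper's actual argument works with non-signaling distributions rather than independent classical bits, but the flavour is the same, since small per-bit advantages shrink exponentially once the bits are XORed.

        def xor_advantage(advantages):
            """Piling-up lemma: if an adversary guesses bit i correctly with probability
            1/2 + eps_i, and the bits are independent from her view, she guesses the XOR
            of all n bits correctly with probability only 1/2 + 2^(n-1) * prod(eps_i)."""
            prod = 1.0
            for eps in advantages:
                prod *= eps
            return 2 ** (len(advantages) - 1) * prod

        # Eight bits, each known with a 10% advantage, leave ~1.3e-6 advantage on the XOR.
        print(xor_advantage([0.1] * 8))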

    A Security Analysis of the Composition of ChaCha20 and Poly1305

    This note contains a security reduction to demonstrate that Langley's composition of Bernstein's ChaCha20 and Poly1305, as proposed for use in IETF protocols, is a secure authenticated encryption scheme. The reduction assumes that ChaCha20 is a PRF, that Poly1305 is ε-almost-Δ-universal, and that the adversary is nonce-respecting.
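    For readers who want to use the composition rather than analyse it, the AEAD is exposed directly by common libraries; below is a minimal usage sketch with the Python cryptography package, where the nonce-respecting assumption of the reduction corresponds to never reusing a nonce under the same key.

        import os
        from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

        key = ChaCha20Poly1305.generate_key()      # 256-bit key
        aead = ChaCha20Poly1305(key)
        nonce = os.urandom(12)                     # 96-bit nonce; must never repeat under one key
        aad = b"record header"                     # authenticated but not encrypted
        ciphertext = aead.encrypt(nonce, b"hello, IETF", aad)
        plaintext = aead.decrypt(nonce, ciphertext, aad)   # raises InvalidTag on forgery
        assert plaintext == b"hello, IETF"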

    Unconditional security from noisy quantum storage

    We consider the implementation of two-party cryptographic primitives based on the sole assumption that no large-scale reliable quantum storage is available to the cheating party. We construct novel protocols for oblivious transfer and bit commitment, and prove that realistic noise levels provide security even against the most general attack. Such unconditional results were previously only known in the so-called bounded-storage model, which is a special case of our setting. Our protocols can be implemented with present-day hardware used for quantum key distribution; in particular, no quantum storage is required for the honest parties. Comment: 25 pages (IEEE two-column), 13 figures. v4: published version (to appear in IEEE Transactions on Information Theory), including bit-wise min-entropy sampling; for experimental purposes block sampling can be much more convenient, see the v3 arXiv version if needed. See arXiv:0911.2302 for a companion paper addressing aspects of a practical implementation using block sampling.
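    A rough, classically simulated sketch of the honest parties' quantum phase (our own illustration; the variable names and the simplification are not from the paper) shows why the honest parties need no quantum storage: every BB84-encoded bit is measured the moment it arrives, and only a cheater who wants to delay the measurement has to keep the states alive in a noisy quantum memory.

        import secrets

        def honest_bb84_round(n):
            """Alice sends n random bits in random BB84 bases; Bob measures each one
            immediately in a random basis of his own.  Matching bases give Bob the bit,
            mismatched bases give a uniformly random outcome."""
            alice_bits  = [secrets.randbelow(2) for _ in range(n)]
            alice_bases = [secrets.randbelow(2) for _ in range(n)]
            bob_bases   = [secrets.randbelow(2) for _ in range(n)]
            bob_results = [bit if a == b else secrets.randbelow(2)
                           for bit, a, b in zip(alice_bits, alice_bases, bob_bases)]
            return alice_bits, alice_bases, bob_bases, bob_results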