
    Randomness amplification against no-signaling adversaries using two devices

    Recently, a physically realistic protocol that amplifies the randomness of Santha-Vazirani sources into cryptographically secure random bits was proposed; however, the crucial question of practical relevance remained open: can this be accomplished under the minimal conditions necessary for the task? Namely, is it possible to achieve randomness amplification using only two no-signaling components, and in a situation where the violation of a Bell inequality only guarantees that some outcomes of the device for specific inputs exhibit randomness? Here, we solve this question and present a device-independent protocol for randomness amplification of Santha-Vazirani sources using a device consisting of two non-signaling components. We show that the protocol can amplify any such source that is not fully deterministic into a fully random source while tolerating a constant noise rate, and we prove the composable security of the protocol against general no-signaling adversaries. Our main innovation is the proof that even the partial randomness certified by the two-party Bell test (a single input-output pair $(\textbf{u}^*, \textbf{x}^*)$ for which the conditional probability $P(\textbf{x}^* \mid \textbf{u}^*)$ is bounded away from $1$ for all no-signaling strategies that optimally violate the Bell inequality) can be used for amplification. We introduce the methodology of a partial tomographic procedure on the empirical statistics obtained in the Bell test, which ensures that the outputs constitute a linear min-entropy source of randomness. As a technical novelty that may be of independent interest, we prove that the Santha-Vazirani source satisfies an exponential concentration property given by a recently discovered generalized Chernoff bound. Comment: 15 pages, 3 figures
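
    To make the source model concrete: a Santha-Vazirani source with bias $\varepsilon$ emits bits where, conditioned on the entire history, each bit equals 1 with probability in $[1/2 - \varepsilon, 1/2 + \varepsilon]$. The following minimal Python sketch (illustrative only, not the paper's protocol; the adversarial bias rule and all parameter values are assumptions) simulates such a source and empirically checks the exponential concentration of the sample mean of the kind the generalized Chernoff bound guarantees.

    ```python
    import random

    def sv_source(n, eps, rng):
        """Sample n bits from an adversarially biased Santha-Vazirani source:
        conditioned on any history, each bit is 1 with probability in
        [1/2 - eps, 1/2 + eps]. Here the adversary leans each bit toward
        the running majority of the bits emitted so far."""
        bits = []
        for _ in range(n):
            lean = eps if 2 * sum(bits) >= len(bits) else -eps
            bits.append(1 if rng.random() < 0.5 + lean else 0)
        return bits

    # Concentration check: the fraction of ones should fall in
    # [1/2 - eps - t, 1/2 + eps + t] except with probability exp(-Omega(t^2 n)).
    n, eps, t, trials = 2000, 0.1, 0.05, 200
    outside = 0
    for seed in range(trials):
        mean = sum(sv_source(n, eps, random.Random(seed))) / n
        if not (0.5 - eps - t <= mean <= 0.5 + eps + t):
            outside += 1
    print(f"runs outside the concentration interval: {outside}/{trials}")
    ```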

    Online Linear Extractors for Independent Sources

    In this work, we characterize online linear extractors. In other words, given a matrix $A \in \mathbb{F}_2^{n \times n}$, we study the convergence of the iterated process $\mathbf{S} \leftarrow A\mathbf{S} \oplus \mathbf{X}$, where $\mathbf{X} \sim D$ is repeatedly sampled independently from some fixed (but unknown) distribution $D$ with (min-)entropy at least $k$. Here, we think of $\mathbf{S} \in \{0,1\}^n$ as the state of an online extractor and $\mathbf{X} \in \{0,1\}^n$ as its input. As our main result, we show that the state $\mathbf{S}$ converges to the uniform distribution for all input distributions $D$ with entropy $k > 0$ if and only if the matrix $A$ has no non-trivial invariant subspace (i.e., a non-zero subspace $V \subsetneq \mathbb{F}_2^n$ such that $AV \subseteq V$). In other words, a matrix $A$ yields an online linear extractor if and only if $A$ has no non-trivial invariant subspace. For example, the linear transformation corresponding to multiplication by a generator of the field $\mathbb{F}_{2^n}$ yields a good online linear extractor. Furthermore, for any such matrix, convergence takes at most $\widetilde{O}(n^2(k+1)/k^2)$ steps. We also study the more general notion of condensing; that is, we ask when this process converges to a distribution with entropy at least $\ell$ when the input distribution has entropy greater than $k$. (Extractors correspond to the special case $\ell = n$.) We show that a matrix gives a good condenser if there are relatively few vectors $\mathbf{w} \in \mathbb{F}_2^n$ such that $\mathbf{w}, A^T\mathbf{w}, \ldots, (A^T)^{n-k-1}\mathbf{w}$ are linearly dependent. As an application, we show that the very simple cyclic rotation transformation $A(x_1,\ldots,x_n) = (x_n,x_1,\ldots,x_{n-1})$ condenses to $\ell = n-1$ bits for any $k > 1$ if $n$ is a prime satisfying a certain simple number-theoretic condition. Our proofs are Fourier-analytic and rely on a novel lemma, which gives a tight bound on the product of certain Fourier coefficients of any entropic distribution.
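
    As a concrete illustration of the iteration $\mathbf{S} \leftarrow A\mathbf{S} \oplus \mathbf{X}$, the sketch below (an illustrative toy, not the paper's construction; the field size, primitive polynomial, and weak source are assumptions) instantiates $A$ as multiplication by a generator of $\mathbb{F}_{2^8}$, the example the abstract certifies as an online linear extractor, and tracks how close the state's empirical distribution gets to uniform when fed a low-entropy source.

    ```python
    import random
    from collections import Counter

    N = 8  # state size in bits, small enough to tabulate all 2**N states

    def mul_by_x(s):
        """Multiply the state by x in F_{2^8}, reducing modulo the primitive
        polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D). Since the polynomial is
        primitive, x generates the multiplicative group, so this linear map
        has no non-trivial invariant subspace."""
        s <<= 1
        if s & 0x100:
            s ^= 0x11D
        return s & 0xFF

    def weak_sample(rng):
        """Toy weak source: uniform over a fixed 8-element support (min-entropy 3)."""
        support = [5, 22, 39, 56, 73, 90, 107, 124]
        return rng.choice(support)

    # Iterate S <- A*S xor X and tabulate the state's empirical distribution.
    rng = random.Random(0)
    counts, s = Counter(), 0
    for _ in range(200_000):
        s = mul_by_x(s) ^ weak_sample(rng)
        counts[s] += 1

    # Statistical distance from uniform over {0,1}^N (0 means perfectly uniform).
    total = sum(counts.values())
    dist = 0.5 * sum(abs(counts.get(v, 0) / total - 2**-N) for v in range(2**N))
    print(f"empirical statistical distance from uniform: {dist:.4f}")
    ```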

    Multi-party Poisoning through Generalized $p$-Tampering

    In a poisoning attack against a learning algorithm, an adversary tampers with a fraction of the training data $T$ with the goal of increasing the classification error of the constructed hypothesis/model over the final test distribution. In the distributed setting, $T$ might be gathered gradually from $m$ data providers $P_1,\dots,P_m$ who generate and submit their shares of $T$ in an online way. In this work, we initiate a formal study of $(k,p)$-poisoning attacks, in which an adversary controls $k \in [m]$ of the parties, and even the poisoned data $T'_i$ submitted on behalf of a corrupted party $P_i$ must still be "$(1-p)$-close" to the correct data $T_i$ (e.g., a $1-p$ fraction of $T'_i$ is still honestly generated). For $k=m$, this model becomes the traditional notion of poisoning, and for $p=1$ it coincides with the standard notion of corruption in multi-party computation. We prove that if the generated hypothesis $h$ has an initial constant error, there is always a $(k,p)$-poisoning attacker who can decrease the confidence of $h$ (to have a small error), or alternatively increase the error of $h$, by $\Omega(p \cdot k/m)$. Our attacks can be implemented in polynomial time given samples from the correct data, and they use no wrong labels if the original distributions are not noisy. At a technical level, we prove a general lemma about biasing bounded functions $f(x_1,\dots,x_n) \in [0,1]$ through an attack model in which each block $x_i$ might be controlled by an adversary with marginal probability $p$ in an online way. When these probabilities are independent, the model coincides with $p$-tampering attacks, so we call our model generalized $p$-tampering. We prove the power of such attacks by incorporating ideas from the context of coin-flipping attacks into the $p$-tampering model, generalizing the results in both of these areas.
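
    The independent-probability special case is easy to simulate. The toy Python sketch below (illustrative only; it shows plain $p$-tampering rather than the generalized model, and the target function, greedy tampering rule, and parameters are assumptions) biases the majority function by letting the adversary control each incoming bit independently with probability $p$:

    ```python
    import random

    def majority(bits):
        """A monotone bounded function f: {0,1}^n -> [0,1]."""
        return 1.0 if sum(bits) > len(bits) / 2 else 0.0

    def sample_output(n, p, rng, tamper):
        """Generate x_1..x_n online; each block is adversarially controlled
        with independent marginal probability p (the p-tampering model)."""
        bits = []
        for _ in range(n):
            if tamper and rng.random() < p:
                bits.append(1)  # greedy choice for a monotone f: push toward 1
            else:
                bits.append(rng.randint(0, 1))  # honest uniform bit
        return majority(bits)

    rng = random.Random(1)
    n, p, trials = 21, 0.2, 50_000
    honest = sum(sample_output(n, p, rng, tamper=False) for _ in range(trials)) / trials
    attacked = sum(sample_output(n, p, rng, tamper=True) for _ in range(trials)) / trials
    print(f"E[f] honest   ~ {honest:.3f}")    # ~ 0.5 for odd n
    print(f"E[f] attacked ~ {attacked:.3f}")  # biased upward by Omega(p)
    ```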

    Algebraic Methods in Computational Complexity

    Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. In some of the most exciting recent progress in Computational Complexity, the algebraic theme still plays a central role. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). The areas of derandomization and coding theory have also seen important advances. The seminar aimed to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics, and the seminar's goal was to play an important role in educating a diverse community about the latest techniques.

    No-signalling attacks and implications for (quantum) nonlocality distillation

    The phenomenon of nonlocality, which can arise when entangled quantum systems are suitably measured, is perhaps one of the most puzzling features of quantum theory to the philosophical mind. It implies that these measurement statistics cannot be explained by hidden variables, as requested by Einstein, and it thus suggests that our universe may not be, in principle, a well-determined entity where the uncertainty we perceive in physical observations stems only from our lack of knowledge of the whole. Besides its philosophical impact, nonlocality is also a resource for information-theoretic tasks since it implies secrecy: if nonlocality limits the predictive power that any hidden variable (in the universe) can have about some observations, then it limits in particular the predictive power of a hidden variable held by an adversary in a cryptographic scenario. We investigate whether nonlocality alone can empower two parties to perform unconditionally secure communication in a feasible manner when only a provably minimal set of assumptions is made for such a task to be possible — independently of the validity of any physical theory (such as quantum theory). Nonlocality has also been of interest in the study of the foundations of quantum theory and the principles that stand beyond its mathematical formalism. In an attempt to single out quantum theory within a broader set of theories, the study of nonlocality may help to point out intuitive principles that distinguish it from the rest. In theories that surpass the limits by which quantum theory constrains the strength of nonlocality, many “principles” on which an information theorist would rely are shattered — one example is the hierarchy of communication complexity, which becomes completely trivial once a certain degree of nonlocality is overstepped. In order to study the structure of such super-quantum theories — beyond their aforementioned secrecy aspects — we investigate the phenomenon of distillation of nonlocality: the ability to distill stronger forms of nonlocality from weaker ones. By exploiting the inherent connection between nonlocality and secrecy, we provide a novel way of deriving bounds on nonlocality-distillation protocols through an adversarial view of the problem.
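
    To put numbers on the "strength" of nonlocality discussed here, the following sketch (illustrative only, not taken from the thesis) evaluates the CHSH game winning probability of the best classical strategy, a perfect Popescu-Rohrlich (PR) box, and a noisy PR box, the typical raw material of distillation protocols:

    ```python
    import itertools

    def chsh_win_prob(box):
        """Probability that a bipartite box wins the CHSH game: on uniform
        inputs (x, y), the outputs (a, b) must satisfy a XOR b == x AND y.
        `box(x, y)` returns the output distribution as a dict {(a, b): prob}."""
        total = 0.0
        for x, y in itertools.product((0, 1), repeat=2):
            p = box(x, y)
            total += sum(pr for (a, b), pr in p.items() if a ^ b == x & y) / 4
        return total

    def pr_box_noisy(eps):
        """A PR box mixed with uniform noise; eps = 0 is the perfect
        (maximally nonlocal) no-signaling box, winning CHSH with certainty."""
        def box(x, y):
            p = {}
            for a, b in itertools.product((0, 1), repeat=2):
                ideal = 0.5 if a ^ b == x & y else 0.0   # PR correlations
                p[(a, b)] = (1 - eps) * ideal + eps * 0.25  # uniform noise
            return p
        return box

    def local_deterministic(x, y):
        """Best classical strategy: both parties always output 0,
        which wins on 3 of the 4 input pairs."""
        return {(0, 0): 1.0}

    print(f"local bound   : {chsh_win_prob(local_deterministic):.4f}")  # 0.75
    print(f"noisy PR box  : {chsh_win_prob(pr_box_noisy(0.2)):.4f}")    # 0.90
    print(f"perfect PR box: {chsh_win_prob(pr_box_noisy(0.0)):.4f}")    # 1.00
    ```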