Randomness amplification against no-signaling adversaries using two devices
Recently, a physically realistic protocol was proposed that amplifies the
randomness of Santha-Vazirani sources to produce cryptographically secure
random bits; however, a question crucial for practical relevance remained
open: can this be accomplished under the minimal conditions necessary for the
task? Namely, is it possible to achieve randomness amplification using only
two no-signaling components, in a situation where the violation of a Bell
inequality guarantees only that some outcomes of the device, for specific
inputs, exhibit randomness? Here, we solve this question and
present a device-independent protocol for randomness amplification of
Santha-Vazirani sources using a device consisting of two non-signaling
components. We show that the protocol can amplify any such source that is not
fully deterministic into a fully random source while tolerating a constant
noise rate and prove the composable security of the protocol against general
no-signaling adversaries. Our main innovation is the proof that even the
partial randomness certified by the two-party Bell test (a single input-output
pair for which the conditional probability of the output given the input is
bounded away from one for all no-signaling strategies that optimally violate
the Bell inequality) can be used for
amplification. We introduce the methodology of a partial tomographic procedure
on the empirical statistics obtained in the Bell test that ensures that the
outputs constitute a linear min-entropy source of randomness. As a technical
novelty that may be of independent interest, we prove that the Santha-Vazirani
source satisfies an exponential concentration property given by a recently
discovered generalized Chernoff bound. Comment: 15 pages, 3 figures
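The notion of a Santha-Vazirani source is easy to sketch in code. The following toy simulation is ours, not from the paper; the adversary callback and parameter names are illustrative. It generates bits whose conditional bias is bounded by eps, however adversarially the history is used, and shows the empirical mean concentrating near the worst-case bias, in the spirit of the concentration property mentioned above.

```python
import random

def sv_source(n, eps, adversary, rng):
    # Santha-Vazirani source: each bit may depend on the full history,
    # but its conditional probability of being 1 stays within eps of 1/2.
    bits = []
    for _ in range(n):
        p1 = 0.5 + adversary(bits) * eps  # adversary returns a value in [-1, 1]
        assert 0.5 - eps <= p1 <= 0.5 + eps
        bits.append(1 if rng.random() < p1 else 0)
    return bits

rng = random.Random(0)
eps = 0.1
# Worst-case adversary: always push the next bit toward 1.
bits = sv_source(100_000, eps, lambda history: 1.0, rng)
mean = sum(bits) / len(bits)
print(mean)  # concentrates near 0.5 + eps = 0.6
```

Even this worst-case source retains min-entropy -log2(0.5 + eps) per bit, which is what an amplification protocol has to work with.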
Online Linear Extractors for Independent Sources
In this work, we characterize online linear extractors. In other words, given a matrix M over GF(2), we study the convergence of the iterated process Y ← M·Y ⊕ X, where X is repeatedly sampled independently from some fixed (but unknown) distribution with (min-)entropy at least k. Here, we think of Y as the state of an online extractor, and X as its input.
As our main result, we show that the state Y converges to the uniform distribution for all input distributions with entropy k > 0 if and only if the matrix M has no non-trivial invariant subspace (i.e., no non-zero proper subspace V such that M·V ⊆ V). In other words, a matrix M yields an online linear extractor if and only if M has no non-trivial invariant subspace. For example, the linear transformation corresponding to multiplication by a generator of the field GF(2^n) yields a good online linear extractor. Furthermore, for any such matrix, convergence takes at most polynomially many (in n) steps.
We also study the more general notion of condensing: we ask when this process converges to a distribution with entropy at least ℓ when the input distribution has entropy greater than k (extraction corresponds to the special case ℓ = n). We show that a matrix M gives a good condenser if there are relatively few vectors u whose iterates u, M^T u, (M^T)^2 u, … quickly become linearly dependent. As an application, we show that the very simple cyclic rotation transformation condenses to nearly n bits for any k ≥ 1 if n is a prime satisfying a certain simple number-theoretic condition.
Our proofs are Fourier-analytic and rely on a novel lemma, which gives a tight bound on the product of certain Fourier coefficients of any entropic distribution.
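The extractor dichotomy can be checked exactly for a toy case with n = 3 (a sketch under our own naming, not the paper's code): the companion matrix of the irreducible polynomial x^3 + x + 1 has no non-trivial invariant subspace and drives the state to uniform, while the cyclic rotation preserves the parity of the state, so with an even-parity input distribution the state never mixes.

```python
def parity(x):
    return bin(x).count("1") & 1

def matvec(rows, y):
    # Multiply an F_2 matrix (rows given as bitmasks) by a bit-vector y.
    return sum(parity(r & y) << i for i, r in enumerate(rows))

def evolve(rows, px, steps, n=3):
    # Exact distribution of the state Y after `steps` updates Y <- M*Y xor X,
    # starting from the point mass at Y = 0.
    dist = [0.0] * (1 << n)
    dist[0] = 1.0
    for _ in range(steps):
        nxt = [0.0] * (1 << n)
        for y, p in enumerate(dist):
            if p:
                my = matvec(rows, y)
                for x, q in px.items():
                    nxt[my ^ x] += p * q
        dist = nxt
    return dist

def tv_to_uniform(dist):
    u = 1.0 / len(dist)
    return sum(abs(p - u) for p in dist) / 2

# Input X uniform on {000, 011}: min-entropy exactly 1 bit, even parity.
px = {0b000: 0.5, 0b011: 0.5}
companion = [0b100, 0b101, 0b010]  # companion matrix of x^3 + x + 1 (irreducible)
rotation  = [0b010, 0b100, 0b001]  # cyclic shift: has an invariant subspace

print(tv_to_uniform(evolve(companion, px, 10)))  # ~0.0: state mixes to uniform
print(tv_to_uniform(evolve(rotation, px, 10)))   # 0.5: even-parity states keep all mass
```

The rotation example also illustrates the condensing claim: the state spreads over the even-parity coset, so it still carries n - 1 bits of entropy even though it can never reach full uniformity.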
Multi-party Poisoning through Generalized p-Tampering
In a poisoning attack against a learning algorithm, an adversary tampers with
a p fraction of the training data T with the goal of increasing the
classification error of the constructed hypothesis/model over the final test
distribution. In the distributed setting, T might be gathered gradually from
m data providers P_1, …, P_m who generate and submit their shares of T
in an online way.
In this work, we initiate a formal study of (k, p)-poisoning attacks in
which an adversary controls k of the m parties, and for each
corrupted party P_i, the adversary submits some poisoned data T'_i on
behalf of P_i that is still "(1 − p)-close" to the correct data T_i (e.g., a
(1 − p) fraction of T'_i is still honestly generated). For k = m, this model
becomes the traditional notion of p-poisoning, and for p = 1 it coincides with
the standard notion of corruption in multi-party computation.
We prove that if there is an initial constant error ε for the generated
hypothesis h, there is always a (k, p)-poisoning attacker who can decrease
the confidence of h (in having a small error), or alternatively increase the
error of h, by Ω(p · k/m). Our attacks can be implemented in
polynomial time given samples from the correct data, and they use no wrong
labels if the original distributions are not noisy.
At a technical level, we prove a general lemma about biasing bounded
functions f(x_1, …, x_n) ∈ [0, 1] through an attack model in which each
block x_i might be controlled by an adversary with marginal probability p
in an online way. When these probabilities are independent, this coincides with
the model of p-tampering attacks; thus we call our model generalized
p-tampering. We prove the power of such attacks by incorporating ideas from
the context of coin-flipping attacks into the p-tampering model, generalizing
the results in both of these areas.
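The flavor of such biasing attacks can be seen in a minimal example (ours; the target function and parameters are illustrative, not the paper's construction): a greedy adversary that controls each block independently with probability p, and sets every controlled bit to 1, noticeably biases the majority of n fair coins.

```python
from math import comb

def majority_bias(n, p_tamper, p_honest=0.5):
    # Each bit is controlled by the adversary independently with probability
    # p_tamper; a controlled bit is set to 1, an honest bit is 1 w.p. p_honest.
    q = (1 - p_tamper) * p_honest + p_tamper
    # Pr[majority of n i.i.d. bits is 1], for odd n.
    return sum(comb(n, k) * q**k * (1 - q)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_bias(101, 0.0))  # honest execution: exactly 0.5
print(majority_bias(101, 0.2))  # tampered: pushed far above 0.5
```

Because the tampering is independent across blocks, this is the special case the abstract calls p-tampering; the generalized model allows the control events to be correlated across blocks.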
Algebraic Methods in Computational Complexity
Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test
are some of the most prominent examples. In some of the most exciting recent progress in Computational Complexity, the algebraic theme still plays a central role. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). The areas of derandomization and coding theory have also seen important advances. The seminar aimed to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas rely on ever more sophisticated and specialized mathematics, and the goal of the seminar was to play an important role in educating a diverse community about the latest techniques.
No-signalling attacks and implications for (quantum) nonlocality distillation
The phenomenon of nonlocality, which can arise when entangled quantum systems are suitably measured, is perhaps one of the most puzzling features of quantum theory to the philosophical mind. It implies that these measurement statistics cannot be explained by hidden variables, as requested by Einstein, and it thus suggests that our universe may not be, in principle, a well-determined entity where the uncertainty we perceive in physical observations stems only from our lack of knowledge of the whole. Besides its philosophical impact, nonlocality is also a resource for information-theoretic tasks, since it implies secrecy: if nonlocality limits the predictive power that any hidden variable (in the universe) can have about some observations, then it limits in particular the predictive power of a hidden variable held by an adversary in a cryptographic scenario. We investigate whether nonlocality alone can empower two parties to perform unconditionally secure communication in a feasible manner when only a provably minimal set of assumptions is made for such a task to be possible, independently of the validity of any physical theory (such as quantum theory). Nonlocality has also been of interest in the study of the foundations of quantum theory and the principles that stand beyond its mathematical formalism. In an attempt to single out quantum theory within a broader set of theories, the study of nonlocality may help to point out intuitive principles that distinguish it from the rest. In theories that surpass the limits by which quantum theory constrains the strength of nonlocality, many "principles" on which an information theorist would rely are shattered; one example is the hierarchy of communication complexity, which becomes completely trivial once a certain degree of nonlocality is overstepped.
In order to study the structure of such super-quantum theories, beyond their aforementioned secrecy aspects, we investigate the phenomenon of distillation of nonlocality: the ability to distill stronger forms of nonlocality from weaker ones. By exploiting the inherent connection between nonlocality and secrecy, we provide a novel way of deriving bounds on nonlocality-distillation protocols through an adversarial view of the problem.
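The gap between local, quantum, and super-quantum correlations can be made concrete with the CHSH value (a self-contained sketch; the box representation and names are ours): local hidden-variable strategies reach at most 2, quantum theory at most 2√2, and the no-signaling Popescu-Rohrlich box reaches the algebraic maximum of 4.

```python
from itertools import product

def chsh(box):
    # box(a, b, x, y) = conditional probability of outputs a, b given inputs x, y.
    # CHSH value = sum over inputs x, y of (-1)^(x*y) * E[(-1)^(a xor b) | x, y].
    val = 0.0
    for x, y in product((0, 1), repeat=2):
        corr = sum((-1) ** (a ^ b) * box(a, b, x, y)
                   for a, b in product((0, 1), repeat=2))
        val += (-1) ** (x * y) * corr
    return val

def pr_box(a, b, x, y):
    # Popescu-Rohrlich box: a xor b = x and y, with uniform marginals.
    return 0.5 if (a ^ b) == (x & y) else 0.0

# A best local deterministic strategy: both parties always output 0.
local = lambda a, b, x, y: 1.0 if a == 0 and b == 0 else 0.0

print(chsh(local))   # 2.0: the local (Bell) bound
print(chsh(pr_box))  # 4.0: the no-signaling maximum
```

Distillation protocols of the kind studied here take several noisy boxes with CHSH value strictly between 2 and 4 and try to simulate one box with a larger value.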
- …