On the Orthogonal Vector Problem and the Feasibility of Unconditionally Secure Leakage-Resilient Computation
We consider unconditionally secure leakage resilient two-party
computation, where security means that the leakage obtained by an
adversary can be simulated using a similar amount of leakage from the
private inputs or outputs. A related problem is known as circuit
compilation, where there is only one device doing a computation on
public input and output. Here the goal is to ensure that the adversary
learns only the input/output behaviour of the computation, even given
leakage from the internal state of the device. We study these
problems in an enhanced version of the "only computation leaks"
model, where the adversary is additionally allowed a bounded amount of
global leakage from the state of the entity under attack. In
this model, we show the first unconditionally secure leakage resilient
two-party computation protocol. The protocol assumes access to
correlated randomness in the form of a functionality F_ORT that
outputs pairs of orthogonal vectors over some finite field, where the
adversary can leak independently from each vector of the pair. We
also construct a general circuit
compiler secure in the same leakage model. Our constructions work
even if the adversary is allowed to corrupt a constant fraction of the
calls to F_ORT and decide which vectors should be output. On the
negative side, we show that unconditionally secure two-party
computation and circuit compilation are in general impossible in the
plain version of our model. For circuit compilation, we need a
computational assumption to exhibit a function that cannot be securely
computed; on the other hand, the impossibility holds even if global leakage
is not allowed. It follows that even a somewhat unreliable version of
F_ORT cannot be implemented with unconditional security in the plain
leakage model, using classical communication. However, we show that an
implementation using quantum communication does exist. In particular,
we propose a simple "prepare-and-measure" type protocol which we
show secure using a new result on sampling from a quantum
population. Although the protocol may produce a small number of
incorrect pairs, this is sufficient for leakage resilient computation
by our other results.
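The correlated-randomness setup can be made concrete with a toy sampler for orthogonal vector pairs over a small prime field. This is a sketch only: the field size, the sampling procedure, and the function names are illustrative assumptions, not the paper's construction of F_ORT.

```python
import random

P = 101  # small prime modulus; the field size here is an illustrative choice

def sample_orthogonal_pair(n, p=P):
    """Sample a pair (a, b) of length-n vectors over GF(p) with <a, b> = 0 (mod p)."""
    a = [random.randrange(p) for _ in range(n)]
    b = [random.randrange(p) for _ in range(n)]
    # Enforce orthogonality by solving for b at a coordinate where a is nonzero;
    # if a happens to be the zero vector, every b is already orthogonal to it.
    for i, ai in enumerate(a):
        if ai != 0:
            rest = sum(x * y for j, (x, y) in enumerate(zip(a, b)) if j != i) % p
            b[i] = (-rest) * pow(ai, -1, p) % p
            break
    return a, b

def inner_product(a, b, p=P):
    return sum(x * y for x, y in zip(a, b)) % p

a, b = sample_orthogonal_pair(8)
assert inner_product(a, b) == 0  # the pair is orthogonal over GF(101)
```

Note that such a local sampler is exactly what the paper's negative result rules out as a secure implementation: in the protocol, the pair must be delivered by a trusted functionality (or the quantum protocol), with each party leaking from its own vector independently.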
Quantum Cryptography Beyond Quantum Key Distribution
Quantum cryptography is the art and science of exploiting quantum mechanical
effects in order to perform cryptographic tasks. While the most well-known
example of this discipline is quantum key distribution (QKD), there exist many
other applications such as quantum money, randomness generation, secure two-
and multi-party computation and delegated quantum computation. Quantum
cryptography also studies the limitations and challenges resulting from quantum
adversaries---including the impossibility of quantum bit commitment, the
difficulty of quantum rewinding and the definition of quantum security models
for classical primitives. In this review article, aimed primarily at
cryptographers unfamiliar with the quantum world, we survey the area of
theoretical quantum cryptography, with an emphasis on the constructions and
limitations beyond the realm of QKD.
Secure certification of mixed quantum states with application to two-party randomness generation
We investigate sampling procedures that certify that an arbitrary quantum
state on n subsystems is close to the n-fold tensor product of a given
reference state, up to errors on a few positions. This
task makes no sense classically: it would correspond to certifying that a given
bitstring was generated according to some desired probability distribution.
However, in the quantum case, this is possible if one has access to a prover
who can supply a purification of the mixed state.
In this work, we introduce the concept of mixed-state certification, and we
show that a natural sampling protocol offers secure certification in the
presence of a possibly dishonest prover: if the verifier accepts then he can be
almost certain that the state in question has been correctly prepared, up to a
small number of errors.
We then apply this result to two-party quantum coin-tossing. Given that
strong coin tossing is impossible, it is natural to ask "how close can we get".
This question has been well studied and is nowadays well understood from the
perspective of the bias of individual coin tosses. We approach and answer this
question from a different---and somewhat orthogonal---perspective, where we do
not look at individual coin tosses but at the global entropy instead. We show
how two distrusting parties can produce a common high-entropy source, where the
entropy is an arbitrarily small fraction below the maximum (except with
negligible probability).
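The classical sampling principle behind such certification — estimating the error rate of untested positions from a random sample — can be sketched as follows. The quantum analysis in the paper is substantially more involved; this toy is only its classical shadow, and all names here are illustrative.

```python
import random

def estimate_error_rate(word, reference, k):
    """Test k random positions of `word` against `reference` and return
    the fraction of mismatches observed on the tested positions."""
    tested = random.sample(range(len(word)), k)
    errors = sum(word[i] != reference[i] for i in tested)
    return errors / k

# A 1000-bit word agreeing with the all-zero reference except on 5% of positions.
n = 1000
word = [1] * 50 + [0] * (n - 50)
random.shuffle(word)

observed = estimate_error_rate(word, [0] * n, k=200)
# With overwhelming probability the observed rate is close to the true 5%, so a
# verifier accepting only small observed rates bounds the errors on the rest.
assert abs(observed - 0.05) < 0.1
```

The quantum result shows that a guarantee of this flavor survives when the "word" is a quantum state held partly by a dishonest prover, which is precisely what has no classical analogue.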
Leakage-Tolerant Circuits
A leakage-resilient circuit for a function f is a randomized Boolean circuit C mapping a randomized encoding of an input x to an encoding of f(x), such that applying any leakage function L from a class ℒ to the wires of C reveals essentially nothing about x. A leakage-tolerant circuit achieves the stronger guarantee that even when x and f(x) are not protected by any encoding, the output of L can be simulated by applying some leakage function from ℒ to x and f(x) alone. Thus, C is as secure as an ideal hardware implementation of f with respect to leakage from ℒ.
Leakage-resilient circuits were constructed for low-complexity classes ℒ, including functions with bounded-length output, parities, and functions with bounded communication complexity. In contrast, leakage-tolerant circuits were only known for the simple case of probing leakage, where the leakage function outputs the values of a bounded number of wires in C.
We initiate a systematic study of leakage-tolerant circuits for natural classes of global leakage functions, obtaining the following main results.
Leakage-tolerant circuits for depth-1 leakage. Every circuit C for f can be efficiently compiled into an ℒ-tolerant circuit for f, where ℒ includes all leakage functions that output either parities or disjunctions (alternatively, conjunctions) of any number of wires or their negations. In the case of parities, our simulator runs in super-polynomial time. We provide partial evidence that this may be inherent.
Application to stateful leakage-resilient circuits. Using a general transformation from leakage-tolerant circuits, we obtain the first construction of stateful leakage-resilient circuits that tolerate continuous parity leakage, and the first such construction for disjunction/conjunction leakage in which the circuit size grows sub-quadratically. Interestingly, here we can obtain polynomial-time simulation even in the case of parities.
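The probing-leakage baseline mentioned above (leakage that outputs individual wire values) is classically countered by masking. A sketch of the standard XOR-sharing encoding follows; this is only the encoding layer, not a full gadget compiler, and the names are illustrative.

```python
import random

def encode_bit(bit, d):
    """Split a bit into d XOR-shares; any d-1 shares are jointly uniform,
    so probing fewer than d wires reveals nothing about the bit."""
    shares = [random.randrange(2) for _ in range(d - 1)]
    shares.append(bit ^ (sum(shares) % 2))  # last share fixes the XOR to `bit`
    return shares

def decode_bit(shares):
    return sum(shares) % 2  # XOR of the shares recovers the bit

shares = encode_bit(1, d=4)
assert decode_bit(shares) == 1
```

A parity leakage function, by contrast, can XOR all d shares together in a single query and recover the bit outright, which is why the depth-1 classes studied here require genuinely different techniques than probing security.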
A Survey of Leakage-Resilient Cryptography
In the past 15 years, cryptography has made considerable progress in expanding the adversarial attack model to cover side-channel attacks, and has built schemes to provably defend against some of them. This survey covers the main models and results in this so-called leakage-resilient cryptography.
Classical processing algorithms for Quantum Information Security
In this thesis, we investigate how the combination of quantum physics and information theory could deliver solutions at the forefront of information security, and, in particular, we consider two focus applications: randomness extraction as applied to quantum random number generators and classical processing algorithms for quantum key distribution (QKD).
We concentrate on practical applications for such tools.
We detail the implementation of a randomness extractor for a commercial quantum random number generator, and we evaluate its performance based on information theory.
Then, we focus on QKD in a specific experimental scenario: free-space quantum links. Commercial solutions with quantum links operating over optical fibers already exist, but they suffer from severe infrastructure complexity and cost overheads. Free-space QKD allows for higher flexibility, for both terrestrial and satellite links, while experiencing higher attenuation and noise at the receiver. In this work, its feasibility is investigated and proven in multiple experiments over links of different lengths and in various channel conditions. In particular, after a thorough analysis of information reconciliation protocols, we consider finite-key effects as applied to key distillation, and we propose a novel adaptive real-time selection algorithm which, by leveraging the turbulence of the channel as a resource, extends the feasibility of QKD to new noise thresholds.
By using full-fledged software for classical processing tailored to the considered application scenario, the obtained results are analyzed and validated, showing that quantum information security can be ensured in realistic conditions with free-space quantum links.
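As a minimal illustration of randomness extraction as mentioned above: the extractor implemented in the thesis is an information-theoretic construction for a commercial generator, but the classic von Neumann trick below is the simplest example of the underlying idea of turning biased raw bits into unbiased output.

```python
def von_neumann_extract(bits):
    """Classic von Neumann extractor: read the input in pairs and output
    the first bit of each unequal pair ('01' -> 0, '10' -> 1), discarding
    '00' and '11'. Independent biased flips become unbiased output bits."""
    return [b0 for b0, b1 in zip(bits[::2], bits[1::2]) if b0 != b1]

assert von_neumann_extract([0, 1, 1, 0, 1, 1, 0, 0, 0, 1]) == [0, 1, 0]
```

For independent flips with bias p, both '01' and '10' occur with probability p(1-p), so each emitted bit is exactly uniform; the price is a reduced output rate, which is why practical QRNG post-processing uses universal-hashing extractors instead.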
On Resilience to Computable Tampering
Non-malleable codes, introduced by Dziembowski, Pietrzak, and Wichs (ICS 2010), provide a means of encoding information such that if the encoding is tampered with, the result encodes something either identical or completely unrelated. Unlike error-correcting codes (for which the result of tampering must always be identical), non-malleable codes give guarantees even when tampering functions are allowed to change every symbol of a codeword.
In this thesis, we will provide constructions of non-malleable codes secure against a variety of tampering classes with natural computational semantics:
• Bounded Communication: functions corresponding to 2-party protocols where each party receives half the input (respectively) and may communicate at most a quarter of the input length in bits before returning their (respective) half of the tampered output.
• Local Functions (Juntas): each tampered output bit is a function of only n^(1-δ) input bits, where δ > 0 is any constant (the efficiency of our code depends on δ). This class includes NC⁰.
• Decision Trees: each tampered output bit is a function of n^(1/4 - o(1)) adaptively chosen input bits.
• Small-Depth Circuits: each tampered output bit is produced by a polynomial-size circuit of depth c·log(n)/log log(n), for some constant c. This class includes AC⁰.
• Low-Degree Polynomials: each tampered output field element is produced by a low-degree (relative to the field size) polynomial.
• Polynomial-Size Circuit Tampering: each tampered codeword is produced by a circuit of size n^c, where c is any constant (the efficiency of our code depends on c). This result assumes that E is hard for exponential-size nondeterministic circuits (all other results are unconditional).
We stress that our constructions are efficient (encoding and decoding can be performed in uniform polynomial time) and (with the exception of the last result, which assumes strong circuit lower bounds) enjoy unconditional, statistical security guarantees. We also illuminate some potential barriers to constructing codes for more complex computational classes from simpler assumptions.
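The tampering experiment that non-malleability constrains can be written out directly. Below is a definitional sketch: the two-share XOR code used is deliberately a toy and, as the final assertions show, it is malleable — real constructions such as those in the thesis are what rule out this kind of related tampering.

```python
import random

def encode(msg_bit):
    """Toy 2-share encoding (illustrative only; NOT a non-malleable code)."""
    r = random.randrange(2)
    return (r, r ^ msg_bit)

def decode(codeword):
    return codeword[0] ^ codeword[1]

def tamper_experiment(msg_bit, tamper):
    """The Dziembowski-Pietrzak-Wichs experiment: encode, tamper, decode.
    Non-malleability requires the outcome to be the original message
    ('same') or a value unrelated to it."""
    return decode(tamper(encode(msg_bit)))

flip_first = lambda cw: (cw[0] ^ 1, cw[1])
# Flipping one share always flips the decoded bit -- a 'related' outcome,
# demonstrating that this toy code is malleable.
assert tamper_experiment(0, flip_first) == 1
assert tamper_experiment(1, flip_first) == 0
```

Note that `flip_first` touches only one half of the codeword, so even very weak tampering classes (split-state, local functions) contain attacks like this against naive encodings.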
Secure multi-party protocols under a modern lens
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mathematics, 2013.
A secure multi-party computation (MPC) protocol for computing a function f allows a group of parties to jointly evaluate f over their private inputs, such that a computationally bounded adversary who corrupts a subset of the parties cannot learn anything beyond the inputs of the corrupted parties and the output of the function f. General MPC completeness theorems in the 1980s showed that every efficiently computable function can be evaluated securely in this fashion [Yao86, GMW87, CCD87, BGW88] using the existence of cryptography. In the following decades, progress has been made toward making MPC protocols efficient enough to be deployed in real-world applications. However, recent technological developments have brought with them a slew of new challenges, from new security threats to the question of whether protocols can scale up with the demand of distributed computations on massive data. Before one can make effective use of MPC, these challenges must be addressed. In this thesis, we focus on two lines of research toward this goal:
• Protocols resilient to side-channel attacks. We consider a strengthened adversarial model where, in addition to corrupting a subset of parties, the adversary may leak partial information on the secret states of honest parties during the protocol. In the presence of such an adversary, we first focus on preserving the correctness guarantees of MPC computations. We then proceed to address security guarantees, using cryptography. We provide two results: an MPC protocol whose security provably "degrades gracefully" with the amount of leakage information obtained by the adversary, and a second protocol which provides complete security assuming a (necessary) one-time preprocessing phase during which leakage cannot occur.
• Protocols with scalable communication requirements.
We devise MPC protocols with communication locality: namely, each party only needs to communicate with a small (polylog) number of dynamically chosen parties. Our techniques use digital signatures and extend particularly well to the case when the function f is a sublinear algorithm whose execution depends on o(n) of the n parties' inputs.
By Elette Chantae Boyle, Ph.D.
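The baseline MPC guarantee can be illustrated with additive secret sharing, a standard building block behind many such protocols. This is a toy semi-honest sketch for securely summing private inputs; the protocols in the thesis are far more general and additionally handle leakage and scalability, and all names here are illustrative.

```python
import random

Q = 2**31 - 1  # arithmetic modulo a prime; the modulus choice is illustrative

def share(x, n, q=Q):
    """Split x into n additive shares summing to x mod q; any n-1 shares
    are uniformly random and so reveal nothing about x."""
    shares = [random.randrange(q) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % q)
    return shares

def secure_sum(inputs, q=Q):
    """Each party shares its input; party j locally sums the j-th shares it
    receives, and the partial sums are combined into the public total."""
    n = len(inputs)
    all_shares = [share(x, n, q) for x in inputs]
    partials = [sum(col) % q for col in zip(*all_shares)]  # one per party
    return sum(partials) % q

assert secure_sum([3, 5, 7]) == 15
```

No party ever sees another's input in the clear — only uniformly random shares — which is the simplest instance of the "learn nothing beyond the output" guarantee described above.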