14 research outputs found
Low depth algorithms for quantum amplitude estimation
We design and analyze two new low depth algorithms for amplitude estimation
(AE) achieving an optimal tradeoff between the quantum speedup and circuit
depth. For , our algorithms require oracle calls and require the oracle to be called
sequentially times to perform amplitude
estimation within additive error . These algorithms interpolate
between the classical algorithm and the standard quantum algorithm
() and achieve a tradeoff . These algorithms
bring quantum speedups for Monte Carlo methods closer to realization, as they
can provide speedups with shallower circuits.
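To make the tradeoff concrete, here is a small numeric illustration (our own, with constants ignored) of the N, D, and N·D scalings stated above:

```python
# Illustration of the depth/queries tradeoff N*D = O(1/eps^2) described above.
# N ~ eps^-(1+beta) total oracle calls, D ~ eps^-(1-beta) sequential calls.
eps = 0.01
for beta in [0.0, 0.25, 0.5, 0.75, 1.0]:
    N = eps ** -(1 + beta)   # total oracle calls
    D = eps ** -(1 - beta)   # maximum sequential depth per estimate
    print(f"beta={beta:4.2f}  N={N:14.0f}  D={D:10.0f}  N*D={N * D:.3e}")
```

As β grows, the circuits get shallower (smaller D) while the total query count N grows, with the product pinned at ε^(−2).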
The first algorithm (Power law AE) uses power law schedules in the framework
introduced by Suzuki et al. \cite{S20}. The algorithm works for β ∈ (0, 1] and has provable correctness guarantees when the log-likelihood function
satisfies the regularity conditions required for the Bernstein–von Mises theorem.
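The maximum-likelihood framework that Power law AE builds on can be sketched with a purely classical simulation. The sketch below is ours, not the paper's code: the schedule exponent 1.5 is an illustrative stand-in for the paper's power-law choice, and the grid-search MLE is a simple substitute for a proper optimizer.

```python
import math
import random

random.seed(7)
a_true = 0.3                          # amplitude a = sin^2(theta) to estimate
theta_true = math.asin(math.sqrt(a_true))
shots = 200

# Power-law schedule of Grover iteration counts (illustrative exponent):
schedule = [int(k ** 1.5) for k in range(12)]

# After m Grover iterations, measuring |1> happens with prob sin^2((2m+1) theta).
data = []
for m in schedule:
    p = math.sin((2 * m + 1) * theta_true) ** 2
    hits = sum(random.random() < p for _ in range(shots))
    data.append((m, hits))

def log_likelihood(theta):
    """Binomial log-likelihood of the observed hit counts at angle theta."""
    ll = 0.0
    for m, hits in data:
        p = min(max(math.sin((2 * m + 1) * theta) ** 2, 1e-12), 1 - 1e-12)
        ll += hits * math.log(p) + (shots - hits) * math.log(1 - p)
    return ll

# Grid-search maximum-likelihood estimate over theta in (0, pi/2)
grid = [i * (math.pi / 2) / 20000 for i in range(1, 20000)]
theta_hat = max(grid, key=log_likelihood)
print("estimated amplitude:", math.sin(theta_hat) ** 2)
```

The deeper circuits sharpen the likelihood peak, which is where the speedup over direct sampling comes from.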
The second algorithm (QoPrime AE) uses the Chinese remainder theorem for
combining lower depth estimates to achieve higher accuracy. The algorithm works
for discrete β = q/k, where k ≥ 2 is the number of distinct coprime
moduli used by the algorithm and 1 ≤ q ≤ k, and has a fully rigorous
correctness proof. We analyze both algorithms in the presence of depolarizing
noise and provide experimental comparisons with state-of-the-art amplitude
estimation algorithms.
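The Chinese remainder theorem step that QoPrime AE relies on can be illustrated in isolation: an unknown integer (standing in for a discretized angle estimate) is recovered from its residues modulo small coprime moduli. This is our own minimal sketch of the CRT reconstruction, not the algorithm itself.

```python
from math import prod

def crt(residues, moduli):
    """Recover x mod prod(moduli) from the residues of x mod each coprime modulus."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse (Python 3.8+)
    return x % M

moduli = [7, 11, 13]           # pairwise coprime, product 1001
secret = 1000                  # stands in for an angle on a grid of size 1001
residues = [secret % m for m in moduli]
print(crt(residues, moduli))   # -> 1000
```

Each residue only requires an estimate at low precision (hence low depth); combining them pins down the high-precision value.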
On the Compressed-Oracle Technique, and Post-Quantum Security of Proofs of Sequential Work
We revisit the so-called compressed oracle technique, introduced by Zhandry
for analyzing quantum algorithms in the quantum random oracle model (QROM). To
start off with, we offer a concise exposition of the technique, which easily
extends to the parallel-query QROM, where in each query-round the considered
algorithm may make several queries to the QROM in parallel. This variant of the
QROM allows for a more fine-grained query-complexity analysis.
Our main technical contribution is a framework that simplifies the use of
(the parallel-query generalization of) the compressed oracle technique for
proving query complexity results. With our framework in place, whenever
applicable, it is possible to prove quantum query complexity lower bounds by
means of purely classical reasoning. More than that, for typical examples the
crucial classical observations that give rise to the classical bounds are
sufficient to conclude the corresponding quantum bounds.
We demonstrate this on a few examples, recovering known results (like the
optimality of parallel Grover), but also obtaining new results (like the
optimality of parallel BHT collision search). Our main target is the hardness
of finding a q-chain with fewer than q parallel queries, i.e., a sequence x_0, x_1, ..., x_q
with x_i = H(x_{i-1}) for all 1 ≤ i ≤ q.
The above problem of finding a hash chain is of fundamental importance in the
context of proofs of sequential work. Indeed, as a concrete cryptographic
application of our techniques, we prove that the "Simple Proofs of Sequential
Work" proposed by Cohen and Pietrzak remain secure against quantum attacks.
Such an analysis is not simply a matter of plugging in our new bound; the
entire protocol needs to be analyzed in the light of a quantum attack. Thanks
to our framework, this can now be done with purely classical reasoning.
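The q-chain object whose parallel quantum hardness is proven here is easy to state in code. A minimal sketch of honest chain evaluation (ours, using SHA-256 as the stand-in hash):

```python
import hashlib

def hash_chain(x0: bytes, q: int) -> list:
    """Compute the q-chain x_0, x_1, ..., x_q with x_i = H(x_{i-1})."""
    chain = [x0]
    for _ in range(q):
        # Each step consumes the previous digest, so honest evaluation
        # is inherently sequential: q hash calls, one after another.
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

chain = hash_chain(b"seed", 5)
print(len(chain), chain[-1].hex()[:16])
```

The lower bound says that even a quantum attacker making batches of parallel queries cannot shortcut this structure below q query-rounds, which is exactly the sequentiality a proof of sequential work needs.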
Quantum cryptanalysis in the RAM model: Claw-finding attacks on SIKE
We introduce models of computation that enable direct comparisons between classical and quantum algorithms. Incorporating previous work on quantum computation and error correction, we justify the use of the gate-count and depth-times-width cost metrics for quantum circuits. We demonstrate the relevance of these models to cryptanalysis by revisiting, and increasing, the security estimates for the Supersingular Isogeny Diffie–Hellman (SIDH) and Supersingular Isogeny Key Encapsulation (SIKE) schemes. Our models, analyses, and physical justifications have applications to a number of memory-intensive quantum algorithms.
On the Compressed-Oracle Technique, and Post-Quantum Security of Proofs of Sequential Work
We revisit the so-called compressed oracle technique, introduced by Zhandry for analyzing quantum algorithms in the quantum random oracle model (QROM). This technique has proven to be very powerful for reproving known lower bound results, but also for proving new results that seemed to be out of reach before. Despite being very useful, it is however still quite cumbersome to actually employ the compressed oracle technique.
To start off with, we offer a concise yet mathematically rigorous exposition of the compressed oracle technique. We adopt a more abstract view than other descriptions found in the literature, which allows us to keep the focus on the relevant aspects. Our exposition easily extends to the parallel-query QROM, where in each query-round the considered quantum oracle algorithm may make several queries to the QROM in parallel. This variant of the QROM allows for a more fine-grained query-complexity analysis of quantum oracle algorithms.
Our main technical contribution is a framework that simplifies the use of (the parallel-query generalization of) the compressed oracle technique for proving query complexity results. With our framework in place, whenever applicable, it is possible to prove quantum query complexity lower bounds by means of purely classical reasoning. More than that, we show that, for typical examples, the crucial classical observations that give rise to the classical bounds are sufficient to conclude the corresponding quantum bounds.
We demonstrate this on a few examples, recovering known results (like the optimality of parallel Grover), but also obtaining new results (like the optimality of parallel BHT collision search). Our main application is to prove hardness of finding a q-chain, i.e., a sequence x_0, x_1, ..., x_q with the property that x_i = H(x_{i-1}) for all 1 ≤ i ≤ q, with fewer than q parallel queries.
The above problem of producing a hash chain is of fundamental importance in the context of proofs of sequential work. Indeed, as a concrete application of our new bound, we prove that the "Simple Proofs of Sequential Work" proposed by Cohen and Pietrzak remain secure against quantum attacks. Such a proof is not simply a matter of plugging in our new bound; the entire protocol needs to be analyzed in the light of a quantum attack, and substantial additional work is necessary. Thanks to our framework, this can now be done with purely classical reasoning.
Quantum Cost Models for Cryptanalysis of Isogenies
Isogeny-based cryptography uses keys large enough to resist a far-future attack from
Tani’s algorithm, a quantum random walk on Johnson graphs. The key size is based on an
analysis in the query model. Queries do not reflect the full cost of an algorithm, and this
thesis considers other cost models. These models fit in a memory peripheral framework,
which focuses on the classical control costs of a quantum computer. Rather than queries,
we use the costs of individual gates, error correction, and latency. Primarily, these costs
make quantum memory access expensive and thus Tani’s memory-intensive algorithm is
no longer the best attack against isogeny-based cryptography. A classical algorithm due to
van Oorschot and Wiener can be faster and cheaper, depending on the model used and the
availability of time and hardware. This means that isogeny-based cryptography is more
secure than previously thought.
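The claw-finding task underlying these attacks is simple to state: given functions f and g, find a pair (a, b) with f(a) = g(b). A toy memory-heavy classical search (our own sketch, with made-up toy functions) is below; the van Oorschot–Wiener attack replaces the dictionary with distinguished-point trails, which is what makes it cheap in the memory-peripheral cost models discussed above.

```python
def find_claw(f, g, domain_f, domain_g):
    """Meet-in-the-middle claw search: store f's outputs, scan g's."""
    table = {f(a): a for a in domain_f}   # memory cost ~ |domain_f|
    for b in domain_g:
        if g(b) in table:
            return table[g(b)], b
    return None

# Toy stand-ins for the isogeny-walk functions in a real attack:
f = lambda a: (7 * a + 3) % 101
g = lambda b: (5 * b + 90) % 101
print(find_claw(f, g, range(101), range(101)))
```

The thesis's point is that once memory access is priced realistically, this classical approach (parallelized and made low-memory) can beat Tani's quantum walk.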
Lower Bounds on Quantum Query and Learning Graph Complexities
In this thesis we study the power of quantum query algorithms and learning graphs; the latter essentially being very specialized quantum query algorithms themselves. We almost exclusively focus on proving lower bounds for these computational models.
First, we study lower bounds on learning graph complexity. We consider two types of learning graphs: adaptive and, more restricted, non-adaptive learning graphs. We express both adaptive and non-adaptive learning graph complexities of Boolean-valued functions (i.e., decision problems) as semidefinite minimization problems, and derive their dual problems. For various functions, we construct feasible solutions to these dual problems, thereby obtaining lower bounds on the learning graph complexity of the functions. Most notably, we prove an almost optimal Omega(n^(9/7)/sqrt(log n)) lower bound on the non-adaptive learning graph complexity of the Triangle problem. We also prove an Omega(n^(1-2^(k-2)/(2^k-1))) lower bound on the adaptive learning graph complexity of the k-Distinctness problem, which matches the complexity of the best known quantum query algorithm for this problem.
Second, we construct optimal adversary lower bounds for various decision problems. Our main procedure for constructing them is to embed the adversary matrix into a larger matrix whose properties are easier to analyze. This embedding procedure imposes certain requirements on the size of the input alphabet. We prove optimal Omega(n^(1/3)) adversary lower bounds for the Collision and Set Equality problems, provided that the alphabet size is at least Omega(n^2). An optimal lower bound for Collision was previously proven using the polynomial method, while our lower bound for Set Equality is new. (An optimal lower bound for Set Equality was also independently and at about the same time proven by Zhandry using the polynomial method [arXiv, 2013].)
We compare the power of non-adaptive learning graphs and quantum query algorithms that only utilize knowledge of the possible positions of certificates in the input string. To do that, we introduce the notion of a certificate structure of a decision problem. Using the adversary method and the dual formulation of the learning graph complexity, we show that, for every certificate structure, there exists a decision problem possessing this certificate structure such that its non-adaptive learning graph and quantum query complexities differ by at most a constant multiplicative factor. For a special case of certificate structures, we construct a relatively general class of problems having this property. This construction generalizes the adversary lower bound for the k-Sum problem derived recently by Belovs and Spalek [ACM ITCS, 2013].
We also construct an optimal Omega(n^(2/3)) adversary lower bound for the Element Distinctness problem with minimal non-trivial alphabet size, which equals the length of the input. Due to the strict requirement on the alphabet size, here we cannot use the embedding procedure, and the construction of the adversary matrix heavily relies on the representation theory of the symmetric group. While an optimal lower bound for Element Distinctness using the polynomial method had been proven for any input alphabet, an optimal adversary construction was previously only known for alphabets of size at least Omega(n^2).
Finally, we introduce the Enhanced Find-Two problem and we study its query complexity. The Enhanced Find-Two problem is, given n elements such that exactly k of them are marked, find two distinct marked elements using the following resources:
(1) one initial copy of the uniform superposition over all marked elements,
(2) an oracle that reflects across this superposition, and
(3) an oracle that tests if an element is marked.
This relational problem arises in the study of quantum proofs of knowledge. We prove that its query complexity is Theta(min{sqrt(n/k), sqrt(k)}).
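A quick numeric look at the two branches of this bound (our own illustration, not from the thesis): sqrt(k) dominates for k below sqrt(n) and sqrt(n/k) above, so the bound peaks at k = sqrt(n) with value n^(1/4).

```python
import math

n = 10 ** 6
for k in [10, 1000, 100000]:
    # min{sqrt(n/k), sqrt(k)}: the active branch switches at k = sqrt(n) = 1000
    print(k, min(math.sqrt(n / k), math.sqrt(k)))
```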
Optimal Parallel Quantum Query Algorithms
We study the complexity of quantum query algorithms that make p queries in parallel in each timestep. We show tight bounds for a number of problems, specifically Θ((n/p)^(2/3)) p-parallel queries for element distinctness and Θ((n/p)^(k/(k+1))) for k-sum. Our upper bounds are obtained by parallelized quantum walk algorithms, and our lower bounds are based on a relatively small modification of the adversary lower bound method, combined with recent results of Belovs et al. on learning graphs. We also prove some general bounds, in particular that quantum and classical p-parallel complexity are polynomially related for all total functions f when p is small compared to f's block sensitivity.
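The stated bounds give a concrete picture of how parallelism buys depth. A small numeric illustration (ours, constants ignored) of the round counts for element distinctness and 3-sum:

```python
# Tight p-parallel round counts from the bounds above (constants ignored):
# element distinctness ~ (n/p)^(2/3), k-sum ~ (n/p)^(k/(k+1)), here k = 3.
n = 10 ** 6
for p in [1, 100, 10000]:
    ed = (n / p) ** (2 / 3)
    ksum = (n / p) ** (3 / 4)
    print(f"p={p:6d}  element distinctness ~{ed:10.1f}  3-sum ~{ksum:10.1f}")
```

Doubling the per-round parallelism p shrinks the number of rounds by the same polynomial factor that appears in the sequential (p = 1) query complexity.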