Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics
Quantum computing is powerful because unitary operators describing the
time-evolution of a quantum system have exponential size in terms of the number
of qubits present in the system. We develop a new "Singular value
transformation" algorithm capable of harnessing this exponential advantage,
that can apply polynomial transformations to the singular values of a block of
a unitary, generalizing the optimal Hamiltonian simulation results of Low and
Chuang. The proposed quantum circuits have a very simple structure, often give rise to optimal algorithms, and have appealing constant factors, while usually using only a constant number of ancilla qubits. We show that singular value
transformation leads to novel algorithms. We give an efficient solution to a
certain "non-commutative" measurement problem and propose a new method for
singular value estimation. We also show how to exponentially improve the
complexity of implementing fractional queries to unitaries with a gapped
spectrum. Finally, as a quantum machine learning application we show how to
efficiently implement principal component regression. "Singular value
transformation" is conceptually simple and efficient, and leads to a unified
framework of quantum algorithms incorporating a variety of quantum speed-ups.
We illustrate this by showing how it generalizes a number of prominent quantum
algorithms, including: optimal Hamiltonian simulation, implementing the
Moore-Penrose pseudoinverse with exponential precision, fixed-point amplitude
amplification, robust oblivious amplitude amplification, fast QMA
amplification, fast quantum OR lemma, certain quantum walk results and several
quantum machine learning algorithms. In order to exploit the strengths of the
presented method it is useful to know its limitations too, therefore we also
prove a lower bound on the efficiency of singular value transformation, which
often gives optimal bounds. Comment: 67 pages, 1 figure
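The core idea has a simple classical analogue that may help ground it: applying a polynomial to the singular values of a matrix via its SVD. The sketch below is illustrative only and is not the quantum algorithm, which operates on a block of a unitary with shallow circuits rather than on a dense matrix; the matrix and polynomial are my own toy example.

```python
import numpy as np

def singular_value_transform(A, p):
    """Classically apply p to the singular values: U diag(s) V^T -> U diag(p(s)) V^T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(p(s)) @ Vt

# With p(x) = 1/x (suitably bounded away from zero) this yields the
# Moore-Penrose pseudoinverse mentioned in the abstract.
A = np.array([[2.0, 0.0], [0.0, 0.5]])
pinv = singular_value_transform(A, lambda s: 1.0 / s)
assert np.allclose(pinv, np.linalg.pinv(A))
```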
Marked Ancestor Problems (Preliminary Version)
Consider a rooted tree whose nodes can be marked or unmarked. Given a node, we want to find its nearest marked ancestor. This generalises the well-known predecessor problem, where the tree is a path. We show tight upper and lower bounds for this problem. The lower bounds are proved in the cell probe model; the upper bounds run on a unit-cost RAM. As easy corollaries we prove (often optimal) lower bounds on a number of problems. These include planar range searching (in particular the existential or emptiness problem), priority search trees, static tree union-find, and several problems from dynamic computational geometry, including intersection problems, proximity problems, and ray shooting. Our upper bounds improve a number of algorithms from various fields, including dynamic dictionary matching and coloured ancestor problems.
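For reference, the query itself is easy to state; a naive O(depth) walk, far from the tight bounds the paper establishes, can be sketched as follows (the `parent` dictionary representation is my own choice for illustration):

```python
def nearest_marked_ancestor(node, parent, marked):
    """Walk up from `node` and return its first marked strict ancestor, or None.

    `parent` maps each node to its parent (the root has no entry);
    `marked` is the set of currently marked nodes.
    """
    node = parent.get(node)
    while node is not None:
        if node in marked:
            return node
        node = parent.get(node)
    return None

# A path 0 - 1 - 2 - 3 models the predecessor problem mentioned above.
parent = {1: 0, 2: 1, 3: 2}
assert nearest_marked_ancestor(3, parent, marked={0, 1}) == 1
assert nearest_marked_ancestor(3, parent, marked=set()) is None
```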
On Sound Relative Error Bounds for Floating-Point Arithmetic
State-of-the-art static analysis tools for verifying finite-precision code
compute worst-case absolute error bounds on numerical errors. These are,
however, often not a good estimate of accuracy as they do not take into account
the magnitude of the computed values. Relative errors, which compute errors
relative to the value's magnitude, are thus preferable. While today's tools do
report relative error bounds, these are merely computed via absolute errors and
thus not necessarily tight or more informative. Furthermore, whenever the
computed value is close to zero on part of the domain, the tools do not report
any relative error estimate at all. Surprisingly, the quality of relative error
bounds computed by today's tools has not been systematically studied or
reported to date. In this paper, we investigate how state-of-the-art static
techniques for computing sound absolute error bounds can be used, extended and
combined for the computation of relative errors. Our experiments on a standard
benchmark set show that computing relative errors directly, as opposed to via
absolute errors, is often beneficial and can provide error estimates up to six
orders of magnitude tighter, i.e. more accurate. We also show that interval
subdivision, another commonly used technique to reduce over-approximations, has
less benefit when computing relative errors directly, but it can help to alleviate the inherent difficulty of relative error estimates for values close to zero.
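A toy numerical sketch (my own example, not taken from the paper's benchmarks) of why deriving relative bounds from absolute ones can be loose: if a sound analysis proves |f(x) - fl(f(x))| <= abs_err over a domain, the derived relative bound is abs_err divided by the minimum of |f| on that domain, which blows up wherever f dips toward zero even if the true pointwise relative error stays small.

```python
def derived_relative_bound(abs_err, min_magnitude):
    """Relative bound obtained by dividing an absolute bound (the indirect route)."""
    return abs_err / min_magnitude

abs_err = 1e-12
loose = derived_relative_bound(abs_err, 1e-6)  # domain where |f| dips to 1e-6
tight = derived_relative_bound(abs_err, 1.0)   # subdomain where |f| is around 1
# The gap between the two bounds is six orders of magnitude, matching the
# improvement the abstract reports for computing relative errors directly.
assert abs(loose / tight - 1e6) < 1.0
```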
Efficient Loop Detection in Forwarding Networks and Representing Atoms in a Field of Sets
The problem of detecting loops in a forwarding network is known to be
NP-complete when general rules such as wildcard expressions are used. Yet,
network analyzer tools such as Netplumber (Kazemian et al., NSDI'13) or
Veriflow (Khurshid et al., NSDI'13) efficiently solve this problem in networks
with thousands of forwarding rules. In this paper, we complement such
experimental validation of practical heuristics with the first provably
efficient algorithm in the context of general rules. Our main tool is a
canonical representation of the atoms (i.e. the minimal non-empty sets) of the
field of sets generated by a collection of sets. This tool is particularly
suited when the intersection of two sets can be efficiently computed and
represented. In the case of forwarding networks, each forwarding rule is
associated with the set of packet headers it matches. The atoms then correspond
to classes of headers with same behavior in the network. We propose an
algorithm for atom computation and provide the first polynomial time algorithm
for loop detection in terms of the number of classes (which can be exponential in
general). This contrasts with previous methods that can be exponential, even in
simple cases with a linear number of classes. Second, we introduce a notion of network dimension captured by the overlapping degree of forwarding rules. The values of this measure appear to be very low in practice, and a constant overlapping degree ensures a polynomial number of header classes. Forwarding loop detection is thus polynomial in forwarding networks with constant overlapping degree.
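The atom computation can be illustrated on small finite sets with a simple partition-refinement sketch (illustrative only; the paper's algorithm works with compact representations such as wildcard rules, where sets are intersected symbolically rather than enumerated):

```python
def atoms(universe, sets):
    """Refine the universe by each set; the surviving parts are the atoms
    (minimal non-empty sets) of the generated field of sets."""
    parts = [frozenset(universe)]
    for s in sets:
        s = frozenset(s)
        refined = []
        for part in parts:
            inside, outside = part & s, part - s
            if inside:
                refined.append(inside)
            if outside:
                refined.append(outside)
        parts = refined
    return parts

# Each forwarding rule contributes the set of headers it matches; the atoms
# are the header classes with identical behavior in the network.
rules = [{1, 2, 3}, {2, 3, 4}]
result = atoms({1, 2, 3, 4, 5}, rules)
assert sorted(map(sorted, result)) == [[1], [2, 3], [4], [5]]
```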
Quantum Attacks on Mersenne Number Cryptosystems
Mersenne number based cryptography was introduced by Aggarwal et al. as a potential post-quantum cryptosystem in 2017. Shortly after the publication, Beunardeau et al. proposed a lattice-based attack significantly reducing the security margins. During the NIST post-quantum project, Aggarwal et al. and Szepieniec introduced a new form of Mersenne number based cryptosystems which remain secure in the presence of the lattice reduction attack. The schemes make use of error correcting codes and have a low but non-zero probability of failure during the decoding phase. In the event of a decoding failure, information about the secret key may be leaked, which may allow for new attacks.
In the first part of this work, we analyze the Mersenne number cryptosystem and NIST submission Ramstake and identify approaches to exploit the information leaked by decoding failures. We describe different attacks on a weakened variant of Ramstake. Furthermore, we pair the decoding failures with a timing attack on the code from the submission package. Both our attacks significantly reduce the security margins compared to the best known generic attack. However, our results on the weakened variant do not seem to carry over to the unweakened cryptosystem. It remains an open question whether the information flow from decoding failures can be exploited to break Ramstake.
In the second part of this work, we analyze the Groverization of the lattice reduction attack by Beunardeau et al. The incorporation of the classical search problem into a quantum framework promises a quadratic speedup, potentially reducing the security margin by half. We give an explicit description of the quantum circuits resulting from the translation of the classical attack. This description contains, to the best of our knowledge, the first in-depth description and analysis of a quantum variant of the LLL algorithm. We show that the Groverized attack requires a large (but polynomial) overhead of quantum memory.
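For background, such schemes compute modulo a Mersenne number p = 2^n - 1, where modular reduction is especially cheap because the high bits fold onto the low bits. The sketch below shows only this standard reduction trick, not the Aggarwal et al. scheme or any of the attacks discussed above:

```python
def mersenne_reduce(x, n):
    """Reduce nonnegative x modulo p = 2**n - 1 using shifts instead of division."""
    p = (1 << n) - 1
    while x > p:
        x = (x & p) + (x >> n)  # fold the high bits onto the low n bits
    return 0 if x == p else x   # p itself is congruent to 0

n = 13  # 2**13 - 1 = 8191 is a Mersenne prime
assert mersenne_reduce(123456789, n) == 123456789 % ((1 << n) - 1)
assert mersenne_reduce((1 << n) - 1, n) == 0
```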
Quantum singular value transformation and beyond: Exponential improvements for quantum matrix arithmetics
An n-qubit quantum circuit performs a unitary operation on an exponentially large, 2^n-dimensional, Hilbert space, which is a major source of quantum speed-ups. We develop a new "Quantum singular value transformation" algorithm that can directly harness the advantages of exponential dimensionality by applying polynomial transformations to the singular values of a block of a unitary operator. The transformations are realized by quantum circuits with a very simple structure (typically using only a constant number of ancilla qubits), leading to optimal algorithms with appealing constant factors. We show that our framework allows describing many quantum algorithms on a high level, and enables remarkably concise proofs for many prominent quantum algorithms, ranging from optimal Hamiltonian simulation to various quantum machine learning applications. We also devise a new singular vector transformation algorithm, describe how to exponentially improve the complexity of implementing fractional queries to unitaries with a gapped spectrum, and show how to efficiently implement principal component regression. Finally, we also prove a quantum lower bound on spectral transformations.