Fast optimization algorithms and the cosmological constant
Denef and Douglas have observed that in certain landscape models the problem
of finding small values of the cosmological constant is a large instance of an
NP-hard problem. The number of elementary operations (quantum gates) needed to
solve this problem by brute force search exceeds the estimated computational
capacity of the observable universe. Here we describe a way out of this
puzzling circumstance: despite being NP-hard, the problem of finding a small
cosmological constant can be attacked by more sophisticated algorithms whose
performance vastly exceeds brute force search. In fact, in some parameter
regimes the average-case complexity is polynomial. We demonstrate this by
explicitly finding a suitably small cosmological constant in a randomly
generated high-dimensional ADK landscape.Comment: 19 pages, 5 figures
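For intuition only (this is not the paper's construction): in ADK-type landscapes each field contributes a term of either sign to the vacuum energy, so finding a near-zero cosmological constant resembles the NP-hard number-partitioning problem. The toy sketch below, with assumed parameters, shows the brute-force baseline that the abstract's sophisticated algorithms improve upon:

```python
import itertools
import random

# Toy stand-in for an ADK-style landscape (assumed model, not the paper's):
# each of N fields contributes +c_i or -c_i to the vacuum energy Lambda,
# so minimizing |Lambda| is a number-partitioning (NP-hard) instance.
random.seed(0)
N = 16
c = [random.uniform(0.5, 1.5) for _ in range(N)]

# Brute-force search over all 2^N sign choices (vacua); this is the
# exponential-cost baseline that smarter algorithms vastly outperform.
best = min(abs(sum(s * ci for s, ci in zip(signs, c)))
           for signs in itertools.product((+1, -1), repeat=N))
print(f"smallest |Lambda| over 2^{N} vacua: {best:.3e}")
```

With 2^16 vacua the minimum is already tiny, illustrating why a dense spectrum of vacua makes small cosmological constants exist even though finding them efficiently is the hard part.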
Quantum weight enumerators and tensor networks
We examine the use of weight enumerators for analyzing tensor network
constructions, and specifically the quantum lego framework recently introduced.
We extend the notion of quantum weight enumerators to so-called tensor
enumerators, and prove that the trace operation on tensor networks is
compatible with a trace operation on tensor enumerators. This allows us to
compute quantum weight enumerators of larger codes such as the ones constructed
through tensor network methods more efficiently. We also provide an analogue of
the MacWilliams identity for tensor enumerators.Comment: 21 pages, 3 figures. Sets up the tensor enumerator formalism
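As background intuition (a classical analogue, not the tensor-enumerator version the paper proves): the MacWilliams identity relates the weight enumerator of a binary linear code to that of its dual by a change of variables. A minimal numerical check on the [3,1] repetition code and its dual, the [3,2] parity code:

```python
from itertools import product

def weight_enumerator(codewords, x, y):
    """W_C(x, y) = sum over codewords of x^(n - weight) * y^weight."""
    n = len(codewords[0])
    return sum(x ** (n - sum(w)) * y ** sum(w) for w in codewords)

# [3,1] repetition code and its dual, the [3,2] even-weight (parity) code.
rep = [(0, 0, 0), (1, 1, 1)]
parity = [w for w in product((0, 1), repeat=3) if sum(w) % 2 == 0]

# Classical MacWilliams identity: W_{C_perp}(x, y) = W_C(x+y, x-y) / |C|.
for x, y in [(1, 0), (2, 1), (3, 2)]:
    lhs = weight_enumerator(parity, x, y)
    rhs = weight_enumerator(rep, x + y, x - y) / len(rep)
    assert lhs == rhs
print("MacWilliams identity verified at sample points")
```

Here W_rep(x, y) = x^3 + y^3 transforms to ((x+y)^3 + (x-y)^3)/2 = x^3 + 3xy^2, which is exactly the parity code's enumerator; the paper's contribution is an analogue of this identity at the level of tensor enumerators, compatible with tensor-network traces.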
Quantum Lattice Sieving
Lattices are very important objects in the effort to construct cryptographic
primitives that are secure against quantum attacks. A central problem in the
study of lattices is that of finding the shortest non-zero vector in the
lattice. Asymptotically, sieving is the best known technique for solving the
shortest vector problem; however, sieving requires memory exponential in the
dimension of the lattice. As a consequence, enumeration algorithms are often
used in place of sieving due to their linear memory complexity, despite their
super-exponential runtime. In this work, we present a heuristic quantum sieving
algorithm whose memory complexity is polynomial in the size (bit-length) of
the vectors sampled at the initial step of the sieve. In other words, unlike
most sieving algorithms, the memory complexity of our algorithm does not depend
on the number of vectors sampled at the initial step of the sieve.Comment: A reviewer pointed out an error in the amplitude amplification step
in the analysis of Theorem 6. While we believe this error can be resolved, we
are not sure how to do it at the moment and are taking down this submission
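To make the sieving idea concrete (a classical pairwise-reduction sketch in the spirit of Nguyen–Vidick-style sieves; the quantum algorithm and its memory bound are not reproduced here), the toy below samples lattice vectors and repeatedly subtracts pairs to produce shorter ones. The basis is an assumed 2-D example:

```python
import random

random.seed(1)
B = [(3, 1), (1, 2)]  # basis of a toy 2-D integer lattice (assumed example)

def sample():
    """Sample a random lattice vector with small integer coefficients."""
    a, b = random.randint(-5, 5), random.randint(-5, 5)
    return (a * B[0][0] + b * B[1][0], a * B[0][1] + b * B[1][1])

def norm2(v):
    return v[0] ** 2 + v[1] ** 2

# Sieve: whenever v -/+ u is shorter than v, replace v. Differences of
# lattice vectors stay in the lattice, so norms can only shrink toward
# the length of the shortest non-zero vector.
vectors = [v for v in (sample() for _ in range(200)) if v != (0, 0)]
for _ in range(10):  # a few sieve passes
    for i, v in enumerate(vectors):
        for u in vectors:
            for s in (1, -1):
                w = (v[0] - s * u[0], v[1] - s * u[1])
                if w != (0, 0) and norm2(w) < norm2(v):
                    vectors[i] = w
                    v = w

shortest = min(vectors, key=norm2)
print("shortest vector found:", shortest, "norm^2 =", norm2(shortest))
```

Note the memory bottleneck the abstract targets: a real sieve must hold exponentially many sampled vectors (here, the `vectors` list) to find reducible pairs, which is exactly the cost the proposed quantum algorithm aimed to avoid.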