1,541 research outputs found
The Power of Quantum Fourier Sampling
A line of work initiated by Terhal and DiVincenzo, and by Bremner, Jozsa, and
Shepherd, shows that quantum computers can efficiently sample from probability
distributions that cannot be exactly sampled efficiently on a classical
computer, unless the polynomial hierarchy (PH) collapses. Aaronson and Arkhipov
take this further by considering a distribution that can be sampled efficiently
by linear optical quantum computation and that, under two plausible
conjectures, cannot even be approximately sampled classically within bounded
total variation distance, unless the PH collapses.
In this work we use Quantum Fourier Sampling to construct a class of
distributions that can be sampled by a quantum computer. We then argue that
these distributions cannot be approximately sampled classically, unless the PH
collapses, under variants of the Aaronson and Arkhipov conjectures.
In particular, we show a general class of quantumly sampleable distributions,
each of which is based on an "Efficiently Specifiable" polynomial, for which a
classical approximate sampler implies an average-case approximation of that
polynomial. This class of polynomials contains the Permanent but also includes,
for example, the Hamiltonian Cycle polynomial and many other familiar #P-hard
polynomials.
Although our construction, unlike that proposed by Aaronson and Arkhipov,
likely requires a universal quantum computer, we are able to use this
additional power to weaken the conjectures needed to prove approximate sampling
hardness results.
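As a concrete reference point, the Permanent, the prototypical "Efficiently Specifiable" #P-hard polynomial in the class above, can be computed straight from its defining sum over permutations. A minimal Python sketch, exponential-time by necessity:

```python
from itertools import permutations
from math import prod

def permanent(A):
    """Permanent of an n-by-n matrix via its defining sum over permutations:
    perm(A) = sum over sigma of prod_i A[i][sigma(i)].
    Computing this is #P-hard in general; the brute force here takes n! * n
    steps (Ryser's formula lowers that to O(2^n * n), still exponential)."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```

Unlike the determinant, there is no sign term to enable Gaussian elimination, which is exactly why average-case approximation of such polynomials is plausible evidence of classical sampling hardness.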
A group-theoretic approach to fast matrix multiplication
We develop a new, group-theoretic approach to bounding the exponent of matrix
multiplication. There are two components to this approach: (1) identifying
groups G that admit a certain type of embedding of matrix multiplication into
the group algebra C[G], and (2) controlling the dimensions of the irreducible
representations of such groups. We present machinery and examples to support
(1), including a proof that certain families of groups of order n^(2 + o(1))
support n-by-n matrix multiplication, a necessary condition for the approach to
yield exponent 2. Although we cannot yet completely achieve both (1) and (2),
we hope that it may be possible, and we suggest potential routes to that result
using the constructions in this paper.
Comment: 12 pages, 1 figure; only updates from the previous version are page numbers and copyright information.
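The embeddings in component (1) rest on the Cohn-Umans triple product property, which for small groups can be verified by brute force. A minimal sketch, using Z_4 under addition with subsets chosen purely for illustration:

```python
from itertools import product

def has_tpp(op, inv, identity, S, T, U):
    """Brute-force check of the Cohn-Umans triple product property:
    whenever s'^-1 s * t'^-1 t * u'^-1 u equals the identity, that must
    force s == s', t == t', u == u'."""
    for s, s2, t, t2, u, u2 in product(S, S, T, T, U, U):
        q = op(op(op(inv(s2), s), op(inv(t2), t)), op(inv(u2), u))
        if q == identity and not (s == s2 and t == t2 and u == u2):
            return False
    return True

# Z_4 under addition mod 4, chosen purely for illustration.
n = 4
add = lambda a, b: (a + b) % n
neg = lambda a: (-a) % n

print(has_tpp(add, neg, 0, {0}, {0, 1}, {0}))      # True: realizes <1, 2, 1>
print(has_tpp(add, neg, 0, {0, 1}, {0, 1}, {0}))   # False: the property fails
```

Subsets S, T, U satisfying the property let n-by-m times m-by-p matrix multiplication be read off from a single product in the group algebra C[G]; the groups that could yield exponent 2 are far larger and necessarily nonabelian.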
Pseudorandom generators and the BQP vs. PH problem
It is a longstanding open problem to devise an oracle relative to which BQP
does not lie in the Polynomial-Time Hierarchy (PH). We advance a natural
conjecture about the capacity of the Nisan-Wigderson pseudorandom generator
[NW94] to fool AC_0, with MAJORITY as its hard function. Our conjecture is
essentially that the loss due to the hybrid argument (which is a component of
the standard proof from [NW94]) can be avoided in this setting. This is a
question that has been asked previously in the pseudorandomness literature
[BSW03]. We then make three main contributions: (1) We show that our conjecture
implies the existence of an oracle relative to which BQP is not in the PH. This
entails giving an explicit construction of unitary matrices, realizable by
small quantum circuits, whose row-supports are "nearly-disjoint." (2) We give a
simple framework (generalizing the setting of Aaronson [A10]) in which any
efficiently quantumly computable unitary gives rise to a distribution that can
be distinguished from the uniform distribution by an efficient quantum
algorithm. When applied to the unitaries we construct, this framework yields a
problem that can be solved quantumly, and which forms the basis for the desired
oracle. (3) We prove that Aaronson's "GLN conjecture" [A10] implies our
conjecture; our conjecture is thus formally easier to prove. The GLN conjecture
was recently proved false for depth greater than 2 [A10a], but it remains open
for depth 2. If true, the depth-2 version of either conjecture would imply an
oracle relative to which BQP is not in AM, which is itself an outstanding open
problem. Taken together, our results have the following interesting
interpretation: they give an instantiation of the Nisan-Wigderson generator
that can be broken by quantum computers, but not by the relevant modes of
classical computation, if our conjecture is true.Comment: Updated in light of counterexample to the GLN conjectur
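For intuition, the Nisan-Wigderson construction itself is short: output bit i applies the hard function to the seed positions in design set S_i, where the sets have small pairwise intersections. A toy Python sketch with MAJORITY as the hard function; the 4-set design over a 6-bit seed is a hypothetical miniature (real NW designs are far larger):

```python
def majority(bits):
    """MAJORITY, the hard function in the conjecture above."""
    return int(sum(bits) > len(bits) // 2)

def nw_generator(seed, design, f=majority):
    """Nisan-Wigderson generator: output bit i applies f to the seed
    positions indexed by design set S_i."""
    return [f([seed[j] for j in S]) for S in design]

# Toy design: 3-element subsets of a 6-bit seed with pairwise
# intersections of size exactly 1.
design = [(0, 1, 2), (0, 3, 4), (1, 3, 5), (2, 4, 5)]
print(nw_generator([1, 0, 1, 1, 0, 0], design))  # [1, 1, 0, 0]
```

The hybrid argument charges a loss proportional to the output length when converting a distinguisher into a next-bit predictor; the paper's conjecture is that, for this instantiation against AC_0, that loss can be avoided.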
The Complexity of Rationalizing Network Formation
We study the complexity of rationalizing network formation. In this problem we fix an underlying model describing how selfish parties (the vertices) produce a graph by making individual decisions to form or not form incident edges. The model is equipped with a notion of stability (or equilibrium), and we observe a set of "snapshots" of graphs that are assumed to be stable. From this we would like to infer some unobserved data about the system: edge prices, or how much each vertex values short paths to each other vertex.
We study two rationalization problems arising from the network formation model of Jackson and Wolinsky [14]. When the goal is to infer edge prices, we observe that the rationalization problem is easy. The problem remains easy even when rationalizing prices do not exist and we instead wish to find prices that maximize the stability of the system. In contrast, when the edge prices are given and the goal is instead to infer valuations of each vertex by each other vertex, we prove that the rationalization problem becomes NP-hard. Our proof exposes a close connection between rationalization problems and the Inequality-SAT (I-SAT) problem.
Finally and most significantly, we prove that an approximation version of this NP-complete rationalization problem is NP-hard to approximate to within better than a 1/2 ratio. This shows that the trivial algorithm of setting everyone's valuations to infinity (which rationalizes all the edges present in the input graphs) or to zero (which rationalizes all the non-edges present in the input graphs) is the best possible, assuming P ≠ NP. To do this we prove a tight (1/2 + δ)-approximation hardness for a variant of I-SAT in which all coefficients are non-negative.
This in turn follows from a tight hardness result for MAX-LIN_(R_+) (linear equations over the reals, with non-negative coefficients), which we prove by a (non-trivial) modification of the recent result of Guruswami and Raghavendra [10], which achieved tight hardness for this problem without the non-negativity constraint. Our technical contributions regarding the hardness of I-SAT and MAX-LIN_(R_+) may be of independent interest, given the generality of these problems.
Pseudorandomness for Approximate Counting and Sampling
We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions.
Our main technical contribution allows one to “boost” a given hardness assumption: We show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent.
We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM.
We observe that Cai's proof that S_2^P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence they can be derandomized under an assumption which is weaker than the assumption that was previously known to suffice.
A new algorithm for fast generalized DFTs
We give a new arithmetic algorithm to compute the generalized Discrete
Fourier Transform (DFT) over finite groups G. The new algorithm uses
O(|G|^(ω/2 + ε)) operations to compute the generalized DFT over
finite groups of Lie type, including the linear, orthogonal, and symplectic
families and their variants, as well as all finite simple groups of Lie type.
Here ω is the exponent of matrix multiplication, so the exponent ω/2
is optimal if ω = 2. Previously, "exponent one" algorithms
were known for supersolvable groups and the symmetric and alternating groups.
No exponent one algorithms were known (even under the assumption ω = 2)
for families of linear groups of fixed dimension, and indeed the previous
best-known algorithm for SL_2(F_q) had exponent 4/3 despite being the focus
of significant effort. We unconditionally achieve exponent at most 1.19 for
this group, and exponent one if ω = 2. Our algorithm also yields an
improved exponent for computing the generalized DFT over general finite groups
G, which beats the longstanding previous best upper bound, for any ω.
In particular, assuming ω = 2, we achieve exponent √2, while the
previous best was 3/2.
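In the abelian special case the generalized DFT collapses to the ordinary DFT, because every irreducible representation of a cyclic group is a one-dimensional character χ_k(j) = ω^(jk). A naive O(|G|^2) Python sketch over Z_n (the hard, and interesting, case in the abstract above is nonabelian groups, whose irreducible representations are matrix-valued):

```python
import cmath

def dft_cyclic(values):
    """Naive O(n^2) generalized DFT over the cyclic group Z_n: evaluate the
    input against every character chi_k(j) = w^(j*k), w a primitive n-th
    root of unity."""
    n = len(values)
    w = cmath.exp(-2j * cmath.pi / n)
    return [sum(values[j] * w**(j * k) for j in range(n)) for k in range(n)]

# The transform of an indicator at the identity is 1 on every character.
out = dft_cyclic([1, 0, 0, 0])
print([round(abs(c), 6) for c in out])  # [1.0, 1.0, 1.0, 1.0]
```

The Cooley-Tukey FFT brings this abelian case down to O(n log n); the exponent-one question is whether a comparable speedup exists for every finite group.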
Algebraic Problems Equivalent to Beating Exponent 3/2 for Polynomial Factorization over Finite Fields
The fastest known algorithm for factoring univariate polynomials over finite
fields is the Kedlaya-Umans (fast modular composition) implementation of the
Kaltofen-Shoup algorithm. It is randomized and takes
n^(3/2 + o(1)) log^(1 + o(1)) q + n^(1 + o(1)) log^(2 + o(1)) q
time to factor polynomials of degree n over the finite field F_q
with q elements. A significant open problem is if the 3/2
exponent can be improved. We study a collection of algebraic problems and
establish a web of reductions between them. A consequence is that an algorithm
for any one of these problems with exponent better than 3/2 would yield an
algorithm for polynomial factorization with exponent better than 3/2.
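To see where the bottleneck sits: Kaltofen-Shoup repeatedly computes Frobenius powers x^q mod f, the step that fast modular composition accelerates. A small pure-Python sketch of the first distinct-degree step, isolating the product of the distinct linear factors of f over F_p (all helper names are our own; polynomials are coefficient lists, low degree first):

```python
def pmod(a, f, p):
    """Remainder of a modulo monic f over F_p."""
    a = [c % p for c in a]
    while a and a[-1] == 0:
        a.pop()
    while len(a) >= len(f):
        c, shift = a[-1], len(a) - len(f)
        for i, fc in enumerate(f):
            a[shift + i] = (a[shift + i] - c * fc) % p
        while a and a[-1] == 0:
            a.pop()
    return a

def pmul(a, b, p):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return r

def psub(a, b, p):
    m = max(len(a), len(b))
    r = [((a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)) % p
         for i in range(m)]
    while r and r[-1] == 0:
        r.pop()
    return r

def ppow(base, e, f, p):
    """base^e mod f by repeated squaring -- the x^q mod f bottleneck."""
    result = [1]
    base = pmod(base, f, p)
    while e:
        if e & 1:
            result = pmod(pmul(result, base, p), f, p)
        base = pmod(pmul(base, base, p), f, p)
        e >>= 1
    return result

def pgcd(a, b, p):
    while b:
        inv = pow(b[-1], p - 2, p)            # make the divisor monic
        b = [(c * inv) % p for c in b]
        a, b = b, pmod(a, b, p)
    inv = pow(a[-1], p - 2, p)
    return [(c * inv) % p for c in a]

# f = x^3 + x^2 + x + 1 = (x + 1)(x^2 + 1) over F_3; x^2 + 1 has no root mod 3.
p, f = 3, [1, 1, 1, 1]
h = ppow([0, 1], p, f, p)                     # x^p mod f
g1 = pgcd(f, psub(h, [0, 1], p), p)           # gcd(x^p - x, f)
print(g1)  # [1, 1], i.e. x + 1: the product of the distinct linear factors
```

Degree d of the distinct-degree phase repeats this with x^(q^d), so shaving the cost of these compositions below the n^(3/2) barrier is exactly the open problem the abstract targets.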
Group-theoretic algorithms for matrix multiplication
We further develop the group-theoretic approach to fast matrix multiplication
introduced by Cohn and Umans, and for the first time use it to derive
algorithms asymptotically faster than the standard algorithm. We describe
several families of wreath product groups that achieve matrix multiplication
exponent less than 3, the asymptotically fastest of which achieves exponent
2.41. We present two conjectures regarding specific improvements, one
combinatorial and the other algebraic. Either one would imply that the exponent
of matrix multiplication is 2.
Comment: 10 pages.